I use HTML5 canvas elements to resize images in my browser. It turns out that the quality is very low. I found this: Disable Interpolation when Scaling a <canvas>, but it does not help to increase the quality.
Below is my CSS and JS code as well as the image scaled with Photoshop and scaled with the canvas API.
What do I have to do to get optimal quality when scaling an image in the browser?
Note: I want to scale down a large image to a small one, modify color in a canvas and send the result from the canvas to the server.
CSS:
canvas, img {
    image-rendering: optimizeQuality;
    image-rendering: -moz-crisp-edges;
    image-rendering: -webkit-optimize-contrast;
    image-rendering: optimize-contrast;
    -ms-interpolation-mode: nearest-neighbor;
}
JS:
var $img = $('<img>');
var $originalCanvas = $('<canvas>');
$img.load(function() {
    var originalContext = $originalCanvas[0].getContext('2d');
    originalContext.imageSmoothingEnabled = false;
    originalContext.webkitImageSmoothingEnabled = false;
    originalContext.mozImageSmoothingEnabled = false;
    originalContext.drawImage(this, 0, 0, 379, 500);
});
The image resized with Photoshop:
The image resized on canvas:
Edit:
I tried downscaling in more than one step, as proposed in:
Resizing an image in an HTML5 canvas and Html5 canvas drawImage: how to apply antialiasing
This is the function I have used:
function resizeCanvasImage(img, canvas, maxWidth, maxHeight) {
    var imgWidth = img.width,
        imgHeight = img.height;
    var ratio = 1, ratio1 = 1, ratio2 = 1;
    ratio1 = maxWidth / imgWidth;
    ratio2 = maxHeight / imgHeight;

    // Use the smallest ratio so that the image best fits into the maxWidth x maxHeight box.
    if (ratio1 < ratio2) {
        ratio = ratio1;
    } else {
        ratio = ratio2;
    }

    var canvasContext = canvas.getContext("2d");
    var canvasCopy = document.createElement("canvas");
    var copyContext = canvasCopy.getContext("2d");
    var canvasCopy2 = document.createElement("canvas");
    var copyContext2 = canvasCopy2.getContext("2d");
    canvasCopy.width = imgWidth;
    canvasCopy.height = imgHeight;
    copyContext.drawImage(img, 0, 0);

    // init
    canvasCopy2.width = imgWidth;
    canvasCopy2.height = imgHeight;
    copyContext2.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvasCopy2.width, canvasCopy2.height);

    var rounds = 2;
    var roundRatio = ratio * rounds;
    for (var i = 1; i <= rounds; i++) {
        console.log("Step: " + i);

        // tmp
        canvasCopy.width = imgWidth * roundRatio / i;
        canvasCopy.height = imgHeight * roundRatio / i;
        copyContext.drawImage(canvasCopy2, 0, 0, canvasCopy2.width, canvasCopy2.height, 0, 0, canvasCopy.width, canvasCopy.height);

        // copy back
        canvasCopy2.width = imgWidth * roundRatio / i;
        canvasCopy2.height = imgHeight * roundRatio / i;
        copyContext2.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvasCopy2.width, canvasCopy2.height);
    } // end for

    // copy back to canvas
    canvas.width = imgWidth * roundRatio / rounds;
    canvas.height = imgHeight * roundRatio / rounds;
    canvasContext.drawImage(canvasCopy2, 0, 0, canvasCopy2.width, canvasCopy2.height, 0, 0, canvas.width, canvas.height);
}
Here is the result if I use a 2 step down sizing:
Here is the result if I use a 3 step down sizing:
Here is the result if I use a 4 step down sizing:
Here is the result if I use a 20 step down sizing:
Note: It turns out that going from 1 step to 2 steps gives a large improvement in image quality, but the more steps you add to the process, the fuzzier the image becomes.
Is there a way to solve the problem that the image gets more fuzzy the more steps you add?
Edit 2013-10-04: I tried GameAlchemist's algorithm. Here is the result compared to Photoshop.
Photoshop Image:
GameAlchemist's Algorithm:
Here is a reusable Angular service for high quality image / canvas resizing: https://gist.github.com/fisch0920/37bac5e741eaec60e983
The service supports Lanczos convolution and step-wise downscaling. The convolution approach is higher quality at the cost of being slower, whereas the step-wise downscaling approach produces reasonably antialiased results and is significantly faster.
Example usage:
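A sketch of what the usage might look like (the service and method names imageService.resize and imageService.resizeStep are assumptions based on the gist's description, not a verbatim excerpt):

angular.module('demo').controller('ExampleCtrl', function (imageService) {
    // Lanczos convolution resize (higher quality, slower)
    imageService.resize($('#myimg'), 150, 150)
        .then(function (resizedImage) {
            // do something with the resized image
        });

    // step-wise downscaling in 2x increments (faster, reasonably antialiased)
    imageService.resizeStep($('#myimg'), 150, 150)
        .then(function (resizedImage) {
            // do something with the resized image
        });
});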
This is not the right answer for people who really need to resize the image itself, but just for shrinking the file size.
I had a problem with "directly from the camera" pictures that my customers often uploaded as "uncompressed" JPEGs.
It is not so well known that canvas supports (in most browsers as of 2017) changing the quality of JPEG.
With this trick I could reduce 4k x 3k pictures of more than 10 MB to 1 or 2 MB; of course it depends on your needs.
look here
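A minimal sketch of that trick with the standard canvas API (toDataURL and toBlob both accept a JPEG quality argument between 0 and 1; img is assumed to be an already loaded image, and 0.7 is just an example value):

var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
canvas.getContext('2d').drawImage(img, 0, 0);

// re-encode as JPEG; the second argument controls the compression quality
var dataUrl = canvas.toDataURL('image/jpeg', 0.7);

// or, asynchronously, get a Blob that can be uploaded directly
canvas.toBlob(function (blob) {
    // send blob to the server, e.g. via FormData
}, 'image/jpeg', 0.7);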
I found a solution that doesn't need to access the pixel data directly and loop through it to perform the downsampling. Depending on the size of the image this can be very resource-intensive, and it is better to use the browser's internal algorithms.
The drawImage() function uses a linear-interpolation (nearest-neighbour) resampling method. That works well as long as you are not downscaling to less than half the original size.
If you loop and resize by at most one half at a time, the results are quite good, and this is much faster than accessing pixel data.
This function downsamples by half at a time until it reaches the desired size:
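A sketch of that approach, assuming a fixed target size (the function name and signature below are mine, not the original post's):

function downscaleImage(img, targetWidth, targetHeight) {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    canvas.getContext('2d').drawImage(img, 0, 0);

    // halve the size repeatedly while we are still more than 2x away from the target
    while (canvas.width * 0.5 > targetWidth && canvas.height * 0.5 > targetHeight) {
        var half = document.createElement('canvas');
        half.width = Math.floor(canvas.width * 0.5);
        half.height = Math.floor(canvas.height * 0.5);
        half.getContext('2d').drawImage(canvas, 0, 0, canvas.width, canvas.height,
                                        0, 0, half.width, half.height);
        canvas = half;
    }

    // final step: at most a 1:2 downscale to the exact target size
    var result = document.createElement('canvas');
    result.width = targetWidth;
    result.height = targetHeight;
    result.getContext('2d').drawImage(canvas, 0, 0, canvas.width, canvas.height,
                                      0, 0, targetWidth, targetHeight);
    return result;
}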
Credits to this post
Suggestion 1 - extend the process pipe-line
You can use step-down as I describe in the links you refer to, but you appear to be using it the wrong way.
Step-down is not needed to scale images to ratios above 1:2 (typically, but not limited to). It is when you need to do a drastic down-scaling that you need to split it up into two (and rarely, more) steps depending on the content of the image (in particular where high frequencies such as thin lines occur).
Every time you down-sample an image you will lose details and information. You cannot expect the resulting image to be as clear as the original.
If you then scale down the image in many steps you will lose a lot of information in total and the result will be poor, as you have already noticed.
Try with just one extra step, or at most two.
Convolutions
In the case of Photoshop, notice that it applies a convolution after the image has been re-sampled, such as sharpen. It's not just bi-cubic interpolation that takes place, so in order to fully emulate Photoshop we also need to add the steps Photoshop performs (with the default setup).
For this example I will use my original answer that you refer to in your post, but I have added a sharpen convolution to it to improve quality as a post process (see demo at bottom).
Here is code for adding a sharpen filter (it's based on a generic convolution filter - I put the weight matrix for sharpen inside it, as well as a mix factor to adjust how pronounced the effect is):
Usage:
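Assuming the signature of the function sketched below (the name sharpen and its parameters are my naming for illustration), a call would look like:

// after drawing the scaled image onto the canvas
sharpen(canvasContext, canvas.width, canvas.height, 0.9);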
The mixFactor is a value between [0.0, 1.0] and allows you to downplay the sharpen effect - rule of thumb: the smaller the size, the less of the effect is needed.
Function (based on this snippet):
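A self-contained sketch of such a generic convolution with a sharpen weight matrix and a mix factor (the exact kernel and naming are my choices for illustration, not necessarily the original answer's code):

function sharpen(ctx, w, h, mix) {
    var weights = [0, -1, 0, -1, 5, -1, 0, -1, 0],          // classic 3x3 sharpen kernel
        side = Math.round(Math.sqrt(weights.length)),
        half = Math.floor(side / 2),
        srcData = ctx.getImageData(0, 0, w, h),
        src = srcData.data,
        dstData = ctx.createImageData(w, h),
        dst = dstData.data;

    for (var y = 0; y < h; y++) {
        for (var x = 0; x < w; x++) {
            var dstOff = (y * w + x) * 4,
                r = 0, g = 0, b = 0;

            // convolve the kernel over the neighbourhood (edges are clamped)
            for (var cy = 0; cy < side; cy++) {
                for (var cx = 0; cx < side; cx++) {
                    var scy = Math.min(h - 1, Math.max(0, y + cy - half)),
                        scx = Math.min(w - 1, Math.max(0, x + cx - half)),
                        srcOff = (scy * w + scx) * 4,
                        wt = weights[cy * side + cx];
                    r += src[srcOff] * wt;
                    g += src[srcOff + 1] * wt;
                    b += src[srcOff + 2] * wt;
                }
            }

            // blend the sharpened value with the original according to mix
            dst[dstOff]     = r * mix + src[dstOff]     * (1 - mix);
            dst[dstOff + 1] = g * mix + src[dstOff + 1] * (1 - mix);
            dst[dstOff + 2] = b * mix + src[dstOff + 2] * (1 - mix);
            dst[dstOff + 3] = src[dstOff + 3];                // keep alpha as-is
        }
    }
    ctx.putImageData(dstData, 0, 0);
}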
The result of using this combination will be:
ONLINE DEMO HERE
Depending on how much of the sharpening you want to add to the blend, you can get results ranging from the default "blurry" to very sharp:
Suggestion 2 - low level algorithm implementation
If you want the best result quality-wise, you'll need to go low-level and consider implementing, for example, this brand-new algorithm.
See Interpolation-Dependent Image Downsampling (2011) from IEEE.
Here is a link to the paper in full (PDF).
There are no JavaScript implementations of this algorithm that I know of at this time, so you're in for a handful if you want to throw yourself at this task.
The essence is (excerpts from the paper):
Abstract
(see provided link for all details, formulas etc.)
This is the improved Hermite resize filter that utilises one worker so that the window doesn't freeze.
https://github.com/calvintwr/Hermite-resize
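For illustration, here is a single-threaded sketch of the Hermite resampling idea (the linked repo additionally moves the loop into a Web Worker; the function below is a simplified reconstruction for downscaling, not the repo's code):

function hermiteResize(canvas, targetWidth, targetHeight) {
    var srcW = canvas.width, srcH = canvas.height;
    var ratioW = srcW / targetWidth, ratioH = srcH / targetHeight;
    var halfW = Math.ceil(ratioW / 2), halfH = Math.ceil(ratioH / 2);
    var ctx = canvas.getContext('2d');
    var src = ctx.getImageData(0, 0, srcW, srcH).data;
    var dstData = ctx.createImageData(targetWidth, targetHeight);
    var dst = dstData.data;

    for (var j = 0; j < targetHeight; j++) {
        for (var i = 0; i < targetWidth; i++) {
            var centerX = (i + 0.5) * ratioW,
                centerY = (j + 0.5) * ratioH,
                weightSum = 0, r = 0, g = 0, b = 0, a = 0;

            // visit every source pixel covered by this destination pixel
            for (var yy = Math.floor(j * ratioH); yy < (j + 1) * ratioH; yy++) {
                var dy = Math.abs(centerY - (yy + 0.5)) / halfH;
                for (var xx = Math.floor(i * ratioW); xx < (i + 1) * ratioW; xx++) {
                    var dx = Math.abs(centerX - (xx + 0.5)) / halfW;
                    var dist = Math.sqrt(dx * dx + dy * dy);
                    if (dist >= 1) continue;                                   // outside kernel support
                    var weight = 2 * dist * dist * dist - 3 * dist * dist + 1; // Hermite kernel
                    var pos = (yy * srcW + xx) * 4;
                    r += src[pos] * weight;
                    g += src[pos + 1] * weight;
                    b += src[pos + 2] * weight;
                    a += src[pos + 3] * weight;
                    weightSum += weight;
                }
            }
            var dstPos = (j * targetWidth + i) * 4;
            dst[dstPos]     = r / weightSum;
            dst[dstPos + 1] = g / weightSum;
            dst[dstPos + 2] = b / weightSum;
            dst[dstPos + 3] = a / weightSum;
        }
    }

    canvas.width = targetWidth;
    canvas.height = targetHeight;
    ctx.putImageData(dstData, 0, 0);
}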
Since your problem is to downscale your image, there is no point in talking about interpolation - which is about creating pixels. The issue here is downsampling.
To downsample an image, we need to turn each square of p * p pixels in the original image into a single pixel in the destination image.
For performance reasons, browsers do a very simple downsampling: to build the smaller image, they just pick ONE pixel in the source and use its value for the destination, which 'forgets' some details and adds noise.
Yet there's an exception to that: since 2X image downsampling is very simple to compute (average 4 pixels to make one) and is used for retina/HiDPI pixels, this case is handled properly - the browser does make use of 4 pixels to make one.
BUT... if you apply a 2X downsampling several times, you'll face the issue that the successive rounding errors add too much noise.
What's worse, you won't always resize by a power of two, and resizing to the nearest power plus a last resize is very noisy.
What you seek is a pixel-perfect downsampling, that is : a re-sampling of the image that will take all input pixels into account -whatever the scale-.
To do that we must compute, for each input pixel, its contribution to one, two, or four destination pixels, depending on whether the scaled projection of the input pixel lies entirely inside a destination pixel, overlaps an X border, a Y border, or both.
(A diagram would be nice here, but I don't have one.)
Here's an example of canvas scaling vs. my pixel-perfect scaling on a 1/3 scale of a wombat.
Notice that the picture might get scaled in your browser, and is JPEG-compressed by S.O.
Yet we see that there's much less noise, especially in the grass behind the wombat and in the branches on its right. The noise in the fur makes it more contrasted, but it looks like he's got white hairs - unlike the source picture.
The right image is less catchy but definitely nicer.
Here's the code to do the pixel-perfect downscaling:
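What follows is a reconstruction of that idea from the description above (the function names and the skipped alpha channel are my simplifications; the fiddles below contain the original code):

function downScaleCanvas(sourceCanvas, sc) { // 0 < sc < 1
    var sw = sourceCanvas.width, sh = sourceCanvas.height;
    var tw = Math.floor(sw * sc), th = Math.floor(sh * sc);
    var src = sourceCanvas.getContext('2d').getImageData(0, 0, sw, sh).data;
    var acc = new Float32Array(3 * tw * th);                // float accumulator for r, g, b

    for (var sy = 0; sy < sh; sy++) {
        var ty = sy * sc,                                   // projected top edge
            tY = Math.floor(ty),
            crossY = (tY !== Math.floor(ty + sc)),          // projection crosses a Y border?
            wyTop = crossY ? (tY + 1 - ty) : sc,            // height falling into row tY
            wyBottom = crossY ? (ty + sc - tY - 1) : 0;     // height falling into row tY + 1

        for (var sx = 0; sx < sw; sx++) {
            var tx = sx * sc,
                tX = Math.floor(tx),
                crossX = (tX !== Math.floor(tx + sc)),
                wxLeft = crossX ? (tX + 1 - tx) : sc,
                wxRight = crossX ? (tx + sc - tX - 1) : 0;

            var sIndex = (sy * sw + sx) * 4,
                r = src[sIndex], g = src[sIndex + 1], b = src[sIndex + 2];

            // distribute this source pixel's area over up to four destination pixels
            addWeighted(acc, tw, th, tX,     tY,     wxLeft  * wyTop,    r, g, b);
            addWeighted(acc, tw, th, tX + 1, tY,     wxRight * wyTop,    r, g, b);
            addWeighted(acc, tw, th, tX,     tY + 1, wxLeft  * wyBottom, r, g, b);
            addWeighted(acc, tw, th, tX + 1, tY + 1, wxRight * wyBottom, r, g, b);
        }
    }

    // the total weight per destination pixel is 1, so the accumulator already holds averages
    var result = document.createElement('canvas');
    result.width = tw;
    result.height = th;
    var resCtx = result.getContext('2d');
    var resData = resCtx.createImageData(tw, th);
    for (var i = 0, j = 0; i < tw * th; i++, j += 4) {
        resData.data[j]     = acc[3 * i];
        resData.data[j + 1] = acc[3 * i + 1];
        resData.data[j + 2] = acc[3 * i + 2];
        resData.data[j + 3] = 255;                          // alpha is ignored in this sketch
    }
    resCtx.putImageData(resData, 0, 0);
    return result;
}

function addWeighted(acc, tw, th, x, y, weight, r, g, b) {
    if (weight <= 0 || x >= tw || y >= th) return;          // skip out-of-range / zero-weight parts
    var idx = 3 * (y * tw + x);
    acc[idx]     += r * weight;
    acc[idx + 1] += g * weight;
    acc[idx + 2] += b * weight;
}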
fiddle result : http://jsfiddle.net/gamealchemist/r6aVp/embedded/result/
fiddle itself : http://jsfiddle.net/gamealchemist/r6aVp/
It is quite memory greedy, since a float buffer is required to store the intermediate values of the destination image (-> if we count the result canvas, we use 6 times the source image's memory in this algorithm).
It is also quite expensive, since each source pixel is used whatever the destination size, and we have to pay for getImageData / putImageData, which are quite slow as well.
But there's no way to be faster than processing each source value in this case, and the situation is not that bad: for my 740 * 556 image of a wombat, processing takes between 30 and 40 ms.