An image view and its image do not always have the same size. Often the images we obtain from a web service don't match the size of the image views where they are displayed. Or you might present an image picker controller so the user can take a photo, and then display that photo as a thumbnail before proceeding to the next step.
Consider the following photo:
This is what you get when you display it in a small image view:
The result is grainy and noisy. You could obviously resize the original image and use the smaller version in this small image view instead; that would give you a smoother result. It sounds like a bit of work, though, especially considering that you can obtain an equivalent result with one simple line of code:
imageView.layer.minificationFilter = kCAFilterTrilinear;
And we get:
Now that doesn’t hurt the eyes like the previous image. It looks as smooth as we would like.
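For context, here is a slightly fuller sketch of how you might configure the image view, assuming self.imageView is an existing outlet (the layer properties are real CALayer API; the surrounding setup is illustrative):

#import <QuartzCore/QuartzCore.h>

// Smooth downscaling: filter with mipmaps whenever the layer has to
// shrink its contents.
self.imageView.layer.minificationFilter = kCAFilterTrilinear;

// Magnification is a separate property; kCAFilterLinear is already the
// default and is shown here only for symmetry.
self.imageView.layer.magnificationFilter = kCAFilterLinear;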
Let's discuss how all this works.
To draw an image on the screen, the pixels of the image must be mapped to the pixels of the screen. This process is known as filtering. An image can be filtered in a variety of ways, and as usual the most popular algorithms trade quality for speed. On iOS this filtering is performed by the hardware, which only implements the most basic algorithms; they are usually good enough and very fast.
The simplest one is the nearest filter (kCAFilterNearest). For each pixel on the screen, it simply picks the color of the single image pixel whose center is closest to the center of that screen pixel. The following image shows the same photo scaled down to 32×24 pixels using a nearest filter:
The result is a mess, especially because the original image is much bigger than the resized image.
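To make the algorithm concrete, here is a rough sketch of nearest filtering in plain C, assuming tightly packed 32-bit RGBA buffers (the function and parameter names are illustrative, not an actual API):

#include <stdint.h>

// Nearest filter: each destination pixel copies the single source pixel
// whose center is closest to the destination pixel's center.
void scaleNearest(const uint32_t *src, int srcW, int srcH,
                  uint32_t *dst, int dstW, int dstH)
{
    for (int y = 0; y < dstH; y++) {
        for (int x = 0; x < dstW; x++) {
            int sx = (int)((x + 0.5) * srcW / dstW); // nearest source column
            int sy = (int)((y + 0.5) * srcH / dstH); // nearest source row
            dst[y * dstW + x] = src[sy * srcW + sx];
        }
    }
}

All the source pixels that fall between the sampled ones are simply skipped, which is why heavy downscaling with this filter looks so noisy.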
The default filter on iOS is the bilinear filter (kCAFilterLinear). It selects the 4 image pixels that are closest to the center of the screen pixel and outputs a weighted average of their colors. The article Bilinear Texture Filtering in the Direct3D documentation gives a nice explanation of bilinear filtering. If we scale down the same photo to 32×24 using a bilinear filter, we get this:
The result is very similar to the nearest filter because there is such a big difference between the original image size and the filtered image size. The bilinear filter only does a really nice job when the filtered image is no smaller than about half the size of the original; beyond that, each screen pixel covers more than four image pixels, most of the source pixels are skipped entirely, and the filtered image becomes grainier as it shrinks.
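As a sketch of what happens per sample, here is bilinear interpolation of a single point in plain C, again with illustrative names rather than a real API:

#include <stdint.h>

// Read one 8-bit channel (0 = R ... 3 = A) from a packed RGBA buffer.
static int channelAt(const uint8_t *src, int srcW, int x, int y, int c)
{
    return src[(y * srcW + x) * 4 + c];
}

// Bilinear sample at the continuous source coordinate (fx, fy): blend
// the four surrounding pixels, weighted by how close each one is.
void bilinearSample(const uint8_t *src, int srcW, int srcH,
                    double fx, double fy, uint8_t out[4])
{
    int x0 = (int)fx, y0 = (int)fy;
    int x1 = (x0 + 1 < srcW) ? x0 + 1 : x0; // clamp at the right edge
    int y1 = (y0 + 1 < srcH) ? y0 + 1 : y0; // clamp at the bottom edge
    double tx = fx - x0, ty = fy - y0;      // fractional weights

    for (int c = 0; c < 4; c++) {
        double top    = channelAt(src, srcW, x0, y0, c) * (1 - tx)
                      + channelAt(src, srcW, x1, y0, c) * tx;
        double bottom = channelAt(src, srcW, x0, y1, c) * (1 - tx)
                      + channelAt(src, srcW, x1, y1, c) * tx;
        out[c] = (uint8_t)(top * (1 - ty) + bottom * ty + 0.5);
    }
}

Only four source pixels ever contribute to a sample, which is exactly why the filter breaks down once each screen pixel covers more than a 2×2 block of image pixels.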
The filter this post highlights is the trilinear filter (kCAFilterTrilinear). Before the trilinear filter can be used, a mipmap must be generated for the original image. The mipmap is a sequence of smaller versions of the original image, where each subsequent image has half the width and height of the previous one. The last item in the sequence is a 1×1 image. The mipmap is generated for you by the hardware.
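To get a feel for the chain, this little sketch prints every level's size; the number of levels works out to floor(log2(max(w, h))) + 1:

#include <stdio.h>

// Print each mipmap level: halve the width and height (never below 1)
// until we reach the final 1x1 level.
void printMipmapChain(int w, int h)
{
    int level = 0;
    while (1) {
        printf("level %d: %dx%d\n", level, w, h);
        if (w == 1 && h == 1) break;
        w = w > 1 ? w / 2 : 1;
        h = h > 1 ? h / 2 : 1;
        level++;
    }
}

// printMipmapChain(640, 480) prints 640x480, 320x240, 160x120, 80x60,
// 40x30, 20x15, 10x7, 5x3, 2x1, 1x1.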
The trilinear filter selects the two mipmap levels whose sizes bracket the image view's size, applies a bilinear filter within each of them, and linearly interpolates between the two results. It also makes images look great at all times in image views where you run scaling animations, because there is no visible popping as the scale crosses from one mipmap level to the next. Here is the same photo scaled down to 32×24 using a trilinear filter:
It looks really smooth even for such extreme downscaling.
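Conceptually, trilinear filtering is just two bilinear samples plus one more linear blend along the mip dimension, which is where the "tri" comes from. A rough sketch, reusing the bilinearSample helper from above (the Mipmap struct is illustrative, not a real API):

#include <math.h>
#include <stdint.h>

// Illustrative mipmap chain: levels[i] is a packed RGBA buffer whose
// size is (w >> i) x (h >> i).
typedef struct {
    const uint8_t **levels;
    int w, h;
    int levelCount;
} Mipmap;

// Trilinear sample at the normalized coordinate (u, v) in [0, 1]:
// bilinearly sample the two mip levels that bracket the desired scale,
// then linearly blend the two results. lod is the level of detail,
// roughly log2(source pixels per screen pixel); lod 0 is full size.
void trilinearSample(const Mipmap *mip, double lod, double u, double v,
                     uint8_t out[4])
{
    double clamped = fmax(0, fmin(lod, mip->levelCount - 1));
    int lo = (int)clamped;
    int hi = (lo + 1 < mip->levelCount) ? lo + 1 : lo;
    double t = clamped - lo; // blend factor between the two levels

    int loW = mip->w >> lo, loH = mip->h >> lo;
    int hiW = mip->w >> hi, hiH = mip->h >> hi;
    uint8_t a[4], b[4];
    bilinearSample(mip->levels[lo], loW, loH, u * (loW - 1), v * (loH - 1), a);
    bilinearSample(mip->levels[hi], hiW, hiH, u * (hiW - 1), v * (hiH - 1), b);
    for (int c = 0; c < 4; c++)
        out[c] = (uint8_t)(a[c] * (1 - t) + b[c] * t + 0.5);
}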
Storing mipmaps for your images requires more memory, of course. However, that is usually not a problem: each level has a quarter of the pixels of the previous one, so the whole chain adds only about 1/3 of the original image's size (1/4 + 1/16 + 1/64 + … = 1/3). That only becomes significant if you're working with really large images.
Note that if you're displaying a large image on a small UIButton, you should set the minificationFilter property to kCAFilterTrilinear on the layer of the button's imageView, not on the button's own layer.
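A sketch of that, assuming self.button is an existing button with its image already set:

// The button draws its image in an internal image view, so the filter
// has to go on that view's layer, not on the button's own layer.
self.button.imageView.layer.minificationFilter = kCAFilterTrilinear;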