Defining Image Resolution And Image Dimensions

For beginners in graphic design and computer graphics, the concepts of Image Resolution and Image Dimensions can be confusing. At first, the notion that an image has a resolution does not seem to relate to its dimensions, but the concepts are simple, and the knowledge is quite necessary for graphic designers as well as PHP Programmers. The graphic designer needs to understand these concepts before starting a project, and the PHP Programmer must understand them to automate image manipulation with PHP.


Let’s begin with a review of the definition of “image resolution.” The standard computer monitor has a certain number of dots in the screen. Those dots are little lights, or pinholes allowing light to pass through. A standard CRT (Cathode Ray Tube) fires electron beams at the back of the screen, where they strike layers of phosphor. Each phosphor layer glows a different color when struck. These illuminating layers sit behind a mask of pinholes, which effectively focus the generated light into a somewhat crisp dot of light, which we see on the other side.

The dots of light are pretty close together, and we can state the density of the dots in two dimensions, horizontal and vertical. A standard monitor has roughly 72 dots per inch in each dimension. There are denser monitors with a larger number of dots in each inch of horizontal or vertical measure. Monitors come in many flavors, but they all have a density measured in Dots Per Inch (DPI) or Lines Per Inch (LPI). We will deal with dpi for this article. Whether the monitor is 640 by 480 (old), 1280 by 1024 (typical), or 2560 by 1600 (new), it has a dpi value. The vast majority of web users view all content at 72 dpi.


Since most users will view web content at 72 dpi, we must work with our images at 72 dpi before we offer them for global consumption. When starting a new document in Photoshop, the section for the canvas offers a dpi value, which we’ll set to 72, along with vertical and horizontal measurements to associate with the new document. Our target image for web use is to be 4 inches wide and 3 inches tall (the standard digital photo aspect ratio). ASPECT RATIO is simply the division of the two dimensions, used for various calculations and communication. We can convey dimensions as inches plus resolution, or simply as dots. If we conveyed dots alone, we’d need to know the resolution for programs like Photoshop, but for browsers, it’ll always be 72 dpi.

(4 inches)(72 dots/inch) = 288 dots on the X-Axis
(3 inches)(72 dots/inch) = 216 dots on the Y-Axis
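This arithmetic is easy to script. Here is a minimal Python sketch (the helper name is my own, not from any library):

```python
def inches_to_dots(inches, dpi=72):
    """Convert a physical dimension to a dot count at the given density."""
    return round(inches * dpi)

# A 4" x 3" canvas at the web-standard 72 dpi:
print(inches_to_dots(4))  # 288 dots on the X-Axis
print(inches_to_dots(3))  # 216 dots on the Y-Axis
```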


If we shoot a digital photo with a nice camera, it’ll likely save the image at 240 dpi or 300 dpi, depending on how the camera is configured. When we open a 240 dpi image in Photoshop, it will be displayed at the correct size (inches on the two axes) since Photoshop can interpret the resolution. A 4 inch by 3 inch image (0.75 aspect ratio, “AR”) has the following dot values.

(4 inches)(240 dots/inch) = 960 dots on the X-Axis;
(3 inches)(240 dots/inch) = 720 dots on the Y-Axis

If we simply change the image resolution from 240 dpi to 72 dpi, ignoring the dimensions (width and height), Photoshop will spread the same dots at a lower density, effectively creating larger image dimensions. The 240 dpi image was squeezing the dots close together (240 in an inch); when changed to 72 dpi, the dots float away from each other and the dimensions grow. The exact same dots in the 240 dpi image are being displayed at 72 dpi.

(4 inches)(240 dots/inch)/(72 dots/inch) = (4 inches)(3.333) ~= 13.3 inches on the X-Axis;
(3 inches)(240 dots/inch)/(72 dots/inch) = (3 inches)(3.333) = 10 inches on the Y-Axis
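The same calculation can be sketched in Python (again, the function name is mine): the dot count stays fixed, so the physical dimension scales by the ratio of the old density to the new one.

```python
def new_dimension(inches, old_dpi, new_dpi):
    """Same dots, new density: the physical size scales by old_dpi/new_dpi."""
    dots = inches * old_dpi   # total dots on this axis never changes
    return dots / new_dpi     # inches when those dots are spread at new_dpi

print(round(new_dimension(4, 240, 72), 1))  # 13.3 inches on the X-Axis
print(round(new_dimension(3, 240, 72), 1))  # 10.0 inches on the Y-Axis
```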

Think of the resolution like a box of ball-shaped sponges. Each sponge can get smaller if squished, or expand to a maximum tolerable size. At their largest tolerable size, the sponges sit 72 per inch. If we want to make the sponge density higher, say 240 sponges per inch, we have to squish the sponges closer together. Each sponge has the same color and shape, but they’re closer together. Now, using that squished box of sponges, we let them expand back to 72 sponges per inch. The box is much larger using the exact same number of sponges.

Our images work similarly to the sponges for size and resolution. We can squeeze the same number of dots in an image into a dense image (240 dpi) or into a less dense image (72 dpi). The concept is pretty ordinary once it sinks in. One question arises from this subject, since we know the standard computer monitor displays everything at 72 dpi: how the heck can we display a 240 dpi image on a 72 dpi monitor?


The standard computer monitor is limited to displaying everything at 72 dpi. Whether we get crazy and compress our image dots to 1500 in an inch or expand them to 5 per inch, the monitor is restricted to 72 dots per inch. The computer sends display information to the monitor at 72 dpi regardless of the actual image resolution. Remember, a web browser displays everything at 72 dpi, so we’re really talking about Photoshop at this point. The computer interprets what the image would look like at 72 dpi even though it’s really 240 dpi. When Photoshop “displays the image at 240 dpi,” the computer renders a version of the image as it would be seen at 72 dpi, so the monitor can display it. Photoshop offers two ways to view the image at 100%. There is the ACTUAL PIXELS view, which shows the image dot-for-dot at 72 dpi, and therefore much larger. The PRINT SIZE view shows the image at the size and resolution it is defined with, which will be much smaller. A 72 dpi image would likely go to website use, whereas a 240 dpi or 300 dpi image would go to a print shop, where those extra dots make sense and make a difference.
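The two views are just the same dot count divided by different densities. A hedged Python sketch of that comparison (helper name and the 960-dot example are mine):

```python
def on_screen_inches(dots, view_dpi):
    """Physical width when a given number of dots is laid out at view_dpi."""
    return dots / view_dpi

width_dots = 960  # a 4" wide image stored at 240 dpi holds 960 dots across

# Actual-pixels style view: the 72 dpi monitor shows every dot, so it's large.
print(round(on_screen_inches(width_dots, 72), 1))   # 13.3 inches on screen

# Print-size style view: the image as its own resolution defines it.
print(on_screen_inches(width_dots, 240))            # 4.0 inches
```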


The last subjects to understand here are the concepts of dot interpolation and dot extrapolation. If we use Photoshop to change the resolution, we end up with a different set of dots in the image. If we want to keep our dimensions at 4″ by 3″ but change the resolution from 240 dpi to 72 dpi, we need to move the dots away from each other and throw away the excess dots. This is a fairly understandable method of reduction. But what happens when we change resolution in the other direction, upward? Using the same dimensions, but changing from 72 dpi to 240 dpi, we need to squeeze many more dots into the image. Where do we get the new dots? This is a process of interpolation. Image interpolation requires a method of manufacturing new dots to stick between existing dots, but within the same dimensions, effectively increasing the image resolution. Interpolation is the process of inserting interstitial data by performing calculations on the surrounding data. Extrapolation is the process of creating data beyond the defined data set by performing calculations on the known data. So, interpolation sticks new data inside the image, where extrapolation would put new data outside the image.
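A tiny numeric sketch makes the interpolation/extrapolation distinction concrete. Assuming simple linear estimation between two known dots (the function and values here are illustrative, not any particular Photoshop method):

```python
def lerp(x0, y0, x1, y1, x):
    """Linear estimate of the value y at position x, from two known dots."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Known dots: value 10 at x=0 and value 30 at x=2.
print(lerp(0, 10, 2, 30, 1))  # interpolation: x=1 lies BETWEEN the knowns -> 20.0
print(lerp(0, 10, 2, 30, 3))  # extrapolation: x=3 lies BEYOND them -> 40.0
```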

Image Interpolation

Photoshop has a handful of image interpolation methods, including Bicubic Interpolation, Nearest Neighbor Interpolation, and Bilinear Interpolation. There are other methods and flavors of each. In essence, Photoshop pushes the dots away from each other to get the new resolution, then makes a guess at the most appropriate dot to put between them, so the image looks the same. The illustration below shows how the original (black) dots are pushed apart, and new (red) dots are interpolated and positioned between the original dots. The result is a larger image.

As the image grows larger on a linear scale, the quantity of new (red) dots grows with the square of the scale factor: doubling both dimensions quadruples the dot count. The more the image grows, the more guesswork must be done via interpolation to insert more dots between the original dots. There is a limit to how large an image may grow before it no longer looks like the original, or is too blurry to use. With an understanding of image interpolation and how dots are inserted to allow image dimensions to grow, we can discuss image compression and compression artifacts.
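The simplest of these methods, nearest neighbor, can be sketched in a few lines of Python for a single row of pixels (an illustrative toy, not Photoshop's implementation):

```python
def nearest_neighbor_upscale(row, factor):
    """Grow a 1-D row of pixels by repeating each original dot `factor` times."""
    return [px for px in row for _ in range(factor)]

original = [0, 128, 255]  # three original (black) dots
print(nearest_neighbor_upscale(original, 3))
# 9 dots total: 6 of them are manufactured copies of the original 3
```

In two dimensions the same repetition happens on both axes, which is why scaling by 3 turns 1 original dot into 9, and the guesswork piles up quickly.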


CODECs (COmpressor-DECompressors) define how an image is to be compressed and decompressed. JPEG (Joint Photographic Experts Group) has a defined method for compressing images and a partnered method for decompressing them as well. Other formats take different approaches: BMP (Bitmapped Image) files are typically uncompressed, while GIF (Graphics Interchange Format) and TIFF (Tagged Image File Format) use lossless compression, so their files are larger but no image data is discarded. So, what the heck does a lossy CODEC really do, and what are those artifacts about?

The CODEC defines a method for removing a quantity of the dots from an image so the resulting image file is much smaller. As opposed to the image interpolation above, we are starting with all original (black) pixels. The CODEC does not try to change the image resolution or dimensions. It simply wants to delete dots without destroying the image. Compression and decompression go hand-in-hand, since a compressed image requires a correct method of re-inserting the removed dots and reconstituting the original image. However, the dots that are removed are lost forever, and the decompression method of the CODEC must guess which dots should be re-inserted, similarly to the image interpolation process above.
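A deliberately crude Python sketch of this idea (not the actual JPEG algorithm, which works on frequency data, not individual dots): "compress" by throwing away every other dot, then "decompress" by guessing the missing dots from their surviving neighbors.

```python
def compress(row):
    """Lossy toy compression: keep every other dot; the rest are gone for good."""
    return row[::2]

def decompress(kept):
    """Guess the missing dots back by averaging the surviving neighbors."""
    out = []
    for a, b in zip(kept, kept[1:]):
        out.extend([a, (a + b) // 2])  # the averaged dot is a guess, not the original
    out.append(kept[-1])
    return out

row = [10, 99, 30, 5, 50]
print(decompress(compress(row)))  # [10, 20, 30, 40, 50] -- the spikes 99 and 5 are lost
```

The round trip restores the right number of dots, but detail that lived only in the discarded dots can never come back.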

When you compress a JPEG image to get a smaller file size, you gain bandwidth efficiency but lose image quality. It is a balance between efficiency and quality that each of us must choose given our individual situation. A website on a server with poor bandwidth may dictate significant image compression for the smallest file size possible. Let’s use an overly compressed image to explain image artifacts. Rather than a reasonable quality setting like 60-70%, we apply a compression quality of 20%, which is too much compression for most images.

The concept is that we’re removing 80% of the image data and hoping the CODEC can re-insert the lost image data well enough to look like the original image. But we would find that too much data has been lost and the decompression method has insufficient image data to finish the job. Rather than stop and ask for directions, the CODEC will do the best it can, inserting image data calculated from data that was itself re-inserted. When inserted data sits 3 or 4 layers deep on top of already-inserted data, image artifacts appear that look like chunks of garbage data.

The illustration above shows a single pixel/dot spacing change requiring new (red) dots to be inserted between the original (black) dots. Too much compression is like moving the black dots very far from each other. The decompression method must work progressively from the black dot data, moving inward to fill the huge gap (removed data). The result would be a gradient if there were only one dimension, but multiple dimensions and a variety of surrounding original data cause the new image data to go crazy. Less compression allows sufficient image data to remain for the decompression method to do a good job.

Now, we take these concepts to the simple ZOOM feature in most programs. We want to look at an image much closer, but there is not a lot of data to work with. The program (browser or Photoshop) must perform a method of image interpolation to display the image at larger dimensions on a 72 dpi monitor. When the image is a decompressed JPEG with insufficient data to work with, the ZOOM feature will not be effective, since it cannot interpolate the image well.


When an image has been compressed, like a JPEG at 80% quality, we have removed a portion of the original image data. The decompression method can do a good job re-inserting the lost data such that the image looks pretty close to the original. But if we take that decompressed JPEG and compress it again, we are again removing image data. We don’t have the control to tell the CODEC to Not Remove Original Data and to Only Remove Re-Inserted Data. The CODEC simply treats the image as original and removes both original and re-inserted image data. As this cycle of re-compressing decompressed image data repeats, the CODEC removes more and more of the original data, effectively diluting the original image into a mostly interpolated data set. The result is an image that degrades with each compression pass. The image builds up chunky lumps (artifacts) and iteratively looks less and less like the original. Applying the ZOOM feature to this image would have horrific effects and would make the image compression artifacts much more visible.


Images that have been decompressed and are sufficiently reconstituted can be salvaged and reused. You may have a Photoshop project underway and need an image downloaded from a website, which is low resolution and small in size. Drop the image into the Photoshop document, and expect to do image touch-up work to remove artifacts and correct the image as needed. Many poor images can be corrected, enhanced, and cleaned up for production projects. Your skill with image manipulation programs like Photoshop will come into play and will be required for quality end results. A skilled graphic designer or photo touch-up artist can create great results from almost any overly compressed image.
