Inadequate color vibrancy is a form of color distortion (often described as “washed out” photos – like a newspaper that has sat in the sun for years). A common criticism leveled at digital photography is that it lacks the vibrancy of film, and we agree this criticism is valid. There are two reasons digital photos often lack vibrancy. The first is related to camera manufacturers attempting to overcome the camera’s abbreviated spectrum [link], which is addressed in the section Full Spectrum RGB – Auto Correcting the Abbreviated Spectrum and in this camera limitation. The second cause of insufficient color vibrancy is that digital cameras “second guess” the photo’s exposure after interpolation.
When the camera detects that the photo it has taken is imperfectly exposed, an “if/then” statement in the camera’s software algorithms tells the camera to brighten the image. The result is a photo that lacks color vibrancy and is therefore neither pleasing to the eye nor accurate. Read more to find out how Perfectly Clear automatically corrects this.
Film captures vibrancy
Many photos taken with digital cameras become washed out or lack color vibrancy because the “smarts” of the camera are attempting to overcome a generalized exposure problem.
This is a valid reason for many photographers to hang onto their film cameras. Silver halide is particularly strong at maintaining color vibrancy in images. But then, of course, film cameras didn’t have the opportunity to “second guess” the exposure that had already been recorded on the film!
The digital camera detects one problem and creates another
Your camera accurately detects that the original photo it has captured is generally underexposed. It has identified a legitimate problem. The software contained in the camera’s Digital Signal Processor (DSP) has an “if/then” statement that tells it to brighten the entire image. This is done with a Luminance “enhancement,” an algorithm that brightens the photo by adding white throughout it. White, even in small amounts, reduces the vibrancy of the photo and shifts the original colors arbitrarily. Luminance models the wrong eye function, and its history shows it was never developed for photography. An example of the negative impact of a Luminance correction is shown here.
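This washing-out effect is easy to demonstrate. The sketch below (plain Python using the standard-library colorsys module; the color values are illustrative, not taken from any camera) lifts every channel by a constant amount of white and measures the resulting drop in HSV saturation:

```python
import colorsys

def add_white(rgb, amount):
    """Brighten by adding a constant amount of white to each channel,
    clipping at 1.0 -- a simple stand-in for a global luminance lift."""
    return tuple(min(1.0, c + amount) for c in rgb)

original = (0.6, 0.4, 0.1)            # a warm earth tone, channels in [0, 1]
brightened = add_white(original, 0.3)

h0, s0, v0 = colorsys.rgb_to_hsv(*original)
h1, s1, v1 = colorsys.rgb_to_hsv(*brightened)

# The photo gets brighter, but the saturation (vibrancy) drops.
print(f"saturation before: {s0:.3f}, after: {s1:.3f}")
```

Even though only brightness was “corrected,” the color’s saturation falls, which is exactly the washed-out look described above.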
When Luminance is used to “Enhance” brightness you’re caught in a vicious circle
“Enhancing” a washed out photo engages you in a circle of dependent controls within your software editing tools. For example, you notice your photo lacks vibrancy. It’s washed out from your camera increasing brightness, so you upload it into your photo editing/“enhancement” software to work on it. A natural approach to the apparent lack of vibrancy would be to select a tool within your software that increases color saturation. Today’s software tools apply that saturation increase uniformly across your entire photo. Although this is industry practice, it bears no resemblance to how the eye behaves and gathers light.
The human eye isn’t rigid. Vision itself requires eye movement. As your eyes view a scene, they continually adjust their dynamic range to see things clearly. This means the original image of the event in your mind’s eye contains different levels of saturation. Any proper saturation increase needs to be disproportionate, reflecting how our eyes gather light. A uniform increase in saturation is therefore completely arbitrary and has nothing to do with how your eyes would have responded. Because the increase is applied throughout the photo, it will often also clip data in the darker areas. To combat your software’s unnatural saturation shift, and the resulting color distortion, you may then have to modify the hues. Now you’ve moved from your photo “enhancement” software distorting your colors to doing it yourself, consciously. The distortion builds and cascades, all because your photo software doesn’t model the reality of how the eye works.
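A minimal sketch of this problem, assuming a simple HSV saturation slider (the example pixels are hypothetical): a uniform boost clips a dark, already-saturated pixel at the maximum while barely moving a pale one, so the relationships between colors are distorted.

```python
import colorsys

def boost_saturation(rgb, factor):
    """Uniform saturation scaling in HSV, clipped at 1.0 -- the kind of
    global 'saturation' slider most editors apply to every pixel alike."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, min(1.0, s * factor), v)

shadow_grass = (0.10, 0.30, 0.06)   # dark, already fairly saturated green
pale_sky     = (0.70, 0.80, 0.90)   # light, mildly saturated blue

for name, px in [("shadow grass", shadow_grass), ("pale sky", pale_sky)]:
    s_before = colorsys.rgb_to_hsv(*px)[1]
    s_after  = colorsys.rgb_to_hsv(*boost_saturation(px, 1.5))[1]
    print(f"{name}: saturation {s_before:.2f} -> {s_after:.2f}")
```

The shadow grass hits the ceiling (its detail is clipped) while the sky shifts only slightly, so the two colors no longer relate to each other the way they did in the scene.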
To illustrate these choices, we’ve designed a model that shows the challenges of enhancing a “photo” taken with a single lens and the impact each choice has. The model consists of three blue dots (representing the sky), two green dots (grass), and four brown dots (earth).
Option #1 – Darkness
The first choice is to err in favor of darkness. This makes the darkest dots difficult to see, as shown in our model above. Only two of the blue dots and one of the green dots are properly exposed; the rest have been captured with too little light. It is like grass in the shade – in the real world it appears brilliant, but in the photograph it appears dull.
Option #2 – Brightness
A second option would be to err on the side of brightness. The dots that were dark in the example above will now be brighter, but we’ll lose both the bright dots and the ones that were previously properly exposed. You’ll see the brightest dots have now become white. This is not a true reflection of sky, grass, or earth, and not what the photographer saw in the real world. The results of applying this option are illustrated in the following iteration of our model.
First, to correct for the dark dots, the brightness would be increased. Increasing the brightness affects the whole photo, unless the user brightens each pixel individually, which is not practical. The result is an image that looks faded throughout. Additionally, the color has changed – the previously correctly exposed brown and green dots are now a faded yellow. This is definitely not a true reflection of what the photographer saw in the real world. Note below how our model is faded and the brown and green dots have changed color.
Since the photo is now faded, the second step is to increase the “color” (commonly called saturation) of the photo. This is the same as adjusting the color controls on an old color TV. We’ve now undertaken the process known as color correction, which requires a talented person with knowledge of color. As the image of our model below shows, increasing the saturation makes all the colors vibrant again, but at the sacrifice of true color. In fact, one green dot and one brown dot that were correctly exposed initially are now a brilliant yellow. This isn’t even close to what the photographer saw in the real world.
The third step in color correction could be to use the hue control to try to bring the colors back to their true values. Our model below shows what a hue correction of 50 degrees does. Again, the whole image is affected. One could argue that this correction almost restores the correct color in the sky (blue dots) and the grass (green dots), but the earth (brown dots) has become an olive color. Again, this is not what the photographer saw in the real world.
What else could be done to correct this image? We could start the process over again, changing the different controls forever. It’s our experience that once some of the dots are damaged there’s no way to restore them perfectly, and there is no way of restoring them approximately without damaging the other dots.
If we could build a smart camera that had a lens for every dot, then we could capture each dot in a perfect way – true color and optimal light.
In real life, to see all aspects of a scene more clearly, your iris dynamically adjusts to gather different amounts of light in different parts of the scene. In effect, the light increases throughout the image in differing amounts from location to location, and so does the saturation. The brightness and saturation increases need to vary relative to the darkness or brightness of a given pixel. The human eye automatically increases its dynamic range to see the darker areas more clearly, and does so much less in the brighter areas. The key is that a proper photo correction must improve your photo by emulating how the irises of your eyes constantly adjust; only then will your photo better reflect the original image of the event in the mind’s eye.
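One simple way to picture a darkness-relative lift is a gamma-style curve. This is only an illustration of the idea that dark pixels should gain more light than bright ones, not Perfectly Clear’s patented algorithm:

```python
def adaptive_brighten(v, strength=0.5):
    """Lift a pixel's brightness by an amount that shrinks as the pixel
    gets brighter: dark pixels gain the most, near-white pixels barely
    move.  A simple gamma-style curve; v and the result stay in [0, 1].
    NOTE: an illustrative stand-in, not Perfectly Clear's algorithm."""
    gamma = 1.0 - strength          # strength in (0, 1); 0 = no change
    return v ** gamma

for v in (0.1, 0.5, 0.9):
    lifted = adaptive_brighten(v)
    print(f"input {v:.1f} -> {lifted:.2f} (gain {lifted - v:+.2f})")
```

Unlike the flat white lift shown earlier, nothing clips at white, and the gain tapers off as pixels approach full brightness.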
The problem is that the “vicious circle” results from using a suite of approximate solutions, often involving individual color channels, rather than one “reasonably exact” solution. The challenges the vicious circle presents manifest in having to spend hours learning various software tools, having to spend time applying that knowledge to “enhance” each specific photo, and in your photo moving further and further away from what the eye witnessed of the original event. Because distortion is the order of the day, you’re moving further and further from Photo Accuracy and from matching the original images in the mind’s eye.
There’s an Easy Way Out of this Vicious Circle – it’s automatic and it’s accurate
Perfectly Clear automatically detects a lack of vibrancy in your photos and corrects it by:
• detecting the arbitrary amount of white the camera has added to brighten the image
• removing the white to return the image to its original colors and its original (improper) exposure
• applying Full Spectrum RGB to accurately re-map the colors
• applying Perfect Exposure to improve the exposure in a natural way, as the human eye does, while retaining the accurate colors – the Real Colors
• boosting the colors in a patented manner that emulates how the eye adjusts saturation on a pixel by pixel basis, relative to the brightness of each pixel, thereby maintaining the accurate color relationships
When Perfectly Clear provides manual “tweaking” controls, such as in Perfectly Clear Pro Software, each control is an “independent” control. The power of the Perfectly Clear correction is that it recognizes the truism:
“In reality, saturation doesn’t increase in a constant way when more light is being gathered. In real life, when light increases, the saturation increase varies in a manner which is relative to the darkness or brightness of a given pixel.”
Perfectly Clear honors this truism by adjusting the saturation relative to the light of each pixel, on a pixel-by-pixel basis, with dark pixels receiving a disproportionately larger increase in light and saturation than bright pixels. The Perfectly Clear vibrancy correction is therefore NOT an averaged adjustment (a ‘filter equivalent’) applied to your entire photo.
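As an illustration only – the patented Perfectly Clear correction is not published here – a vibrance-style sketch of a brightness-relative saturation boost might look like this, with darker pixels receiving a larger boost than brighter ones:

```python
import colorsys

def vibrancy_boost(rgb, strength=0.6):
    """Per-pixel saturation boost weighted by how dark the pixel is:
    a vibrance-style sketch of a brightness-relative adjustment, not
    the patented Perfectly Clear correction itself."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    weight = 1.0 - v                              # darker pixel -> larger weight
    s_new = min(1.0, s + strength * weight * (1.0 - s))
    return colorsys.hsv_to_rgb(h, s_new, v)

# Two greens with identical saturation but different brightness.
dark_green  = colorsys.hsv_to_rgb(0.33, 0.40, 0.30)
light_green = colorsys.hsv_to_rgb(0.33, 0.40, 0.90)

for name, px in [("dark", dark_green), ("light", light_green)]:
    s0 = colorsys.rgb_to_hsv(*px)[1]
    s1 = colorsys.rgb_to_hsv(*vibrancy_boost(px))[1]
    print(f"{name} green: saturation {s0:.2f} -> {s1:.2f}")
```

Because the boost scales with darkness, the two greens keep their relationship to each other instead of being pushed by the same flat amount.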
So what exactly do other solutions look like?
As you can see in the examples below, Perfectly Clear brings out the missing color vibrancy, whereas other automatic solutions leave the colors washed out, or make them even worse!
The beautiful and accurate corrections of Perfectly Clear are the sort of thing that “getting the physics right” can do for you.
How does the Vibrancy Correction affect the accuracy of my colors?
The Vibrancy Correction in Perfectly Clear is constrained by the patented invention of Perfect Exposure, which limits light adjustments to the physics principles of how the eye gathers light, ensuring that the correction reproduces photos with true, accurate colors … Real Color.
Perfectly Clear overcomes your camera’s limitation of washing out or reducing the vibrancy of your photos, a very common problem of digital cameras. Unlike other photo “enhancements”, the Perfectly Clear correction operates to add vibrancy in a manner consistent with how the human eye gathers light. This includes replacing the essential missing eye function the camera lacks.
The results are Accurate Photos that match the original image in the mind’s eye and serve to preserve Precious Memories perfectly. Accurate Photos are Superior Photos and science shows superior photos have the greatest emotional impact with viewers.