PART A -- Perfectly Clear Photo Corrections
CHAPTER 1 -- Full Spectrum RGB
Have you noticed how the colors from your digital camera are distorted?
Point your digital camera at a purple object. Compare the image in the viewfinder with the original purple object in front of you. Often the on-screen image will appear blue, different from the purple object itself (a FedEx box or a purple bridesmaid’s dress, for example), as shown in the following figure:
The first step in removing color distortion is to overcome the limited RGB spectrum of your camera. Digital cameras do NOT reproduce colors the way the human eye sees them.
This is because your eyes see this spectrum, while most cameras see this spectrum:
Although the distortion is across the entire color spectrum, it’s particularly noticeable in the greens and in the higher frequencies of indigo, purple and violet, as shown in the photos below.
Before - Very lacking in color and vibrancy
After - Notice how the greens and purples are much more vibrant
Most digital cameras employ an abbreviated color spectrum because of how the RGB color space is defined and implemented. Perfectly Clear provides the only solution for accurate colors in your photos.
The eyes see the colors, so why can’t the cameras?
Your eyes see the full spectrum of visual light. Most cameras see a more limited spectrum, which is one of the 15 Ways Your Camera Distorts Color. This means the colors of your photos are distorted, so they don’t match the colors of the original image of the event in your mind’s eye. Science shows inaccurate photos have low emotional impact because they’re insufficient memory triggers to re-ignite the Precious Memory you’re looking to preserve. So why does your camera see the more limited spectrum and produce the image on the left, when your human eyes see - and Perfectly Clear creates - the spectrum and image on the right?
Simply looking at the images and their two spectra tells you digital cameras are NOT reproducing colors the way the human eye sees them. This is a vitally important observation because we know colors are like an express train into the Precious Memories of the human brain. Science has shown that when you distort color you distort the Precious Memories, which is why Photo Accuracy is so important.
Color is a fundamental part of all Precious Memories we remember
When you and those you’re taking photos for want to re-experience your Precious Memories perfectly, then those memories need to be perfectly preserved.
How can a Precious Memory be perfectly preserved if the colors are changed?
We know that the cones of the human eye respond with precision to the frequencies of specific colors. And each color detected by the cone cells of the human eye creates a very specific electrical and chemical response in the brain.
CIE 1931 Colour matching functions for 2° observer
The image formed and then stored as the original image in the mind’s eye of the human brain is a result of these electrical and chemical responses generating a wave response.
When your camera changes the colors of what you see (the original image of what you see and store in your mind’s eye), and when your photo “enhancement” software distorts the colors of your images, then the Precious Memories represented by your photos will be distorted. Your wife’s favorite sweater is no longer the same color. How should we expect her to respond upon seeing the color of her sweater distorted? What if it’s the color of someone’s eyes that are distorted?
Original Perfectly Clear
Perfect color – Real Color – means a perfect reproduction of the colors you see and seek to capture in your photos. Perfect color means perfect Precious Memories; therefore, Real Color means perfect preservation of Precious Memories. To reproduce and preserve Precious Memories perfectly, the colors seen when the picture is taken have to be reproduced perfectly in your photos. Only then is your photo reproducing the original image of the event in the mind’s eye and having the maximum emotional impact.
Prove Your Camera’s Limited Spectrum to Yourself
We’d be pleased if you’d participate in an experiment with us and point your camera at the screen. What color do you see?
One of two things just happened, either:
- Your camera wrongly rendered the purple woman as blue, or
- Your camera correctly rendered the woman as purple, and wrongly de-saturated the colors so your photos are going to contain less vibrancy than film.
Which outcome you experience is a result of how your camera manufacturer implemented the RGB standard. How do we know this?
All cameras use RGB, and technically that’s a problem for capturing colors accurately
Cameras utilize the RGB color space. In 1931 the RGB color space was defined by the Commission Internationale de l’Eclairage (“CIE”). The CIE definition included negative numbers in the red channel. This does NOT represent negative light; it’s a convenient way of expressing that there are situations where red has to be added to the color being matched, not to the mixture. Red cone cells do actually have a response to high frequencies [in the blue range]. As surprising as this is, this response had to be included in the RGB definition. So how does a camera manufacturer implement negative numbers? They can’t! So the manufacturer has only two choices:
- apply a brute force logic to the RGB definition and set the negative numbers to zero. This abbreviates the spectrum, in which case the purple at the blue end of the camera’s spectrum is rendered as pure blue, and the green/turquoise portion of the spectrum is changed even more significantly (although the novice eye won’t notice it as quickly), or
- adjust the camera’s RGB color spectrum with differing amounts of de-saturation (adding some amounts of white). This is done in a genuine attempt to approximate the natural fuller spectra, but the de-saturation will always be present. De-saturation reduces the vibrancy of your photos. For further information on samples of the de-saturation required to achieve all the hues of the spectrum, please go to http://www.techmind.org/colour/spectra.html.
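To make the two choices concrete, here is a small Python sketch (illustrative only, with invented numbers - not any manufacturer’s actual firmware) of what each option does to a spectral violet whose CIE RGB match would require negative red:

```python
# Illustrative only: two ways a manufacturer can handle a color-matching
# triplet that contains a negative value. The violet triplet is invented.

def clamp_negatives(rgb):
    """Choice 1: brute force -- set negative channel values to zero."""
    return tuple(max(0.0, c) for c in rgb)

def desaturate_into_gamut(rgb):
    """Choice 2: add equal amounts of white until no channel is negative."""
    white = -min(0.0, min(rgb))           # size of the most negative channel
    return tuple(c + white for c in rgb)  # shifts every channel toward white

violet = (-0.10, 0.05, 0.90)   # needs red *subtracted* to be matched

clamp_negatives(violet)        # (0.0, 0.05, 0.9): hue shifted toward blue
desaturate_into_gamut(violet)  # about (0.0, 0.15, 1.0): hue kept, vibrancy lost
```

Either way, information is lost: clamping shifts the hue toward blue, while de-saturating keeps the hue but washes it out.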
Tribeca Imaging Laboratories discovered the limited spectrum of the camera, which occurs when the camera manufacturer slaves the negative numbers to zero. The discovery happened at Cornell University, where they were working to digitally preserve Cornell’s precious artworks. Working within a controlled environment, they discovered that despite perfect lighting and access to all varieties of digital cameras, they couldn’t reproduce the actual colors of the artworks as seen by the human eye. Tribeca overcame this challenge by designing a rigorous empirical calibration methodology, which is patent-pending.
When this calibration methodology, called Full Spectrum RGB, is encoded in software it automatically re-maps the limited color spectrum from your photo (as recorded by your camera) back into what was seen by the human eye.
This remapping is the rigor required for Real Color.
Full Spectrum Colour (Tribeca) Spectra
Full Spectrum RGB delivers you photos with real life vibrancy
Application of Full Spectrum RGB technology reproduces your photos with the vibrant colors you see (original image) at the time you take the picture. This is going to include the many complex greens and even those high frequency difficult hues such as the deep indigo, violets and purples.
So What’s the Essence of Full Spectrum RGB?
Full Spectrum RGB is a digital color model based upon the full color spectrum of natural light.
Full Spectrum RGB uses digital light to simulate the dynamic nature and depth of the component colors of daylight.
How are the colors of daylight different than the colors of digital?
You use your digital camera to record light and your monitor displays images using light. But the light you record with your camera and the light created by your monitor are different. They have different component colors, and the component colors of the two systems interact differently. Daylight contains all the visible wavelengths of illumination. The human eye perceives discrete wavelengths as colors. We commonly classify the component colors of the visible spectrum as red, orange, yellow, green, blue, indigo and violet. Digital light simulates white light with only three individual wavelengths of color: red, green and blue (which is why the digital color model is referred to as RGB). The differences between daylight and digital light can be seen in the purity of their respective component colors.
How Does Full Spectrum RGB Work?
Full Spectrum RGB imparts the behavior of the component colors of daylight to the colors of RGB, modulating them relative to their intensity so that they are deeper, richer, and more life-like. It does this by automatically re-mapping an empirically derived full spectrum onto your camera’s abbreviated spectrum and then uses the Full Spectrum RGB to re-map the colors of your photos congruently. The technology is based upon an empirical analysis and synthesis of the limited RGB spectrum of 17 major camera brands. A specific mapping for your camera would be even more accurate.
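As a rough sketch of what such a per-pixel re-mapping could look like in code (the calibration numbers below are invented for illustration; the actual Full Spectrum RGB calibration data is empirical and proprietary), a lookup table pairing camera-recorded hues with perceived hues can be interpolated:

```python
import bisect

# Invented calibration table, for illustration only: pairs of (hue as
# the camera records it, hue as the eye would see it), in degrees.
CAMERA_HUE = [0, 60, 120, 180, 240, 300, 360]
TRUE_HUE   = [0, 55, 115, 185, 250, 285, 360]

def remap_hue(h):
    """Piecewise-linear interpolation of the calibration table, applied
    per pixel to move recorded hues back toward perceived hues."""
    i = min(bisect.bisect_right(CAMERA_HUE, h) - 1, len(CAMERA_HUE) - 2)
    x0, x1 = CAMERA_HUE[i], CAMERA_HUE[i + 1]
    y0, y1 = TRUE_HUE[i], TRUE_HUE[i + 1]
    return y0 + (y1 - y0) * (h - x0) / (x1 - x0)
```

This is only a one-dimensional toy; a real empirical calibration would involve far more than hue alone.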
Perfectly Clear incorporates Full Spectrum RGB for real color fidelity
Perfectly Clear incorporates Full Spectrum RGB to give your photos the benefit of the fidelity of all the colors that were present at the time of capture, an imperative to accurately reproduce the colors of your photos. Real Color.
Original Perfectly Clear
But what if the camera manufacturer didn’t slave the negative numbers to zero?
Of course, it’s possible your camera manufacturer chose not to slave the negative numbers to zero, in which case they will have added white to the spectrum to simulate a full spectrum. This is overviewed in the discussion of this camera limitation. The result is that photos from such cameras lack crisp vibrancy. Photo colors can also lack vibrancy when the camera tries to overcome an exposure problem, another camera limitation. In either of these cases the solution wouldn’t be Full Spectrum RGB, but rather Perfectly Clear’s Vibrancy correction.
Delivering you accurate photos with optimum exposure, detail preservation and correct colors… Real Color, is absolutely necessary to reproduce your photos to match the original images stored in the mind’s eyes of the photographer and participants.
Accurate Photos are Superior Photos that preserve Precious Memories better and have the greatest emotional impact on the viewer.
CHAPTER 2 -- Perfect Exposure
Improper exposure in any portion of your photos results in color distortion and disappointment.
It’s probably fair to say that the biggest challenge in photography is trying to get perfect exposure throughout every aspect of your entire photo. This is because of the camera’s inability to reproduce what the human eye sees. The human eye is dynamically adjusting for exposure – whereas the camera only contains a single aperture.
Even in a great photo, some portion will be improperly exposed relative to what the eye sees. Inaccurate exposure means inaccurate lighting - and inaccurate lighting means distorted colors.
Compounding this is the traditional method used to increase brightness - adjusting Luminance – which guarantees even further color distortion.
Perfectly Clear is the only solution that emulates in your photos the essential and correct missing eye function the camera lacks, while rigorously adhering to the physics principles of light. The result is Superior Photos and superior emotional impact on those who view your photos.
Proper Exposure is the Largest Challenge in Photography
Statistical studies have shown that as many as 90% of all photos taken have some sort of exposure challenge. Is this because the photographer taking the pictures is incompetent? Absolutely not. It’s simply a result of one of the 15 Ways Your Camera Distorts Color.
How the camera’s Light Gathering Compromises Exposure
The challenge with exposure is intrinsic to camera design and has been from the beginning of photography. There’ll always be endemic exposure challenges as long as cameras have a single aperture and shutter settings. It’s impossible for a single aperture to open and close in a precise instant and properly expose everything in a photo. Photos include subjects and objects at different focal distances, each often exposed to different amounts of light. Even the character of the light they’re exposed to can differ. How can a single aperture with a single shutter speed accurately represent this diverse information?
The camera operator, knowing the limitation of the single aperture, focuses on a particular subject or object in the photo and selects a unique shutter setting.
Although this is the correct practice, the result is that some portion of the photo will be properly exposed and other parts will not. So exposure in photography, with a digital or film camera, is a compromise.
This is the problem
The insurmountable challenge for the camera is that a single aperture means the camera has a fixed and restricted dynamic range. But the advantage of digital over film is that poor exposure can be overcome after the fact.
Your eye gathers light perfectly and so does Perfectly Clear
The iris - the eye’s equivalent of the aperture - is constantly adjusting itself to optimally expose the object that has caught your interest (the current subject). As you look around, your irises adjust to optimally expose whatever catches your interest. This dynamic adjustment of your eyes’ dynamic range means everything that catches your focus is recorded with the equivalent of a good exposure setting.
These continual adjustments are why your eyes see an event differently than your camera does. Your eyes see the event over a period of time, literally sending thousands of images to the brain to render a composite - and thorough - original image of the original event in the mind’s eye. In effect, unlike the fixed dynamic range of the camera, your eye is continually adjusting its dynamic range to see everything clearly.
Perfect Exposure for a Superior Photo means optimal light intensity, or, in other words, optimal exposure, in each and every pixel. A Perfect Exposure photo correction requires:
- optimally increasing the light intensity in every pixel in a manner consistent with the physics principles of how the human eye gathers light so all color attributes are responding to the natural attributes of increasing light, and
- preservation of Real Color, the original hue of all of the colors in the photo with a patented approach to the RGB Triplet, which will best match the colors of the original event.
To achieve these two principles requires an emulation in your photos of the essential and correct missing eye function your camera lacks.
What if you had a lens emulated at every pixel as Perfectly Clear does as shown in this figure?
“This is the patented solution for optimum light gathering”
This patented innovation of emulating a lens at every pixel is an enabler for the solution to analyze light at the pixel level and is a critical piece of Perfectly Clear’s Perfect Exposure. What magic is the solution doing at the pixel level? It’s achieving the two key components necessary to meet the two objectives set out above. Firstly, it’s maintaining the original hue of the color on a pixel by pixel basis. That is, it’s maintaining the hue of each pixel’s location within the CIE color space, by applying a patented RGB Triplet innovation. This is an integral aspect of accurate color and is covered fully in the section on Real Color. Secondly, it’s addressing the challenge of how color is affected as more light is gathered. Fundamentally how light is added to a photo, and the impact it has on color is the single most important issue in photography. To get the accurate outcome Perfectly Clear is modeling the logarithmic function of how the eye gathers light on a pixel by pixel basis, the whole time adhering to the physics principles of incoherent light. Taken together, these produce optimal lighting in every pixel.
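As a hedged illustration of these two components (this is a minimal sketch of the ideas, not the patented algorithm, and the curve and its strength parameter are invented): brightness can be lifted with a logarithmic curve so dark pixels gain more light than bright ones, and the whole RGB triplet can be scaled by a single factor so the ratios between channels - the hue - are preserved:

```python
import math

def perceptual_gain(v, strength=0.5):
    """Log-style response curve (invented for this sketch): lifts shadows
    far more than highlights, loosely mimicking the eye's logarithmic
    response to light. v is pixel intensity in [0, 1]."""
    return math.log1p(strength * 4 * v) / math.log1p(strength * 4)

def expose_pixel(rgb, strength=0.5):
    """Scale the whole RGB triplet by one factor so the ratios between
    R, G and B (the hue) are preserved while the brightness increases."""
    v = max(rgb) / 255.0 or 1e-6           # pixel intensity in [0, 1]
    gain = perceptual_gain(v, strength) / v
    return tuple(min(255, round(c * gain)) for c in rgb)

# A dark pixel gets a large boost; a bright pixel barely changes, and in
# both cases the channel ratios (the hue) stay essentially the same.
```

Scaling the triplet uniformly, rather than pushing channels independently, is what keeps each pixel at the same location in hue terms while its light level rises.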
The result is a high degree of Photo Accuracy so that your photos are beautiful and match the original image in the mind’s eye. These have been found to be the best memory triggers of the original Precious Memory. Taken together these actions effectively reproduce in your photos the essential and correct missing eye functions your camera lacks. A Superior Photo has the greatest emotional impact on the viewer.
What Others Do
The common “enhancement” methodologies for addressing exposure/brightness challenges - adding white, changing Luminance, dodging and burning, and memory colors - result in clipping, arbitrary color shifts, washed out photos and distortions. This example shows the original corrected with a Luminance correction (notice how it’s washed out) and then with Perfectly Clear bringing out the vivid colors with proper exposure.
Some systems resort to memory colors to overcome these obvious shortcomings, but even the software database containing these memory colors frequently fails. This is ironic because they’re supposed to create blue skies and green foliage regardless of the exact hue in the original scene. However, memory colors often turn the blue sky and the underexposed blue water to grey, whereas Perfectly Clear will maintain the vividness of the blue sky and water. In the example below, notice how Perfectly Clear brings out the blue water and the whites in the sails and sailboat as they appeared that day, whereas the memory approach keeps everything very dull and grey.
A further challenge with memory colors and fuzzy logic is that, although they’re supposed to be like “paint by numbers”, they often don’t get the numbers correct! You’ll note below how, in an attempt to turn the sky a darker blue, a large portion of the sky close to the house and the leaves is missed. This creates a glowing “transition zone.”
These distortions occur because these other methods either don’t model any eye function or, if they do, they model the wrong eye function. Learn more about how today’s editing and “enhancement” software takes your Camera’s Distortions and adds “Enhancement” Software Distortions.
Other “Enhancement” methods of increasing brightness distort colors: how does a Perfect Exposure correction measure up?
Perfectly Clear overcomes your camera’s limitation of poor exposure and lighting in your photos, a very common problem of digital cameras. Unlike other photo “enhancements”, the Perfectly Clear correction operates to add light in a natural manner consistent with how the human eye gathers light. This is achieved with a series of algorithms that replace the essential missing eye function the camera lacks. The results are Accurate Photos that match the original image in the mind’s eye and serve to preserve Precious Memories perfectly. Accurate Photos are Superior Photos and science shows superior photos have the greatest emotional impact with viewers.
The Foundation of Real Color
Combining Real Color with Perfect Exposure reproduces photos with true, accurate colors and optimum exposure throughout. By correcting two major camera limitations [link], significant color distortions [link] are corrected and your photos better match the original image in the mind’s eye. Photo Accuracy [link] demands Real Color.
CHAPTER 3 -- Vibrancy Correction
Inadequate color vibrancy is a form of color distortion (also referred to as “washed out” photos - like a newspaper that has been out in the sun for many years). A common criticism leveled at digital photography is that it lacks the vibrancy of film. We agree that this is a valid criticism. There are two reasons digital photos often lack vibrancy. One is related to camera manufacturers attempting to overcome the camera’s abbreviated spectrum [link], which is addressed in the section Full Spectrum RGB - Auto Correcting the Abbreviated Spectrum and in this camera limitation. The second cause of insufficient color vibrancy in digital photos is related to digital cameras “second guessing” the photo’s exposure post interpolation.
When the camera detects the photo it has taken is imperfectly exposed, an “if/then” statement in the camera’s software algorithms tells the camera to brighten the image. The results are photos that are lacking in color vibrancy and thus not eye pleasing or accurate. Read more to find out how Perfectly Clear automatically corrects this.
Original Perfectly Clear
Film captures vibrancy
Many photos taken with digital cameras become washed out or lack color vibrancy because the “smarts” of the camera are attempting to overcome a generalized exposure problem.
This is a valid reason for many photographers to hang onto their film cameras. Silver halide is particularly strong at maintaining color vibrancy in images. But then, of course, film cameras didn’t have the opportunity to “second guess” the exposure that had already been recorded on the film!
The digital camera detects one problem and creates another
Your camera accurately detects that the original photo it’s captured is generally underexposed. It’s identified a legitimate problem. The software contained in the camera’s Digital Signal Processor (DSP) has an “if/then” statement that tells it to brighten the entire image. This is done with a Luminance “enhancement”, an algorithm that brightens the photo by adding white throughout. White, even in small amounts, reduces the vibrancy of the photo and shifts the original colors arbitrarily. Luminance models the wrong eye function, and the history of Luminance shows it wasn’t developed for photography. An example of the negative impact of a Luminance correction is shown here.
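The effect of “adding white” is easy to verify numerically. In this sketch (plain Python with the standard colorsys module; the red triplet and the amount of white are invented for the example), brightening a deep red by adding white raises its brightness but measurably lowers its saturation - the wash-out described above:

```python
import colorsys

def add_white(rgb, amount=0.2):
    """A plain 'brighten by adding white' step: every channel is raised
    by the same amount, the way a simple Luminance boost behaves."""
    return tuple(min(1.0, c + amount) for c in rgb)

deep_red = (0.8, 0.1, 0.1)
h0, s0, v0 = colorsys.rgb_to_hsv(*deep_red)
h1, s1, v1 = colorsys.rgb_to_hsv(*add_white(deep_red))
# v1 > v0 (brighter) but s1 < s0 (less saturated): the washed-out look
```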
When Luminance is used to “Enhance” brightness you’re caught in a vicious circle
“Enhancing” a washed out photo engages you in a circle of dependent controls within your software editing tools. For example, you notice your photo lacks vibrancy. It’s washed out from your camera increasing brightness, so you upload it into your photo editing/“enhancement” software to work on it. A natural approach to addressing the apparent lack of vibrancy would be to select a tool within your software to increase color saturation. Today’s software tools apply the saturation increase throughout your entire photo. Although this is industry practice, it’s inconsistent with how the eye behaves and how the eye gathers light.
The human eye isn’t rigid. Vision itself requires eye movement. As your eyes view a scene, they’re continually adjusting their dynamic range to see things clearly. This means the original image of the event in your mind’s eye has discerned different levels of saturation. Any proper saturation increase needs to be disproportionate to reflect how our eyes gather light. Any general increase in saturation is therefore completely arbitrary and has nothing to do with how your eyes would have responded. Because the increase is applied throughout the photo, it’ll often also lead to clipping of data in the darker areas of your photo. To combat your software’s unnatural saturation shift, and the resulting color distortion, you may have to try to modify the hues. Now you’ve moved from your photo “enhancement” software distorting your colors to you doing it consciously. The distortion builds and cascades, all because your photo software doesn’t model the reality of how the eye works.
To illustrate these choices, we’ve designed a model that shows the challenges of enhancing a “photo” taken with a single lens and the impact each choice has. The model consists of three blue dots (representing the sky), two green dots (grass) and four brown dots (earth).
Option #1 – Darkness
The first choice is to err in favor of darkness. This results in the darkest dots being difficult to see, as shown in our model above. Only two of the blue dots and one of the green dots are properly exposed. The rest have been captured with too little light. It’s like grass in the shade – in the real world it appears brilliant, but in the photograph it appears dull.
Option #2 – Brightness
A second option would be to err on the side of brightness. The dots that were dark in the above example will now be brighter, but we’ll lose both the bright dots and the ones that were previously properly exposed. You’ll see the brightest dots have now become white. This is not a true reflection of sky, grass, or earth, and not what the photographer saw in the real world. The results of applying this option are illustrated in the following iteration of our model.
Firstly, in order to correct for the dark dots, the brightness would be increased. Increasing the brightness affects the whole photo, unless the user brightens each pixel individually, which is not practical. The result is an image that looks faded. Additionally, the color has changed – the previously correctly exposed brown and green dots are now a faded yellow. This definitely is not a true reflection of what the photographer saw in the real world. Note below how our model is faded and the brown and green dots are changing color.
Since the photo is now faded, the second step is to increase the “color” (commonly called saturation) of the photo. This is the same as adjusting the color controls on old color TVs. We’ve now undertaken the process known as color correction, a process that requires a talented person with knowledge of color. As the image of our model below shows, increasing the saturation makes all the colors vibrant again, but this has sacrificed true color. In fact, one green and one brown dot that were correctly exposed initially are now a brilliant yellow. This isn’t even close to what the photographer saw in the real world.
The third step in color correction could be to use the hue control to try to bring the colors back to their true color. Our model below shows what a hue correction of 50 degrees does. Again, the whole image is affected. One could argue that this correction almost brings back the correct color in the sky (blue dots) and the grass (green dots), but the earth (brown dots) is now an olive color. Again, this is not what the photographer saw in the real world.
What else could be done to correct this image? We could start the process over again, changing the different controls forever. It’s our experience that once some of the dots are damaged there’s no way to restore them perfectly, and no way to restore them approximately without damaging the other dots.
If we could build a smart camera that had a lens for every dot, then we could capture each dot in a perfect way – true color and optimal exposure.
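The dot model can also be replayed numerically. This toy Python sketch (invented dot colors and slider amounts, using global HSV moves as common editors do) shows a correctly exposed brown dot being dragged toward olive by the brighten, then saturate, then shift-hue sequence:

```python
import colorsys

def edit(rgb, dv=0.0, ds=0.0, dh=0.0):
    """One global slider move (brightness, saturation, or hue in degrees),
    applied to every dot alike -- the toy version of the tools above."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + dh / 360.0) % 1.0
    s = min(1.0, max(0.0, s + ds))
    v = min(1.0, max(0.0, v + dv))
    return colorsys.hsv_to_rgb(h, s, v)

brown = (0.55, 0.35, 0.15)    # a correctly exposed earth dot (hue ~30 degrees)
step1 = edit(brown, dv=0.3)   # brighten everything: the dot fades
step2 = edit(step1, ds=0.3)   # re-saturate everything: now too vivid
step3 = edit(step2, dh=50.0)  # rotate hue to "fix" the sky: ~80 degrees, olive
# Each global move dragged this originally correct dot further from the
# color the photographer actually saw.
```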
In real life, to see all aspects of a scene more clearly, your iris is dynamically adjusting to gather different amounts of light in different parts of the scene. In effect, the light is increasing throughout the image in differing amounts from location to location, and so is the saturation. The brightness and saturation increases need to vary relative to the darkness or brightness of a given pixel. The human eye will automatically increase its dynamic range to see the darker areas more clearly, and much less in the brighter areas. The key is that a proper photo correction must improve your photo by emulating how the irises of your eyes are constantly adjusting; only then will your photo better reflect the original image of the event in the mind’s eye.
The problem is that the “vicious circle” results from using a suite of approximate solutions, often involving individual color channels, as opposed to one “reasonably exact” solution. The challenges the vicious circle presents manifest in having to spend hours learning various software tools, having to spend time applying that knowledge to “enhance” each specific photo, and your photo moving further and further from what the eye witnessed of the original event. Because distortion is the order of the day, you’re moving further and further from Photo Accuracy and from matching the original images in the mind’s eye.
There’s an Easy Way Out of this Vicious Circle - it’s automatic and it’s accurate
Perfectly Clear automatically detects a lack of vibrancy in your photos and corrects it by:
- detecting the arbitrary amount of white the camera has added to brighten the image,
- removing the white to return the image to its original colors (and its original, improper exposure),
- applying Full Spectrum RGB to accurately re-map the colors,
- applying Perfect Exposure to improve the exposure in a natural way, like the human eye, while retaining the accurate color, the Real Colors, and
- boosting the colors in a patented manner that emulates how the eye adjusts saturation on a pixel by pixel basis, relative to the brightness of each pixel, thereby maintaining the accurate color relationships.
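A minimal sketch of the last step’s idea - assuming a simple HSV model and an invented strength parameter, not the patented implementation - scales each pixel’s saturation boost by how dark that pixel is:

```python
import colorsys

def vibrancy(rgb, strength=0.4):
    """Hypothetical per-pixel vibrancy: the saturation boost scales with
    how dark the pixel is, instead of one flat boost for the whole
    photo. The hue is left untouched."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    boost = strength * (1.0 - v)    # darker pixel -> bigger boost
    return colorsys.hsv_to_rgb(h, min(1.0, s + boost), v)
```

In this sketch a dark pixel receives a much larger saturation increase than a bright one, so the adjustment is never a single flat filter over the entire image.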
Original Perfectly Clear
When Perfectly Clear provides manual “tweaking” controls, such as in Perfectly Clear Pro Software, the control is an “independent” control. The power of the Perfectly Clear correction is that it recognizes the truism:
“In reality, saturation doesn’t increase in a constant way when more light is being gathered. In real life, when light increases, the saturation increase varies in a manner which is relative to the darkness or brightness of a given pixel.”
This honors the truism by adjusting the saturation of each pixel relative to its light, on a pixel by pixel basis, with dark pixels receiving disproportionately larger increases in light and saturation than bright pixels. Therefore, the Perfectly Clear vibrancy correction is NOT an averaged adjustment (a ‘filter equivalent’) applied to your entire photo.
So what exactly do other solutions look like?
As you can see in the examples below, Perfectly Clear brings out the missing color vibrancy, whereas other automatic solutions keep the colors washed out, or make them even worse!
The beautiful and accurate corrections of Perfectly Clear are the sort of thing that “getting the physics right” can do for you.
Original Competitor Perfectly Clear
How does the Vibrancy Correction affect the accuracy of my colors?
The Vibrancy Correction in Perfectly Clear is constrained by the patented invention of Perfect Exposure [link], which limits light adjustments to the physics principles of how the eye gathers light, ensuring that the correction reproduces photos with true, accurate colors … Real Color.
Perfectly Clear overcomes your camera’s limitation of washing out or reducing the vibrancy of your photos, a very common problem of digital cameras. Unlike other photo “enhancements”, the Perfectly Clear correction operates to add vibrancy in a manner consistent with how the human eye gathers light. This includes replacing the essential missing eye function the camera lacks.
The results are Accurate Photos that match the original image in the mind’s eye and serve to preserve Precious Memories perfectly. Accurate Photos are Superior Photos and science shows superior photos have the greatest emotional impact with viewers.
CHAPTER 4 -- Tint Correction and White Balance
Abnormal tints destroy the colors of your photos and result from a camera limitation associated with white balancing.
With all the high tech science in today’s digital cameras, how is it you still sometimes get these abnormal tints? How come the colors in your photos aren’t true to what you see when taking the image?
Light is fundamental to photography, and, of course, the character and quality of your photo is proportional to the character and quality of the light being captured. In today’s digital world, the character and quality of the source light captured is automatically determined by your camera.
The challenge is that your camera can be exposed to multiple light sources simultaneously. In this situation, how does your camera choose which primary source light it should use to compute its white balance? If it chooses incorrectly, your photo will most certainly not match the photo opportunity you set out to capture.
There’s a solution.
How does your camera generate an Abnormal Tint in your photos?
Digital cameras have an advantage over film because they can compute a digital white balance. What this means is they determine the primary source light for your photos and use the white color in your photo to compute a color balance to match the light source they’ve chosen. However, when your camera incorrectly identifies the light source for your photo, color distortion is guaranteed. If the incorrectly identified light source results in a significant color shift, the severe color shift is called an abnormal tint.
Lighting and the theory of Black Body Radiation
After the camera has determined the primary source of lighting, what science does it use to compute its white balance? White balancing is based upon applying the principles of Black Body Radiation theory. Black Body Radiation theory postulates that color frequencies can be exactly matched to a specific temperature. This can be a camera limitation that results in distorted color so we cover the subject fully here: 15 Ways Your Camera Distorts Color: Camera Automatic Function of White Balancing. In short, the accuracy of the color in your photos is dependent upon your camera assigning an accurate temperature representative of the qualities of the type of light in your photo. The accuracy of the temperature is important because all colors in your photo will be rebalanced by your camera using what it selects as the reference temperature for the source light. A camera is challenged with a significant number of variables when a picture is being taken but none greater than distinguishing the source of the light present.
How your camera determines the Source Light of your photos
To determine what type of light source is present in your photo the software of your camera searches your image for some representative white (or gray) color.
When such a color is identified:
- the temperature of the light for that white is estimated,
- the identified white is assumed to be pure white and its temperature is then recast at 6500 kelvin (6500 K), presumably because white is considered pure white at this temperature,
- differential calculations for shifting the representative white (estimated color temperature) to pure white at 6500K are then applied to “rebalance” all colors in your entire photo.
The rebalancing of all colors from a known white is consistent with Black Body Radiation theory, because all colors will be on the same temperature curve - see 15 Ways Your Camera Distorts Color: Camera’s Automatic Function of White Balancing.
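The three-step rebalancing described above can be sketched in a few lines of code. This is an illustrative simplification, not any camera's actual firmware; the `white_balance` function, the pixel values, and the choice of averaging the reference white's channels are all assumptions made for the example.

```python
# Illustrative sketch (not a camera's actual firmware): the three white
# balance steps above, reduced to simple per-channel scaling.
# A pixel is an (R, G, B) tuple of 0-255 values.

def white_balance(image, reference_white):
    """Rebalance every pixel so the chosen reference white becomes neutral.

    `reference_white` is the (R, G, B) the camera picked as its
    "representative white"; after rebalancing it maps to equal channels,
    i.e. pure white under the 6500 K assumption described above.
    """
    r_w, g_w, b_w = reference_white
    # Steps 1-2: compute per-channel gains that recast the reference
    # white as neutral (equal R, G, B).
    target = (r_w + g_w + b_w) / 3.0
    gains = (target / r_w, target / g_w, target / b_w)
    # Step 3: apply the same differential shift to every pixel in the photo.
    balanced = []
    for (r, g, b) in image:
        balanced.append((
            min(255, round(r * gains[0])),
            min(255, round(g * gains[1])),
            min(255, round(b * gains[2])),
        ))
    return balanced

# A warm (tungsten-tinted) "white" gets pulled back to neutral,
# and every other pixel in the photo shifts along with it:
photo = [(250, 220, 180), (120, 100, 80)]
print(white_balance(photo, reference_white=(250, 220, 180)))
```

Note that if the chosen reference white is not actually white, the same gains still get applied everywhere, which is exactly how an abnormal tint arises.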
There are however at least four challenges with this approach:
- Often the camera’s algorithm selects a non-representative white and then re-balances all the colors in the photo based upon the temperature of this non-representative white;
- sometimes there’s no white in the photo and the algorithm selects a non-white color to determine the temperature from which to re-balance the photo;
- sometimes the camera simply selects the brightest color value as white; and
- even if the algorithm selects an appropriate white from which to balance some portion of the photo, the re-balancing is applied throughout the entire photo.
…these challenges, as stated here, really oversimplify the reality the camera is facing.
There can be a multitude of sources of lighting for a photo
The quality and sources of light present in your photos can vary widely: ranging across a broad spectrum from direct sunlight, skylight, and cloudy situations, to moonlight and artificial lighting conditions such as fluorescent and tungsten lighting and more. Most importantly, there’s often several different light sources present when a photo is being taken. A night photo could have, for example, a combination of moonlight, sodium vapor street lamps and an automatic flash. Which temperature does your camera choose as the representative light source?
Or imagine a sunset in the mountains where a mountain casts a long shadow across a huge expanse of trees. In this case:
- you’ve got reddish light from the sun which is represented by one temperature, being a low Kelvin temperature (although we think of the color as warm),
- you’ve got a greenish light from the trees which could be represented by a mid-level Kelvin temperature, and
- you’ve got a third temperature of light from the shadows, being a high Kelvin temperature representing the blue end of the spectrum (a psychologically cool color).
Which temperature does your camera choose as the representative light source?
In the previous two examples there’s really no one correct answer because there can be no single temperature representative of the multiple sources of light present in these photos. So even if Black Body Radiation Theory were a perfect solution for White Balancing, no camera could ever pick a single perfect temperature curve to represent multiple sources of light in your photos.
… and then there’s the fact that not all sources of light constitute Black Body Radiation
Artificial “man made” lights [such as fluorescent and various lighting sources in street lamps such as sodium vapor] discharge light in narrow spectral bands. These light sources tend to be incomplete spectrums that have color “spikes” so the color captured by your camera’s digital sensors won’t be predictable or controllable.
These types of light sources aren’t Black Body Radiation at all.
Because these challenges are known your camera often gives you a number of options to choose from.
Cameras generally give you different approaches to light source
Today’s cameras usually give you three broad ways to manage the white balance challenge, i.e. the determination of the temperature of the representative lighting of your photo:
- Automatic White Balance - here the camera is going to determine automatically what it believes is the temperature of the representative light. Given all the variables this often works surprisingly well but we’ll articulate below why it’ll seldom result in an accurate reproduction of the colors you saw at the time.
- Pre-set White Balance - this option enables you to choose a specific setting that you believe will accurately reflect the light present. These options usually include sunny, cloudy, shadows, tungsten, fluorescent, flash and night time. When you make your selection you are, in actuality, making a manual selection of a single Black Body temperature curve. Your selection will seldom match the circumstances exactly.
- Custom Setting - this option enables you to use the camera to custom create a white balance. This is normally done by taking a photo of something gray in the given light conditions which will be used later as a reference.
Difficulty in accurately determining the source light of your photos leads to Inaccurate Colors
Achieving perfectly accurate colors in your photos is a significant challenge because, as noted:
- there may be more than a single light source in your photos, in which case your camera faces a Hobson’s choice,
- due to the complexity of the possibilities your camera may simply make the wrong choice, or
- your light source may be non-Black-Body and not fit the theory.
When the error in light source selection is large your camera will “rebalance” all of the colors in a drastic and awful way. In effect your camera generates what we refer to as an abnormal tint. What follows are samples of different lighting conditions and the photos the cameras produced, all exhibiting abnormal tint conditions. You will also notice how Perfectly Clear automatically corrected them.
Original Perfectly Clear
Original Perfectly Clear
Original Perfectly Clear
Original Perfectly Clear
Having examined in depth thousands of images, Athentech Imaging’s conclusion is that the majority of abnormally tinted images are the result of your camera sensors capturing the image accurately and the Digital Sensor Processor [which contains a number of sophisticated algorithms] making a poor choice of “white” for balancing the image.
This results in the camera adding a significant color shift which manifests as a “color cast” or “abnormal tint” as shown in the foregoing photos. Please note that this can happen even if you, the operator, have appropriately selected a specific camera “white balance” setting, e.g. tungsten lighting.
Perfectly Clear goes beyond white balance to remove egregious Color Casts
To remove egregious color casts and better reflect the colors you saw at the time of shooting, Perfectly Clear incorporates a new patent pending approach to automatically identify an abnormal tint or color cast. Once identified, Perfectly Clear seeks to apply a more representative temperature of the light present for more accurate color reproduction. The results can be extremely good color corrections, all achieved by automatically detecting and removing the abnormal tint. Please note that the approach shown above is designed to search out and remove egregious color casts only. It has no dependence on white to function. It’s not a general replacement or substitute for the need for White Balancing. White Balancing, despite its drawbacks, is the state-of-the-art in cameras as we know them today.
RAW format can improve the colors in your photos by giving you flexibility in selecting a white balance different from the choice of your camera
By choosing to shoot RAW you’ll not be limited to the white balance chosen by your camera. This is one of many significant advantages to a RAW format workflow.
Except in the simplest of cases, photo “enhancement” tools and other automatic corrections do not address abnormal tints. “Enhancement” systems can’t robustly identify the abnormal tint, so their adjustments tend to mischaracterize the problem and “enhance” the brightness of the photo, exacerbating the color distortion already in place.
These mischaracterizations of the problem are well evidenced in the following example photo:
Accurate color requires removal of Abnormal Tints
Perfectly Clear’s removal of an abnormal tint introduced by the white balance function of your camera reveals a more realistic representation of the colors and lighting you saw at the time of capture. The Abnormal Tint Correction in Perfectly Clear seeks to ensure that the correction reproduces photos with true, accurate color … the Real Colors. Perfectly Clear can overcome your camera’s white balance limitation of egregious abnormal tints in your photos, a problem arising when your digital camera seeks to add value to the photography experience by automatically white balancing your images. Unlike other photo “enhancements”, the Perfectly Clear correction operates to remove the abnormal tint in a manner consistent with how the human eye gathers light.
When a tint is successfully removed the results are Accurate Photos that match the original image in the mind’s eye and serve to preserve Precious Memories perfectly. Accurate Photos are Superior Photos and science shows superior photos have the greatest emotional impact with viewers.
CHAPTER 5 -- Contrast Correction
Inadequate contrast in your photos leaves them appearing flat and artificial. Unfortunately, increasing contrast using today’s photo “enhancement” editing programs is going to shift and distort the colors of your photos. It must be remembered that cameras are capturing three dimensional information and displaying it in a two dimensional image. A camera has no empirical way of knowing the distance that the light striking its sensors has traveled. The result is that a camera’s images may show no differentiation between an object in the foreground and one further away.
Human neurology is very powerful and includes the ability to estimate distance between objects. When you look at a photo these neurological abilities “create” a three dimensional view from the two dimensional photograph. The human brain does this by pattern-recognizing objects in a photo that it has personal experience with (e.g., trees are three dimensional). Your ability to “create” this three dimensional perspective is enhanced when your photo reproduces its subjects/objects how you’re used to seeing them in nature, because that’s how they’ll be represented in the original image in your mind’s eye.
Original Perfectly Clear
Perfectly Clear provides this realism in a unique way.
Your neurology uses Contrast to construct dimensionality
Cameras capture images in a two dimensional plane. This means there’ll be a lack of depth in the image. People and objects that are different distances from the camera will appear as if they are the same distance, thus creating a “flat” looking photo.
Original Perfectly Clear
We all agree there’s a lack of contrast - we only disagree on how to change it
In traditional photo “enhancement” editing software a flat image is enhanced by increasing the “contrast”, the difference between light and dark tones. As our eyes rely on shadows to recognize shapes, amplifying the difference between dark and light can reveal edges and nuances between shapes. Unfortunately, the implementation used to increase contrast in today’s photo editing software packages changes the saturation of the photo’s colors. As we all know, changing saturation changes color. The method of changing the saturation violates the physics principles of light, so the shift is a color distortion. The more the saturation is shifted to increase contrast, the more the original colors are distorted.
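The color shift described above is easy to demonstrate numerically. The sketch below applies an ordinary linear contrast stretch to each channel of a single pixel independently; the `contrast` function and the sample values are hypothetical, chosen only to show that the R:G:B ratios change.

```python
# A minimal numeric illustration of the claim above: applying an ordinary
# contrast curve to each channel independently changes the R:G:B ratios,
# which is a hue/saturation shift. Values are hypothetical.

def contrast(value, amount=1.5, pivot=128):
    """Classic linear contrast stretch about a mid-gray pivot."""
    return max(0, min(255, round((value - pivot) * amount + pivot)))

pixel = (200, 100, 50)                       # a warm orange
stretched = tuple(contrast(c) for c in pixel)

ratio_before = (pixel[0] / pixel[1], pixel[1] / pixel[2])
ratio_after = (stretched[0] / stretched[1], stretched[1] / stretched[2])
print(stretched)       # each channel moves by a different amount
print(ratio_before)    # (2.0, 2.0)
print(ratio_after)     # no longer (2.0, 2.0): the color has shifted
```

Channels above the pivot are pushed up and channels below it are pushed down, so the three channels of one pixel move by different amounts and the original color relationship is lost.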
The purpose of contrast is to identify the edges, not shift colors
In effect “contrast” is about clearly showing where the demarcation is between objects/subjects in the photo. It is about providing the ‘cues’ of depth to enable the brain to easily construct a three dimensional image from a two dimensional photo. This can be done without shifting the original colors. The more intensively the dynamic range of the frequencies in your photograph is managed, the more distinctly differences will be discerned between objects/subjects. The mind will perceive these clearer distinctions as demarcations, and construct “depth”. The finer the detail level at which this is accomplished the more depth will appear.
Perfectly Clear has a unique approach to detail
Athentech’s Medical Imaging division has a patented process for overcoming the loss of detail.
X-rays are captured in a DICOM image with a tremendous amount of detail, more than 6000 different shades of gray. Effectively, more than 6,000 levels of very minute contrast differences. The challenge is that your visual system can only distinguish 30 to 100 shades of gray from viewing a reflective surface.
Perfectly Clear intensively manages the dynamic range of all 6000 levels of gray scale in a medical x-ray to rapidly reveal information. This brings the details into focus by increasing the contrast at a fine level, allowing fine elements to be readily seen with the human eye. Perfectly Clear incorporates this patented medical imaging technology into its photography solution, to reveal more information in your photos.
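The idea of revealing fine gray-level differences by intensively managing dynamic range can be illustrated with the classic window/level mapping used for medical images. This is a generic sketch, not Athentech's patented process; the window parameters and gray values are invented for the example.

```python
# A sketch of the "windowing" idea behind revealing fine gray-level detail:
# a narrow window of a large raw gray range (thousands of levels) is
# stretched across the display's 0-255 range, so minute contrast
# differences become visible. Parameters here are hypothetical.

def window(value, center, width, out_max=255):
    """Map a raw gray value through a window/level to the display range."""
    lo = center - width / 2
    hi = center + width / 2
    if value <= lo:
        return 0
    if value >= hi:
        return out_max
    return round((value - lo) / (hi - lo) * out_max)

# Two raw values only 4 levels apart out of ~6000 map to the same display
# gray when the whole range is shown at once...
full = [window(v, center=3000, width=6000) for v in (3000, 3004)]
# ...but separate clearly once the window is narrowed to the region
# of interest:
narrow = [window(v, center=3000, width=100) for v in (3000, 3004)]
print(full, narrow)
```

The trade-off, of course, is that values outside the window clip to pure black or white, which is why the management has to be done adaptively at a fine level rather than with a single global window.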
Intensive Dynamic Range management gives the appearance of depth
Intensively managing the dynamic range within an image demarcates the “edges” between multiple people or objects in your photographs. This provides visual cues of depth, enabling the brain to easily reconstruct the third dimension.
Original Perfectly Clear
Competitor Comparisons - What impact does intensive Dynamic Range management have on color?
Adjusting contrast in today’s photo “enhancement” edit programs results in color shifts in your photograph. The reasons for this are varied, however:
- if the software program is using memory colors, a change in contrast will shift the saturation of your photo sufficiently to cause the software to re-access its database and swap one arbitrarily chosen memory color for a different arbitrary memory color;
- if the software program de-saturates a color by reducing white, the balance between the red, green and blue channels will be shifted, resulting in a different color; and
- in yet other software, contrast adjustments will introduce artifacts.
Perfectly Clear’s approach to depth is rigorous and will never introduce a color shift or artifacts. This is because it’s discerning the demarcation between frequencies. The result is that Real Color is maintained by preserving the RGB Triplet, and the change in adjacent color tones is achieved by increasing and decreasing light in accordance with the physics principles of how the eye gathers light.
The photo below shows how Perfectly Clear brings out the depth in the trees while maintaining true color. It is compared to 1) a memory color approach (notice how this approach damages the color of the sky, the trees [they all look the same and lack depth, so they start to blend together], the clouds [they are turning blue] and the driveway), and 2) a traditional dodge and burn method designed for film, now being used on digital (notice how faded and lacking in color the entire photo is, and how the color of the sky has been changed to an artificial blue).
In this example notice how our competitor creates extreme artifacts in the photo as they attempt to bring out the contrast. In addition you’ll notice how much more vibrant the overall Perfectly Clear photo is and how much whiter the clouds are.
In effect the light is added and deleted by providing the essential and correct missing eye function the camera lacks.
The invention was designed for the medical imaging world where accurate detail could be a life saving element. That same accuracy needs to be brought into the photography world so that accurate Real Colors are preserved.
Increasing contrast is necessary to better match the Original Image but this must be achieved without color shifts
Increasing contrast is imperative for a realistic representation of what you saw when you took the photograph. Lack of contrast will result from how the original signal of your photo is interpolated. The Contrast Correction in Perfectly Clear is constrained by the patented invention of Perfect Exposure, which limits light adjustments to the physics principles of how the eye gathers light, ensuring that the correction reproduces photos with true, accurate color … the true Real Colors.
Perfectly Clear overcomes your camera’s limitation of inadequate contrast in your photos, a very common problem of digital cameras. Unlike other photo “enhancements”, the Perfectly Clear correction operates to add contrast in a manner consistent with how the human eye gathers light. This includes replacing the essential missing eye function the camera lacks. The results are Accurate Photos that match the original image in the mind’s eye and serve to preserve Precious Memories perfectly. Accurate Photos are Superior Photos and science shows superior photos have the greatest emotional impact with viewers.
CHAPTER 6 -- Sharpness (Clarity) Correction
Lack of clarity or sharpness in your photos results from a camera limitation and is interpreted by the photo viewer as a “lack of depth” and fuzziness. A common complaint of digital photographs is that they lack sharpness. In actual fact digital photos do lack sharpness because of how the sensors of a digital camera are arranged. Traditional methods used to sharpen your photos, including the widely adopted ‘unsharp masking’, are going to simultaneously shift and distort the colors of your photos. Perfectly Clear’s proprietary method brings clarity to your images while maintaining Real Color, and it does it without the artifacts and shortcomings traditional sharpening methods would introduce.
Original Perfectly Clear
One reason for a lack of Sharpness is camera manufacturers intentionally blur your photo
Digital photographs often lack sharpness. There are several reasons for this, one of which is intentional. Are you aware that manufacturers intentionally blur the light being captured by your camera?
There’s a good reason that cameras blur the very signal they’re capturing
In the article 15 Ways Your Camera Distorts Color the subject of aliasing is covered in some detail. Suffice it to say here that aliased data is data that can cause unpredictable outcomes when your photo is being created or “enhanced”. Camera manufacturers are very cognizant of these severe problems and for this reason they want to minimize aliasing. Aliasing of color data is exacerbated by the Bayer array arrangement of sensors in your camera. To reduce the aliasing of the energy, there’s an anti-aliasing filter placed in front of your camera’s sensors. This anti-aliasing filter is designed so that incoming energy which would have struck a single sensor is, instead, spread out over several sensors. This gives the interpolation algorithm additional color information to work with for determining the color of a specific pixel, but it does so at the cost of reducing sharpness.
Just what is Clarity?
It’s important for our discussions to define what “clarity” or “sharpness” is. In layman’s terms think of it as the clarity of the “edge” between two objects. You’d think that where one object ends and another begins would be pretty simple … at least in theory. But the theory and the reality can differ widely in digital photography where the world is not defined by lines but rather by “dots” or “pixels”.
Because of how digital cameras capture information there are other reasons digital photos lack sharpness
Pixels aren’t lines, they’re dots, so where does one object end and the other begin? In a world of dots, edges can be hard to “distinguish” because:
- The vast majority of cameras arrange their sensors in a Bayer array. Each sensor represents a single pixel and gathers light for only one color - either red, green or blue. To create the final photo, each single-colored pixel will be compared mathematically with the pixels surrounding it, and a final mixture of three colors for the pixel will be interpolated from the color information of these surrounding pixels. Interpolation is a fancy word for “averaged”. In effect, the single color of a single pixel is averaged with the different colors of surrounding pixels to get an ‘averaged’ color. This “averaging” leads to “flatness” - rendering edges between objects less distinct. For a good explanation of the de-mosaicking process see this camera limitation discussion and www.ronbigelow.com,
- some edges of photographed objects will be in the middle of a pixel, and pixels are indivisible,
- if your photo is converted to a JPEG image each pixel will be grouped into an 8×8 square with 63 other pixels. These 64-pixel JPEG squares are created without reference to “edges” and each square can then have different compression applied. The different compression can create edges where there weren’t any, and destroy or distort edges that were present. This camera limitation is covered here.
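The “interpolation is averaging” point in the first bullet above can be shown with a toy one-dimensional sketch. Real demosaicking algorithms are far more sophisticated; the `interpolate_red` helper and the sample row are hypothetical.

```python
# A toy sketch of the "interpolation is averaging" point above: in a Bayer
# array each sensor records one color, and a missing channel at a pixel is
# estimated by averaging the neighbors that did record it. Real
# demosaicking is far more sophisticated; this is the simplest bilinear idea.

# A 1-D slice of a Bayer row: alternating green (G) and red (R) samples.
row = [("G", 100), ("R", 180), ("G", 110), ("R", 200), ("G", 90)]

def interpolate_red(row, i):
    """Estimate red at a green-only position by averaging adjacent reds."""
    reds = [v for kind, v in (row[i - 1], row[i + 1]) if kind == "R"]
    return sum(reds) / len(reds)

# Red at position 2 (a green sensor) is the average of its red neighbors:
print(interpolate_red(row, 2))   # 190.0 -- a value never actually measured
```

If a true edge fell between positions 1 and 3, this averaged value would smear it across both sides, which is the “flatness” the text describes.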
There can also be a lack of clarity or sharpness due to camera lens distortion but that’s beyond the scope of the discussion here.
We live in a Three Dimensional world so sharpness is important
The indistinct edges in your photo leave a flat appearance. The sharper the distinction between edges of objects in your photos the more easily the human brain attributes “depth” and “dimensionality” to the image. As humans we live in a three dimensional world and therefore the more depth and dimensionality there is in our photos, the better those photos will act as memory triggers and help us recall the original images of the event in the mind’s eye. So, like contrast, our minds use sharpness to recreate depth and dimensionality in our photographs.
The challenge with traditional Sharpening methodologies is they distort color and introduce artifacts
There are several challenges when your camera, or you, using traditional photo “enhancement” software, sharpen your images. Firstly, sharpening a photo is traditionally achieved by varying the saturation between adjoining pixels. Most “enhancement” software does this using a variant of an unsharp masking process.
This process requires you, as the user, to define three variables. Once you’ve selected the constraints for the variables they’ll be applied throughout the entire image. Because these variables lack granularity and intelligent adaptability, no matter what constraints you choose they’ll prove to be arbitrary and result in color shifts and distortions in your photo. Secondly, the more you sharpen your images the more likely you’ll create artifacts or “ringing” (sometimes called halos).
The artifacts and “ringing” seen in the above images can arise for several technical reasons, including: amplification of “aliased” color information, the de-mosaicking algorithm unsuccessfully resolving the edge, the revealing of JPEG squares, the JPEG squares interfering with the sharpening algorithm, or from photon effects.
Conventional thinking is that you may wish to sharpen your image two or three times for the desired outcome. Using traditional methods, each of these sharpening actions is going to increase color distortion and risk the introduction of artifacts.
Sharpening photos the traditional ways takes a lot of time
The unsharp masking methodology was spawned in the darkroom of film developers. Sharpening varies the local contrast along the edges in your photos. With an Unsharp Masking tool the sharpening process requires that you set three variables for the tool.
- an Amount which signifies to the tool the amount of saturation change you wish to make between pixels (amplitude of change desired),
- a Radius which signifies to the tool how many pixels away from the “edge” you wish included in the sharpening process, and
- a Threshold which tells the tool what tonal difference between pixels (0-255) is to trigger sharpening.
These are all dependent variables, meaning that changing any one variable impacts the operations and outcomes of the other two. This means that achieving the most appealing sharpening becomes an iterative process, requiring you to adjust and readjust all these variables until you either get a satisfactory result or tire of the process itself. While you’re doing this just remember that you’re continually shifting saturation in an arbitrary way and distorting the colors of your photo the entire time.
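The three variables above can be seen at work in a minimal one-dimensional unsharp-mask sketch. This is a generic illustration of the technique, not any particular product's implementation; a simple box blur stands in for the Gaussian blur real tools use, and the signal values are hypothetical.

```python
# A minimal 1-D unsharp-mask sketch showing the three variables described
# above (Amount, Radius, Threshold). Real tools work in 2-D with Gaussian
# blurs; a simple box blur is used here to keep the idea visible.

def box_blur(signal, radius):
    """Blur each sample with its neighbors within `radius`."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount=1.0, radius=1, threshold=0):
    """Add back (signal - blurred) wherever the difference exceeds threshold."""
    blurred = box_blur(signal, radius)
    result = []
    for orig, soft in zip(signal, blurred):
        diff = orig - soft
        # Threshold: only differences big enough count as an "edge".
        result.append(orig + amount * diff if abs(diff) > threshold else orig)
    return result

# A soft edge between dark (50) and light (200) tones gets steepened --
# note the overshoot beyond the original values: that's the "ringing".
edge = [50, 50, 50, 200, 200, 200]
print(unsharp_mask(edge, amount=1.0, radius=1, threshold=0))
```

The output dips below 50 on the dark side and climbs above 200 on the light side of the edge; applied per channel in 2-D, that same overshoot is what produces the halos and saturation shifts described above.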
Perfectly Clear clarifies photos while maintaining Real Color and never introduces artifacts
With all these challenges in mind Perfectly Clear developed a proprietary methodology that sharpens images optimally and will NOT create ringing, will NOT reveal JPEG squares, and will always maintain the Real Colors.
Original Perfectly Clear
Original Perfectly Clear
This is made possible because Perfectly Clear’s sharpening correction stands on the shoulders of its patented Perfect Exposure process, which increases and decreases light by providing the essential and correct missing eye function your camera lacks.
In essence, the technology adjusts adjoining pixels using the principles of how the eye gathers more or less light. By effectively increasing the light on some pixels and reducing it on others that adjoin, but doing it just as the human eye would do it, a sharp edge appears. This innovative approach is also bounded by the patented approach to maintaining the Real Color with the RGB Triplet Ratio. The technology avoids all information represented by the smallest signals.
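The contrast with saturation-based methods can be sketched numerically: scaling all three channels of a pixel by one common light factor leaves the R:G:B ratios (the RGB Triplet) intact. This is only an illustration of the ratio-preservation principle, not Perfectly Clear's actual algorithm; the pixel values and factors are hypothetical.

```python
# A tiny numeric sketch of the ratio-preservation principle above: scaling
# all three channels of a pixel by one common light factor keeps the R:G:B
# ratios (the "RGB Triplet") intact, whereas per-channel edits do not.
# Pixel values and factors are hypothetical.

def adjust_light(pixel, factor):
    """Brighten or darken a pixel while preserving its R:G:B ratios."""
    return tuple(c * factor for c in pixel)

def ratios(p):
    return (p[0] / p[1], p[1] / p[2])

edge_dark, edge_light = (80, 40, 20), (160, 80, 40)
# Push the two sides of an edge apart to make the edge crisper:
darker = adjust_light(edge_dark, 0.5)
lighter = adjust_light(edge_light, 1.5)

print(ratios(edge_dark) == ratios(darker))     # True: color preserved
print(ratios(edge_light) == ratios(lighter))   # True: color preserved
```

The two sides of the edge become more different in brightness, so the edge reads as sharper, yet each pixel keeps exactly the color relationship it started with.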
Perfectly Clear also constrains its sharpening correction with the physics principles of light to avoid sharpening any incidents of small signal, because this data:
- is apt to be characterized by photon effects,
- has the least precision, and
- is most prone to come from a low signal-to-noise sensor response.
If one were to sharpen data with any of these characteristics, the most likely result would be amplified noise, artifacts and distorted edges. When Perfectly Clear provides a tweaking control to improve its auto correction, as in Perfectly Clear Pro Software, the slider is always an independent single (i.e. easy to use) control.
Maintaining the accurate colors of your photo while providing crisp sharpening is necessary for a high quality photo
Increasing sharpness is imperative for a realistic representation of what you saw when you took the photograph. As we’ve seen there are several causes for a lack of sharpness, including how the original signal of your photo is interpolated. This is a camera limitation. However it’s also introduced by traditional “enhancements” such as brightness and contrast. Increasing sharpness by traditional methods takes considerable time and causes significant color distortion. The Sharpness Correction in Perfectly Clear is constrained by the patented invention of Perfect Exposure, which limits light adjustments to the physics principles of how the eye gathers light, ensuring that the correction reproduces photos with true, accurate color … Real Color.
Perfectly Clear overcomes your camera’s limitation of inadequate sharpness in your photos, a very common problem of digital cameras. Unlike other photo “enhancements”, the Perfectly Clear correction operates to add sharpness in a manner consistent with how the human eye gathers light. This includes replacing the essential missing eye function the camera lacks.
The results are Accurate Photos that match the original image in the mind’s eye and serve to preserve Precious Memories perfectly. Accurate Photos are Superior Photos and science shows superior photos have the greatest emotional impact with viewers.
CHAPTER 7 -- Red Eye/Golden Eye Correction
Red eyes in your photos are unnatural and are caused by the light from a flash source bouncing off the back of the retina of the person in the photo. Generally speaking, this happens when the light of the flash enters the person’s eyes in a straight line [which is why red eye is most prevalent when people are looking straight into the camera]. Today’s cameras have the flash located very close to the lens, increasing the likelihood that the people in your photos will have light from the flash enter their eyes directly and strike the back of their retinas. Needless to say, red eyes are color distortion, but it’s a distortion easily fixed by Perfectly Clear.
Original Perfectly Clear
It starts with the Flash
Normally the pupil constricts when subjected to light, which is why only a very small amount of light reaches the retina under normal lighting conditions. For this reason the eyes do not appear red in natural conditions. With flash strobes, the red-eye effect appears because the pupil does not have time to constrict, so the retina is very well lit.
Original Perfectly Clear
Original Perfectly Clear
Red Eye is related to the Angle of the light
However, for the red-eye effect to occur, the retinal region that is lit by the flash must coincide with the one that is projected to the camera sensor. This translates into the existence of a critical angle between the flash position, retina and camera lens, above which the effect does not occur. The value of this angle is approximately 3 degrees, but it varies with the subject's physiological characteristics as well as gaze angle.
Three parameters define this critical angle: the flash to lens distance of the camera, the camera to subject distance and the pupil dilation. Thus, the red-eye effect is most prevalent in photographs taken with compact digital still cameras and camera-enabled cell phones, where the flash is normally very close to the lens. Additionally, the incidence of red eye increases with the subject distance and pupil size. The latter relates to the amount of light in the field of view of the subject. In daylight, when the pupil is almost as small as a pinhead, the red-eye effect rarely appears. Conversely, the pupil is dilated in dark environments, so the chance of the effect occurring is high.
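The geometry behind the critical angle can be sketched directly: the angle at the subject's eye between the flash and the lens is roughly the arctangent of the flash-to-lens separation over the subject distance. The distances below are illustrative assumptions, and the ~3 degree threshold is the approximate figure quoted above.

```python
# A small geometric sketch of the critical-angle point above: the angle
# subtended at the subject's eye by the flash-to-lens separation shrinks
# as the subject moves away, so red eye becomes MORE likely at distance.
# The ~3 degree threshold and the distances here are illustrative.
import math

def flash_angle_degrees(flash_to_lens_m, subject_distance_m):
    """Angle at the eye between the flash and the lens, in degrees."""
    return math.degrees(math.atan(flash_to_lens_m / subject_distance_m))

CRITICAL_ANGLE = 3.0   # approximate threshold quoted above

# Hypothetical camera with a 10 cm flash-to-lens gap:
for distance in (0.5, 1.0, 3.0):
    angle = flash_angle_degrees(0.10, distance)
    at_risk = angle < CRITICAL_ANGLE
    print(f"{distance} m: {angle:.2f} deg, red-eye risk: {at_risk}")
```

With a 10 cm separation the angle stays above the threshold at close range but drops below it around 3 m, while a compact camera's 2 cm gap keeps the angle below the threshold at nearly every practical distance.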
Red eye results from a specific limitation in camera design.
Tessera is the world leader in removing Red-Eye
Tessera purchased FotoNation in 2008. We’ve incorporated this automatic red-eye and golden-eye solution into Perfectly Clear. Tessera is a leader in providing multiple solutions to the photo industry. Although their focus is on embedded solutions - their red-eye solution, for example, is embedded in over 80 million digital cameras - Athentech has arranged for this powerful algorithm to be used by you in the post processing world. The novel Tessera red-eye detection and removal process is protected by multiple patents. Tessera maintains a database of 20,000 images containing challenging red-eye photos. Its solution is run against this extensive database to improve detection and removal.
Athentech Imaging’s intensive testing of multiple red-eye solutions resulted in Tessera’s being chosen because it:
- automatically detected the highest percentage of true red-eye occurrences;
- generated the lowest number of false positives;
- offered the fastest speed of detection and correction;
- addressed the widest range of eye problems (both red-eye and golden-eye); and
- produced corrected images that are realistic and life-like, retaining the highlights in the eyes.
Athentech Imaging’s testing showed the automatic detection to be in the 80th percentile of accuracy.
The eyes and Red Eye
Red and golden eye are the result of a camera limitation [link] and are an obvious color distortion that must be removed to reproduce your original experience. Better matching the original image in the mind’s eye requires accurate colors and exposure, and requires that your eyes see an accurate replication of the eyes of the subjects in your photo. Photo Accuracy [link] demands Real Color.
The Red Eye Correction in Perfectly Clear is designed by Tessera to remove red eyes and reproduce photos with true, accurate color … Real Color. Perfectly Clear overcomes your camera’s limitation of producing red eyes in your photos, a very common problem when digital cameras shoot in low lighting conditions. The results are Accurate Photos that match the original image in the mind’s eye and serve to preserve Precious Memories perfectly. Accurate Photos are Superior Photos, and science shows superior photos have the greatest emotional impact with viewers.
CHAPTER 8 -- Noise Reduction
Noise is present in all digital capture, and it’s a major cause of color distortion. When the signal detected by your camera’s sensors is low, the signal-to-noise ratio will also be low. The result is that the noise becomes visible, becoming a source of both color distortion and obvious artifacts in your photos. Noise is most often seen in the low-light areas of your photos, where a camera has inadequate dynamic range allocation to capture all of the information in the shadows. Removing and reducing this noise requires an in-depth understanding of the causes of noise, and a unique solution to automatically detect noise and remove it without blurring the photo.
Original Perfectly Clear
There are many sources of Noise in digital photos
Noise in digital photography can be thought of as the equivalent of grain in film photography; it would therefore be unfair to view film photos as noiseless. Noise can be introduced into your digital photos by a variety of factors, including:
- your camera having inadequate dynamic range available to capture all of the information available at the time of the photo,
- your camera’s sensors sampling inadequately, giving rise to an aliased signal,
- the small footprint of your camera resulting in smaller and more tightly packed sensors [this is done to facilitate a smaller camera size and in an attempt to reduce aliasing], reducing accurate sensor response,
- the small sensors in point-and-shoot cameras recording less signal, so the signal-to-noise ratio of each sensor is much lower than in a digital SLR, where the sensors are roughly twice as large, and
- the arrangement of your camera’s sensors requiring a de-mosaicking process to generate a picture.
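The sensor-size point above can be made concrete with a toy shot-noise model: a photosite collecting N photons has noise of roughly sqrt(N), so its signal-to-noise ratio is sqrt(N). The photon counts below are illustrative assumptions, not measurements from any real camera.

```python
import math

def shot_noise_snr_db(photons):
    """SNR in decibels under a simple shot-noise model: SNR = N / sqrt(N)."""
    snr = photons / math.sqrt(photons)   # equals sqrt(photons)
    return 20 * math.log10(snr)

small = shot_noise_snr_db(10_000)   # small point-and-shoot photosite
large = shot_noise_snr_db(20_000)   # photosite with roughly twice the area
print(round(small, 1))   # 40.0
print(round(large, 1))   # 43.0
```

Doubling the light-collecting area doubles the photon count but only adds about 3 dB of SNR, which is why the noise advantage of larger DSLR sensors shows up most dramatically in dim shadows, where photon counts are low to begin with.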
In addition to noise stemming from mechanical camera limitations, noise arises from the conditions of capture itself. For example, sports, indoor scenes and other situations that require high ISO camera settings [rendering the sensors highly sensitive to light variances] are typically affected by visible amounts of luminance and color noise, which can degrade the picture fidelity of the original image.
Because there are so many potential causes of noise…
It’s important to note that any given photo will most likely contain multiple causes of noise, and therefore the noise will take multiple forms. To deal with noise effectively, noise correction algorithms must be able to distinguish between luminance noise and color noise. Color noise is synonymous with color distortion, and luminance noise is corrected by methods that generally create color distortion. So whether the noise is color noise or luminance noise, the result will be color distortion.
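The distinction between luminance noise and color noise can be sketched in code: convert pixels into a luma/chroma space, denoise only the chroma planes, and the luminance detail survives untouched. The BT.601 conversion weights are standard; the 3-tap averaging filter and the tiny "image" row are illustrative assumptions, not the actual algorithm used by any product mentioned here.

```python
def rgb_to_ycbcr(r, g, b):
    # Standard BT.601 weights: Y carries detail, Cb/Cr carry color
    y  =  0.299 * r + 0.587 * g + 0.114 * b        # luminance
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b    # blue-difference chroma
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b    # red-difference chroma
    return y, cb, cr

def smooth(row):
    # Crude chroma denoiser for the sketch: 3-tap moving average
    return [sum(row[max(i - 1, 0):i + 2]) / len(row[max(i - 1, 0):i + 2])
            for i in range(len(row))]

# A row of nearly-gray pixels carrying visible color noise
pixels = [(120, 112, 126), (118, 124, 110), (123, 115, 121)]
ys, cbs, crs = zip(*(rgb_to_ycbcr(*p) for p in pixels))
cbs_dn, crs_dn = smooth(list(cbs)), smooth(list(crs))
# The chroma planes are averaged toward neutral while ys is untouched,
# so edges and texture (luminance detail) are preserved.
```

Smoothing the chroma alone removes color speckle without the blurring that a naive filter applied to all three RGB channels would cause.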
Perfectly Clear incorporates the power of Stoik automatic noise reduction
Perfectly Clear incorporates the automatic noise detection and removal solution from Stoik Imaging. In rigorous testing performed by Athentech Imaging, the Stoik engine was revealed to be:
- the fastest solution on the market,
- the best solution for preserving image detail,
- the only solution with true automatic noise detection, and
- the only solution to accurately apply varying noise corrections depending on the ISO noise level.
Stoik Noise Autofix is the first photo noise reduction algorithm that allows for fully automatic operation. The algorithm first analyzes the photo to determine if noise exists, and only then applies the proper amount of noise removal. Specific algorithms and settings are applied depending on whether the photo was taken with a digital camera, camera phone, or scanner. The algorithm includes modules for noise detection, noise analysis and noise filtration, which are statistically trained to provide an optimal balance between photo noise reduction and preservation of image details. Unlike other noise removal algorithms that blur photos after removing noise, the Stoik algorithm is unique in preserving crisp details. The noise in digital photos is reduced by 2 - 3 stops, so that the noise level of a photo shot at ISO 1600 is effectively reduced to ISO 200 - 400 levels. Various automatic noise presets can be selected, including special softening of the skin for portrait photos, extra-powerful noise removal for camera phones, and a special night-shot noise removal.
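The "2 - 3 stops" claim is just exposure arithmetic: each stop halves the ISO-equivalent noise level. A minimal sketch of that arithmetic (not Stoik's code):

```python
def effective_iso(iso, stops_reduced):
    """ISO-equivalent noise level after reducing noise by a number of stops."""
    return iso / (2 ** stops_reduced)

print(effective_iso(1600, 2))   # 400.0  -> ISO 1600 shot looks like ISO 400
print(effective_iso(1600, 3))   # 200.0  -> ISO 1600 shot looks like ISO 200
```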
Perfectly Clear with Stoik Noise
Noise and the human eye
Noise results from camera limitations. Arguably the human eye has noise too, because the eye can’t see infinite levels of detail. Having said this, the human eye sees the fractal world as continuous, and therefore the eye does not create artifacts. It’s for this reason that noise in your photos is so immediately recognized as egregious. This color distortion and these artifacts must be removed effectively to reproduce your original experience.
The Noise Reduction Correction in Perfectly Clear is designed by Stoik Imaging to reduce the noise and reproduce photos with true, accurate color … Real Color. Perfectly Clear overcomes your camera’s limitation of creating noise in your photos, a very common problem of digital cameras because of their fixed dynamic range. The results are Accurate Photos that match the original image in the mind’s eye and serve to preserve Precious Memories perfectly. Accurate Photos are Superior Photos, and science shows superior photos have the greatest emotional impact with viewers.
CHAPTER 9 -- Skin Tone Adjustments 1 - Automatic removal of the Infra Red Effect
Abnormal skin tones are immediately recognized as a manifestation of color distortion. Skin tone distortion can result from an unseen portion of the electromagnetic spectrum. When a camera sensor detects energy, it generates a voltage response representing that energy; the sensor itself does NOT distinguish color. Today’s sensors are sensitive enough to respond to wavelengths beyond the visual spectrum, such as infra-red frequencies. When your camera allows infra-red light to generate a voltage response, the color red will be boosted in the skin tones of the image. Perfectly Clear has a solution.
Original Perfectly Clear
Today’s camera sensors are sensitive enough to respond to Infra Red Energy
A common challenge in digital photography is what we’ll call the over-responsiveness of the camera’s sensors to a range of long-wavelength frequencies unseen by the human eye.
Heat is emitted from people as infra-red radiation. Your camera has three color filters which overlay the camera’s sensors: red, green and blue (exceptions include Nikon cameras’ use of CYM filters and Foveon’s proprietary non-filtered approach). Infra-red light passes through the red filter and therefore generates a voltage response for the “red” sensor. As infra-red is not seen by the human eye, this voltage response effectively boosts the “red” channel. This is a camera limitation that will result in color distortion. Because the longer wavelengths of light are most often represented in skin tones, this is where the distortion will occur.
Original Perfectly Clear
Perfectly Clear will automatically search for and detect excessive red in skin tones and reduce it. In the “after” image, note how the excessive red in the skin tones has been reduced while the other reds in the image have been maintained.
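The behavior described above - reducing red only where a pixel looks like skin - can be sketched as follows. The skin-tone heuristic and the 0.9 reduction factor here are illustrative assumptions only, not Perfectly Clear's actual algorithm.

```python
def looks_like_skin(r, g, b):
    # Very rough heuristic: red-dominant but not a saturated pure red
    return r > g > b and (r - g) < 80 and r > 60

def reduce_ir_red(pixel, factor=0.9):
    """Scale back the infra-red-boosted red channel on skin-like pixels only."""
    r, g, b = pixel
    if looks_like_skin(r, g, b):
        return (int(r * factor), g, b)
    return pixel   # pure reds (clothing, objects) are left untouched

print(reduce_ir_red((200, 150, 120)))   # skin-like -> (180, 150, 120)
print(reduce_ir_red((220, 40, 30)))     # vivid red object -> (220, 40, 30)
```

The key design point is the same as in the "after" image: the correction is gated by a skin-tone test, so a red dress or a red car keeps its color while flushed skin is brought back toward neutral.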
Skin and human physiology
The body regulates the rate of heat loss by dilating or constricting blood vessels at the surface of the skin. When our body is hot or is producing excessive heat - through exercise, exertion or exposure to sunlight - our arteries open up (dilate) and blood comes closer to the surface of the skin to release the body’s heat. This accounts for the reddening or flushing of the skin.
A more prevalent reason for “flushing” of a person’s complexion, completely unrelated to working out or excessive heat, is dehydration. The human brain is 1/50th of our body mass but receives 25% of the blood flow. The brain is 85% water, and when your daily consumption of water is inadequate, your brain will demand priority on the water in your system. Dehydration activates the hormone histamine to ration the water in your system, and the facial flushing is a result of the brain demanding water. So when you want your skin to fit within the preferred skin tones, increase your water (and sea salt) intake! It’ll take 3 months to rehydrate your body. We wish you good health!
Skin tones in your photos will have to be balanced between camera sensors responding to non-visual light, your preferences for what skin should look like, and the reality of the human physiology of the moment recorded in the original image. Perfectly Clear allows you the choice of Real Color or your preference.
Skin Tones and the human eye
Abnormal and improper red skin tones are an unwanted color distortion in your photos, and their effective removal is necessary to reproduce your original experience. The challenge remains that the human eye is a response mechanism, recording the actual skin tone color and not what you might “prefer”. Better matching the original image in the mind’s eye requires accurate colors for you to achieve the benefits that an accurate photo delivers. Photo Accuracy [link] demands Real Color.
CHAPTER 10 -- Skin Tone Adjustments 2 – skin tones for your culture
Skin tones have been and will continue to be a “sensitive” topic as different countries and cultures have a preference on the specific look of their skin.
As a result, the Perfectly Clear bias algorithm can provide you with several different options for the look of your skin tone. The examples below show how you can create slightly whiter skin tones, or much whiter ones (as preferred in some Asian cultures).
Original Perfectly Clear with Skin Tone 50% Perfectly Clear with Skin Tone 100%
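One way to picture the 50% and 100% settings shown above is as a simple linear blend from each skin pixel toward a lighter target tone. The target tone and the linear blend are assumptions made purely for illustration, not Perfectly Clear's actual bias algorithm.

```python
def lighten(pixel, strength, target=(255, 240, 230)):
    """Blend a skin pixel toward a light target tone.

    strength 0.0-1.0: roughly 0.5 ~ 'Skin Tone 50%', 1.0 ~ 'Skin Tone 100%'.
    """
    return tuple(int(round(c + (t - c) * strength))
                 for c, t in zip(pixel, target))

skin = (190, 140, 120)
print(lighten(skin, 0.0))   # (190, 140, 120): untouched original
print(lighten(skin, 1.0))   # (255, 240, 230): fully blended to the target
```

A half-strength call lands midway between the two, which is the "slightly whiter" look; full strength gives the much whiter, porcelain-like result.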
Portrait photos are suboptimal when the subjects are exposed to hard lighting. Hard lighting is unnatural and results in color distortion. Perfectly Clear applies a process to automatically soften the lighting, resulting in superior portraits with more accurate color. More
The Skin Tones we sometimes prefer may actually differ from reality
Athentech Imaging strives relentlessly for accurate color reproduction. See Photo Accuracy. [link] People are very sensitive to skin tones in their photos, and we respect that. Research has shown that people generally have a range of skin tones they prefer, and this preferred range often falls outside the actual skin tones of the people in the photo. To provide you with maximum control in your auto corrections, Perfectly Clear has settings to emphasize the lightness of skin. For example, in Japan, ladies strive to have porcelain-like skin (very smooth and very light). We have a special setting for our Asian customers so they can accomplish this, and a toned-down setting for our North American customers who prefer a subtle skin adjustment. The difference in these corrections is shown above in the Introduction.
Original Perfectly Clear with Light Diffusion
Perfectly Clear Perfectly Clear with Light Diffusion
Original Perfectly Clear with Light Diffusion