Saturday, April 24, 2010

The Impact of Digital v. Analog Management on Display Image Quality

By Norman Hairston

“Is this the real life? Is this just fantasy?
Caught in a landslide, No escape from reality
Open your eyes, look up to the skies and see….”
Queen “Bohemian Rhapsody” Lyrics


Visual artifacts have always plagued the display industry. In the CRT era there were two, each tied to the digitization of the image. Moiré resulted from the “beating” of the pixelation of the displayed image against any close periodicity in the captured image, e.g. plaid suits. Temporally, the time-sliced digitization of the image frequently produced backward-spinning wheels on autos and other anomalous imagery arising from temporally resonant motion. Interestingly, even though the advertising industry is so obsessive that it demands the ends of any length of spaghetti be buried in an image, it tolerates backward-spinning wheels in million-dollar auto commercials because there is nothing to be done about them.

In the modern era, as digitization has progressed and, more importantly, as digital image manipulation has progressed, the number and extent of digital artifacts have grown. This article discusses digital artifacts and their proliferation as the display industry has migrated from being analog/visual (human-factors centered) to being computing intensive. Instead of building a better display, the industry now looks to software modifications to improve the image.

In the history of the color CRT there were the invention of the Trinitron, the black matrix, high-voltage phosphors, and high-black-level glass: improvements to the native display. Early in the development of LCD technology there was the development of IPS (by RCA) and its resurrection (by Hitachi), also fundamental improvements to the basic display device. Although other technologies have subsequently been developed that match IPS, it was IPS that showed the potential of LCD technology to equal or exceed the CRT. Interestingly, at the time Hitachi LCD was part of Hitachi Electron Devices (HED), the old Hitachi CRT organization, rather than part of the Hitachi semiconductor group or any other fundamentally digital organization. Although there have been subsequent improvements in image quality, and certainly drastic improvements in cost and product quality, this was the last quantum leap in LCD visual quality.

The issue with LCD display technology development has been an increasing reliance on software modifications and further digitization to improve the image. The industry increasingly relies on “driver firmware upgrade 9.991” rather than looking at how to improve the base technology. The result has been incremental improvements in the image when displaying unchallenging HDTV content, but increasing visual artifacts when displaying challenging images. Further, the LCD TV image modification software makes pre-judgments about what the user likes, which may or may not correspond to what specific users want to see.


Regarding static images, as I have noted in other missives, LCD TV firmware includes facial recognition processing that treats faces differently from the rest of the picture. This software can have two negative effects, each tied to one of the two things the facial recognition processing does.

First there is the ideal color correction. As I mentioned in another article, what is ideal for one culture may be anathema for another. The ideal color correction for a Japanese person (northwest on the CIE x-y plane, toward the white point) is almost 180 degrees opposite that for a western Caucasian (toward red). For a mid-range Black American (zone 7, for those of us with photography backgrounds), no color correction at all may be closer to the ideal. Inevitably, making the color correction will leave some viewers unhappy with the facial images, forcing them to compensate globally across the image. This can result in the facial images of other races being severely discolored. The racial impact of facial recognition software was recently brought home to camera makers with their implementation of facial software for “red eye” correction. It seems that some cameras were noticeably unable to recognize the eyes of Asian people, bringing charges of racism upon the makers even though the makers were all from Asian countries. Obviously the problem was not racism but myopia in their development organizations.
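To make the geometry concrete, a facial color correction of this kind can be sketched as a simple blend toward (or away from) the white point on the CIE x-y plane. This is only an illustration: the D65 coordinates are standard, but the sample skin tone and the blend amounts are hypothetical, not values taken from any real firmware.

```python
# Illustrative sketch of facial color correction as a blend on the CIE
# x-y chromaticity plane. D65 is the standard white point; the skin-tone
# coordinate and the blend amounts below are hypothetical examples.

D65 = (0.3127, 0.3290)

def shift_toward_white(xy, amount):
    """Move a chromaticity point a fraction 'amount' toward D65.
    A negative amount moves it away from white, i.e. toward red
    for a typical skin tone."""
    x, y = xy
    return (x + amount * (D65[0] - x),
            y + amount * (D65[1] - y))

skin = (0.42, 0.37)                               # hypothetical skin tone
japanese_pref = shift_toward_white(skin, 0.15)    # toward the white point
western_pref = shift_toward_white(skin, -0.15)    # away, toward red
```

Applying either correction globally, rather than only to the detected face, is exactly what discolors everyone else in the frame.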

Along with facial color correction, LCD TV firmware often includes a soft focus. When the software is working well, this gives an overall more pleasing image of faces, removing detail such as small wrinkles, other skin imperfections, or even five o'clock shadow. However, when a face is in motion, or the software is otherwise having trouble recognizing the face as a face, the soft focus is applied intermittently. This can result in moles and facial stubble appearing and disappearing as the person moves or turns their head. As with many of the dynamic impacts of image processing, it is not so much the effect itself as the effect turning itself on and off as the image changes that irritates the viewer. These disappearing and reappearing facial features are most common on non-HD content, which further convinces me of the myopia of the development organizations: there is not the thorough testing on standard-definition content (even though most content is still standard definition) that there should be. Additionally, with standard-definition content already appearing soft on an HDTV due to the smearing out of the pixels, adding a further soft focus to faces can give them a cartoonish appearance, devoid of normal skin texture.
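The intermittency problem can be illustrated with a toy model: a blur applied to a face region only when a hypothetical detector's confidence clears a threshold. When that confidence jitters across the threshold from frame to frame, detail such as a mole pops in and out, which is exactly the irritation described above. The detector, the threshold, and the naive blur here are all illustrative assumptions, not any maker's actual pipeline.

```python
import numpy as np

def box_blur(region, k=3):
    """Naive k x k mean filter (edges handled by shrinking the window)."""
    h, w = region.shape
    out = np.empty((h, w), dtype=float)
    r = k // 2
    for i in range(h):
        for j in range(w):
            out[i, j] = region[max(0, i - r):i + r + 1,
                               max(0, j - r):j + r + 1].mean()
    return out

def process_frame(frame, face_box, confidence, threshold=0.5):
    """Soft-focus the face region only when the (hypothetical) detector
    is confident it is looking at a face; otherwise pass it through sharp.
    Frame-to-frame jitter in 'confidence' makes detail flicker in and out."""
    y0, y1, x0, x1 = face_box
    out = frame.astype(float).copy()
    if confidence >= threshold:
        out[y0:y1, x0:x1] = box_blur(out[y0:y1, x0:x1])
    return out
```

Run two consecutive frames through this with confidences of, say, 0.9 and then 0.4, and a mole in the face region is smoothed away in one frame and fully sharp in the next.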

Beyond faces, there is the impact of color management in general. Because of the digitization of the image, most TVs process at least 36-bit color, if not higher, to avoid the impact of digitization on displayed color levels. However, most LCD TVs still exhibit considerable posterization in images with subtle color shifts. Thirty-six bits should be more than enough to cover any loss of information in the digitization process. My suspicion is that the posterization is a result of the “blue stretch,” “green stretch,” or other parts of the color management software, but I do not know this for certain.
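As a rough illustration of how a stretch operation can posterize an otherwise smooth signal, consider a clean 8-bit gradient pushed through a crude, hypothetical stretch curve and re-quantized: the expanded part of the range leaves gaps between adjacent output codes, and the clipped ends merge many input levels into one, both of which read as bands on screen.

```python
import numpy as np

# A smooth 8-bit ramp: 256 distinct, evenly spaced levels.
gradient = np.linspace(0.0, 1.0, 256)

# A crude "stretch" curve (hypothetical, not any maker's actual curve):
# expand the middle of the range by 1.6x and clip the ends.
stretched = np.clip((gradient - 0.2) * 1.6, 0.0, 1.0)

# Re-quantize to 8 bits, as a display pipeline ultimately must.
quantized = np.round(stretched * 255) / 255

levels_in = len(np.unique(np.round(gradient * 255)))
levels_out = len(np.unique(np.round(quantized * 255)))
# levels_out is well below levels_in: the clipped ends have collapsed
# into single codes, and the stretched middle skips codes -- banding.
```

The point is that ample internal bit depth does not save you if a nonlinear stretch is applied and the result is quantized back down; the bands come from the curve, not from the precision.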

Returning to the disappearing and reappearing detail, the LCD has traditionally struggled with loss of detail in motion. A large part of this has been due to LCD gray-to-gray response times. However, it is important to remember that the HDTV signal itself loses detail with high-speed motion as a consequence of its data compression. LCD TV makers have addressed the response time issue by increasing the refresh rate from 60 Hz, the same as the input image, to 120, 240, and now 480 Hz. Though the higher refresh rates do improve the detail in fast-motion images, they also create some false detail, as does the edge enhancement firmware.
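Real sets use motion-compensated interpolation, which is far more elaborate, but the basic idea of synthesizing intermediate frames to raise a 60 Hz input to a higher refresh rate can be sketched with a plain linear blend. The blend itself is an illustrative stand-in, not how any TV actually does it:

```python
import numpy as np

def interpolate(frame_a, frame_b, n):
    """Generate n evenly spaced synthetic frames between two real frames.
    A 60 Hz -> 240 Hz set would need n = 3 such frames per input pair.
    Plain blending like this smears moving edges; motion-compensated
    interpolation avoids the smear but can invent detail instead."""
    return [frame_a + (frame_b - frame_a) * t
            for t in np.linspace(0.0, 1.0, n + 2)[1:-1]]
```

Note that nothing can be synthesized until frame_b has arrived, which is where the latency discussed below comes from.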

Further, all of this visual processing, particularly waiting for subsequent frames in order to draw artificial intermediate images, creates considerable time delay. In higher-end sets, the user is able to turn off some or all of the visual processing to improve the gaming response of the TV set. The need for a “gaming mode” is, perhaps, a sign that digital image processing is not a panacea for the visual shortcomings of the technology as it now stands. In general, the image processing firmware on many LCD TVs is like any other piece of bad software: it guarantees the very negative outcomes (in software's case, system crashes) that it was designed to prevent.
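The buffering delay is easy to estimate with back-of-the-envelope arithmetic: interpolation needs the next real frame before it can draw anything in between, so the pipeline must hold at least one input frame, and real sets hold more. The frame counts below are illustrative assumptions, not measurements of any particular set.

```python
# Back-of-the-envelope latency cost of motion interpolation.
INPUT_HZ = 60                  # source frame rate
FRAME_MS = 1000 / INPUT_HZ     # ~16.67 ms per input frame

def added_latency_ms(buffered_frames):
    """Delay from holding 'buffered_frames' input frames before display."""
    return buffered_frames * FRAME_MS

# Buffering even 2-3 input frames adds roughly 33-50 ms before any pixel
# reaches the screen, on top of panel response time -- enough to matter
# for gaming, and enough to motivate a mode that bypasses it all.
lag = added_latency_ms(3)
```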

Beyond the time delay, disappearing and reappearing image details, and color distortion, there is also gross image distortion of non-HD content. Being short and stubby myself, I find it somewhat pleasing to see everyone on TV horizontally expanded… but not really.


In short, it is important to remember that every digital manipulation creates its own set of digital artifacts. While improvements in image processing firmware may yield improvements on ideal images, the overall improvement in image quality is limited and may actually be negative in some categories. In the CRT era, improving the image meant improving the display; the industry no longer seems to emphasize this.

There are at least two intermediate solutions to these problems. Intel, for example, developed a display technology improvement to advance the energy efficiency of notebook LCDs, one that traded temporal resolution for spatial resolution in static images. Coupled with this, the team developed throttling technology: the ability to slow down or turn off the effect based on what the image was doing. The TV set industry would similarly benefit from recognizing the type of content being displayed and its aspect ratio, maintaining some concept of object consistency longer than two consecutive frames, and using this information to restrain some of the image modification firmware.

A simpler, and perhaps more effective, solution would be to enable the viewer to easily select or turn off (as is the case with gaming mode) some of the image processing. Such an implementation would be simple, essentially free, and direct. Beyond this, display device makers need to reduce their emphasis on firmware revisions and focus on native display performance: genuine innovation. The active matrix LCD was invented in the US, and many of the subsequent fundamental improvements, such as IPS, were pioneered here. There are a few US-based display startups, such as Unipixel, that are developing fundamentally newer and better displays, but they are not attached to large ongoing display production operations, as was the case with Westinghouse (developer of the active matrix LCD) and RCA's Sarnoff Labs. While the electronics companies that populate the industry today are excellent at driving down costs and improving quality, they have largely yet to prove themselves in fundamental innovation. Without this fundamental innovation we get slightly better HD LCD TV pictures at the cost of increasing visual artifacts and sometimes poorer performance with standard definition content.
