Saturday, April 24, 2010

The Standard TV Set

Introduction

TV screen sizes are measured diagonally. A 25V CRT TV has rounded corners, whereas a flat-faced 27V has square corners; most of the difference between the two sizes is the extra diagonal gained by squaring out the corners. In 1964, the first model of color TV to be widely introduced was a 25” round design, and the average TV set sold for $400. By the mid-1990s, the price of the average set had declined to about $380, which could buy a low-priced 27V. Although there was some size escalation between the mid-1990s and when CRTs finally fell from favor around 2006, the industry managed to sell almost exactly the same product design, at almost exactly the same price, for the better part of 40 years. This article is about how they did that, and how the design evolved into the follow-on 32W, which happens to be almost exactly the same height as the 27V and the 25 round. The discussion below concerns mid-range TV sets, with the understanding that the premium market can have different dynamics.



Higher Value

Although there is a dizzying array of bells and whistles available on a modern TV, the fundamental improvements are what have driven sales. These can be classified into two categories: improvements in reliability and improvements in picture/audio quality. The move from tubes to solid-state circuitry was one of the more visible reliability improvements, since consumers no longer had to periodically open up their TV sets and take the vacuum tubes to the drug store for testing. The higher reliability also contributed to increased set sales, as it became standard practice to replace a malfunctioning unit rather than repair it.

Another, albeit less appreciated, reliability improvement was the move to high-voltage phosphors. Within the industry there was a rule of thumb called Coulomb’s Law: in general, the higher the energy required to activate a phosphor, the longer the phosphor would last. Over the course of 40 years the industry moved to higher and higher voltages until phosphor degradation was no longer a factor. The higher voltages, combined with improvements in the shadow mask, also brought the consumer a brighter image over time.

Beyond reliability, the TV industry has also provided the consumer with a steadily improving image. For LCD technology, a better image initially meant improvements in contrast and viewing angle, because LCDs began their commercial life with issues in both metrics. Neither was a significant problem for CRT technology; consequently, the focus there, from an early date, was on black level. There were three significant inventions for CRTs: black matrix in the 1960s, pigmented phosphors in the 1970s, and 36%-transmission glass in the 1990s. Each was a significant enough improvement in picture quality that consumers viewed it as a reason to replace an older set.

Beyond the quality and reliability improvements, there was the addition of embedded applications such as combo sets and supplemental content. At first glance, a combo set seems like a giant step backwards in terms of reliability. At the point when they were introduced, VCRs had an expected lifetime of about 3 years while TVs had a lifetime on the order of 15 years. Combining a high-maintenance item with a zero-maintenance item would not ordinarily seem like the best thing to do; however, it provided a significant increase in selling opportunities. Consumers responded well to the ‘convenience’ of an all-in-one product, with no wires to connect and one remote control, and for quite some time they were willing to pay a price premium for it. In the stores, it was not so much a way of selling up the consumer from a non-combo TV as it was a way of selling up a consumer looking to replace their VCR; experience showed that VCR replacement happened about five times as often.

Each of these improvements in value gave the consumer a reason to replace an older TV set. Better still, they were all implemented at virtually zero cost or, in some cases, as a cost reduction. Moving to solid-state construction, improving the few grams of phosphor in a CRT or the few grams of black matrix applied to the screen, and adding a few tenths of a percent of cobalt to darken the glass collectively had inconsequential impact on manufacturing costs, or were themselves cost reductions. In the LCD world, the introduction of IPS was similar: at the time it was implemented, it actually required one fewer mask step yet resulted in a greatly improved visual image. Although adding a VCR to a TV was a significant cost, it significantly expanded selling opportunities for the retailer.

In circulating a draft of this article among some of my peers, I got a lot of feedback concerning the limited nature of consumer perceptions. In short, there is a widely held opinion that the only things consumers actually notice are price, size, and brightness. My own perception is that each of the innovations I have discussed was important at the time it was made, but that by the early 1990s the CRT picture had been improved to the point where there was no further consumer benefit. Of course, LCDs, at the time they supplanted the CRT, were a step backwards in two significant ways, response time and color crosstalk; these are addressed in the Looking Forward section below. The foregoing also does not spend much time on electronic features such as remote control, on-screen menus, electronic tuning, etc. While it would be difficult to sell a TV set today without some of these features, my feeling is that they were never, in themselves, a reason to buy a new TV set, with the exception of electronic tuning, which enabled easy reception of UHF TV stations.


Higher Perceived Value

The practice of sizing a screen by stating its diagonal came from the original picture-tube days, when the screen was drawn on the bottom of a round glass bottle and masked off into a roughly square image. The diameter of the bottle was roughly the size of the screen. In Europe, Japan, and early on in the US, the outside diameter of the bottle was quoted; in the US, subsequent to the 25 round (actually a 24” diagonal screen), the inside diameter was used, which is more representative of the actual screen size. This meant that a US 19” and a European 20” were roughly equivalent. Further, in the US the screen would typically extend up into the bend of the glass bottle (where the sides become the faceplate); i.e., the screen was enlarged and had rounded corners rather than a straight square mask. The result was that a US 19” could actually have a larger screen than a European 20”. The US terminology was to call the 19” a 19V; V for viewable diagonal. (The 4:3 aspect ratio came from the desire for a wide image while being constrained by the circular bottle. Given a fixed diameter, the difference between a square screen, which maximizes use of the available area, and a 4:3 screen is only about a 3% loss of usable screen area.)
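That parenthetical is easy to verify. Here is a minimal sketch in Python (the unit diameter is arbitrary), comparing how much of the round bottle face an inscribed square and an inscribed 4:3 rectangle actually use:

    import math

    d = 1.0                              # bottle diameter, normalized
    circle = math.pi * d * d / 4         # usable face area of the round bottle

    square = d * d / 2                   # largest inscribed rectangle is a square
    rect43 = (4 * d / 5) * (3 * d / 5)   # inscribed 4:3 rectangle (3-4-5 triangle)

    print(square / circle)               # ~0.637 of the bottle face
    print(rect43 / circle)               # ~0.611 of the bottle face
    print((square - rect43) / circle)    # ~0.025, the roughly 3% loss cited above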

When picture tubes went from round corners to square corners and flat faces in the 1990s, squaring off the corners of a traditional US 19” design allowed a 20” diagonal for something that was virtually the same size. The 19V became the 20V and the 25V became the 26V; the 26 was later dropped in favor of the 27 at the preference of retailers, who needed a bigger gap between the 25 and the current offering. The industry kept selling the 19V and the 25V as loss leaders, but the stores would always try to sell consumers up to the next size, since it “fits in the same space as the smaller screen”. Typically a 20V carried a 10% markup over a 19V. The 27’s markup over the 25 could be even larger, but there were usually some additional features on the 27 to further justify the price.

Finally, manufacturers improved the sound. In an experiment conducted by the MIT Media Lab two decades ago, consumers were asked to compare the pictures of two different TV sets. One was an analog HD set of the era; the second was a 4:3 NTSC set masked off to give a 16:9 picture. The consumers were almost unanimous in picking as the best picture the set that had the best sound, independent of the aspect ratio of the picture. The “Home Theater” concept was, essentially, an effort to capitalize on this sound/picture quality confusion.



Lower Costs

A VP at Intel often tells the story of a dinner party he attended where 12 guests showed up but there were only 10 steaks. When asked what happened, he said, “I don’t know, but I got my two.” Although the industry was never wildly profitable, and a key element of the BOM (the CRT glass bottle) had a cost roughly tied to energy prices, the industry did manage to reduce costs on the same scale as general inflation. The main contributors to cost reduction across this period were lower labor costs from more automated production, the movement of set production to low-wage Mexican plants, and the streamlining of the sales distribution chain.

In general, the glass was about half the cost of the TV tube, and the tube about a third of the price of a finished set in the store, so the glass was roughly a sixth of the retail price. While a key input to the glass was the natural gas to run the glass-making furnaces, the industry did benefit from improvements in production scale, such as moving to larger furnaces and using robotics throughout the process. Tube making also benefited from a move to larger factories, but the real cost reduction happened beyond the finished-tube level. Set making was never very capital intensive; indeed, Heathkit offered TVs that consumers could build themselves. Set making did benefit from shedding the furniture of the console TV set and the resulting decline in shipping and inventory costs. However, one of the larger reductions in TV set costs came from the consolidation of the sales distribution chain. As the sale of electronics moved from Mom-and-Pop shops to the superstores of today, an entire level of distribution was cut out of the chain. This elimination of the distributors, along with their costs and profit margins, occurred in the early 1990s, so it was the distributors who didn’t get any steak.

And finally, the CRT makers did one other significant thing for cost reduction: they voluntarily implemented post-consumer product recycling. This program extended beyond their own branded products to include CRTs generically. Although the program was initiated from outside the industry by Digital Equipment (DEC), it was Sony that insisted it be implemented as a cost reduction. CRT tube making never climbed much above a 95% yield, so tube makers had to dispose of 5% of their production as hazardous waste. Beyond the cost, the CRT waste was a barrier to the full implementation of Sony’s zero-emissions objectives. Under the program that was created, Corning paid for post-consumer waste directly and accepted the tube-manufacturing fallout from its customers at no cost. The absence of hazardous-waste disposal fees allowed the glass to move in regular shipping containers rather than as hazardous waste.


Trinitron

It would not be appropriate to have a discussion about TV technology or TV pricing without at least mentioning the Trinitron. The Trinitron was left out of the foregoing discussion because, although it was a significant improvement in the TV picture, it was shortly eclipsed by black matrix. Although the Trinitron offered a continuing advantage in that it could be run at brighter levels, and it made for an inherently flatter tube (a cylindrical rather than a spherical faceplate), it was an innovation with significant costs, and it sold at a premium to the rest of the market. This was not a factor for the mid-range product, which has been the focus of this discussion.



Looking Forward

“In many of the more relaxed civilizations on the Outer Eastern Rim of the Galaxy, the Hitch Hiker’s Guide has already supplanted the great Encyclopedia Galactica as the standard repository of all knowledge and wisdom, for though it has many omissions and contains much that is apocryphal, or at least wildly inaccurate, it scores over the older, more pedestrian work in two important respects. First, it is slightly cheaper; and secondly it has the words Don’t Panic inscribed in large friendly letters on its cover.”
— The Hitchhiker’s Guide to the Galaxy

The CRT TV was able to survive 40 years of boom and recession at essentially the same size and price points, and current US TV sales seem to be recovering ahead of the economy. Although CRT makers did introduce larger sizes, going up to a 36V for a conventional consumer set, the larger sizes brought more cost with them. For the most part, by making multiple improvements to the product at virtually zero cost, TV manufacturers gave the consumer a continuing stream of added value and new reasons to replace their old TV set, even if it was still working. This article began with the extra diagonal gained by squaring off corners, and in some sense the 20V exemplifies innovation in the TV industry: nothing new went on the standard/mid-range platform unless you could give it away for free. Of course there were other changes to the product beyond those discussed here (the inclusion of Second Audio Program, the V-Chip, etc.), but these were of minor cost and brought only minor impact. The introduction of flat faceplates did add some cost, as it meant a heavier tube, but this happened at the same time the industry ceased using distributors, so costs essentially remained the same.

The market belongs to LCDs now essentially because the CRT makers passed it to them as their final move to maintain pricing. Because the total beam deflection angle of a CRT is generally limited to about 110 degrees, for every inch wider a CRT becomes, it becomes about an inch deeper. Thus a 27V (with a screen 16.2” in height) in HDTV format becomes a 33W, but also adds over 7” of depth in the back, plus some additional bow in the front of the tube. While the wide aspect ratio was the thing that made HDTV significantly different in the consumer’s perception, it was never in the cards that HDTV in 16:9 format would be realized in CRTs; the simple design constraints made a 16:9 CRT too bulky and too expensive to fabricate. Even so, the consumer electronics industry pushed for a 16:9 version of HDTV over the objections of TV broadcasters, and by doing so the CRT makers voted themselves out of existence. The move to wide screens for TV was also accompanied by the move to wide screens for notebooks, which instantly made an entire generation of product look old and obsolete.
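The 27V-to-33W arithmetic can be checked directly. A minimal Python sketch, holding picture height constant per the comparison above:

    import math

    height = 16.2                       # picture height of a 27V 4:3 set (inches)

    w_43  = height * 4 / 3              # 21.6" wide
    w_169 = height * 16 / 9             # 28.8" wide

    d_43  = math.hypot(w_43,  height)   # 27.0" diagonal: the 27V
    d_169 = math.hypot(w_169, height)   # ~33.0" diagonal: the 33W

    extra_width = w_169 - w_43          # 7.2" wider; by the article's
    print(d_169, extra_width)           # inch-per-inch rule, ~7" deeper as well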

A caveat to the foregoing: the first CRT color TVs actually had a 70-degree funnel and considerable bow in the screen itself, but this switched to a 90-degree funnel almost immediately, and the bow was gradually reduced until the early 1990s, when the “Flat/Square” TVs were introduced with virtually no screen bow. Samsung has introduced a line of CRT TVs in Asia with 120-degree funnels. Sales of sets of this type continue to do well in Asia, indicating that the desire for a totally flat device has some limitations, and in the US, development of scan-addressed technology is continuing and shows promise.



From the introduction of the 25 round to today, the standard TV set has always had a screen about 15” tall and has always been priced at about $400.


This is the first recession in which TV has meant LCDs and not CRTs. As it is now configured, the industry sits behind a completely different set of manufacturing organizations with a very different product, so the marketing methodology will differ from before. LCD makers have responded to the recession by offering bigger screens at lower prices rather than by marketing the recent dramatic improvements in LCD picture quality: the LCD response-time problem is essentially fixed, and the recent introduction of LED backlights gives a much more colorful picture than a conventional CRT could produce. So far in the 2008-2009 recession, the industry also seems to have cut production more than was required. In the midst of the doldrums, the industry is seeing spot shortages of 26W and 32W LCD TVs, the standard (at least by height) size TV of the last 40 years, even as the standard size closes in on its formerly standard price. Of course, an HDTV needs to be more than twice as tall as an NTSC TV, when viewed in the same setting, in order to get the full benefit of HD. From 2003 to 2007, as flat panels replaced CRTs, the average price of a TV rose from $400 to $750, where it peaked; $750 is the current typical price of a 32W. It should be noted that the lowest price for a 32W is now about $450, not very different from the 25 round pricing of 44 years ago.

While the current situation is very reminiscent of the last 40 years, it is in some ways unique. The principal thing that is the same is the consumer. Consumers have an idea of what an adequately sized TV set should be and are essentially buying a 16:9 version of the same 25 round from 1964: a 32” 16:9 TV has the same picture height as a 26” 4:3 set, and a 40” 16:9 TV has the same picture height as a 32” 4:3 set. What is also the same is the industry’s cost position. The industry is selling at cost, and while it can and does offer larger sized product, the additional cost that comes with the larger size significantly narrows that market.
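Those same-height equivalences are a one-liner to verify. A minimal Python sketch:

    import math

    def picture_height(diagonal, aspect_w, aspect_h):
        # picture height from the diagonal and the aspect ratio
        return diagonal * aspect_h / math.hypot(aspect_w, aspect_h)

    print(picture_height(32, 16, 9), picture_height(26, 4, 3))  # ~15.7" vs 15.6"
    print(picture_height(40, 16, 9), picture_height(32, 4, 3))  # ~19.6" vs 19.2"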

The main thing that is different is that the consumer’s needs have changed. The 25 round was an adequate size to show an NTSC image, but to get the full benefit of an HD picture the screen needs to be more than twice as tall. So the sets consumers are buying are actually inadequate for the main purpose for which they are buying them: to serve as a living room TV. The other thing that is different is the slowdown in innovation during the recession. Earlier in the industry’s history, slowdowns were met with innovation: the 1968-69 decline in set sales brought Trinitron and black matrix to the market; 1974-75 brought pigmented phosphors; still later, the home theater concept was born within a recession. The industry has been selling contrast ratio as the LCD figure of merit for some time. While 19V/20V pricing demonstrated that consumers can be sold up on a thin spec, the manner in which contrast ratios are measured is meaningless in a practical home living-room setting. Besides, human beings can’t see the incremental value of 10,000:1.

On page 19 of the February 2009 LCDTV Association newsletter, Paul Semenza presents a chart showing that the 32W outsells any other size category by 2:1; nor is the fall-off around it particularly smooth. It could be that the 32W just happens to sit in a sweet spot in the pricing, it could be that that size is what sits well in consumers’ existing AV furniture, or it could be that 40 years of habit have conditioned consumers to expect TVs to be a very particular size. The distribution did not shift much as prices dropped or as the global recession set in.

If, in fact, consumers have been conditioned to expect a particular size of TV set, then that points directly to an opportunity, and a methodology, for moving them to larger sizes. In the attached article, I refer to the “Standard TV”, which is between 15” and 16” tall. The standard encompasses the original TV set as well as the 32W, which is about the same height as a 25” round, only wider. However, as you are aware, with the transition to HDTV the standard is actually too small to serve as a living room set. So it occurs to me that if the industry designated a new “Standard”, a size it can make now in volume, and referred to it as such to the consumer, it would be much easier to sell larger sizes. The 32W would become a 1/2 or 3/4 standard, a natural position to trade up from. Implementing the “Standard” terminology might not be the best approach without some consumer research. What I am asking you to consider is a joint venture with me to do an industry study on trading the consumer up to larger sized TVs.

With the recession, and especially (whenever it happens) coming out of the recession, and with the transition to HDTV, I think there is a prime opportunity to change the way the consumer thinks about TV set sizes. Again, per the attached article, the inconsequential difference between a 19V and a 20V carried a 10% premium and was an easy trade up. The difference between a “3/4 standard” and a “standard” will involve some real costs, but it does provide a natural trade-up route.

Image Post-Processing in an Entertainment LCD

On the non-gaming side, the objective of video display architecture is to work around the display’s limitations with minimal additional cost, energy, and time (time here referring to the response time of the display, not necessarily the time to develop the solution). The solution is, of course, different depending on the display and the application. Even within a particular type of display, a multi-domain TFT LCD has different characteristics from an IPS-mode LCD, so developing a “one size fits all” solution is difficult, and producing a completely optimized solution is impossible without supplementing the display’s Extended Display Identification Data (EDID) with additional information about display type and performance.
In TV display architectures, when an image is provided, the normal process is to convert it from whatever format it arrives in into a YUV image in order to do cosmetic corrections. The usual processing order is:

• noise combing,
• edge identification for edge enhancement,
• motion estimation:
   – for judder recognition and elimination, and
   – for response time improvement,
• facial identification:
   – for ideal facial color mapping, and
   – for facial blurring,
• blue stretch (enabling perceived blues outside the normal color gamut of the display),
• green stretch,
• dynamic contrast enhancement (backlight dimming).

After the image modification is complete, the image is converted to RGB and sent via LVDS to the digital display. While all of this is going on, the audio is being processed and then held in a delay loop: the video processing for a real image takes long enough that there would otherwise be a noticeable timing difference between the audio and the video, so the audio is held until the video is ready. (Gamers will notice that without a gaming mode that turns off the image processing and audio delay, an LCD TV responds at least half a second behind their inputs, spoiling their performance.) Once the video is received by the display, some further response time improvement is provided by the overdrive in the display’s embedded processor. Because different displays have different response times, and because there are two processing circuits handling response time improvement (one on the device side, one on the display side) that do not communicate with each other, some tuning must be done to ensure that the combination of device-side and display-side response time improvement does not produce notable visual artifacts. This is particularly true if the device has to interface with both computing LCDs (normally off-state white) and entertainment displays (usually off-state black). Finally, if the display has a “blinking backlight” or some form of dynamic contrast embedded in its own processing, then this becomes a further area of conflict between the device and the display that must be tuned.
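As a structural sketch of that device-side chain, here is a minimal Python outline. Every stage is a named placeholder standing in for the corresponding step above, not any vendor’s actual algorithm, and the 30-frame delay is simply the text’s half-second figure at 60 Hz:

    from collections import deque

    # Placeholder stages mirroring the list above; each would be real DSP work.
    def to_yuv(f):            return dict(f, space="YUV")
    def comb_noise(f):        return f   # noise combing
    def enhance_edges(f):     return f   # edge identification + enhancement
    def compensate_motion(f): return f   # judder elimination, response-time help
    def process_faces(f):     return f   # facial color mapping + facial blurring
    def stretch_colors(f):    return f   # blue stretch, green stretch
    def dim_backlight(f):     return f   # dynamic contrast decision
    def to_rgb(f):            return dict(f, space="RGB")

    VIDEO_STAGES = [to_yuv, comb_noise, enhance_edges, compensate_motion,
                    process_faces, stretch_colors, dim_backlight, to_rgb]

    DELAY_FRAMES = 30                    # ~0.5 s at 60 Hz, per the text
    audio_buffer = deque()

    def process(frame, audio_sample):
        # Video runs through every stage; audio waits until the video catches up.
        audio_buffer.append(audio_sample)
        for stage in VIDEO_STAGES:
            frame = stage(frame)
        audio_out = audio_buffer.popleft() if len(audio_buffer) > DELAY_FRAMES else None
        return frame, audio_out          # RGB frame out over LVDS, lagged audio

    print(process({"space": "source"}, 0))

The point of the sketch is the shape of the problem: the audio buffer exists only because the video stages take time, which is why a gaming mode that bypasses the video stages also removes the audio delay.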

The Impact of Digital v. Analog Management on Display Image Quality

By Norman Hairston
normh@alum.mit.edu


“Is this the real life? Is this just fantasy?
Caught in a landslide, no escape from reality
Open your eyes, look up to the skies and see…”
— Queen, “Bohemian Rhapsody”

Introduction

Visual artifacts have always plagued the display industry. In the CRT era there were two principal ones, each tied to the digitization of the image. Moiré resulted from the “beating” of the pixelation of the displayed image against any close periodicity in the captured image, e.g. plaid suits. Temporally, the time-slicing digitization of the image frequently resulted in backward-spinning wheels on autos and other anomalous imaging resulting from temporally resonant activity. Interestingly, even though the advertising industry is so obsessive that it demands the ends of any length of spaghetti be buried in any image, it permits backward-spinning wheels in million-dollar auto commercials, because there is nothing it can do about them.

In the modern era, as digitization has progressed, and more importantly as digital image manipulation has progressed, the number and extent of digital artifacts has grown. This article discusses digital artifacts and their proliferation as the display industry has migrated from being analog/visual (human-factors centered) to being computing intensive. Instead of building a better display, the industry now looks to software modifications to improve the image.

In the history of the color CRT there were the inventions of the Trinitron, black matrix, high-voltage phosphors, and high-black-level glass: improvements to the native display. Early in the development of LCD technology there was the development of IPS (by RCA) and its resurrection (by Hitachi), also fundamental improvements to the basic display device. Although other technologies have since been developed that match IPS, it was IPS that showed the potential of LCD technology to equal or exceed the CRT. Interestingly, at the time, Hitachi LCD was part of Hitachi Electron Devices (HED), the old Hitachi CRT organization, rather than part of Hitachi’s semiconductor or any other fundamentally digital organization. Although there have been subsequent improvements in image quality, and certainly drastic improvements in cost and product quality, this was the last quantum leap in LCD visual quality.

The issue with LCD display technology development has been an increasing reliance on software modifications and further digitization to improve the image. The industry increasingly relies on “driver firmware upgrade 9.991” rather than looking at how to improve the base technology. The result has been incremental improvements in the image when displaying unchallenging HDTV content, but increasing visual artifacts when displaying challenging images. Further, the LCD TV image modification software makes pre-judgments about what the user likes, which may or may not correspond to what specific users want to see.


LCD/HDTV

Regarding static images, as I have noted in other missives, LCD TV firmware has facial recognition processing that treats faces differently from the rest of the picture. This software can have two negative effects, one tied to each of the two things the facial recognition processing does.

First there is the ideal color correction. As I mentioned in another article, what is ideal for one culture may be anathema for another. The ideal color correction for a Japanese person (northwest on the CIE x-y plane, toward the white point) is almost 180 degrees opposite that for a western Caucasian (toward red). For a mid-range Black American, Zone 7 for those of us with photography backgrounds, no color correction at all may be closer to the ideal. Inevitably, making the color correction will leave some viewers unhappy with the facial images, forcing them to compensate globally across the image, which can leave the facial images of other races severely discolored. The racial impact of facial recognition software was recently brought home to camera makers with their implementation of facial software for “red eye” correction. Some cameras were noticeably unable to recognize the eyes of Asian people, bringing charges of racism upon the makers even though the makers were all from Asian countries. Obviously the problem was not racism but myopia in their development organizations.
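To make the opposite-directions point concrete, here is a minimal sketch in Python. The white point is D65 and the red target is the sRGB red primary; the skin chromaticity and the 20% correction strength are made-up illustrative values, not any set’s actual tuning:

    def shift_toward(xy, target, amount):
        # Move a chromaticity point a fraction of the way toward a target.
        return tuple(c + amount * (t - c) for c, t in zip(xy, target))

    D65     = (0.3127, 0.3290)   # white point
    RED     = (0.6400, 0.3300)   # sRGB red primary
    skin_xy = (0.40, 0.36)       # hypothetical measured face chromaticity

    print(shift_toward(skin_xy, D65, 0.2))   # "toward white" correction
    print(shift_toward(skin_xy, RED, 0.2))   # "toward red" correction: the
                                             # two pull in opposite directions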

Along with facial color correction, LCD TV firmware often includes a soft focus. When the software is working well, this gives an overall more pleasing image of faces, removing detail such as small wrinkles, other skin imperfections, or even five o’clock shadow. However, when a face is in motion, or the software is otherwise having trouble recognizing the face as a face, the soft focus is applied intermittently. This can result in moles and facial stubble appearing and disappearing as the person moves or turns their head. As with many of the dynamic impacts of image processing, it is not so much the effect as the effect turning itself on and off as the image changes that irritates the viewer. These disappearing and reappearing facial features are most common on non-HD content, leading me to further believe in the myopia of the development organizations: there is not the thorough testing on standard-resolution content (even though most content is still standard resolution) that there should be. Additionally, with standard-resolution content showing up as soft focus on an HDTV anyway, due to the smearing out of the pixels, adding a further soft focus to faces can give them a cartoonish appearance, devoid of normal skin texture.

Beyond faces, there is the impact of color management in general. Due to the digitization of the image, most TVs process at least 36-bit color, if not higher, to avoid the impact of digitization on displayed color levels. However, most LCD TVs still exhibit considerable posterization in images with subtle color shifts. Thirty-six bits should be more than enough to cover any loss of information in the digitization process. My suspicion is that the posterization is a result of the “blue stretch”, “green stretch”, or other parts of the color management software, but I do not know this.
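The arithmetic behind that suspicion is simple enough to show. A minimal Python sketch of per-channel quantization step sizes: if 12 bits per channel leaves steps this fine, visible banding more likely comes from the processing than from quantization itself:

    # Per-channel levels and step size on a normalized 0..1 gradient.
    for bits in (8, 10, 12):             # 24-, 30-, and 36-bit color
        levels = 2 ** bits
        print(f"{bits} bits/channel: {levels} levels, step = {1 / (levels - 1):.6f}")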

Returning to the disappearing and reappearing detail: the LCD has traditionally struggled with loss of detail in motion. A large part of this has been due to LCD gray-to-gray response times. However, it is important to remember that the HDTV signal itself loses detail during high-speed motion as a consequence of its data compression. LCD TV makers have addressed the response time issue by increasing the refresh rate from 60 Hz, the same as the input image, to 120, 240, and now 480 Hz. Though the higher refresh rates do improve the detail in fast-motion images, they create some detail as well, as does the edge enhancement firmware.
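Those higher refresh rates imply synthesized in-between frames. The sketch below uses a naive linear blend, far simpler than the motion-estimated interpolation real sets perform, but it shows where the manufactured frames (and thus the manufactured detail) come from:

    def interpolate(frame_a, frame_b, factor):
        # Synthesize factor-1 frames between two real 60 Hz frames.
        # Frames are flat lists of pixel values for simplicity.
        frames = []
        for i in range(1, factor):
            t = i / factor
            frames.append([(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)])
        return frames

    # 60 Hz -> 240 Hz: three synthetic frames between each real pair.
    print(interpolate([0, 0, 0], [120, 120, 120], 4))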

Further, all of this visual processing, particularly waiting for subsequent images in order to draw artificial intermediate images, creates considerable time delay. In higher-end sets, the user is able to turn off some or all of the visual processing to improve the gaming response of the TV set. The need for a “gaming mode” is, perhaps, a sign that digital image processing is not a panacea for the visual shortcomings of the technology as it now stands. In general, the image processing firmware on many LCD TVs is like any other piece of bad software: it guarantees the very negative outcomes, the equivalent of system crashes, that it was designed to prevent.

Beyond the time delay, disappearing and reappearing image details, and color distortion, there is also gross image distortion of non-HD content. Being short and stubby myself, it is somewhat pleasing to see everyone on TV become horizontally expanded… but not really.



Summary

In short, it is important to remember that every digital manipulation creates its own set of digital artifacts. While improvements in image processing firmware may improve ideal images, the overall improvement they make in image quality is limited and may actually be negative in some categories. In the CRT era, improving the image meant improving the display; the industry no longer seems to emphasize this.

There are at least two intermediate solutions to these problems. At Intel, the company developed a display technology improvement to advance the energy efficiency of notebook LCDs, one that traded temporal resolution for spatial resolution in static images. Coupled with this, the team developed throttling technology: the ability to slow down or turn off the effect based on what the image was doing. For the TV set industry, it would be beneficial to recognize the type of content being displayed and its aspect ratio, to maintain some concept of object consistency longer than two consecutive frames, and to use this to restrain some of the image modification firmware.

A simpler, and perhaps more effective, solution would be to enable the viewer to easily select or turn off (as is the case with gaming mode) some of the image processing. Such an implementation would be simple, costless, and direct. Beyond this, the display device makers need to reduce the emphasis on firmware revision and focus on native display performance: genuine innovation. The active matrix LCD was invented in the US, and many of the subsequent fundamental improvements, such as IPS, were pioneered here. There are a few US-based display startups, such as Unipixel, that are developing fundamentally newer and better displays, but they are not attached to large ongoing display production operations, as was the case with Westinghouse (developer of the active matrix LCD) and RCA’s Sarnoff Labs. While the electronics companies that populate the industry today are excellent at driving down costs and improving quality, they have largely yet to prove themselves in fundamental innovation. Without this fundamental innovation, we get slightly better HD LCD TV pictures at the cost of increasing visual artifacts and sometimes poorer performance with standard-definition content.