Video Processors Incorporate AI to Enhance TV Images
The initial role of a TV video processor was simply to take the incoming signal and split it into two streams, one for the even rows of the display and one for the odd rows. This process, called interlacing, let a CRT paint the odd rows before the phosphors of the even rows had dissipated, rather than painting the entire screen at once, which would have allowed the image to fade before it was completely redrawn (flicker).
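As a loose illustration (a Python/NumPy sketch, not actual TV firmware), splitting a progressive frame into the two fields described above looks like this:

```python
import numpy as np

def split_into_fields(frame: np.ndarray):
    """Split a progressive frame (rows x cols) into two interlaced fields.

    The even field holds rows 0, 2, 4, ... and the odd field rows 1, 3, 5, ...
    (0-indexed here; broadcast conventions number scan lines from 1).
    """
    even_field = frame[0::2, :]   # rows 0, 2, 4, ...
    odd_field = frame[1::2, :]    # rows 1, 3, 5, ...
    return odd_field, even_field

# Example: a 480-line frame yields two 240-line fields, painted alternately
# so the phosphors never fully fade between passes.
frame = np.random.rand(480, 640)
odd, even = split_into_fields(frame)
assert odd.shape == (240, 640) and even.shape == (240, 640)
```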
With the advent of digital TV displays and their LED backlights, set designers developed techniques to reduce the negative effects of bright LEDs by dimming them or turning them off (local dimming), which required the video processor to determine which LEDs should be dimmed, at what points, and to what levels. Since video processors were already examining the details of the video signal, designers translated that information into signals that control the brightness of the backlight LEDs. With only a relatively small number of LEDs in a direct-lit backlight, the familiar 'gray' and 'bloom' problems still occurred. Until the advent of Mini LEDs, high-end TV sets had only 10 or 20 dimming zones, which increased contrast and reduced bloom, but once OLED TVs became popular, LCDs could not replicate their contrast ratio.
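The basic mechanics can be sketched as follows; this is a generic heuristic for illustration, not any particular vendor's algorithm, and the zone grid and percentile are assumptions:

```python
import numpy as np

def zone_backlight_levels(luma: np.ndarray, zones=(4, 5), percentile=95):
    """Map per-pixel luminance (0..1) onto a coarse grid of backlight zones.

    Each zone is driven by a high percentile of the luminance inside it:
    bright content keeps the zone fully lit, dark content lets it dim
    toward black. A direct-lit set with ~20 zones would use zones=(4, 5).
    """
    rows, cols = luma.shape
    zr, zc = zones
    levels = np.zeros(zones)
    for i in range(zr):
        for j in range(zc):
            block = luma[i * rows // zr:(i + 1) * rows // zr,
                         j * cols // zc:(j + 1) * cols // zc]
            levels[i, j] = np.percentile(block, percentile)
    return levels
```

With so few zones, a bright object in a mostly dark zone forces the whole zone up, which is exactly where the 'gray' blacks and 'bloom' halos come from.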
Video processors can now break an image down into a map of every pixel and evaluate the dimming zone containing each pixel, but any TV still has far more pixels than Mini LEDs, so the processor uses AI to determine the most appropriate balance of luminance per zone, improving the intelligence of the video processing.
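The other half of that balance is boosting the LCD pixels to offset a dimmed backlight; a hedged sketch of the compensation step (again generic, with an assumed zone grid that divides the frame evenly) is:

```python
import numpy as np

def compensate_pixels(luma, zone_levels, zones=(4, 5), floor=0.05):
    """Boost LCD pixel values to offset a dimmed backlight.

    Displayed luminance is roughly (backlight level x pixel transmittance),
    so each pixel is divided by its zone's level to stay near the target.
    Values clip at 1.0, which is where the 'gray'/'bloom' compromises appear.
    Assumes the frame divides evenly into the zone grid.
    """
    rows, cols = luma.shape
    zr, zc = zones
    # Expand the coarse zone map back up to per-pixel resolution.
    backlight = np.kron(zone_levels, np.ones((rows // zr, cols // zc)))
    return np.clip(luma / np.maximum(backlight, floor), 0.0, 1.0)
```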
In this case, the AI evaluates a frame and 'remembers' what it saw. By compiling and retaining that data, it builds an understanding (learns) of what neighboring pixels are likely to be doing in a given circumstance, so when it sees a familiar pixel pattern it does not have to evaluate the entire image and instead sets the LEDs the way it has learned is most common. The result is an image that is closer to the target in luminance and color point, but it still cannot replicate the OLED image, which matches the target exactly.
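Purely as a toy illustration of the 'remember and reuse' behavior described above (real AI picture processors are trained offline on large image libraries rather than caching frames at runtime), the idea looks roughly like this:

```python
import numpy as np

class ZonePatternMemory:
    """Toy illustration of 'remembering' familiar pixel patterns.

    Downsampled frame signatures are cached along with the zone levels that
    were computed for them; a sufficiently similar new frame reuses the
    cached levels instead of re-evaluating every pixel.
    """
    def __init__(self, tolerance=0.02):
        self.patterns = []        # list of (signature, zone_levels) pairs
        self.tolerance = tolerance

    def lookup(self, signature):
        for sig, levels in self.patterns:
            if np.mean(np.abs(sig - signature)) < self.tolerance:
                return levels     # familiar pattern: reuse the learned levels
        return None               # unfamiliar: fall back to full evaluation

    def remember(self, signature, levels):
        self.patterns.append((signature, levels))
```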
Sony, the leader in video processing, is touting the next level, which it calls 'cognitive intelligence': looking at the whole image rather than breaking it down into dimming zones on a pixel-by-pixel basis, and concentrating processing power on the key parts of the image rather than spreading it across the entire frame. Sony says this mimics the way a human views an image, focusing on what is most important. Sony hopes to separate itself from the rest of the 'AI' crowd and has produced a 45-second promo video:
https://youtu.be/-EhB7dUJ29g
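Sony has not published how its cognitive processor decides what is 'most important,' so the following is only a conceptual sketch of weighting processing toward an assumed focal region, not Sony's method:

```python
import numpy as np

def focus_weight(luma, focus_center=(0.5, 0.45), spread=0.25):
    """Crude per-pixel weight map that emphasizes an assumed focal region.

    A Gaussian centered on the assumed point of interest stands in for a
    real saliency model; zones covered by high weights would receive finer
    processing than the periphery.
    """
    rows, cols = luma.shape
    y, x = np.mgrid[0:rows, 0:cols]
    cy, cx = focus_center[0] * rows, focus_center[1] * cols
    dist2 = ((y - cy) / rows) ** 2 + ((x - cx) / cols) ** 2
    return np.exp(-dist2 / (2 * spread ** 2))
```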