Approved
Techniques for Deep Colors in Imaging
Shahid Mehmood () and Ling Ma ()
Start: 2000-09-01
Presentation: 2010-03-12 10:15
Location:
Finished: 2010-04-29
Master's thesis: (Contact supervisor)
Abstract
Today, most image processing and storage in digital devices is limited to 8 bits per color component, normally referred to as True Color. Technological developments have made it feasible to use more than 8 bits per color component, called Deep Color. This increases precision and can extend the gamut. However, there is a high price to be paid in hardware complexity, which may increase exponentially compared to True Color. In this thesis, different aspects of 16-bit Deep Color are examined. To extend the gamut, the xvYCC color space can be used, which encompasses more colors and has a larger gamut than sRGB. By using the xvYCC color space for internal image processing instead of sRGB, more color information can be preserved and more detail is kept in the output image. However, the output image cannot be shown directly on current displays, because they only support the sRGB color space. An extra step, gamut mapping from xvYCC to sRGB, must therefore be taken before sending the image to the display. For this, six gamut mapping algorithms are investigated and evaluated: clipping, linear compression, piecewise linear mapping, soft clipping, GCUSP, and a modified GCUSP. In addition, hardware architectures are proposed for these algorithms to reduce the mapping time. To increase precision, techniques for introducing Deep Color in video and image standards are investigated. For this, the color-manipulating operations of the OpenWF-C standard are selected. OpenWF-C is an open-standard composition pipeline that can efficiently provide artifact-free composition of multiple graphical sources, including video and 2D content types. The implementation of the different color-manipulating operations of OpenWF-C is investigated individually with different options, such as architectures and bit widths.
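As a rough illustration of the simplest of the six algorithms, hard clipping saturates each out-of-gamut component to the nearest legal sRGB value. The sketch below (plain Python, with hypothetical 16-bit integer pixel values) only conveys the general idea and is not the thesis implementation:

```python
def clip_to_srgb(rgb, bits=16):
    """Hard clipping: saturate each component into the legal sRGB range.

    xvYCC-decoded values can fall outside [0, 2**bits - 1]; hard clipping
    simply clamps them. This loses detail near the gamut boundary but
    needs almost no hardware (roughly one comparator pair per channel).
    """
    max_val = (1 << bits) - 1
    return tuple(min(max(c, 0), max_val) for c in rgb)

# A decoded pixel with out-of-range components (hypothetical values):
clip_to_srgb((70000, -1200, 30000))  # -> (65535, 0, 30000)
```

The more elaborate algorithms (soft clipping, GCUSP, and the rest) trade extra arithmetic for smoother compression of out-of-gamut colors instead of this abrupt cut-off.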
For each manipulating operation, a feasible hardware architecture is proposed with the minimum resources required to achieve an average error (AE) < 1 LSB and a peak error (PE) < 2 LSB for both the True Color and Deep Color cases. Then an image composition pipeline based on the OpenWF-C standard, with the addition of linearization and non-linearization blocks, is designed with minimum resources to achieve AE < 2 LSB and PE < 3 LSB. These two blocks are added to increase the quality of the image composition. Although PE and AE are used for the investigation, PSNR results are also given, so that the resources required to reach a specific PSNR can be read from the simulation results presented in each section. The hardware complexity of switching from True Color to Deep Color is examined in detail.
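The three quality metrics used above can be stated concretely. The following sketch (an assumption about their usual definitions, not code from the thesis; an 8-bit peak value of 255 is assumed for PSNR) computes AE, PE, and PSNR between a reference image and a reduced-precision hardware approximation:

```python
import math

def error_metrics(reference, approx):
    """Absolute-error statistics between a reference image and a
    reduced-precision approximation (both as flat integer pixel lists).

    AE   = mean absolute error in LSBs (thesis target: AE < 1 LSB)
    PE   = peak absolute error in LSBs (thesis target: PE < 2 LSB)
    PSNR = peak signal-to-noise ratio in dB, against a peak of 255.
    """
    errs = [abs(r - a) for r, a in zip(reference, approx)]
    ae = sum(errs) / len(errs)
    pe = max(errs)
    mse = sum(e * e for e in errs) / len(errs)
    psnr = float('inf') if mse == 0 else 10 * math.log10(255 ** 2 / mse)
    return ae, pe, psnr

# Hypothetical 4-pixel example: two pixels off by 1 LSB.
ae, pe, _ = error_metrics([10, 20, 30, 40], [10, 21, 29, 40])
# -> ae == 0.5, pe == 1
```

AE and PE bound the worst-case rounding behavior per operation, while PSNR summarizes overall image fidelity for the full pipeline.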
Supervisor: Per Trane (ST-Ericsson)
Examiner: Joachim Rodrigues (EIT)