This is what the components represent:
* Y = pseudo luminance, or intensity
* Co = "orange chrominance"
* Cg = "green chrominance"
In contrast to YCbCr, this colorspace is not modeled on human perception.
This colorspace was invented so that encoding techniques similar to those for YCbCr can be applied to frames in RGB colorspace. The transform from RGB to YCoCg is lossless when the YCoCg representation has two more bits available than the RGB one. This way it is possible to losslessly transform a 30-bit RGB frame into 32 bits in YCoCg 4:4:4 and back.
lossless: 10 bits for each RGB component <=> 10 bits for Y and 11 bits for each chrominance component
Sometimes this colorspace is called YCoCg-R because of the lossless, reversible transformation. The original, now outdated algorithm could not restore the RGB values exactly, because it used only as many bits for the YCoCg representation as for the RGB one. That algorithm isn't used anymore.
Like with YCbCr it is also possible to use different sized planes for each component. Thus, every pixel in an image of a YCoCg encoded frame is associated with one Y sample, but possibly groups of pixels share Co and Cg samples.
So, for instance, these encodings are possible:
- YCoCg 4:4:4
- YCoCg 4:2:2
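To make the sample sharing concrete, here is a small sketch of how many samples each plane holds for a given frame size. The helper `plane_sizes` and its `(h_factor, v_factor)` parameter are hypothetical names chosen for illustration; in 4:2:2 the chroma planes have half the horizontal resolution.

```python
def plane_sizes(width, height, subsampling):
    """Return (Y, Co, Cg) sample counts for a frame.

    subsampling is a hypothetical (h_factor, v_factor) pair for the
    chroma planes: (1, 1) for 4:4:4, (2, 1) for 4:2:2.
    """
    h, v = subsampling
    y_samples = width * height              # one Y sample per pixel
    chroma_samples = (width // h) * (height // v)  # shared Co/Cg samples
    return y_samples, chroma_samples, chroma_samples

# For a 1920x1080 frame:
# 4:4:4 -> every pixel has its own Co and Cg sample
# 4:2:2 -> pairs of horizontally adjacent pixels share Co and Cg
full = plane_sizes(1920, 1080, (1, 1))
sub = plane_sizes(1920, 1080, (2, 1))
```

With 4:2:2 the two chroma planes together are only as large as the Y plane, so the frame needs two thirds of the storage of 4:4:4.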
The algorithm is described in this paper: YCoCg(-R) Color Space Conversion on the GPU.
And in the Dirac specification in section F.1.5.2.
From RGB to YCoCg:
    Co = R - B
    t  = B + (Co >> 1)
    Cg = G - t
    Y  = t + (Cg >> 1)
and back from YCoCg to RGB:
    t = Y - (Cg >> 1)
    G = Cg + t
    B = t - (Co >> 1)
    R = Co + B
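The two transforms above can be sketched directly as integer code; the function names are my own, but the arithmetic is exactly the lifting steps from the text. Because the inverse recomputes the same intermediate `t` and undoes each step in reverse order, the truncating shifts cancel and the round trip is exact:

```python
def rgb_to_ycocg(r, g, b):
    # forward lifting transform (YCoCg-R)
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_to_rgb(y, co, cg):
    # inverse transform: same steps, reversed order and sign
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = co + b
    return r, g, b
```

Note that Co and Cg are signed and need one extra bit each, which is where the "two more bits" for the whole representation come from.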