The RGB Universe

Three images, each bound to its respective space: Colour, Chromaticity, and Hue.

One image bound to all three respective spaces: Colour, Chromaticity, and Hue.

The Colour-Space: RGB
Everyone is familiar with this: it is the additive colour model built on the primaries red, green and blue. It is a 3D model where each unique colour sits at position (x: r, y: g, z: b).

The Chromaticity-Space: R_CG_CB_C
Some people will be familiar with this: it is RGB without luminance, with the brightness removed in a way that doesn't affect the hue or saturation. It is referred to as rg-Chromaticity because its construction from RGB means only two elements are needed to represent all the chromaticity values:

Conversion to:
R_C = \frac{R}{R+G+B}
G_C = \frac{G}{R+G+B}
B_C = \frac{B}{R+G+B}

Conversion from (kind of*):
R = \frac{R_C G}{G_C}
G = G
B = \frac{(1 - R_C - G_C) G}{G_C}

It will always be that R_C + G_C + B_C = 1, so by discarding the blue component we can have unique chromaticities as (x: r', y: g'). This means that rg-Chromaticity is a 2D model, and when converting to it from RGB we lose the luminance, so it is impossible to convert back. *An in-between for this is the colour-space rgG, where the G component preserves luminance in the image.
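As a rough illustration, here is a minimal NumPy sketch of the conversions above (the function names, the 8-bit range and the zero-division guards are my own choices, not anything from WavelengthPro); it keeps the rgG form so the round trip back to RGB is possible:

import numpy as np

def rgb_to_rgg(rgb):
    # rgb: array of shape (..., 3), 8-bit values; returns (R_C, G_C, G)
    r, g, b = [c.astype(np.float64) for c in np.moveaxis(rgb, -1, 0)]
    total = r + g + b
    total[total == 0] = 1.0                      # black pixels: avoid dividing by zero
    return np.stack([r / total, g / total, g], axis=-1)

def rgg_to_rgb(rgg):
    # the inverse is only possible because the raw G channel was kept as luminance
    r_c, g_c, g = np.moveaxis(rgg, -1, 0)
    g_c = np.where(g_c == 0, 1.0, g_c)           # guard against division by zero
    r = r_c * g / g_c
    b = (1.0 - r_c - g_c) * g / g_c
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)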

The Hue-Space: R_HG_HB_H
No one uses this; I just thought it would be fun to apply the same idea as above and extract the saturation from R_CG_CB_C. Like R_CG_CB_C it is a 2D model. This seems strange because it is only representing one attribute (hue), but it is because the elements themselves have a ternary relationship (how much red, how much green, how much blue), and so to extrapolate one you must know the other two.

Conversion to 3-tuple Hue:
M = \text{Max}(R,G,B)
m = \text{Min}(R,G,B)
\delta = 255/(M-m)
R_h = (R-m) \delta
G_h = (G-m) \delta
B_h = (B-m) \delta

Normalise to 2D:
R_H = \frac{R_h}{R_h + G_h + B_h}
G_H = \frac{G_h}{R_h + G_h + B_h}
B_H = \frac{B_h}{R_h + G_h + B_h}
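A minimal NumPy sketch of both steps (8-bit input assumed; grey pixels, where M = m, have no hue, so the guard below is an arbitrary choice of mine):

import numpy as np

def rgb_to_rg_hue(rgb):
    rgb = rgb.astype(np.float64)
    M = rgb.max(axis=-1, keepdims=True)
    m = rgb.min(axis=-1, keepdims=True)
    spread = np.where(M == m, 1.0, M - m)        # delta = 255 / (M - m), guarded for greys
    hue3 = (rgb - m) * (255.0 / spread)          # 3-tuple hue: (R_h, G_h, B_h)
    total = hue3.sum(axis=-1, keepdims=True)
    total = np.where(total == 0, 1.0, total)
    hue3 = hue3 / total                          # normalise so R_H + G_H + B_H = 1
    return hue3[..., 0], hue3[..., 1]            # B_H is redundant: 1 - R_H - G_H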

Measuring Hue Distance
The HSL colour-space records hue as a single element, H, making measuring distance as easy as \Delta H = \sqrt{(H_a - H_b)^2}, whereas in rg-Hue we have two elements, so \Delta H = \sqrt{(R''_a - R''_b)^2 + (G''_a - G''_b)^2}, where R'' = R_H and G'' = G_H for readability. What's interesting here is that it works almost the same. Though it should be noted that on a line only two points can be equidistant from a given hue at one time, whereas in rg-Hue, on a 2D plane, there are whole circles of equidistant points.
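In code the two measures are just the 1D and 2D Euclidean distances; a quick sketch (the function names are mine):

def hsl_hue_distance(h_a, h_b):
    # single-element hue: distance on a line
    return abs(h_a - h_b)

def rg_hue_distance(rh_a, gh_a, rh_b, gh_b):
    # two-element hue: distance on the (R_H, G_H) plane
    return ((rh_a - rh_b) ** 2 + (gh_a - gh_b) ** 2) ** 0.5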

Below are images of an RGB test card where each pixel's hue has been measured against a colour palette (60° rainbow) and coloured with the closest match. The rg-Hue measure has a notable consistency to it and shows more red on the right-hand side than HSL, but between the yellow and the red there is also a tiny sliver of purple. I believe this comes from equidistant hues and the nature of looking through a list for the lowest value when there are multiple lowest values (a sketch of that nearest-match lookup follows the images below):

Hue Distance (HSL): HSL Measure
Hue Distance (R_HG_HB_H): rg-Hue Measure
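To make the tie-breaking point concrete, here is a hypothetical version of that nearest-match lookup; the palette format is an assumption of mine, not how WavelengthPro stores its 60° rainbow:

def closest_palette_hue(rh, gh, palette):
    # palette: list of (name, (R_H, G_H)) pairs; ties resolve to whichever
    # entry appears first in the list, which is one way a stray colour like
    # the purple sliver can win on an exactly equidistant hue
    best_name, best_dist = None, float("inf")
    for name, (p_rh, p_gh) in palette:
        d = ((rh - p_rh) ** 2 + (gh - p_gh) ** 2) ** 0.5
        if d < best_dist:                        # strict '<': first entry wins ties
            best_name, best_dist = name, d
    return best_name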

Luminance Mapping: UV and Thermal

This is a very simple function of WavelengthPro, using the luminance of one image and the hue/saturation of another. All it does is convert from RGB-space to HSL-space and then use the L value (lightness) of a different image. In the table below I use a visible-light image and an ultraviolet image (I got them from here) and map them in two ways. The first is not a luminance map but a GBU map; the next is a luminance map, which keeps the colours of the visible light and shows them at the lightness of the ultraviolet image.
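As an illustration of the idea (not the program's actual code), here is a small sketch using Pillow and the standard-library colorsys module; it keeps hue and saturation from one image and takes L from the other:

import colorsys
from PIL import Image

def luminance_map(colour_img, lightness_img):
    colour = colour_img.convert("RGB")
    light = lightness_img.convert("L").resize(colour.size)   # lightness source
    out = Image.new("RGB", colour.size)
    for x in range(colour.width):
        for y in range(colour.height):
            r, g, b = [v / 255.0 for v in colour.getpixel((x, y))]
            h, _, s = colorsys.rgb_to_hls(r, g, b)            # keep hue and saturation
            l = light.getpixel((x, y)) / 255.0                # lightness from the other image
            rgb = colorsys.hls_to_rgb(h, l, s)
            out.putpixel((x, y), tuple(int(round(c * 255)) for c in rgb))
    return out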

Wall - VIS: visible image, can't see the graffiti well.
Wall - UV: ultraviolet image, shows up the graffiti well.
Wall - GBU: GBU map, quite a nice colour palette and shows the graffiti quite well.
Wall VIS at UV intensity: luminance map, has a "Human hue" whilst showing up the graffiti really well.

It is great for showing the ultraviolet characteristics of light whilst keeping the 'true-colour' feel that we've evolved to love. This idea isn't new, though: night vision sometimes incorporates visible and thermal bands fused together, and computer vision sometimes needs more than visible colour data to interpret a scene. One (less scientific) use is to make thermal imaging look a bit more lovely. Below is a visible image, a thermal image and the luminance map (I got them from here):

hand_in_bag hand_in_bagtherm hand_in_bag WITS map

Full Spectrum Photography: Mapping

This post shows three examples of full-spectrum mapping methods for multispectral photography. I've used some quick shots I took in between rain clouds, so I apologise for the poor quality, especially the infrared image. All shots were taken on a converted D70.

Infrared (720nm filter): small flower - IR
Visible light (Hoya cut filter): small flower - VIS
Ultraviolet (Baader-U filter): small flower - UV

The only map I see often is the classic 3to3 map. Its characteristics are such that vegetation stands out in a very prominent red, nectar guides are clear-cut and clear skies are a strong blue. The next map is weighted, roughly, 5to3 by the proportion of the spectrum each [I,R,G,B,U] component covers as wavelengths. The output shows the nectar guide enough for it to be noticeable, but clearly less so. The last map is my favourite so far: it is a 5to3 map that distributes the [I,R,G,B,U] channels in equal proportions. It dulls the bright red vegetation caused by infrared in the red channel and shows the nectar guide a little better than the previous map. (A code sketch of this equal map follows the table below.)

Map Type Channels & Output

Classic IR-VIS-UV
R: IR
G: VIS
B: UV
RGB colour map - IVU flower - IVU

Proportional IRGBU
R: (IR + IR + (IR * 0.33)) * 0.42
G: ((IR * 0.66) + R + (G * 0.66)) * 0.42
B: (UV + B + (G * 0.33)) * 0.42
RGB colour map - Proportional Proportional Wavelength Distribution

Equal IRGBU
R: (IR + (R * 0.66)) * 0.60
G: ((R * 0.33) + G + (B * 0.33)) * 0.60
B: (UV + (B * 0.66)) * 0.60
RGB colour map - Equal flower - IRGBU
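For concreteness, here is a NumPy sketch of that equal 5to3 map, assuming ir and uv are greyscale arrays and vis is an RGB array of the same size (the clipping back to 8-bit is my own choice):

import numpy as np

def equal_irgbu(ir, vis, uv):
    ir = ir.astype(np.float64)
    uv = uv.astype(np.float64)
    r, g, b = [c.astype(np.float64) for c in np.moveaxis(vis, -1, 0)]
    out_r = (ir + r * 0.66) * 0.60               # R: (IR + (R * 0.66)) * 0.60
    out_g = (r * 0.33 + g + b * 0.33) * 0.60     # G: ((R * 0.33) + G + (B * 0.33)) * 0.60
    out_b = (uv + b * 0.66) * 0.60               # B: (UV + (B * 0.66)) * 0.60
    return np.clip(np.stack([out_r, out_g, out_b], axis=-1), 0, 255).astype(np.uint8)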

Testing Infrared Software

This post is to show one of the features of WavelengthPro, some photography software I'm writing at the moment. It's in its early stages; I hope to add a lot more.

Channel Map Templates
I plan on having a basic and an advanced way of mixing channels; so far I've done the basic version, where you choose template maps. The advanced version will use percent sliders of every channel for every channel, just like in Photoshop or GIMP etc. Below is a table showing the three starting images (all taken on a full-spectrum D70 using 720nm, Hoya UV/IR cut and Baader-U filters) and some of the possible mixtures using the program. (A generic sketch of how these template maps combine channels follows the table.)

Infrared: Pan IR
Visible: Pan VIS
Ultraviolet: Pan UV
Output image, mapping information, and the result of applying Auto-WB:

EIR building
IRG 3to3 map: [R:ir, G:r, B:g]
Auto-WB: gEIR building

Equal split IRGB building
IRGB 4to3 map: [R:ir+(r*0.33), G:(r*0.66)+(g*0.66), B:(g*0.33)+b] * 0.75
Auto-WB: gBoxsplit IRGB building

IRGB building
IRGB 4to3 map: [R:ir, G:(r+g)/2, B:b]
Auto-WB: gIRGB building

IR-VIS-UV building
IR-VIS-UV 3to3 map: [R:ir, G:vis, B:uv]
Auto-WB: gIR-VIS-UV building

IRGBU building
IRGBU 5to3 map: [R:(ir+r)/2, G:g, B:(b+uv)/2]
Auto-WB: gIRGBU building

Boxsplit IRGBU building
IRGBU 5to3 map: [R:ir+(r*0.66), G:(r*0.33)+g+(b*0.33), B:(b*0.66)+uv] * 0.60
Auto-WB: gBoxsplit IRGBU building

GBU building
GBU 3to3 map: [R:g, G:b, B:uv]
Auto-WB: gGBU building

IR-UV building
IR/UV 2to3 map: [R:ir, G:(ir+uv)/2, B:uv]
Auto-WB: gIR-UV building
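Here is a generic sketch of how these template maps could be expressed as per-channel weights (this is my own illustration of the recipes listed above, not WavelengthPro's internals); given greyscale NumPy arrays ir, r, g, b and uv it reproduces, for example, the Boxsplit IRGBU map:

import numpy as np

def apply_map(channels, recipe, scale=1.0):
    # channels: dict of named greyscale arrays, e.g. {'ir': ..., 'r': ..., 'uv': ...}
    # recipe: per-output-channel weights on those named inputs
    height, width = next(iter(channels.values())).shape
    out = np.zeros((height, width, 3), dtype=np.float64)
    for i, name in enumerate("RGB"):
        for source, weight in recipe[name].items():
            out[..., i] += channels[source].astype(np.float64) * weight
    return np.clip(out * scale, 0, 255).astype(np.uint8)

# Boxsplit IRGBU: [R:ir+(r*0.66), G:(r*0.33)+g+(b*0.33), B:(b*0.66)+uv] * 0.60
boxsplit_irgbu = {
    "R": {"ir": 1.0, "r": 0.66},
    "G": {"r": 0.33, "g": 1.0, "b": 0.33},
    "B": {"b": 0.66, "uv": 1.0},
}
# result = apply_map({"ir": ir, "r": r, "g": g, "b": b, "uv": uv}, boxsplit_irgbu, scale=0.60)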