Digital Orthochromatic Photography

I’m playing with ways of emulating early orthochromatic film. If anyone knows of any technical aspects that might help, feel free to suggest ideas – so far I’m just winging it.

Light Response of Film
In terms of the range of light covered, the progression goes: blue-sensitive, orthochromatic, isochromatic, panchromatic (all visible wavelengths), then super-panchromatic.

On the right is a photo of British explorers taken with orthochromatic film. Note that the red on the flag is much darker than the blue area because the film isn’t sensitive to red light. Looking at the spectral sensitivity curves (here) and (here), it’s clear that violet/blue is the predominant band, extending into green/yellow as well.

Test Ideas
I took an image from Google, something with the British flag, and made different Channel Maps for it. The greyscaled versions are the output images.

I used the QSE tool in WavelengthPro to make a 400nm image (the violet/blue peak) and a 580nm image (the green/yellow peak), then made a 2to3 map of the two peaks. The colour response worked well but the data loss from interpolation was too much. Using the original [R,G,B] channels means I won’t lose any detail, so I tried using mainly those and came up with two maps that were pretty good – GBB and GBBV. Below are those maps and the RGB map for comparison.

Images: the greyscaled RGB, GBB and GBBV versions.
The RGB (panchromatic) version: the red cross on the flag is lighter than the blue parts and the sky is dark. The GBBtoRGB version: the cross appears darker, though it’s not really true to the actual spectral response of the film. The GBBVtoRGB version, where V is the interpolated deep-violet image: getting there…
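If you want to play with the same idea, here is a minimal sketch in Python with NumPy and Pillow (my own reconstruction, not WavelengthPro’s code) of producing a GBB-style greyscale from an ordinary RGB photo; the equal-weight greyscale conversion and the file names are assumptions.

# Hypothetical sketch of an orthochromatic-style GBB map (not WavelengthPro's code).
# It rebuilds the image from the green and blue channels only, ignoring red,
# then collapses it to greyscale, mimicking film that is blind to red light.
import numpy as np
from PIL import Image

def gbb_greyscale(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    g, b = rgb[..., 1], rgb[..., 2]
    mapped = np.stack([g, b, b], axis=-1)   # GBB map: output R takes G, output G and B take B
    grey = mapped.mean(axis=-1)             # equal weights are an assumption
    return Image.fromarray(grey.clip(0, 255).astype(np.uint8), mode="L")

gbb_greyscale("ship_visible.jpg").save("ship_ortho_gbb.jpg")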

Luminance Mapping: UV and Thermal

This is a very simple function in WavelengthPro, using the luminance of one image and the hue/saturation of another. All it does is convert from RGB-space to HSL-space, then use the L value (lightness) of a different image. In the table below I use a visible-light image and an ultraviolet image (I got them from here) and map them in two ways. The first is not a Luminance map but a GBU map; the second is a Luminance map, which keeps the colours of the visible-light image but shows them at the lightness of the ultraviolet image.

Visible image: can’t see the graffiti well. Ultraviolet image: shows up the graffiti well.
GBU map: quite a nice colour palette and shows the graffiti quite well. Luminance map: has a “human hue” whilst showing up the graffiti really well.

It is great for showing the ultraviolet characteristics of a scene whilst keeping the ‘true-colour’ feel that we’ve evolved to love. This idea isn’t new though: night vision sometimes fuses visible and thermal bands together, and computer vision sometimes needs more than visible colour data to interpret a scene. One (less scientific) use is to make thermal imaging look a bit more lovely. Below is a visible image, a thermal image and the Luminance map (I got them from here):

Images: a hand in a bag, shown in visible light, in thermal, and as the Luminance map of the two.
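For anyone who wants to try the same trick, here is a rough Python/Pillow sketch of the idea (my own reconstruction, not the WavelengthPro code, and a slow pixel-by-pixel one at that); the file names are just placeholders.

# Sketch of a luminance map: keep the hue/saturation of the visible image,
# take the lightness from the second image (UV or thermal).
import colorsys
from PIL import Image

def luminance_map(vis_path, light_path):
    vis = Image.open(vis_path).convert("RGB")
    lum = Image.open(light_path).convert("L").resize(vis.size)  # used only for lightness
    out = Image.new("RGB", vis.size)
    for x in range(vis.width):
        for y in range(vis.height):
            r, g, b = [v / 255.0 for v in vis.getpixel((x, y))]
            h, l, s = colorsys.rgb_to_hls(r, g, b)
            l = lum.getpixel((x, y)) / 255.0        # swap in the other image's lightness
            r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
            out.putpixel((x, y), tuple(int(round(c * 255)) for c in (r2, g2, b2)))
    return out

luminance_map("wall_vis.jpg", "wall_uv.jpg").save("wall_luminance_map.jpg")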

Art or Craft? Or both!?

Have you guys seen the new TED clip about the history of Arts & Craft?

I’ve recently been doing some programming for a company, HobbiesOnTheWebb.co.uk, and I found out that there are SO many distinctions and subcategories of arts and crafts, crazy amounts! I recommend hobbyists out there check out the site. They actually stock a lot of books/DVDs cheaper than Amazon.

Full Spectrum Photography: Mapping

This post shows three examples of full-spectrum mapping methods for multispectral photography. I’ve used some quick shots I took in between rain clouds, so I apologise for the poor quality – especially the infrared image. All shots were taken on a converted D70.

Images of a small flower: Infrared (720nm filter), Visible light (Hoya cut filter), Ultraviolet (Baader-U filter).

The only map I see often is the classic 3to3 map. Its characteristics are such that vegetation stands out in a very prominent red, nectar guides are clear-cut and clear skies are a strong blue. The next map is a 5to3 map weighted, roughly, by the proportion of the spectrum each [I,R,G,B,U] component covers in wavelength. Its output shows the nectar guide enough for it to be noticeable, but clearly less so. The last map is my favourite so far: a 5to3 map that distributes the [I,R,G,B,U] channels in equal proportions. It dulls the bright red vegetation caused by infrared in the red channel and shows the nectar guide a little better than the previous map. (A rough code sketch of this last map follows the table below.)

Map type, channels and output:

Classic IR-VIS-UV
R: IR
G: VIS
B: UV
Images: RGB colour map (IVU) and the flower output (IVU).

Proportional IRGBU
R: (IR + IR + (IR * 0.33)) * 0.42
G: ((IR * 0.66) + R + (G * 0.66)) * 0.42
B: (UV + B + (G * 0.33)) * 0.42
Images: RGB colour map (Proportional) and the proportional wavelength distribution output.

Equal IRGBU
R: (IR + (R * 0.66)) * 0.60
G: ((R * 0.33) + G + (B * 0.33)) * 0.60
B: (UV + (B * 0.66)) * 0.60
Images: RGB colour map (Equal) and the flower output (IRGBU).
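As promised above, here is a small Python/NumPy sketch of the Equal IRGBU map (my own reconstruction of the arithmetic in the table; it assumes the IR, visible and UV frames are already aligned and the same size, and the file names are placeholders).

# Sketch of the Equal IRGBU 5to3 map from the table above (not WavelengthPro itself).
import numpy as np
from PIL import Image

def load(path, mode):
    return np.asarray(Image.open(path).convert(mode), dtype=np.float32)

ir  = load("flower_ir.jpg", "L")     # 720nm infrared shot
uv  = load("flower_uv.jpg", "L")     # Baader-U ultraviolet shot
vis = load("flower_vis.jpg", "RGB")  # visible-light shot
r, g, b = vis[..., 0], vis[..., 1], vis[..., 2]

# Each of the five inputs gets an equal share of the output spectrum.
out = np.stack([
    (ir + r * 0.66) * 0.60,            # R
    (r * 0.33 + g + b * 0.33) * 0.60,  # G
    (uv + b * 0.66) * 0.60,            # B
], axis=-1).clip(0, 255).astype(np.uint8)

Image.fromarray(out, mode="RGB").save("flower_equal_irgbu.jpg")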

Testing Infrared Software

This post shows off one of the features of WavelengthPro, some photography software I’m writing at the moment. It’s in its early stages; I hope to add a lot more.

Channel Map Templates
I plan on having both a basic and an advanced way of mixing channels; so far I’ve done the basic version, where you choose from template maps. The advanced version will use percent sliders for every channel-to-channel weighting, just like the channel mixers in Photoshop or GIMP. Below is a table showing the three starting images (all taken on a full-spectrum D70 using 720nm, Hoya UV/IR cut and Baader-U filters) and some of the possible mixtures the program can make. (A rough sketch of how such template maps could be applied in code follows the table.)

Starting images of a building: Infrared (Pan IR), Visible (Pan VIS), Ultraviolet (Pan UV).

For each mixture the table showed the output image of the building, the mapping information, and the result of applying Auto-WB. The maps were:

EIR (IRG 3to3 map): [R:ir, G:r, B:g]
Equal split IRGB (IRGB 4to3 map): [R:ir+(r*0.33), G:(r*0.66)+(g*0.66), B:(g*0.33)+b] * 0.75
IRGB (IRGB 4to3 map): [R:ir, G:(r+g)/2, B:b]
IR-VIS-UV (IR-VIS-UV 3to3 map): [R:ir, G:vis, B:uv]
IRGBU (IRGBU 5to3 map): [R:(ir+r)/2, G:g, B:(b+uv)/2]
Boxsplit IRGBU (IRGBU 5to3 map): [R:ir+(r*0.66), G:(r*0.33)+g+(b*0.33), B:(b*0.66)+uv] * 0.60
GBU (GBU 3to3 map): [R:g, G:b, B:uv]
IR-UV (IR/UV 2to3 map): [R:ir, G:(ir+uv)/2, B:uv]
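To make the template idea concrete, here is a hedged Python/NumPy sketch of how such template maps could be applied (an illustration of the concept, not WavelengthPro’s actual implementation); the template names follow the table above and the file names are placeholders.

# Illustrative sketch of template channel maps (not WavelengthPro's implementation).
# Each template maps the output (R, G, B) to expressions over the input channels
# ir, r, g, b, uv, plus vis (the greyscale of the visible frame).
import numpy as np
from PIL import Image

TEMPLATES = {
    "EIR":       lambda c: (c["ir"], c["r"], c["g"]),
    "IRGB":      lambda c: (c["ir"], (c["r"] + c["g"]) / 2, c["b"]),
    "IR-VIS-UV": lambda c: (c["ir"], c["vis"], c["uv"]),
    "GBU":       lambda c: (c["g"], c["b"], c["uv"]),
    "IR-UV":     lambda c: (c["ir"], (c["ir"] + c["uv"]) / 2, c["uv"]),
}

def apply_template(name, ir_path, vis_path, uv_path):
    def load(path, mode):
        return np.asarray(Image.open(path).convert(mode), dtype=np.float32)
    vis = load(vis_path, "RGB")
    channels = {
        "ir": load(ir_path, "L"),
        "uv": load(uv_path, "L"),
        "vis": vis.mean(axis=-1),
        "r": vis[..., 0], "g": vis[..., 1], "b": vis[..., 2],
    }
    out = np.stack(TEMPLATES[name](channels), axis=-1).clip(0, 255).astype(np.uint8)
    return Image.fromarray(out, mode="RGB")

apply_template("EIR", "pan_ir.jpg", "pan_vis.jpg", "pan_uv.jpg").save("eir_building.jpg")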

AI: Conway Creatures!

Recently I’ve been reading up on Cellular Evolutionary Algorithms and on the use of Genetic Algorithms (GAs) to evolve Cellular Automata (CA). I want to try out all sorts of different things so I will be doing a series of posts where I explore different interpretations of a concept – Conway Creatures.

Get Involved!
I have a few programmers, logicians and AI enthusiasts that follow my blog, so I wanted to see if I could make something of the concept. I’d like to see all your interpretations of what a Conway creature is. The rough description is essentially: creatures (2D, 3D, etc.) whose properties are evolved in some way using a mix of CA and GAs. If I get any entries I’ll do posts dedicated to them. I would also like to see varied definitions of the concept, such as the use of other works by John Conway (e.g., the growth rate of the Look and Say sequence – Conway’s Constant, the Surreal Numbers, Conway notation for polyhedra, etc.), other CA rules and other evolutionary methods.

The First Detour – Longevity in Game of Life:
I began to write the program and found myself wondering “What are the characteristics of longevity in CAs?”, and I’m still not sure. I’ve been trying different takes on mutation and crossover where I take chunks of the board and flip them or turn them all on/off. It doesn’t seem to make much difference; I was hoping it would preserve locality. I also tried out Boltzmann selection (simulated annealing), but tournament selection (duelling) worked much better. Any ideas?

This is my program so far; I am taking a little detour to look into longevity, but in Issue 1 there will be a creature where the 3D terrain is. It probably won’t be very complicated; I was thinking of making the creature some sort of shape, like an octahedron.
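For anyone who wants to poke at the longevity question themselves, below is a minimal Python/NumPy sketch of the kind of experiment described above (my own illustration, not the program itself): Game of Life seed boards evolved for longevity, with a chunk-flip mutation and tournament selection; crossover is left out for brevity and all the parameters are arbitrary.

# Sketch: evolving Game of Life seed boards for longevity (illustration only).
import random
import numpy as np

def step(board):
    # Count live neighbours with wrap-around edges, then apply Conway's rules.
    n = sum(np.roll(np.roll(board, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return ((n == 3) | ((board == 1) & (n == 2))).astype(np.uint8)

def longevity(board, max_steps=500):
    # Fitness: how many steps before the board repeats a previously seen state.
    seen, b = set(), board.copy()
    for t in range(max_steps):
        key = b.tobytes()
        if key in seen:
            return t
        seen.add(key)
        b = step(b)
    return max_steps

def chunk_mutate(board, size=4):
    # Flip a random size-by-size chunk of the board (a locality-preserving mutation).
    b = board.copy()
    y, x = (random.randrange(b.shape[i] - size) for i in (0, 1))
    b[y:y+size, x:x+size] ^= 1
    return b

def tournament(pop, k=2):
    # "Duelling": the fitter of k randomly chosen individuals wins.
    return max(random.sample(pop, k), key=longevity)

pop = [np.random.randint(0, 2, (32, 32), dtype=np.uint8) for _ in range(20)]
for gen in range(30):
    pop = [chunk_mutate(tournament(pop)) for _ in range(len(pop))]
print("best longevity:", max(longevity(b) for b in pop))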

Almost UV Photography

For ages I have wanted to do full-spectrum photography, which captures light from Infrared (IR) all the way to ultraviolet (UV), but the UV aspect of it is bloody expensive! DSLR sensors, both CCD and CMOS, capture light slightly outside the visible spectrum (VIS) but use things like hot mirrors and UV filters to narrow the band closer to 390-700nm. The sensors use colour filter arrays, such as the Bayer filter, to give us the very useful RGB channels; in this post we will work with extra channels for IR and UV.

I am always looking for cheap alternatives for UV, and I thought I’d test out a bit of a long shot – using a UV filter to maths my way to a UV image. To do this I bought a daylight-simulating bulb that emits UVA (400-315nm) and some flowers from the local gas station. It’s a simple idea: the extra light that the UV filter blocks must be UV light, so if we subtract the filtered shot from the unfiltered one we are left with UV.

No Filter – UV Filter = UV Residue
Images: the flowers and the G-B-U to RGB map.
I subtracted each colour separately for each pixel: [r1-r2, g1-g2, b1-b2]. The result was rather red, so I used the red channel for the new R, G and B, making a brighter greyscale image (see below). Then I used that new “UV” image along with the colour image to map channels [GBU to RGB], like the images Infrachrome makes using this technique. For infrared and ultraviolet he uses a camera converted specifically for full-spectrum work; in fact he uses two, in a fantastical and magical set-up. Unfortunately mine didn’t work very well. My first guess was that it was just the lower range of blue light being reflected, as there is no sign of a nectar guide, but after consulting a pro UV photographer I was told it is due to infrared leakage.
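Here is a short Python/NumPy sketch of that subtraction and the GBU map (my reconstruction of the steps above, with placeholder file names; it assumes the two shots are aligned).

# Sketch of the subtraction trick described above (a reconstruction, not the exact workflow).
# no_filter and uv_filter are shots of the same scene without and with the UV filter.
import numpy as np
from PIL import Image

def load(path):
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)

no_filter = load("flowers_no_filter.jpg")
uv_filter = load("flowers_uv_filter.jpg")

# Per-pixel, per-channel subtraction: [r1-r2, g1-g2, b1-b2].
residue = (no_filter - uv_filter).clip(0, 255)

# The residue was mostly red, so use its red channel as a greyscale "UV" image.
uv = residue[..., 0]

# GBU to RGB map: output R takes green, output G takes blue, output B takes the estimated UV.
g, b = no_filter[..., 1], no_filter[..., 2]
gbu = np.stack([g, b, uv], axis=-1).clip(0, 255).astype(np.uint8)
Image.fromarray(gbu, mode="RGB").save("flowers_gbu.jpg")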

Wideband Flowers
I thought I’d do a full-spectrum map whilst I had the camera set up, so I put on a 950nm IR-pass filter and took another shot. In the image above, the far right is the channel map of the other three.