Accuracy of Generated Fractals

Note: I refer to the Mandelbrot set in general as the M-set for short.

When I was writing the post on Rough Mandelbrot Sets I tried out some variations on the rough set. One variation was to measure the generated M-set against a previously calculated master M-set of high precision (100000 iterations of z = z^2 + C). In the image below the master M-set is in white and the generated M-sets are in green (increasing in accuracy):

50 Against Master

Here, instead of approximating with tiles, I measured the accuracy of the generated sets against the master set by pixel count. Where P is the set of all pixels in an M-set, the ratio |P_{master}| / |P_{generated}| produced something that threw me: the generated sets made sudden but periodic jumps in accuracy:

Graph One

Looking at the data I saw the jumps were, very roughly, at multiples of 256. The size of the image being generated was 256 by 256 pixels, so I changed it to N by N for N = {120, 360, 680}, and the increment was still every ~256. So I'm not really sure why; it might be obvious. If you know, tell me in the comments!
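If you want to reproduce the measurement, here is a minimal sketch (my own reconstruction, not the original code; the viewing window and iteration counts are assumptions):

```python
import numpy as np

def mandelbrot_mask(n, max_iter):
    """Boolean n-by-n mask: True where the pixel's point stays bounded
    under z = z^2 + C for max_iter iterations."""
    # Sample the complex plane over the usual viewing window.
    xs = np.linspace(-2.0, 1.0, n)
    ys = np.linspace(-1.5, 1.5, n)
    c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
    z = np.zeros_like(c)
    inside = np.ones(c.shape, dtype=bool)
    for _ in range(max_iter):
        # Only iterate points that have not escaped yet.
        z[inside] = z[inside] ** 2 + c[inside]
        inside &= np.abs(z) <= 2.0
    return inside

# Accuracy of a low-iteration set against a higher-iteration "master".
master = mandelbrot_mask(256, 500)
generated = mandelbrot_mask(256, 50)
accuracy = master.sum() / generated.sum()  # ratio of pixel counts
```

Since extra iterations can only remove points, the master set is a subset of any lower-iteration set, so the ratio stays between 0 and 1.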

I am reminded of the images generated from Fractal Binary and other Complex Bases where large geometric entities can be represented on a plane by iteration through a number system. I’d really like to know what the Mandelbrot Number System is…

Below is a table of the jumps and their iteration index:

Iterations Accuracy measure

Editing Ultraviolet Photography

For people who do multispectral photography (infrared, visible, ultraviolet, etc) sometimes it can be tricky to achieve what you want in a traditional photo editor. That is why I am developing software to cater specifically for multispectral image processing. The software is called WavelengthPro. Below is a quick video, four images and explanations for the results.

The Resulting Images:
These four images are made from only this visible light image and this ultraviolet image. They were both taken with the same camera, a Nikon D70 (hot mirror removed), using an IR-UV cut filter and the Baader-U filter. No extra editing was done.

UV-DualProcessed UV-Luminance
Dual Process Luminance Map

One commonly met problem in ultraviolet photography is getting the right white balance on your camera; if you can't achieve it you end up with rather purple pictures. In WavelengthPro there is a tool called Dual Processing where you create the green channel out of the red and blue channels, which rids your image of the purple hue.
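A minimal sketch of the idea (my own illustration, not WavelengthPro's actual code, assuming a simple average of red and blue for the new green channel):

```python
import numpy as np

def dual_process(rgb):
    """rgb: float array of shape (H, W, 3). Returns a copy whose green
    channel is the mean of red and blue, suppressing the purple cast."""
    out = rgb.astype(np.float64)
    out[..., 1] = (out[..., 0] + out[..., 2]) / 2.0
    return out
```

On a purple pixel like (200, 50, 200) this lifts green to 200, turning the pixel neutral grey.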

The Luminance Map is actually a lightness map, made in HSL colour space: it takes the hue and saturation of the visible light image with the lightness of the UV image. As you can see, the specular effect on the leaves completely contrasts with the matte flowers. There is a good example of a useful application of this method in the post Luminance Mapping: UV and Thermal.

5to3 Map (RGBUrUb) 3to3 Map (GBU)

There are only 3 channels (ignoring alpha) for an image to be encoded into, but WavelengthPro tries to extend that by mapping N channels onto the 3 (RGB) output channels. A classic ultraviolet editing method is to make a GBU image, like the image on the right. The image on the left has five channels equally distributed across the three RGB channels. An array of different maps can be seen in an older post called Testing Infrared Software. If you want to download and try out the software (it's still in alpha stage, so don't expect everything to work), the link below will have the latest release version.
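As a sketch of the general N-to-3 idea (an assumed implementation, not WavelengthPro's code), each output channel can simply average the source channels assigned to it:

```python
import numpy as np

def channel_map(channels, mapping):
    """Build an RGB image from named single-channel images.
    channels: dict of name -> 2D array; mapping: three lists of names,
    one per output channel, whose sources are averaged."""
    out = []
    for names in mapping:  # one entry each for R, G, B
        stack = np.stack([channels[name] for name in names])
        out.append(stack.mean(axis=0))
    return np.stack(out, axis=-1)

# Classic GBU map: visible green -> R, visible blue -> G, UV -> B.
g = np.full((2, 2), 10.0)
b = np.full((2, 2), 20.0)
u = np.full((2, 2), 30.0)
gbu = channel_map({"G": g, "B": b, "U": u}, [["G"], ["B"], ["U"]])
```

A 5to3 map is then just a mapping whose lists share the five channels across the three outputs, e.g. `[["I", "R"], ["R", "G", "B"], ["B", "U"]]`.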

WavelengthPro Flickr Group

Rough Mandelbrot Sets

I’ve been reading up on Zdzisław Pawlak’s Rough Set Theory recently and wanted to play with rough sets. They are used to address vagueness in data, so fractals seem like a good subject.

Super Quick Intro to Rough Sets:
A rough set is a tuple (ordered pair) of sets R(S) = \langle R_*, R^* \rangle which is used to model some target set S. The set R_* contains every element definitely in S, and the set R^* contains every element possibly in S. Its roughness can be measured by the accuracy function \alpha(S) = \frac{|R_*|}{|R^*|}. So when |R_*| = |R^*| the set is known as crisp (not vague), with an accuracy of 1.
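In code the accuracy function is trivial; here `lower` and `upper` stand for R_* and R^* (my own notation):

```python
def rough_accuracy(lower, upper):
    """Accuracy of a rough set: |R_*| / |R^*|.
    Returns 0 when the upper approximation is empty."""
    return len(lower) / len(upper) if upper else 0.0

# A crisp set: lower and upper approximations coincide, accuracy 1.
crisp = rough_accuracy({1, 2, 3}, {1, 2, 3})
# A vague set: only half of the possible elements are definite.
vague = rough_accuracy({1, 2}, {1, 2, 3, 4})
```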

A more formal example can be found on the wiki page but we’ll move on to the Mandelbrot example because it is visually intuitive:


The tiles are 36×36 pixels, the Mandelbrot set is marked in yellow. The green and white tiles are possibly in the Mandelbrot set, but the white tiles are also definitely in it.

Here the target set S contains all the pixels inside the Mandelbrot set, but we are going to construct this set in terms of tiles. Let T_1, T_2, T_3,\dots , T_n be the tile sets that contain the pixels. R^* is the set of all tiles T_x that contain at least one pixel inside the Mandelbrot set, and R_* is the set of all tiles T_x that contain only Mandelbrot pixels. So in the above example there are 28 tiles possibly in the set, including the 7 tiles definitely in the set, giving R(S) an accuracy of 7/28 = 0.25.
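A sketch of the tile counting (my own reconstruction, assuming a square boolean mask of set membership and a tile width that divides the image evenly):

```python
import numpy as np

def tile_rough_set(mask, tile):
    """Count tiles possibly (R^*) and definitely (R_*) in the set.
    mask: boolean image of set membership; tile: tile width in pixels."""
    n = mask.shape[0]
    lower = upper = 0
    for y in range(0, n, tile):
        for x in range(0, n, tile):
            block = mask[y:y + tile, x:x + tile]
            if block.any():
                upper += 1       # at least one member pixel -> in R^*
                if block.all():
                    lower += 1   # every pixel a member -> also in R_*
    return lower, upper
```

On a mask with one fully-inside tile and one partially-inside tile this returns |R_*| = 1 and |R^*| = 2, an accuracy of 0.5.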


Tile width: 90, 72, 60, 45, 40, 36, 30, 24, 20, 18, 15, 12, 10, 9, 8, 6, 5, 4. There seems to be a lack of symmetry but it’s probably from computational precision loss.

Obviously, the smaller the tiles the better the approximation of the set. Here the largest tiles (90×90 pixels) are so big that there are no tiles definitely inside the target set and 10 tiles possibly in the set, making the accuracy 0. On the other hand, the 4×4 tiles give us |R_*| = 1211 and |R^*| = 1506, making a much nicer:

\alpha(S) = 0.8 \overline{04116865869853917662682602921646746347941567065073}

For much more useful applications of Rough Sets see this extensive paper by Pawlak covering the short history of Rough Sets, comparing them to Fuzzy Sets and showing uses in data analysis and Artificial Intelligence.

Digital Orthochromatic Photography

I’m playing with ways of emulating early orthochromatic film, if anyone knows any technical aspects to this that might help – feel free to suggest ideas. So far I’m just winging it.

Light Response of Film Northern Party
In terms of light covered it goes: Blue-sensitive, orthochromatic, isochromatic, panchromatic (all visible wavelengths) then super-panchromatic.

On the right is a photo of British explorers taken with orthochromatic film. Note the red on the flag is much darker than the blue area because the film isn't sensitive to red light. Looking at the spectral sensitivity (here) and (here), it's clear the violet/blue area is the predominant band, extending into green/yellow as well.

Ship - VIS

Test Ideas
I took an image from Google, something with the British flag, and made different Channel Maps for it. The greyscaled versions are the output images.

I used the QSE tool in WavelengthPro to make a 400nm image (the violet/blue peak) and a 580nm image (the green/yellow peak) and made a 2to3 map of the peaks; the colour response worked well but the data loss from interpolation was too much. Using the original [R,G,B] channels means I won't lose any detail, so I tried using mainly those and came up with two maps that were pretty good: GBB and GBBV. Below are those maps and the RGB map for comparison.
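As a rough sketch of a GBB (or GBBV) map collapsed to greyscale (my own illustration; a plain channel average stands in for the greyscale conversion, so a luma-weighted conversion would give slightly different values):

```python
import numpy as np

def ortho_grey(g, b, v=None):
    """Emulate an orthochromatic response as a greyscale image:
    average a GBB channel map (blue weighted twice), optionally
    adding an interpolated deep-violet channel for GBBV."""
    stack = [g, b, b] if v is None else [g, b, b, v]
    return np.mean(stack, axis=0)
```

Red drops out entirely, so the red cross on the flag renders dark, as on the film.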

This is the RGB (panchromatic) version: the red cross on the flag is lighter than the blue parts and the sky is dark. This is a GBBtoRGB version: the cross appears darker, though it's not really true to the actual spectral response of the film. This is a GBBVtoRGB version, where the V is the interpolated deep violet image. Getting there…

Luminance Mapping: UV and Thermal

This is a very simple function of WavelengthPro, using the luminance of one image and the hue/saturation of another. All it does is convert from RGB space to HSL space, then use the L value (lightness) of a different image. In the table below I use a visible light image and an ultraviolet image (I got them from here) and map them in two ways. The first is not a luminance map but a GBU map; the next is a luminance map, which keeps the colours of the visible light image and shows them at the lightness of the ultraviolet image.
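A sketch of the lightness swap using Python's colorsys module (a slow per-pixel illustration of the idea, not the WavelengthPro implementation; both inputs are assumed to be RGB arrays scaled to [0, 1]):

```python
import colorsys
import numpy as np

def luminance_map(vis, uv):
    """Keep the hue and saturation of the visible image, but take
    lightness from the UV image. Both arrays have shape (H, W, 3)."""
    out = np.zeros_like(vis)
    height, width = vis.shape[:2]
    for y in range(height):
        for x in range(width):
            h, _, s = colorsys.rgb_to_hls(*vis[y, x])
            _, l_uv, _ = colorsys.rgb_to_hls(*uv[y, x])
            out[y, x] = colorsys.hls_to_rgb(h, l_uv, s)
    return out
```

Note colorsys orders the triple as HLS rather than HSL, hence the unpacking above.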

Wall - VIS Wall - UV
Visible image: can’t see the graffiti well. Ultraviolet image: shows up the graffiti well.
Wall - GBU Wall VIS at UV intensity
GBU map: quite a nice colour palette and shows the graffiti quite well. Luminance map: has a “Human hue” whilst showing up the graffiti really well.

It is great for showing the ultraviolet characteristics of light whilst keeping the ‘true-colour’ feel that we’ve evolved to love. This idea isn’t new though, nightvision sometimes incorporates visible and thermal bands fused together and computer vision sometimes needs more than visible colour data to interpret a scene. One (less scientific) use is to make thermal imaging look a bit more lovely. Below is a visible image, thermal image and the Luminance map (I got them from here):

hand_in_bag hand_in_bagtherm hand_in_bag WITS map

Art or Craft? Or both!?

Have you guys seen the new TED clip about the history of Arts & Craft?

I’ve recently been doing some programming for a company and I found out that there are SO many distinctions and sub-categories of arts and crafts, crazy amounts! I recommend hobbyists out there check out the site. They actually stock a lot of books/DVDs cheaper than Amazon.

Full Spectrum Photography: Mapping

This post shows three examples of full spectrum mapping methods for multispectral photography. I've used some quick shots I took in between rain clouds, so I apologise for the poor quality, especially the infrared image. All shots were taken on a converted D70.

Infrared (720nm filter)small flower - IR Visible light (Hoya cut filter)small flower - VIS Ultraviolet (Baader-U filter)small flower - UV

The only map I see often is the classic 3to3 map. Its characteristics are such that vegetation stands out in a very prominent red, nectar guides are clear-cut and clear skies are a strong blue. The next map is weighted, roughly, 5to3 on the proportional spectrum each [I,R,G,B,U] component covers as wavelengths. The output shows the nectar guide enough for it to be noticeable, but clearly less so. The last map is my favourite so far: it is a 5to3 map that distributes the [I,R,G,B,U] channels in equal proportions. It dulls the bright red vegetation caused by infrared in the red channel and shows the nectar guide a little better than the previous map.

Map Type Channels & Output

Classic IR-VIS-UV
RGB colour map - IVU flower - IVU

Proportional IRGBU
R: (IR + IR + (IR * 0.33)) * 0.42
G: ((IR * 0.66) + R + (G * 0.66)) * 0.42
B: (UV + B + (G * 0.33)) * 0.42
RGB colour map - Proportional Proportional Wavelength Distribution

Equal IRGBU
R: (IR + (R * 0.66)) * 0.60
G: ((R * 0.33) + G + (B * 0.33)) * 0.60
B: (UV + (B * 0.66)) * 0.60
RGB colour map - Equal flower - IRGBU
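The equal-proportion 5to3 map can be written directly from its channel formulas above (a sketch using the listed weights; channel arrays are assumed to share one shape and scale):

```python
import numpy as np

def equal_5to3(ir, r, g, b, uv):
    """Equal-proportion 5to3 map: each of the five channels contributes
    the same total weight (~1) and each output channel is scaled back
    to the input range by the 0.60 factor."""
    out_r = (ir + r * 0.66) * 0.60
    out_g = (r * 0.33 + g + b * 0.33) * 0.60
    out_b = (uv + b * 0.66) * 0.60
    return np.stack([out_r, out_g, out_b], axis=-1)
```

Feeding five identical flat channels through the map returns (almost) the same flat value, which is a quick sanity check that the weights are balanced.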