# Rough Mandelbrot Sets

I’ve been reading up on Zdzisław Pawlak’s rough set theory recently and wanted to play with it. Rough sets are used to address vagueness in data, so fractals seem like a good subject.

Super Quick Intro to Rough Sets:
A rough set is a pair of sets $R(S) = \langle R_*, R^* \rangle$ used to model some target set $S$. The lower approximation $R_*$ contains every element that is definitely in $S$, and the upper approximation $R^*$ contains every element that is possibly in $S$. Its roughness can be measured by the accuracy function $\alpha(S) = \frac{|R_*|}{|R^*|}$, so when $|R_*| = |R^*|$ the set is known as crisp (not vague), with an accuracy of 1.
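The two approximations and the accuracy function only take a few lines of Python. A minimal sketch, assuming the universe is partitioned into equivalence classes (the role the tiles play below); the function names are my own, just for illustration:

```python
# Sketch: lower/upper approximations of a target set S under a
# partition of the universe into equivalence classes.
def rough_approximation(classes, S):
    lower = set()  # R_*: union of classes entirely inside S
    upper = set()  # R^*: union of classes that intersect S
    for c in classes:
        if c & S:          # class contains at least one element of S
            upper |= c
            if c <= S:     # class contains only elements of S
                lower |= c
    return lower, upper

def accuracy(lower, upper):
    # alpha(S) = |R_*| / |R^*|; 1.0 means the set is crisp
    return len(lower) / len(upper) if upper else 1.0

# Toy universe {0..9} partitioned into pairs; target S = {2, 3, 4}
classes = [frozenset({i, i + 1}) for i in range(0, 10, 2)]
S = {2, 3, 4}
lo, up = rough_approximation(classes, S)
print(sorted(lo), sorted(up), accuracy(lo, up))  # [2, 3] [2, 3, 4, 5] 0.5
```

Because every class here has the same size, measuring accuracy by elements or by whole classes gives the same ratio, which is why counting tiles works below.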

A more formal example can be found on the wiki page but we’ll move on to the Mandelbrot example because it is visually intuitive:

The tiles are 36×36 pixels; the Mandelbrot set is marked in yellow. The green and white tiles are possibly in the Mandelbrot set, and the white tiles are also definitely in it.

Here the target set $S$ contains all the pixels inside the Mandelbrot set, but we are going to construct this set in terms of tiles. Let $T_1, T_2, T_3, \dots, T_n$ be the tile sets that contain the pixels. $R^*$ is the set of all tiles $T_x$ that contain at least one pixel inside the Mandelbrot set; $R_*$ is the set of all tiles $T_x$ that contain only Mandelbrot pixels. So in the above example there are 28 tiles possibly in the set, including the 7 tiles definitely in the set, giving $R(S)$ an accuracy of $7/28 = 0.25$.
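The tile classification can be sketched as below. This is an assumption-laden toy version: it uses a standard escape-time membership test with an arbitrary iteration cap (so near-boundary points are only approximately classified), and a small render region and resolution that I’ve picked for speed:

```python
# Sketch: classify w x w pixel tiles of a Mandelbrot render as
# definitely inside (all pixels in the set) or possibly inside
# (at least one pixel in the set).
def in_mandelbrot(c, max_iter=100):
    # Escape-time test; points that never exceed |z| = 2 are treated
    # as members (an approximation near the boundary).
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False
    return True

def tile_counts(width=120, height=120, w=12):
    definite = possible = 0
    for ty in range(0, height, w):
        for tx in range(0, width, w):
            hits = sum(
                in_mandelbrot(complex(-2.0 + 3.0 * (tx + i) / width,
                                      -1.5 + 3.0 * (ty + j) / height))
                for j in range(w) for i in range(w))
            if hits == w * w:
                definite += 1   # tile belongs to R_*
            if hits > 0:
                possible += 1   # tile belongs to R^*
    return definite, possible

d, p = tile_counts()
print(d, p, d / p)  # |R_*|, |R^*|, and the accuracy alpha(S)
```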

Tile widths shown: 90, 72, 60, 45, 40, 36, 30, 24, 20, 18, 15, 12, 10, 9, 8, 6, 5 and 4 pixels. There seems to be a lack of symmetry, but it’s probably from computational precision loss.

Obviously, the smaller the tiles, the better the approximation of the set. Here the largest tiles (90×90 pixels) are so big that there are no tiles definitely inside the target set and 10 tiles possibly in the set, making the accuracy 0. On the other hand, the 4×4 tiles give us $|R_*| = 1211$ and $|R^*| = 1506$, making for a much nicer:

$\alpha(S)$ = $0.8 \overline{04116865869853917662682602921646746347941567065073}$

For much more useful applications of rough sets, see this extensive paper by Pawlak covering the short history of rough sets, comparing them to fuzzy sets, and showing uses in data analysis and artificial intelligence.

# Digital Orthochromatic Photography

I’m playing with ways of emulating early orthochromatic film; if anyone knows any technical aspects of this that might help, feel free to suggest ideas. So far I’m just winging it.

Light Response of Film
In order of increasing spectral coverage, film types go: blue-sensitive, orthochromatic, isochromatic, panchromatic (all visible wavelengths), then super-panchromatic.

On the right is a photo of British explorers taken with orthochromatic film. Note the red on the flag is much darker than the blue area because the film isn’t sensitive to red light. Looking at the spectral sensitivity (here) and (here), it’s clear the violet/blue area is the predominant band, extending into green/yellow as well.

Test Ideas
I took an image from Google, something with the British flag, and made different Channel Maps for it. The greyscaled versions are the output images.

I used the QSE tool in WavelengthPro to make a 400nm image (the violet/blue peak) and a 580nm image (the green/yellow peak), and made a 2to3 map of the peaks. The colour response worked well, but the data loss from interpolation was too much. Using the original [R,G,B] channels means I won’t lose any detail, so I tried using mainly those and came up with two maps that were pretty good: GBB and GBBV. Below are those maps and the RGB map for comparison.

- RGB (panchromatic) version: the red cross on the flag is lighter than the blue parts and the sky is dark.
- GBBtoRGB version: the cross appears darker, though it’s not really true to the actual spectral response of the film.
- GBBVtoRGB version: here the V is the interpolated deep-violet image. Getting there…
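The GBB idea is simple enough to sketch with numpy. This assumes an RGB image as a float array in [0, 1]; the post doesn’t say which greyscale weighting was used, so Rec. 601 luma weights are my assumption here:

```python
import numpy as np

# Sketch: the "GBB" channel map, then greyscale. G goes into the red
# channel and B into both green and blue, so red light contributes
# nothing to the result (mimicking film that isn't red-sensitive).
def gbb_greyscale(rgb):
    g, b = rgb[..., 1], rgb[..., 2]
    mapped = np.stack([g, b, b], axis=-1)        # [R,G,B] -> [G,B,B]
    # Rec. 601 luma weights for the greyscale step (an assumption)
    return mapped @ np.array([0.299, 0.587, 0.114])

# A pure-red pixel goes to black, as on orthochromatic film;
# a pure-blue pixel stays bright (0.587 + 0.114 = 0.701)
img = np.array([[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]])
print(gbb_greyscale(img))
```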

# Luminance Mapping: UV and Thermal

This is a very simple function of WavelengthPro: it uses the luminance of one image and the hue/saturation of another. All it does is convert from RGB-space to HSL-space, then swap in the L value (lightness) from a different image. In the table below I use a visible-light image and an ultraviolet image (I got them from here) and map them in two ways. The first is not a luminance map but a GBU map; the second is a luminance map, which keeps the colours of the visible light but shows them at the lightness of the ultraviolet image.
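Per pixel, the swap can be sketched with Python’s standard `colorsys` module (which uses HLS argument order rather than HSL). This is just an illustration of the idea, not WavelengthPro’s actual code:

```python
import colorsys

# Sketch: keep hue and saturation from the visible-light pixel,
# take lightness (L) from the UV pixel. Pixels are (r, g, b)
# floats in [0, 1].
def luminance_map(visible_px, uv_px):
    h, _l, s = colorsys.rgb_to_hls(*visible_px)   # hue/sat of visible
    uv_l = colorsys.rgb_to_hls(*uv_px)[1]         # lightness of UV
    return colorsys.hls_to_rgb(h, uv_l, s)

# A saturated red visible pixel, shown at the darkness of a dim
# UV pixel: still red in hue, but much darker
print(luminance_map((1.0, 0.0, 0.0), (0.2, 0.2, 0.2)))  # (0.4, 0.0, 0.0)
```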

- Visible image: can’t see the graffiti well.
- Ultraviolet image: shows up the graffiti well.
- GBU map: quite a nice colour palette, and shows the graffiti quite well.
- Luminance map: has a “human hue” whilst showing up the graffiti really well.

It is great for showing the ultraviolet characteristics of light whilst keeping the ‘true-colour’ feel that we’ve evolved to love. This idea isn’t new though, nightvision sometimes incorporates visible and thermal bands fused together and computer vision sometimes needs more than visible colour data to interpret a scene. One (less scientific) use is to make thermal imaging look a bit more lovely. Below is a visible image, thermal image and the Luminance map (I got them from here):

# Art or Craft? Or both!?

Have you guys seen the new TED clip about the history of Arts & Craft?

I’ve recently been doing some programming for a company, HobbiesOnTheWebb.co.uk, and I found out that there are SO many distinctions and subcategories of arts and crafts – crazy amounts! I recommend hobbyists out there check out the site. They actually stock a lot of books/DVDs cheaper than Amazon.

# Full Spectrum Photography: Mapping

This post shows three examples of full spectrum mapping methods for multispectral photography. I’ve used some quick shots I took in between rain clouds, so I apologise for the poor quality – especially the infrared image. All shots were taken on a converted D70.

- Infrared (720nm filter)
- Visible light (Hoya UV/IR cut filter)
- Ultraviolet (Baader-U filter)

The only map I see often is the classic 3to3 map. Its characteristics are such that vegetation stands out in a very prominent red, nectar guides are clear-cut, and clear skies are a strong blue. The next map is weighted, roughly 5to3, by the proportion of the spectrum each [I,R,G,B,U] component covers as wavelengths. The output shows the nectar guide enough for it to be noticeable, but clearly less so. The last map is my favourite so far: a 5to3 map that distributes the [I,R,G,B,U] in equal proportions. It dulls the bright red vegetation caused by infrared in the red channel and shows the nectar guide a little better than the previous map.

| Map type | R | G | B |
| --- | --- | --- | --- |
| Classic IR-VIS-UV | IR | VIS | UV |
| Proportional IRGBU | (IR + IR + (IR * 0.33)) * 0.42 | ((IR * 0.66) + R + (G * 0.66)) * 0.42 | (UV + B + (G * 0.33)) * 0.42 |
| Equal IRGBU | (IR + (R * 0.66)) * 0.60 | ((R * 0.33) + G + (B * 0.33)) * 0.60 | (UV + (B * 0.66)) * 0.60 |
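The Equal IRGBU row reads directly as array arithmetic. A minimal numpy sketch, assuming five aligned single-channel float images in [0, 1]:

```python
import numpy as np

# Sketch: the "Equal IRGBU" 5-to-3 map from the table above.
# ir, r, g, b, uv are aligned 2D float arrays in [0, 1].
def equal_irgbu(ir, r, g, b, uv):
    out_r = (ir + r * 0.66) * 0.60
    out_g = (r * 0.33 + g + b * 0.33) * 0.60
    out_b = (uv + b * 0.66) * 0.60
    return np.stack([out_r, out_g, out_b], axis=-1)

# Each output channel sums 1.66 units of input, scaled by 0.60,
# so all-white inputs stay within [0, 1]
chans = [np.ones((2, 2)) for _ in range(5)]
out = equal_irgbu(*chans)
print(out.shape, out.max())
```

The 0.60 factor is what keeps the channel sums normalised: each output channel mixes at most 1.66 units of input, and 1.66 × 0.60 ≈ 1.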

# Testing Infrared Software

This post is to show one of the features of WavelengthPro, some photography software I’m writing at the moment. It’s in its early stages; I hope to add a lot more.

Channel Map Templates
I plan on having basic and advanced ways of mixing channels; so far I’ve done the basic version, where you choose template maps. The advanced version will use percentage sliders of every channel for every channel, just like in Photoshop or GIMP etc. Below is a table showing the three starting images (all taken on a full-spectrum D70 using 720nm, Hoya UV/IR cut and Baader-U filters) and some of the possible mixtures using the program.

Starting images: infrared, visible and ultraviolet. Each output below is listed with its mapping information, with auto white balance applied:

- IRG 3to3 map: [R:ir, G:r, B:g]
- IRGB 4to3 map: [R:ir+(r*0.33), G:(r*0.66)+(g*0.66), B:(g*0.33)+b] * 0.75
- IRGB 4to3 map: [R:ir, G:(r+g)/2, B:b]
- IR-VIS-UV 3to3 map: [R:ir, G:vis, B:uv]
- IRGBU 5to3 map: [R:(ir+r)/2, G:g, B:(b+uv)/2]
- IRGBU 5to3 map: [R:ir+(r*0.66), G:(r*0.33)+g+(b*0.33), B:(b*0.66)+uv] * 0.60
- GBU 3to3 map: [R:g, G:b, B:uv]
- IR/UV 2to3 map: [R:ir, G:(ir+uv)/2, B:uv]
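One way such template maps could be represented is a dict of per-output-channel functions over named input channels. To be clear, this representation is my own assumption for illustration, not WavelengthPro’s internals; the channel names (ir, g, b, uv) follow the maps above:

```python
import numpy as np

# Sketch: template maps as per-output-channel functions.
TEMPLATES = {
    "GBU 3to3":   {"R": lambda c: c["g"],
                   "G": lambda c: c["b"],
                   "B": lambda c: c["uv"]},
    "IR/UV 2to3": {"R": lambda c: c["ir"],
                   "G": lambda c: (c["ir"] + c["uv"]) / 2,
                   "B": lambda c: c["uv"]},
}

def apply_template(name, channels):
    # channels: dict of aligned 2D float arrays keyed by channel name
    t = TEMPLATES[name]
    return np.stack([t["R"](channels), t["G"](channels), t["B"](channels)],
                    axis=-1)

channels = {"ir": np.full((2, 2), 0.8), "g": np.full((2, 2), 0.5),
            "b": np.full((2, 2), 0.3), "uv": np.full((2, 2), 0.2)}
out = apply_template("IR/UV 2to3", channels)
print(out[0, 0])  # [0.8 0.5 0.2]
```

Adding a new template is then just another dict entry, which is roughly what a basic/advanced split needs: templates for the basic mode, arbitrary per-channel weights for the advanced one.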

# AI: Conway Creatures!

Recently I’ve been reading up on Cellular Evolutionary Algorithms and on the use of Genetic Algorithms (GAs) to evolve Cellular Automata (CA). I want to try out all sorts of different things so I will be doing a series of posts where I explore different interpretations of a concept – Conway Creatures.

Get Involved!
I have a few programmers, logicians and AI enthusiasts that follow my blog, so I wanted to see if I could make something of the concept. I’d like to see all your interpretations of what a Conway creature is, given a rough description, which is essentially: creatures (2D, 3D, etc.) whose properties are evolved in some way using a mix of CA and GAs. If I get any, I’ll do posts dedicated to the entries. I would also like to see varied definitions of the concept, such as the use of other works by John Conway (e.g., the growth rate of the Look and Say sequence – Conway’s constant – the surreal numbers, Conway notation for polyhedra, etc.), other CA rules and other evolutionary methods.

The First Detour – Longevity in Game of Life:
I began to write the program and found myself wondering, “What are the characteristics of longevity in CAs?”, and I’m still not sure. I’ve been trying different takes on mutation and crossover, where I take chunks of the board and flip them or turn them all on/off. It doesn’t seem to make much difference; I was hoping it would conserve locality. I also tried out Boltzmann selection (simulated annealing), but tournament selection (duelling) worked much better. Any ideas?
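For concreteness, here is one way a longevity fitness might be defined; a sketch under my own assumptions (a toroidal board, and “death” meaning the board empties or revisits a previous state, so gliders on an open board would be scored differently):

```python
import numpy as np

# Sketch: run Game of Life until the board dies out, repeats a
# previously seen state, or hits a step cap; return steps survived.
def step(board):
    # Count the 8 neighbours of every cell (toroidal wrap-around)
    n = sum(np.roll(np.roll(board, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on 3 neighbours; survival on 2 or 3
    return ((n == 3) | (board & (n == 2))).astype(np.uint8)

def longevity(board, max_steps=1000):
    seen = {board.tobytes()}
    for t in range(1, max_steps + 1):
        board = step(board)
        if not board.any() or board.tobytes() in seen:
            return t          # died out or entered a cycle
        seen.add(board.tobytes())
    return max_steps

# A blinker oscillates with period 2, so the repeat is caught at step 2
blinker = np.zeros((8, 8), dtype=np.uint8)
blinker[4, 3:6] = 1
print(longevity(blinker))  # 2
```

A GA over this fitness would treat the initial board as the genome, which is where chunk-flip mutation and chunk-swap crossover slot in.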

This is my program so far. I am taking a little detour to look into longevity, but in Issue 1 there will be a creature where the 3D terrain is. It probably won’t be very complicated; I was thinking of making the creature some sort of shape, like an octahedron.