Mandelbrot: SETL & Lisp

SETL is a general-purpose programming language developed by Jack Schwartz back in the late 1960s. The language is multiparadigm – both Procedural (containing subroutines and control structures like if statements) and Set Theoretical. The latter is not often mentioned when talking about language paradigms, and is not to be confused with Logic Programming. The concept is outlined in the 1974 paper An Introduction to the Set Theoretical Language SETL. Importantly, though, data in SETL is of three main types: Atoms, Sets, and Sequences (also referred to as tuples or ordered sets).

A good overview of SETL can be found in the GNU SETL Om.

Pixels inside the Mandelbrot Set are marked in black; the code used to generate them (mandelbrot.setl) can be found here.

Generation of the Mandelbrot Set
On the Wikipedia page for the Mandelbrot Set you will find its formal definition, which looks like so: M = \left\{c\in \mathbb C : \exists s\in \mathbb R, \forall n\in \mathbb N, |P_c^n(0)| \le s \right\}.

The most fantastic thing about SETL is how powerful its Set Builder Notation is. You can write the set almost exactly as you would in the formal case (swapping 'there exists some real s' for the fixed bound 2 in the practical application):

M := {c : c in image | forall num in {0..iterations} | cabs(pofc(c, num)) <= 2};

For some reason SETL doesn't support complex numbers, but they are easily handled by writing the necessary procedures – cabs, ctimes, and cplus – that deal with tuples of the form [x, y]. The variable 'image' is a sequence of these tuples. Another procedure, pofc, is a more practical version of P_c^n(0).
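
For illustration, here is a rough C++ sketch of what those procedures amount to (the names cabs, cplus, ctimes and pofc come from the post; the SETL bodies aren't shown, so treat this as an assumed equivalent rather than the original code):

#include <array>
#include <cmath>

// Complex numbers as [x, y] pairs, mirroring the SETL tuples.
using C = std::array<double, 2>;

C cplus(C a, C b)  { return {a[0] + b[0], a[1] + b[1]}; }
C ctimes(C a, C b) { return {a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0]}; }
double cabs(C a)   { return std::sqrt(a[0] * a[0] + a[1] * a[1]); }

// pofc: the practical P_c^n(0), iterating z -> z^2 + c from z = 0, n times.
C pofc(C c, int n) {
    C z = {0.0, 0.0};
    for (int i = 0; i < n; ++i) z = cplus(ctimes(z, z), c);
    return z;
}

A point c then belongs to M when cabs(pofc(c, num)) stays at or below 2 for every num up to the iteration limit, which is exactly how the set former above reads.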

Interaction with the Lisp graphical display
The goal of the SETL program is to produce the set of points that lie inside the Mandelbrot set. To actually display the image I used Common Lisp, showing a widget with the image via CommonQt. In a very rudimentary way I had mandelbrot.setl take arguments describing the image and then print the set of pixel co-ordinates. All Lisp had to do was swap any '{}' or '[]' for '()', read it as Lisp code, and then update the image.

(setf in-set (read-from-string (parse-line (uiop:run-program "setl mandelbrot.setl -- -250 -250 250 8" :output :string))))
(draw-mandel instance in-set)

An afterthought was to make a Mandelbrot searcher where you could zoom and move using the mouse, but the SETL code is such an inefficient way of generating the set that it's not worth it. As an attempt to mimic the formal definition it was highly successful and fun, though much quicker SETL code could be written for generating the Mandelbrot Set.

Source file for mandelbrot.setl and the Lisp/CommonQt front end can be found here.

The RGB Universe

One image bounded to three respective spaces: Colour, Chromaticity, and Hue.

The Colour-Space: RGB
Everyone is familiar with this: it is the additive model for colours that uses the primaries red, green & blue. It is a 3D model where each unique colour sits at position (x: r, y: g, z: b).

The Chromaticity-Space: RCGCBC
Some people will be familiar with this: it is RGB without luminance – the brightness is removed in a way that doesn't affect the hue or saturation. It is referred to as rg-Chromaticity because its construction from RGB means only two elements are needed to represent all the chromaticity values:

Conversion to:

R_C = \frac{R}{R+G+B}
G_C = \frac{G}{R+G+B}
B_C = \frac{B}{R+G+B}

Conversion from (kind of*):

R = \frac{R_C G}{G_C}
G = G
B = \frac{(1 - R_C - G_C) G}{G_C}

It will always be that R_C + G_C + B_C = 1, so by discarding the blue component we can have unique chromaticities as (x: r', y: g'). This means that rg-Chromaticity is a 2D model, and when converting to it from RGB we lose the luminance, so it is impossible to convert back. *An in-between for this is the colour-space rgG, where the G component preserves luminance in the image.
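
A minimal C++ sketch of both directions, assuming floating-point channel values (the function names here are mine, not from the post):

#include <array>

// RGB -> rg-Chromaticity: only R_C and G_C need storing, since B_C = 1 - R_C - G_C.
std::array<double, 2> toChromaticity(double r, double g, double b) {
    double sum = r + g + b;
    if (sum == 0.0) return {0.0, 0.0};   // black has no defined chromaticity
    return {r / sum, g / sum};
}

// The "kind of" inverse: in the rgG variant the original G is kept alongside
// R_C and G_C, so the full RGB triple can be recovered (G_C must be non-zero).
std::array<double, 3> fromRgG(double rc, double gc, double g) {
    double r = rc * g / gc;
    double b = (1.0 - rc - gc) * g / gc;
    return {r, g, b};
}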

The Hue-Space: RHGHBH
No one uses this; I just thought it would be fun to apply the same idea as above and extract the saturation from RCGCBC. Like RCGCBC it is a 2D model, which seems strange because it only represents one attribute – hue – but that is because the elements themselves have a ternary relationship (how much red, how much green, how much blue), and so to extrapolate one you must know the other two.

Conversion to 3-tuple Hue:

M = \text{Max}(R,G,B)
m = \text{Min}(R,G,B)
\delta = 255/(M-m)
R_h = (R-m) \delta
G_h = (G-m) \delta
B_h = (B-m) \delta

Normalise to 2D:

R_H = \frac{R_h}{R_h + G_h + B_h}
G_H = \frac{G_h}{R_h + G_h + B_h}
B_H = \frac{B_h}{R_h + G_h + B_h}

Measuring Hue Distance
The HSL colour-space records hue as a single element, H, making measuring distance as easy as \Delta H = \sqrt{(H_a - H_b)^2}, whereas in rg-Hue we have two elements, so \Delta H = \sqrt{(R''_a - R''_b)^2 + (G''_a - G''_b)^2}, where R'' = R_H and G'' = G_H for readability. What's interesting here is that it works almost the same. Though it should be noted that on a line only two points are equidistant from a given hue at one time, whereas in rg-Hue, on a 2D plane, there are many equidistant points lying around circles.
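
As a rough C++ sketch of the conversion and the distance measure (my code and names, assuming 8-bit channel values; the post's own implementation isn't shown):

#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using Hue2 = std::array<double, 2>;   // (R_H, G_H)

// RGB -> rg-Hue: subtract the minimum, rescale by 255/(M-m), then normalise
// so the three components sum to 1 and only two need to be kept.
Hue2 toRgHue(double r, double g, double b) {
    double M = std::max({r, g, b});
    double m = std::min({r, g, b});
    if (M == m) return {1.0 / 3.0, 1.0 / 3.0};   // grey has no hue; pick the centre
    double d = 255.0 / (M - m);
    double rh = (r - m) * d, gh = (g - m) * d, bh = (b - m) * d;
    double sum = rh + gh + bh;
    return {rh / sum, gh / sum};
}

// Euclidean distance between two hues on the 2D rg-Hue plane.
double hueDistance(Hue2 a, Hue2 b) {
    return std::hypot(a[0] - b[0], a[1] - b[1]);
}

// Nearest-palette lookup, as used for the testcard images below: the first of
// several equally-low distances wins, which is where ties show up in the output.
std::size_t nearestHue(Hue2 pixel, const std::vector<Hue2>& palette) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < palette.size(); ++i)
        if (hueDistance(pixel, palette[i]) < hueDistance(pixel, palette[best]))
            best = i;
    return best;
}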

Below are images of an RGB testcard where each pixel's hue has been measured against a colour palette (60° Rainbow) and coloured with the closest match. The rg-Hue measure has a notable consistency to it and shows more red on the right-hand side than HSL, but between the yellow and red there is also a tiny sliver of purple. I believe this comes from equidistant hues and the nature of scanning a list for the lowest value when there are multiple equal lowest values:

Hue distance: HSL measure (left) and RHGHBH rg-Hue measure (right).

Rough Mandelbrot Sets

I’ve been reading up on Zdzisław Pawlak’s Rough Set Theory recently and wanted to play with it. Rough sets are used to address vagueness in data, so fractals seem like a good subject.

Super Quick Intro to Rough Sets:
A rough set is a tuple (ordered pair) of sets R(S) = \langle R_*, R^* \rangle which is used to model some target set S. The set R_* contains every element definitely in the set S, and the set R^* contains every element possibly in S. Its roughness can be measured by the accuracy function \alpha(S) = \frac{|R_*|}{|R^*|}. So when |R_*| = |R^*| the set is known as crisp (not vague), with an accuracy of 1.

A more formal example can be found on the wiki page but we’ll move on to the Mandelbrot example because it is visually intuitive:

The tiles are 36×36 pixels, the Mandelbrot set is marked in yellow. The green and white tiles are possibly in the Mandelbrot set, but the white tiles are also definitely in it.

Here the target set S contains all the pixels inside the Mandelbrot set, but we are going to construct this set in terms of tiles. Let T_1, T_2, T_3,\dots , T_n be the tile sets that contain the pixels. R^* is the set of all tiles T_x that contain at least one pixel inside the Mandelbrot set, and R_* is the set of all tiles T_x that contain only Mandelbrot pixels. So in the above example there are 28 tiles possibly in the set, including the 7 tiles definitely in the set, giving R(S) an accuracy of 0.25.
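
A small C++ sketch of the tile bookkeeping (the image size, escape-time parameters and membership test here are assumptions for the example, not the post's actual program):

#include <complex>
#include <cstdio>

// Minimal escape-time membership test for the pixel (px, py).
bool inMandelbrot(int px, int py, int imageSize, int iterations = 100) {
    std::complex<double> c(-2.0 + 2.5 * px / imageSize,
                           -1.25 + 2.5 * py / imageSize);
    std::complex<double> z(0.0, 0.0);
    for (int i = 0; i < iterations; ++i) {
        z = z * z + c;
        if (std::abs(z) > 2.0) return false;
    }
    return true;
}

int main() {
    const int imageSize = 504, tileSize = 36;   // 36x36 tiles, assumed 504x504 image
    int lower = 0, upper = 0;                   // |R_*| and |R^*|
    for (int ty = 0; ty < imageSize; ty += tileSize)
        for (int tx = 0; tx < imageSize; tx += tileSize) {
            bool any = false, all = true;       // any / every pixel of the tile in S
            for (int y = ty; y < ty + tileSize; ++y)
                for (int x = tx; x < tx + tileSize; ++x)
                    if (inMandelbrot(x, y, imageSize)) any = true; else all = false;
            if (any) ++upper;                   // tile possibly in S  -> counts towards R^*
            if (all) ++lower;                   // tile definitely in S -> counts towards R_*
        }
    std::printf("|R_*| = %d, |R^*| = %d, accuracy = %.3f\n",
                lower, upper, upper ? double(lower) / upper : 0.0);
}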

Tile width: 90, 72, 60, 45, 40, 36, 30, 24, 20, 18, 15, 12, 10, 9, 8, 6, 5, 4. There seems to be a lack of symmetry but it’s probably from computational precision loss.

Obviously, the smaller the tiles, the better the approximation of the set. Here the largest tiles (90×90 pixels) are so big that there are no tiles definitely inside the target set and only 10 tiles possibly in the set, making the accuracy 0. On the other hand, the 4×4 tiles give us |R_*| = 1211 and |R^*| = 1506, making for a much nicer:

\alpha(S) = 0.8 \overline{04116865869853917662682602921646746347941567065073}

For much more useful applications of Rough Sets see this extensive paper by Pawlak covering the short history of Rough Sets, comparing them to Fuzzy Sets and showing uses in data analysis and Artificial Intelligence.

Fractal Binary

I have previously talked about Complex Bases but I wanted to look again at Base (-1+i). It’s a really hefty number system, so the length of the bit-strings increases very quickly. I’d quite like to know if there is a way to assess Radix Economy for complex and negative bases, so if there are any mathematicians out there who know – please tell me!

Base (-1+i) to Base 10.

Visualising Numbers
Today I wrote a little C++ program to act on Base 2 arithmetic but convert to decimal as if it was Base (-1+i); this meant I could increment through the bits in an ‘ordered’ fashion. The image to the left is the text output of the program. It doesn’t have a very obvious pattern to it – in fact, the pattern-order we derive from it is somewhat an imposed one. This is because complex numbers do not have a linear order (or Total Order) and I’m trying to list them in a linear manner. They can, on the other hand, be Well-Ordered in correspondence with the natural numbers, which is what we’re doing here.
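
Roughly what that little program does, sketched in C++ (my reconstruction, not the original source): count upward in ordinary binary and evaluate the same bit-string as a base (-1+i) numeral.

#include <cstdio>

// Interpret the bits of n (ordinary binary) as digits in base (-1+i):
// the value is the sum of (-1+i)^k over the set bits k.
void baseMinusOnePlusI(unsigned n, long long& re, long long& im) {
    long long pr = 1, pi = 0;              // current power, starts at (-1+i)^0
    re = im = 0;
    for (; n != 0; n >>= 1) {
        if (n & 1) { re += pr; im += pi; }
        long long nr = -pr - pi;           // multiply the power by (-1+i)
        long long ni =  pr - pi;
        pr = nr; pi = ni;
    }
}

int main() {
    // Increment through the bit-strings "in order" and print the Gaussian
    // integer each one denotes; plotting (re, im), coloured by bit-string
    // length, draws the twindragon.
    for (unsigned n = 0; n < 16; ++n) {
        long long re, im;
        baseMinusOnePlusI(n, re, im);
        std::printf("%2u -> %lld%+lldi\n", n, re, im);
    }
}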
If we take the real and imaginary values of each number and use them as the x and y co-ordinates (like I did for generating the Mandelbrot Set fractal) then the fractal “Twindragon” appears:

Colour maps of number length in Base (-1+i).

The program I wrote runs through binary numbers starting at 0, colouring the pixel (x=r, y=i) discretely depending on number length. The result shows all Gaussian integers representable by all possible 16-, 12- and 8-bit complex binary strings in base (-1+i). The colour mapping relates to the position of the Most Significant Bit (essentially the bit-string length). 0 and 1 are both of length one and are the dark blue in the centre of the fractal. The 12-bit and 8-bit fractal maps have been zoomed in on to emphasise the self-similarity of the shape.

Colouring the fractals like this is a nice way of showing the distribution of numbers in the complex system but, going back to the math, a number system isn’t useful without arithmetic. Luckily the (-1+i) system is closed under addition, subtraction and multiplication. For addition and multiplication it is the same as normal binary, with the difference being in the carry. Below is a table of all possible carry situations:

1+1 = 1100
1+1+1 = 1101
1+1+1+1 = 111010000
1+1+1+1+1 = 111010001
1+1+1+1+1+1 = 111011100
1+1+1+1+1+1+1 = 111011101
1+1+1+1+1+1+1+1 = 111000000
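
Those sums can be reproduced (or extended) by converting each integer to base (-1+i) directly, since adding n ones is just the number n. A C++ sketch of the repeated-division method (my code, not the post's):

#include <cstdio>
#include <string>

// Write the Gaussian integer a + bi in base (-1+i). Each step takes the digit
// r = (a + b) mod 2, then divides (a - r) + bi exactly by (-1 + i).
std::string toBaseMinusOnePlusI(long long a, long long b) {
    if (a == 0 && b == 0) return "0";
    std::string digits;
    while (a != 0 || b != 0) {
        long long r  = ((a + b) % 2 + 2) % 2;   // remainder digit, 0 or 1
        long long na = (b - a + r) / 2;         // ((a - r) + bi) / (-1 + i)
        long long nb = (r - a - b) / 2;
        digits.insert(digits.begin(), char('0' + r));
        a = na;
        b = nb;
    }
    return digits;
}

int main() {
    for (int n = 2; n <= 8; ++n)    // 1+1 up to eight ones, matching the table
        std::printf("%d ones = %s\n", n, toBaseMinusOnePlusI(n, 0).c_str());
}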

Division in the system is rather complex; an explanation of that, plus examples of addition/subtraction/multiplication, can be found in a short paper called “Arithmetic in Complex Bases” by William Gilbert. The paper also talks about an equivalent to decimal, base (-3+i), using the digits [0,1,2,3,4,5,6,7,8,9].

Soul & Simulation

For a good while now I’ve been reading The Computer and the Brain, and it has had me thinking about consciousness and ultimately the universe. In it, von Neumann talks about the relationships between computer and brain functions. For example, the electrical nature of neurons in the brain gives rise to an obvious parallel with binary systems. Because of such similarities a neuron can be modelled by computation; the first artificial neuron was modelled in 1943 by McCulloch (a neuroscientist) and Pitts (a logician). Since then extensive work has been done on artificial neural networks, but as development continues, can consciousness be recreated?

Artificial Intelligence
As it is 2012: Year of Alan Turing, this post has found itself at just the right time!
Artificial Intelligence is a branch of computer science that is trying to emulate intellect in computers. There are four main approaches:

Systems that think like humans.
Systems that think rationally.
Systems that act like humans.
Systems that act rationally.

Back at University I had the idea that cryptography was the key (pun intended) to simulating consciousness. I thought about how private our thoughts are and how disconnected they seemed from the outside world, and that some sort of crypto-intelligent agent could act as the conscious. Later I dropped this hypothesis; it sounded cool but didn’t actually make much sense.

I define intelligence as the application of knowledge: the more intellect, the better you apply prior knowledge. This leads me to a conclusion: the ‘like a human’ systems may be intelligent but they are not the pinnacle – intelligence transcends human cognition.

Turing’s approach, act like a human, was initially the most successful: it led to natural language processing and then chatbots like ELIZA and PARRY, and it also brings up the question of consciousness! His Turing Test has yet to be passed by any AI; such a system would be deemed Strong AI. One of the traits of a strong AI system is its ability to infer from an uncertain situation, and there is a lot of hope in incorporating quantum computing, which would allow decisions to be made on a much more probabilistic basis. Google has been testing quantum computers’ ability to recognise faces, and it is a vast improvement on classical computers.

My new idea for how a conscious AI might be created is to go for a hybrid approach, mimicking human instinct and thought processes: an act-human interface with a think-rationally system assessing the incoming ‘thoughts’. Back-and-forths could happen on uncertain results using some sort of reasoning system. I have a feeling incompleteness and imprecision are the only actual problems with forming strong AI, but for now that’s just a voice in my head. Talking about voices in my head, what about free will?

Free Will

"You see there is only one constant. One universal. It is the only real truth. Causality. Action, reaction. Cause and effect." - Merovingian. The Matrix Reloaded.

My view on free will is that of a reductionist; a clean explanation can be found in Sam Harris’ video on the lack of free will. In this video Michio Kaku uses quantum mechanics as a gateway to free will. Using uncertainty as an argument for free will is changing the will-type: if uncertainty is affecting your choices, you are exhibiting uncertain will. They are not your choices. They are not free. To believe in free will is to believe in dualism. Some people like Daniel Dennett would disagree, but compatibilists are just lay-dualists and wishful determinists; if the choice can be reduced to the efforts of the brain at any point, there is no choice, just action-reaction.

I can’t buy into dualism. A soul, for example, would need inputs and outputs to be anything it’s claimed to be; it would need to interact with a fundamental force to affect our physical choices or record ethics for later judging. If that were the case it would be detectable. Supernatural entities acting on natural entities must themselves be at least part natural.

Determinism
My ultimate view is a deterministic one:

\mbox{ The universe is } \boldsymbol{f(x) = 42} \mbox{, we're just the working out. }

I like to think that philosophy is trying to understand the answer, cosmology is trying to figure out the input, and physics is learning about the function. More to this: as long as x is the same, the answer to life, the universe and everything is always 42. By this I mean that if we were to reset the universe, the exact same thing would happen without any fluctuation or difference at all.

Randomness (See also: my random post)
My deterministic approach to thinking stems directly from my skepticism of randomness. In chaos theory the randomness is known to come from a lack of exact measurements, but in quantum physics the idea of probability is introduced. I refute the claim that probability is proof of true randomness – again, unpredictability is a lack of exact measurements. Unfortunately, simply recreating the test parameters isn’t the same as exactly re-doing the test itself, so it is a non-testable hypothesis, but a rational and logical one. All evidence so far points towards the idea of a deterministic universe; science itself is based on the idea of laws predicting events. Bertrand Russell puts it well when he says “Where determinism fails, science fails.” (Determinism and Physics). He also talks about determinism as a scientific dogma, or closer to a prerequisite, in Science and Religion.

But yeah, The Computer and the Brain? Pretty good book so far...