Mandelbrot: SETL & Lisp

SETL is a general-purpose programming language developed by Jack Schwartz in the late 1960s. The language is multiparadigm: both procedural (with subroutines and control structures like if statements) and set-theoretic. The latter is not often mentioned when talking about language paradigms, and is not to be confused with logic programming. The concept is outlined in the 1974 paper An Introduction to the Set Theoretical Language SETL. Importantly, data in SETL comes in three main types: Atoms, Sets, and Sequences (also referred to as tuples or ordered sets).

A good overview of SETL can be found in the GNU SETL Om.


Pixels inside the Mandelbrot set are marked in black; the code used to generate them (mandelbrot.setl) can be found here.

Generation of the Mandelbrot Set
On the Wikipedia page for the Mandelbrot Set you will find its formal definition, which looks like so: M = \left\{c\in \mathbb C : \exists s\in \mathbb R, \forall n\in \mathbb N, |P_c^n(0)| \le s \right\}.

The most fantastic thing about SETL is how powerful its set-builder notation is. You can write the set almost exactly as in the formal definition (dropping the ‘there exists some real s’ for the practical application):

M := {c : c in image | forall num in {0..iterations} | cabs(pofc(c, num)) <= 2};

For some reason SETL doesn’t support complex numbers, but they are easily handled by writing the necessary procedures, such as cabs, ctimes, and cplus, which deal with tuples of the form [x, y]. The variable image is a sequence of these tuples. Another procedure, pofc, is a more practical version of P_c^n(0).
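As an illustration only, here is how those helpers and the set-former might look in Python. The names cabs, cplus, ctimes and pofc are the post’s; the bodies are my guesses at the obvious implementations, and the sample image grid is mine:

```python
import math

# Sketch only: complex numbers are [x, y] tuples, as described above.

def cplus(a, b):
    """Component-wise complex addition."""
    return [a[0] + b[0], a[1] + b[1]]

def ctimes(a, b):
    """Complex multiplication: (a0 + a1*i)(b0 + b1*i)."""
    return [a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0]]

def cabs(a):
    """Complex modulus."""
    return math.hypot(a[0], a[1])

def pofc(c, n):
    """The n-th iterate of P_c(z) = z^2 + c, starting from z = 0."""
    z = [0.0, 0.0]
    for _ in range(n):
        z = cplus(ctimes(z, z), c)
    return z

# A small analogue of the SETL set-former above, over a coarse grid:
iterations = 20
image = [[x / 10.0, y / 10.0] for x in range(-25, 11) for y in range(-15, 16)]
M = {tuple(c) for c in image
     if all(cabs(pofc(c, num)) <= 2 for num in range(iterations + 1))}
```

Like the SETL one-liner, this recomputes pofc from scratch for every num, which is wasteful but faithful to the formal definition.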

Interaction with the Lisp graphical display
The goal of the SETL program is to produce the set of points that lie inside the Mandelbrot set. To actually display the image I used Common Lisp, showing a widget with the image via CommonQt. In a very rudimentary way I had mandelbrot.setl take arguments describing the image and then print the set of pixel co-ordinates. All the Lisp had to do was swap any ‘{}’ or ‘[]’ for ‘()’, read the result as Lisp code, and update the image.

(setf in-set (read-from-string (parse-line (uiop:run-program "setl mandelbrot.setl -- -250 -250 250 8" :output :string))))
(draw-mandel instance in-set)

An afterthought was to make a Mandelbrot explorer where you could zoom and move using the mouse, but the SETL code is such an inefficient way of generating the set that it’s not worth it. As an attempt to mimic the formal definition it was highly successful and fun, though much quicker SETL code could be written for generating the Mandelbrot set.

Source file for mandelbrot.setl and the Lisp/CommonQt front end can be found here.

Accuracy of Generated Fractals

Note: I refer to the Mandelbrot set in general as the M-set for short.

When I was writing the post on Rough Mandelbrot Sets I tried out some variations on the rough set. One variation was to measure the generated M-set against a previously calculated master M-set of high precision (100,000 iterations of z = z^2 + c). In the image below the master M-set is in white and the generated M-sets are in green (increasing in accuracy):

50 Against Master

Here, instead of approximating with tiles, I measured the accuracy of the generated sets against the master set by pixel count. Where P = \{ \text{set of all pixels} \}, the ratio P_{master} / P_{generated} produced something that threw me: the generated sets made sudden but periodic jumps in accuracy:
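A minimal sketch of this pixel-count measurement, assuming the standard escape-time membership test (the function names and grid are mine, not the code used for the post):

```python
def in_mset(c, iterations):
    """Escape-time membership test: True if |z| never exceeds 2 over
    the given number of iterations of z -> z^2 + c, for c = (x, y)."""
    x, y = c
    zx, zy = 0.0, 0.0
    for _ in range(iterations):
        zx, zy = zx * zx - zy * zy + x, 2 * zx * zy + y
        if zx * zx + zy * zy > 4:
            return False
    return True

def pixel_accuracy(grid, low_iters, master_iters):
    """Ratio |P_master| / |P_generated| over the same pixel grid; the
    generated set shrinks toward the master set as low_iters grows."""
    generated = sum(1 for c in grid if in_mset(c, low_iters))
    master = sum(1 for c in grid if in_mset(c, master_iters))
    return master / generated

# A coarse grid over the usual viewing window (the post used 100,000
# master iterations; a few hundred keeps this sketch fast).
grid = [(x / 20.0, y / 20.0) for x in range(-50, 21) for y in range(-30, 31)]
```

Plotting pixel_accuracy(grid, k, 200) against k reproduces the shape of the experiment, though not the exact numbers from the post.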

Graph One

Looking at the data I saw the jumps were, very roughly, at multiples of 256. The image being generated was 256 by 256 pixels, so I changed it to N by N for N = 120, 360, 680, and the jumps still came every ~256 iterations. So I’m not really sure why; it might be obvious, so if you know, tell me in the comments!

I am reminded of the images generated from Fractal Binary and other Complex Bases where large geometric entities can be represented on a plane by iteration through a number system. I’d really like to know what the Mandelbrot Number System is…

Below is a table of the jumps and their iteration index:

Iterations    Accuracy measure
255           0.241929
256           0.397073
510           0.395135
511           0.510806
765           0.510157
766           0.579283
1020          0.578861
1021          0.644919
1275          0.644919
1276          0.679819
1530          0.679696
1531          0.718911

Rough Mandelbrot Sets

I’ve been reading up on Zdzisław Pawlak’s Rough Set Theory recently and wanted to play with rough sets. They are used to address vagueness in data, so fractals seemed like a good subject.

Super Quick Intro to Rough Sets:
A rough set is a tuple (ordered pair) of sets R(S) = \langle R_*, R^* \rangle which is used to model some target set S. The set R_* contains every element definitely in S, and the set R^* every element possibly in S. Its roughness can be measured by the accuracy function \alpha(S) = \frac{|R_*|}{|R^*|}. So when |R_*| = |R^*| the set is known as crisp (not vague), with an accuracy of 1.

A more formal example can be found on the wiki page but we’ll move on to the Mandelbrot example because it is visually intuitive:

The tiles are 36×36 pixels, the Mandelbrot set is marked in yellow. The green and white tiles are possibly in the Mandelbrot set, but the white tiles are also definitely in it.

Here the target set S contains all the pixels inside the Mandelbrot set, but we are going to construct this set in terms of tiles. Let T_1, T_2, T_3,\dots , T_n be the tile sets that contain the pixels. R^* is the set of all tiles T_x containing at least one pixel inside the Mandelbrot set; R_* is the set of all tiles T_x that contain only Mandelbrot pixels. So in the above example there are 28 tiles possibly in the set, including the 7 tiles definitely in the set, giving R(S) an accuracy of 0.25.
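The tile construction above can be sketched in Python. This is a hypothetical helper, not the code used for the images; pixels are (x, y) coordinates and tiles are identified by their top-left corner:

```python
def rough_tile_sets(pixels, width, height, tile):
    """Partition a width x height grid into tile x tile blocks and
    return (lower, upper): R_* is the tiles wholly inside `pixels`,
    R^* the tiles containing at least one pixel of `pixels`."""
    lower, upper = set(), set()
    for tx in range(0, width, tile):
        for ty in range(0, height, tile):
            block = {(x, y) for x in range(tx, tx + tile)
                            for y in range(ty, ty + tile)}
            hit = block & pixels
            if hit:
                upper.add((tx, ty))
                if hit == block:
                    lower.add((tx, ty))
    return lower, upper

def accuracy(lower, upper):
    """Pawlak's accuracy function alpha(S) = |R_*| / |R^*|."""
    return len(lower) / len(upper) if upper else 0.0
```

Feeding in the Mandelbrot pixel set from the previous post and a 36-pixel tile width would give the |R_*| = 7, |R^*| = 28 example above.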

Tile width: 90, 72, 60, 45, 40, 36, 30, 24, 20, 18, 15, 12, 10, 9, 8, 6, 5, 4. There seems to be a lack of symmetry but it’s probably from computational precision loss.

Obviously, the smaller the tiles, the better the approximation of the set. Here the largest tiles (90×90 pixels) are so big that no tiles are definitely inside the target set while 10 tiles are possibly in it, making the accuracy 0. On the other hand, the 4×4 tiles give us |R_*| = 1211 and |R^*| = 1506, making for a much nicer:

\alpha(S) = 0.8 \overline{04116865869853917662682602921646746347941567065073}

For much more useful applications of Rough Sets see this extensive paper by Pawlak covering the short history of Rough Sets, comparing them to Fuzzy Sets and showing uses in data analysis and Artificial Intelligence.

Randolph Diagrams

This is what a genius looks like.

There is something aesthetic and elegant about Randolph diagrams, unfortunately they aren’t commonly used. I found out about them when reading Embodiments of Mind by the glorious and bearded Warren McCulloch.

The Original Proposal:
In the book he refers to them as “Venn functions”; they are briefly explained as being derived from Venn’s diagrams for sets, but in McCulloch’s case they were used to express logical statements. If you draw a Venn diagram of two circles intersecting you are left with four spaces ( a/b, a&b, b/a, U ); adding a jot in a space to denote truth, or leaving it blank for false, gives you the 16 possible logical combinations. They are a great example of the isomorphism between logic and set theory:

He used these as tools to help teach logic to neurologists, psychiatrists and psychologists. Later he developed them into a probabilistic logic, which he applied to John von Neumann‘s logical neuron nets. I will discuss this in the next post.

Randolph’s Diagrams:

The truth values for three statements.

McCulloch does mention that they could be applied to more than two statements but doesn’t show how. Later, John F. Randolph developed the system into an alternative visualisation of set relations that copes neatly with more than two sets (something Venn diagrams begin to struggle with after five). For each additional statement/set a new line is introduced in each quadrant: four statements would be a large cross with four smaller crosses, one in each quadrant.

Wikipedia has an example of the tautological proof for the logical argument modus ponens, which can be found here, but I thought it would be good to show how three values are handled. So we’ll use syllogism, as in “Socrates is a man, all men are mortal, therefore Socrates is mortal”, reduced in its logical form to the tautology:

((A implies B) and (B implies C)) implies (A implies C)
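That this form really is a tautology can be checked mechanically by brute force; each of the eight truth assignments corresponds to one region of the three-statement Randolph diagram. A quick sketch in Python (my own helper names):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# The form is a tautology exactly when every one of the 2^3 truth
# assignments comes out True, i.e. every region gets a jot.
is_tautology = all(
    implies(implies(a, b) and implies(b, c), implies(a, c))
    for a, b, c in product([False, True], repeat=3)
)
```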

Large Numbers, Infinities & God (2/3)

This post is continuing on from my discussion on large numbers and notation. I am going to assume some knowledge of Set Theory for this post as there really is a lot to write about and I have found it hard to cut it down to a nice size.

2. Infinities
Infinity as a concept has scared mathematicians for a long time: not only was it hard to get your head around, it wasn’t really usable. A recent use is in calculus as, what I would call, a nominal infinity (‘increases without bound’, as opposed to a value or position): \displaystyle{\lim_{x \to \infty}}. Leibniz was fascinated by infinity and its counterpart, the infinitesimal \frac{1}{\infty} = 0.000\dots 1.

The Isha Upanishad, a Hindu scripture, gives what I think is the polite version of infinity: “if you remove a part from infinity or add a part to infinity, still what remains is infinity”. This philosophical statement describes a mathematical proposition, cardinal arithmetic, which comes from a branch of mathematics called set theory, developed by Georg Cantor and Richard Dedekind in the 1870s.

Cardinal infinites (Aleph numbers):
Say we have a set that holds all the natural numbers: \boldsymbol{\mathbb{N}} = \{0,1,2,3,4,...\}. What is the size of such a set? Well, \boldsymbol{|\mathbb{N}| = \aleph_0} (aleph-null), the smallest possible infinity and the first of the Aleph numbers. We just dealt with the cardinality of the natural numbers; now let’s move on to the rational numbers \boldsymbol{\mathbb{Q}}, all the numbers that can be expressed as \frac{n}{m}:

It turns out that | \boldsymbol{\mathbb{Q}} | = \aleph_0, but how?
By definition, a set S is countable if there exists an injective function f: S \to \mathbb{N} from S to the natural numbers. The fractions would seem uncountable in a linear sense, but Georg Cantor came up with a method for one-to-one pairing with the natural numbers. (See right.)
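Cantor’s idea can be sketched as code. Here I use the standard Cantor pairing function rather than the zig-zag from the image (my choice of injection; the helper names are mine):

```python
from fractions import Fraction

def cantor_pair(n, m):
    """The standard Cantor pairing function, an injection N x N -> N."""
    return (n + m) * (n + m + 1) // 2 + m

def rational_index(q):
    """Assign a positive rational (in lowest terms) a natural number,
    witnessing that the rationals are countable."""
    return cantor_pair(q.numerator, q.denominator)

# Distinct rationals always receive distinct indices:
qs = {Fraction(n, m) for n in range(1, 30) for m in range(1, 30)}
assert len({rational_index(q) for q in qs}) == len(qs)
```

Fraction reduces to lowest terms automatically, so equal rationals like 1/2 and 2/4 get the same index, and the pairing function keeps distinct rationals apart.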

The final task is the real numbers, \boldsymbol{\mathbb{R}}, which contain every possible finite number. So it has 0.000001, √2, e, π, 55, 73896737483 and everything else! This set has no possible one-to-one pairing with the natural numbers. Cantor put forward the Continuum Hypothesis, which states that no set exists with a cardinality between that of \boldsymbol{\mathbb{N}} \text{ and } \boldsymbol{\mathbb{R}}. Although the cardinality of the real numbers is stated as | \boldsymbol{\mathbb{R}} | = \mathfrak c = 2^{\aleph_0} > \aleph_0 \,, the Continuum Hypothesis implies | \boldsymbol{\mathbb{R}} | = \aleph_1. Gödel and Cohen later showed that the hypothesis can neither be proved nor disproved in ZF set theory.

In cardinal arithmetic \aleph_0 + n = \aleph_0, \; n \cdot \aleph_0 = \aleph_0, \; {\aleph_0}^n = \aleph_0, just like in the Hindu scripture.

Beth numbers:
The Beth numbers run parallel to the Aleph numbers and are the successive cardinalities of power sets of the natural numbers. For an example of a power set, take S=\{x,y,z\}; its power set is: P(S) = \left\{\{\}, \{x\}, \{y\}, \{z\}, \{x, y\}, \{x, z\}, \{y, z\}, \{x, y, z\}\right\}\,\!.

\beth_0 = \aleph_0
\beth_{n+1} = 2^{\beth_n}
| P(\mathbb{N}) | = \beth_1
| P(P(\mathbb{N})) | = \beth_2 etc..
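The finite analogue of \beth_{n+1} = 2^{\beth_n} is easy to check in code: the power set of an n-element set always has 2^n elements. A small sketch (my own helper name):

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, smallest first."""
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

# The eight subsets of {x, y, z}, confirming |P(S)| = 2^|S|:
subsets = power_set({'x', 'y', 'z'})
assert len(subsets) == 2 ** 3
```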

These numbers are related to the Generalised Continuum Hypothesis.

Ordinal infinities:
In set theory a set which is well-ordered (a is less than b, which is less than c, etc.) has an ordinal number; the smallest ordinal infinity is omega:

\omega = \{0<1<2<3<4<...\} and | \omega | = \aleph_0

To get the cardinality of an ordinal infinity we ignore the order, this will become important when we assess ordinal arithmetic. For example here are three ordinal additions:

\omega + \boldsymbol{1} = \{0_0<1_0<2_0<3_0<4_0<...\boldsymbol{0_1}\}

\boldsymbol{1} + \omega = \{\boldsymbol{0_0} < 0_1<1_1<2_1<3_1<4_1<...\} = \omega \text{ (after relabeling the elements)}

\omega + \boldsymbol{\omega} = \{0_0<1_0<2_0<3_0<4_0<...< \boldsymbol{0_1<1_1<2_1<3_1<4_1<...}\}

The difference between ω+1 and 1+ω is the placing of the dots, as they imply ad infinitum. In ω+1 there is no direct predecessor of the second 0, just infinitely many dots.
Now although ω, ω+1 and ω+ω are three separate ordinal values, they all have the cardinality \aleph_0, because when we take the size we ignore the order. It also stands that n·ω = ω ≠ ω·n. The first uncountable ordinal, traditionally denoted by ω_1 or sometimes by Ω, is the supremum of all countable ordinals. The elements of ω_1 are the countable ordinals, of which there are uncountably many:

\omega_1 = \sup \{ \omega,\ \omega + 1,\ \omega + 2,\ \dots,\ \omega \cdot 2,\ \omega \cdot 2 + 1,\ \dots,\ \omega^2,\ \dots,\ \omega^3,\ \dots,\ \omega^\omega,\ \dots,\ \omega^{\omega^\omega},\ \dots,\ \varepsilon_0,\ \dots \}

\omega_n = \sup \{ \omega_{n-1},\ \omega_{n-1} + 1,\ \omega_{n-1} + 2,\ \dots,\ \omega_{n-1} \cdot 2,\ \omega_{n-1} \cdot 2 + 1,\ \dots \} and | \omega_n | = \aleph_n.

Surreal numbers:
The surreal number system was thought up by John Conway; it contains the real numbers as well as infinite and infinitesimal ordinal numbers. They are also known as the extraordinal numbers. They have fathoms of interesting properties, but if I carry on I’ll never stop. For more on these amazing numbers I recommend the book ‘Surreal Numbers‘ by Donald Knuth.

\aleph * \infty * \omega

The Russell Equation (R tea R)

This post is quite nonsensical, read at your own peril.

Bertrand Russell is a fantastically interesting character: a pacifist activist against the First World War, mathematician, logician, philosopher and social critic. Alongside being one of the co-founders of analytical philosophy, co-authoring Principia Mathematica and contributing to fields such as logic, mathematics, set theory and epistemology, Bertrand came up with two important arguments. They are Russell’s Teapot (the burden of proof lies with the person making a scientifically unfalsifiable claim) and Russell’s Paradox (the set R contains all sets that are not members of themselves; is R in the set?). I enjoyed reading up on these two, and simply because they both start with “Russell’s” I thought I’d merge them:

Let me introduce the teapot operator into Cantor’s naive set theory. What the teapot symbol represents in the equation is twofold: 1) there is an indisputable relation between R and R, and 2) the burden of proof [for a solution] lies with the scientific community, in this case set theorists, to come up with new axioms and produce a better, less fallible system (which they did). The teapot operator basically means “the relationship of the left and the right transcends this system; get a new one.” For example, another application could be to state that P tea NP:

Although I may be getting ahead of myself in thinking that the P vs NP argument transcends today’s computational theory, hopefully you see where I’m coming from. All in all it was a bit of fun to portray the ideology of the scientific method by continuously putting the onus upon scientists themselves to come up with solutions or corrections.

Post-Transcendental Numbers

I might be wrong right off the bat here, but e can be defined by the infinite series 1/0! + 1/1! + 1/2! + … + 1/n! + …
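That series converges very quickly, which is easy to check numerically. A small sketch (the helper name is mine):

```python
import math

def e_partial(n):
    """Partial sum 1/0! + 1/1! + ... + 1/n! of the series for e."""
    return sum(1 / math.factorial(k) for k in range(n + 1))
```

Already at n = 15 the partial sum agrees with e to around 13 decimal places, since the tail is bounded by roughly 2/16!.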

So, I was wondering: does a number exist that:
-> Cannot be expressed as a fraction (Irrational)
-> Cannot be expressed as a root of a polynomial with rational coefficients (Transcendental)
-> Cannot be expressed by an infinite series following a set rule (???)