# (A ((Pretty OK) Lisp) Interpreter {in SETL})

This post overviews a Common Lisp interpreter written in the Set Theoretic programming language SETL. Source code is here: https://bitbucket.org/NeuralOutlet/setlisp
A complete list of available functions can be found in the doc folder.

SETL
Originally developed in the 1960s by Jack Schwartz, SETL (Set Language) is peculiar in that it feels old and new at the same time. You’re writing imperatively with these chunky procedures, then suddenly doing aggressive set comprehension, like generating a set of unique primorial numbers in one terse but coherent line:

-- This is a comment
pset := {*/{p in {2..n} | forall m in {2..p - 1} | p mod m > 0} : n in {0..20}};
print(pset); --prints: {2 6 30 210 2310 30030 510510 9699690}


David Bacon developed GNU SETL between 1989 and 2009. Fantastically nice to use, it extends the original somewhat by adding POSIX support. It should be noted, though, that the parts that feel new were in SETL from its inception: it is said to be the first language to include list comprehension.

SETL spawned many versions of itself (SETLB, SETL2, ISETL, SetlX, etc.) and influenced two other Set Theoretical languages called ProSet and Slim. Out of all these languages, though, the only ones still kicking around are GNU SETL, ISETL, and SetlX. There was also a non-Set Theoretic language influenced by SETL: ABC, the predecessor to Python.

A good taste of the language can be found in the GNU SETL Om; for a more in-depth understanding there is David Bacon’s PhD thesis. A ton of useful and interesting example programs can be found on Håkan Kjellerstrand’s SETL page. Historically, there is also an early paper by Jack Schwartz.

SETLisp
So far SETLisp has about 60 functions (see the SETLisp documentation) and the datatypes: Boolean, Number, String, Symbol, Function, List, and Set.

Sets – Lisp has a function for making lists; it looks like (list 5 4 3 3), which produces the list (5 4 3 3). Pretty simple. The function ‘set’ already exists for setting the value of a symbol, so I chose to use ‘setl’ as the set function: (setl 5 4 3 3) yields {3 4 5}. The curly brackets indicate it is a set, not a list. In Set Theory neither the order nor the multiplicity of elements in a set matters, so SETL orders the elements of a set by value and removes duplicates.
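For illustration, the dedupe-and-order behaviour can be mimicked in Python (a sketch of the semantics, not SETLisp itself):

```python
# Sketch: mimic (setl 5 4 3 3) -> {3 4 5}.
# A SETL set has no duplicates and prints in sorted order,
# so deduplicating then sorting reproduces what SETL shows.
def setl(*args):
    return sorted(set(args))

print(setl(5, 4, 3, 3))  # [3, 4, 5]
```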

Expressions – The use of sets is extended to set-expressions; these are more of a playful idea than anything useful. The arguments are automatically bagged up into a set instead of being mapped to variable names in a list. Here you can see addition used as an s-expression and as a set-expression:

The add function specifically is just the ‘+’ operator cast over a set or list of elements, which means that unlike ‘+’ in Lisp this one will do numeric affirmation as well as addition. In fact it extends further, also performing set union and string or tuple concatenation. I added the functions ‘append’ and ‘concatenate’, but they just call ‘+’. The code itself uses the ‘/’ reduction notation alongside the list comprehension:

-- (+ args...)
return [+/sargs, env];
end proc;

-- {+ args...}
return [+/sargs, env];
end proc;
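SETL’s ‘+/’ is a fold of ‘+’ over a collection. A rough Python analogue (function name hypothetical) shows how one reduction can cover addition, concatenation, and set union:

```python
from functools import reduce
import operator

def plus_reduce(args):
    """Fold '+' over a collection, the way SETL's +/ does.
    Python's '+' already concatenates strings and lists/tuples;
    sets need '|' for union, so they are special-cased."""
    args = list(args)
    if all(isinstance(a, (set, frozenset)) for a in args):
        return reduce(operator.or_, args)
    return reduce(operator.add, args)

print(plus_reduce([1, 2, 3]))         # 6
print(plus_reduce(["foo", "bar"]))    # foobar
print(plus_reduce([{1, 2}, {2, 3}]))  # {1, 2, 3}
```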


There is a ‘mapchoice’ function, which is a play on the ‘mapcar’ function in Lisp and the Choice Function in Set Theory. The function ‘cardinality’ does the same as ‘length’. Also, any s-expression I add with only one or zero arguments is defined for both s/set-expressions: {quit} is the same as (quit), for example.

This does open up overloading: a function can be defined as an s-exp, a set-exp, or both (which requires two identical definitions). The defun function works slightly differently for each. S-exps are defined just as they should be, but the set-exp hides all parameters in a named set, which needs to be accessed with set functions such as ‘from’ or with looping:

;; S-Expression
(defun my-func (param1 param2 param3)
  (print param1)
  (print param2)
  (print param3))
(my-func 14 80 32)  ; prints 14 80 32

;; SET-Expression
{defun my-func params
  (print (from params))  ; Note: 'from' is destructive to the set, but
  (print (from params))  ;       (loop for val in params do ...) is not
  (print (from params))}
{my-func 14 80 32}  ; prints 14 32 80


Behind the Scenes – SETL has another useful type (although it is more of a mechanism) called maps. Straight from Set Theory, of course – maps, sets, and tuples are the core of the language. A map is defined as a set containing only 2-tuples. Below is a map called stuff that has ‘this’ mapped to 5 and ‘that’ mapped to a tuple of the numbers 1 to 100:

stuff := {['this', 5], ['that', [1..100]]};
print( stuff('this') );
print( stuff('that') );


In SETLisp the variable environment is constructed with maps, and so is the function type. To create local scope, a list of maps is treated as a stack – lookups check from the top down, searching for a value mapped to a variable, then the definition map is popped off the stack when leaving scope. The addition of an implicit map type in SETLisp was my favourite feature to add; check out the Fun with Maps example.
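The scoping mechanism can be sketched in Python (a sketch of the idea, not SETLisp’s actual code):

```python
# Environment as a stack of maps: lookups search top-down,
# and the top map is popped when a scope is left.
class Env:
    def __init__(self):
        self.stack = [{}]            # global scope at the bottom

    def push(self, frame=None):      # enter a scope
        self.stack.append(frame or {})

    def pop(self):                   # leave a scope
        self.stack.pop()

    def lookup(self, name):
        for frame in reversed(self.stack):  # top-down search
            if name in frame:
                return frame[name]
        raise NameError(name)

env = Env()
env.stack[0]["x"] = "global"
env.push({"x": "local"})
print(env.lookup("x"))  # local  -- shadows the global binding
env.pop()
print(env.lookup("x"))  # global -- back in the outer scope
```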

Printing functions: write, print, princ, prin1, format.

Obviously the last one is SETLisp-specific: it is a wrapper for evaluating a SETL variable or variable constructor. In the setlisp.rc file you can see it is used, in conjunction with ‘eval-setl’, to allow SETL list/set comprehension in the macros ${ … } and $( … ).

Nil vs Om – These are two very similar concepts that appear in a lot of programming situations. In SETL any function without a return value will return om (printed as an asterisk ‘*’); the same goes for nil in Lisp. Lists in Lisp are constructed from cons cells ending with a nil cdr, (a . (b . (c . nil))); in SETL a tuple is hypothetically of infinite size, with om in every trailing index: [a b c * * … *]. One problem this brings up is that (list nil nil nil) should evaluate to (NIL NIL NIL), but [om, om, om] creates an empty tuple instead of [* * *]. The boolean types in Common Lisp are ‘t’ and ‘nil’, although anything not nil is considered true – SETL has the standard true/false duality, but then om is there being all interesting.

Types – SETL doesn’t have characters. It’s very weird to me, but a character is the same as a string of length 1. But what are strings made of? Aren’t they ordered sets of characters? I don’t know why there isn’t a byte-friendly char type in SETL, but they exist in Common Lisp, so I added a type system for various things. There are types for: Function, List, Set, Character, Complex, and Symbol. The syntax for characters is set up in the runtime configuration file, but I never implemented complex numbers in the form #C(1 2).

Macros – I didn’t add the more powerful ‘defmacro’ to SETLisp, but for some of the beloved Lisp syntax, such as ‘quote and #’function, I added ‘set-macro-character’ and ‘set-dispatch-macro-character’. The extra syntax is implemented in the setlisp.rc file (read in before anything else). It begins by adding single-line comments:

"Strings are nice, but let's make comments!"
(defun comment-macro (s c)
  (values))
(set-macro-character ";" (function comment-macro))
(format t "  Comment macro added.~%")


and builds up to this definition for multi-line comments:

; Multi-line comments
(defun multi-line-macro (s c n)
  (if (equal "#" (peek-char s))
      (progn
        (values))
      (multi-line-macro s c n)))
(set-dispatch-macro-character #\# #\| #'multi-line-macro)


# Mandelbrot: SETL & Lisp

SETL is a general-purpose programming language developed by Jack Schwartz back in the late 1960s. The language is multiparadigm – both Procedural (containing subroutines and control structures like if statements) and Set Theoretical. The latter is not often mentioned when talking about language paradigms – and is not to be confused with Logic Programming. The concept is outlined in the 1974 paper An Introduction to the Set Theoretical Language SETL. Importantly, though, data in SETL is of three main types: Atoms, Sets, and Sequences (also referred to as tuples or ordered sets).

A good overview of SETL can be found in the GNU SETL Om.

Pixels inside the Mandelbrot Set are marked in black, the code used to generate them (mandelbrot.setl) can be found here.

Generation of the Mandelbrot Set
On the Wikipedia page for the Mandelbrot Set you will find its formal definition, which looks like so: $M = \left\{c\in \mathbb C : \exists s\in \mathbb R, \forall n\in \mathbb N, |P_c^n(0)| \le s \right\}.$

The most fantastic thing about SETL is how powerful its Set Builder Notation is. The set can be written almost exactly as in the formal case (dropping ‘there exists some real s’ in favour of a fixed bound for the practical application):

M := {c : c in image | forall num in {0..iterations} | cabs(pofc(c, num)) <= 2};


For some reason SETL doesn’t support complex numbers, but they are easily handled by writing the necessary procedures, such as cabs, ctimes, and cplus, dealing with tuples in the form [x, y]. The variable ‘image’ is a sequence of these tuples. Another procedure, pofc, is a more practical version of $P_c^n(0)$.
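The helper procedures aren’t shown here, so below is a hedged Python sketch of what they might look like (the names cplus, ctimes, cabs, and pofc follow the post; the bodies and the sample grid are my reconstruction), together with a transliteration of the set-builder line:

```python
import math

# Complex arithmetic on [x, y] pairs, as the post describes.
def cplus(a, b):
    return [a[0] + b[0], a[1] + b[1]]

def ctimes(a, b):
    # (x + iy)(u + iv) = (xu - yv) + i(xv + yu)
    return [a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0]]

def cabs(a):
    return math.hypot(a[0], a[1])

def pofc(c, n):
    # z_0 = 0;  z_{k+1} = z_k^2 + c, iterated n times
    z = [0.0, 0.0]
    for _ in range(n):
        z = cplus(ctimes(z, z), c)
    return z

# The set-builder line, transliterated over a small sample grid:
iterations = 30
image = [[x / 10.0, y / 10.0] for x in range(-20, 11) for y in range(-12, 13)]
M = {tuple(c) for c in image
     if all(cabs(pofc(c, num)) <= 2 for num in range(iterations + 1))}
print((-1.0, 0.0) in M)  # True: -1 stays bounded
print((1.0, 0.0) in M)   # False: 1 escapes after a few iterations
```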

Interaction with the Lisp graphical display
The goal of the SETL program is to produce a set of points that lie inside the Mandelbrot set. To actually display the image I used Common Lisp and displayed a widget with the image using CommonQt. In a very rudimentary way I had mandelbrot.setl take arguments about the image and then print the set of pixel co-ordinates. All Lisp had to do was swap any ‘{}’ or ‘[]’ for ‘()’, read it as Lisp code, then update the image.

(setf in-set (read-from-string (parse-line (uiop:run-program "setl mandelbrot.setl -- -250 -250 250 8" :output :string))))
(draw-mandel instance in-set)


An after-thought was to make a Mandelbrot searcher where you could zoom and move using the mouse, but the SETL code is such an inefficient way of doing it that it’s not worth it. As an attempt to mimic the formal definition it was highly successful and fun, though much quicker SETL code could be written for generating the Mandelbrot Set.

Source file for mandelbrot.setl and the Lisp/CommonQt front end can be found here.

# The RGB Universe

One image bounded to three respective spaces: Colour, Chromaticity, and Hue.

The Colour-Space: RGB
Everyone is familiar with this: the additive model for colours that uses the primaries red, green & blue. It is a 3D model where each unique colour sits at position (x: r, y: g, z: b).

The Chromaticity-Space: $R_C G_C B_C$
Some people will be familiar with this: it is RGB without luminance – the brightness is removed in a way that doesn’t affect the hue or saturation. It is referred to as rg-Chromaticity because its construction from RGB means only two elements are needed to represent all the chromaticity values:

| Conversion to | Conversion from (kind of*) |
|---|---|
| $R_C = \frac{R}{R+G+B}$; $G_C = \frac{G}{R+G+B}$; $B_C = \frac{B}{R+G+B}$ | $R = \frac{R_C G}{G_C}$; $G = G$; $B = \frac{(1 - R_C - G_C) G}{G_C}$ |

It will always be that $R_C + G_C + B_C = 1$, so by discarding the blue component we can have unique chromaticities as (x: r’, y: g’). This means that rg-Chromaticity is a 2D model, and when converting to it from RGB we lose the luminance, so it is impossible to convert back. *An in-between for this is the colour-space rgG, where the G component preserves the luminance of the image.
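The round trip can be sketched in Python (function names are mine); note how the inverse needs the kept G channel, which is exactly the rgG idea:

```python
# Forward: (R, G, B) -> (Rc, Gc, Bc), as in the table above.
def to_chromaticity(R, G, B):
    s = R + G + B
    return (R / s, G / s, B / s)

# Backward ("kind of"): only possible because G is kept, rgG-style.
def from_chromaticity(Rc, Gc, G):
    # R = Rc*G/Gc and B = (1 - Rc - Gc)*G/Gc, as in the table
    return (Rc * G / Gc, G, (1 - Rc - Gc) * G / Gc)

Rc, Gc, Bc = to_chromaticity(90, 180, 30)
print(round(Rc + Gc + Bc, 10))         # 1.0 -- the components always sum to one
print(from_chromaticity(Rc, Gc, 180))  # recovers approximately (90.0, 180.0, 30.0)
```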

The Hue-Space: $R_H G_H B_H$
No one uses this; I just thought it would be fun to apply the same idea as above and extract the saturation from $R_C G_C B_C$. Like $R_C G_C B_C$ it is a 2D model. This seems strange because it represents only one attribute –hue– but it is because the elements themselves have a ternary relationship (how much red, how much green, how much blue), and so to extrapolate one you must know the other two.

| Conversion to 3-tuple Hue | Normalise to 2D |
|---|---|
| $M = \text{Max}(R,G,B)$; $m = \text{Min}(R,G,B)$; $\delta = 255/(M-m)$; $R_h = (R-m)\delta$; $G_h = (G-m)\delta$; $B_h = (B-m)\delta$ | $R_H = \frac{R_h}{R_h + G_h + B_h}$; $G_H = \frac{G_h}{R_h + G_h + B_h}$; $B_H = \frac{B_h}{R_h + G_h + B_h}$ |

Measuring Hue Distance
The HSL colour-space records hue as a single element, H, making measuring distance as easy as $\Delta H = \sqrt{(H_a - H_b)^2}$, whereas in rg-Hue we have two elements, so $\Delta H = \sqrt{(R''_a - R''_b)^2 + (G''_a - G''_b)^2}$, where $R'' = R_H$ and $G'' = G_H$ for readability. What’s interesting here is that it works almost the same. Though it should be noted that on a line only two points are equidistant from a given value at one time, whereas in rg-Hue, on a 2D plane, there are many equidistant points lying around circles.
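The pipeline can be sketched in Python (names mine; greys, where M = m, are left out since $\delta$ would divide by zero):

```python
# rg-Hue: 3-tuple hue, normalised to 2D, then a Euclidean distance.
def rg_hue(R, G, B):
    M, m = max(R, G, B), min(R, G, B)
    d = 255 / (M - m)                 # delta = 255/(M - m); undefined for greys
    Rh, Gh, Bh = (R - m) * d, (G - m) * d, (B - m) * d
    s = Rh + Gh + Bh
    return (Rh / s, Gh / s)           # (R_H, G_H); B_H = 1 - R_H - G_H

def hue_distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

red = rg_hue(255, 0, 0)
orange = rg_hue(255, 128, 0)
cyan = rg_hue(0, 255, 255)
print(hue_distance(red, orange) < hue_distance(red, cyan))  # True: orange is the closer hue
```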

Below are images of an RGB testcard where each pixel’s hue has been measured against a colour palette (60° Rainbow) and coloured with the closest match. The rg-Hue measure has a notable consistency to it and shows more red on the right-hand side than HSL, but between the yellow and red there is also a tiny sliver of purple. I believe this comes from the equidistant hues and the nature of looking through a list for the lowest value when there are multiple lowest values:

 Hue Distance (HSL) | Hue Distance ($R_H G_H B_H$)

# Islam: The Religion of… Logic? (1/3)

Often described as the Religion of Peace, Islam had a golden age spanning from the 8th to the 12th century (CE). This is the first in a three-part post on logic and rational thought in Islam. Each post will look at a different relationship analytical thought has had with the Arabic-speaking populace.

Liberalisation of Science and Philosophy
During the Golden Age great advances were made; importantly, the House of Wisdom was set up in Baghdad. From here the first ever international scientific venture in history began, wherein large volumes of written knowledge of Persian, Greek, Latin, European, and Indian origin were translated into Arabic. The great Arabic polymaths, such as Al-Kindi, who produced the earliest writings on cryptanalysis by frequency analysis and wrote On the Use of the Indian Numerals, worked from the House of Wisdom. The biggest star of the Golden Age was ibn Sina (Avicenna), who made so many contributions – the biggest being The Canon of Medicine, which, written in 1025, was used as a medical standard from England to China for about 600 years. Like many thinkers of the time, Avicenna was also a logician, and he disliked Aristotelian logic. For example, when looking at ‘if p, then q’ he believed it too presumptuous to assert such a strong relation between p and q. His response, ‘q while p’, is the beginning of Temporal Logic. Below are some examples of his Temporalis (While Logic):

• Whenever the Sun is out, then it is day
• It is never the case that if the Sun is out, then it is night
• It is never the case that either the Sun is out or it is day
• If, whenever the Sun is out, it is day, then either the Sun is out, or it is not day

Ibn Al-Haytham (Alhazen), a Persian Islamic thinker who worked from the House of Wisdom, is considered the Founder of Optics and, with it, Experimental Physics. He also devised the first rigorous attempt at experimental testing – making him the Founder of the Scientific Method.

Dialectical Exploration of Theology (kalam)
Baghdad may have had the House, but Basra had the Circle – the circle of Al-Hasan Al-Basri (Hasan), to be exact. In this gathering of minds, instead of scientific discussion there were theological questions emerging. The questions asked were to do with the nature of God, His attributes, good and evil, and how to understand the Qur’an. Not all Muslims supported the idea of kalam, but those who did would relish the chance to debate other religions – especially ahl al-kitab (People of the Book). A practitioner of kalam is called a mutakallim, and this word was used for non-Muslims as well. From these endeavours there later developed Jewish Kalam; schools of thought in this time influenced each other a great deal. There were even atheist mutakallimun, such as Ibn al-Rawandi. The first Muslims to practice early on were the Qadariyah (believers in Free Will), who were mockingly compared to Zoroastrians, and the Jahmites (believers in Quranic Createdness and non-literal interpretations of the Qur’an). These early terms were for people holding certain singular dogmas. Later, as kalam evolved, three distinct schools of thought emerged (the Mu’tazila, Ash’ari and Maturidi). The next post looks specifically at the Mu’tazila.

# Accuracy of Generated Fractals

Note: I refer to the Mandelbrot set in general as the M-set for short.

When I was writing the post on Rough Mandelbrot Sets I tried out some variations on the rough set. One variation was to measure the generated M-set against a previously calculated master M-set of high precision (100000 iterations of $z = z^2 + C$). In the image below the master M-set is in white and the generated M-sets are in green (increasing in accuracy):

Here, instead of approximating with tiles, I measured the accuracy of the generated sets against the master set by pixel count. Where $P = \{\text{all pixels in the set}\}$, the ratio $|P_{master}| / |P_{generated}|$ produced something that threw me: the generated sets made sudden but periodic jumps in accuracy:

Looking at the data I saw the jumps were, very roughly, at multiples of 256. The size of the image being generated was 256 by 256 pixels, so I changed it to N by N for N = {120, 360, 680}, and the increment was still every ~256 iterations. So I’m not really sure why – it might be obvious; if you know, tell me in the comments!

I am reminded of the images generated from Fractal Binary and other Complex Bases where large geometric entities can be represented on a plane by iteration through a number system. I’d really like to know what the Mandelbrot Number System is…

Below is a table of the jumps and their iteration index:

| Iterations | Accuracy measure |
|---|---|
| 255 → 256 | 0.241929 → 0.397073 |
| 510 → 511 | 0.395135 → 0.510806 |
| 765 → 766 | 0.510157 → 0.579283 |
| 1020 → 1021 | 0.578861 → 0.644919 |
| 1275 → 1276 | 0.644919 → 0.679819 |
| 1530 → 1531 | 0.679696 → 0.718911 |

# Editing Ultraviolet Photography

For people who do multispectral photography (infrared, visible, ultraviolet, etc.), it can sometimes be tricky to achieve what you want in a traditional photo editor. That is why I am developing software to cater specifically for multispectral image processing. The software is called WavelengthPro. Below are a quick video, four images, and explanations of the results.

The Resulting Images:
These four images are made from only this visible light image and this ultraviolet image. They were both taken with the same camera, a Nikon D70 (hot mirror removed), using an IR-UV cut filter and the Baader-U filter. No extra editing was done.

 Dual Process | Luminance Map

One often-met problem in ultraviolet photography is getting the right white balance on your camera; if you can’t achieve it you end up with rather purple pictures. In WavelengthPro there is a tool called Dual Processing where you create the green channel out of the red and blue channels, which rids your image of the purpley hue.

The Luminance Map is actually a lightness map, made in HSL colour-space: it is the hue and saturation of the visible light image with the lightness of the UV image. As you can see, the specular effect on the leaves completely contrasts with the matte flowers. There is a good example of a useful application of this method in the post Luminance Mapping: UV and Thermal.

 5to3 Map (RGBUrUb) | 3to3 Map (GBU)

There are only 3 channels (ignoring alpha) for an image to be encoded into, but WavelengthPro attempts to extend that by mapping N channels to the 3 (RGB) output channels. A classic ultraviolet editing method is to make a GBU image, like the image on the right. The image on the left has five channels equally distributed across the three RGB channels. An array of different maps can be seen in an older post called Testing Infrared Software. If you want to download and try out the software (it’s still in alpha – don’t expect everything to work), the link below will have the latest release version.

WavelengthPro Flickr Group

# Rough Mandelbrot Sets

I’ve been reading up on Zdzisław Pawlak’s Rough Set Theory recently and wanted to play with it. Rough sets are used to address vagueness in data, so fractals seem like a good subject.

Super Quick Intro to Rough Sets:
A rough set is a tuple (ordered pair) of sets, $R(S) = \langle R_*, R^* \rangle$, which is used to model some target set $S$. The set $R_*$ has every element definitely in set $S$, and the set $R^*$ has every element that is possibly in set $S$. Its roughness can be measured by the accuracy function $\alpha(S) = \frac{|R_*|}{|R^*|}$, so when $|R_*| = |R^*|$ the set is known as crisp (not vague), with an accuracy of 1.

A more formal example can be found on the wiki page but we’ll move on to the Mandelbrot example because it is visually intuitive:

The tiles are 36×36 pixels; the Mandelbrot set is marked in yellow. The green and white tiles are possibly in the Mandelbrot set, and the white tiles are also definitely in it.

Here the target set $S$ contains all the pixels inside the Mandelbrot set, but we are going to construct this set in terms of tiles. Let $T_1, T_2, T_3, \dots, T_n$ be the tile sets that contain the pixels. $R^*$ is the set of all tiles $T_x$ where the set $T_x$ contains at least one pixel that is inside the Mandelbrot set; $R_*$ is the set of all tiles $T_x$ that contain only Mandelbrot pixels. So in the above example there are 28 tiles possibly in the set, including the 7 tiles definitely in the set, giving $R(S)$ an accuracy of 0.25.
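The tile construction can be sketched in Python; the `in_disc` predicate below is a stand-in target set (a disc rather than the Mandelbrot set) so the example stays self-contained:

```python
# Rough approximation by tiles: R* collects tiles with at least one
# member pixel, R_* collects tiles whose pixels are all members.
def rough_accuracy(member, width, height, tile):
    lower = upper = 0
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            inside = [member((x, y))
                      for y in range(ty, ty + tile)
                      for x in range(tx, tx + tile)]
            if any(inside):
                upper += 1            # tile is possibly in S
                if all(inside):
                    lower += 1        # tile is definitely in S
    return lower / upper if upper else 0.0

# Toy target set: a centred disc standing in for the Mandelbrot set.
def in_disc(p):
    x, y = p
    return (x - 18) ** 2 + (y - 18) ** 2 <= 12 ** 2

print(rough_accuracy(in_disc, 36, 36, 6))  # alpha(S) for 6x6 tiles, between 0 and 1
```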

Tile widths: 90, 72, 60, 45, 40, 36, 30, 24, 20, 18, 15, 12, 10, 9, 8, 6, 5, 4. There seems to be a lack of symmetry, but it’s probably from computational precision loss.

Obviously, the smaller the tiles, the better the approximation of the set. Here the largest tiles (90×90 pixels) are so big that there are no tiles definitely inside the target set and 10 tiles possibly in the set, making the accuracy 0. On the other hand, the 4×4 tiles give us $|R_*| = 1211$ and $|R^*| = 1506$, making a much nicer:

$\alpha(S)$ = $0.8 \overline{04116865869853917662682602921646746347941567065073}$

For much more useful applications of Rough Sets see this extensive paper by Pawlak covering the short history of Rough Sets, comparing them to Fuzzy Sets and showing uses in data analysis and Artificial Intelligence.

# Digital Orthochromatic Photography

I’m playing with ways of emulating early orthochromatic film; if anyone knows any technical aspects that might help, feel free to suggest ideas. So far I’m just winging it.

Light Response of Film
In terms of light covered, the progression goes: blue-sensitive, orthochromatic, isochromatic, panchromatic (all visible wavelengths), then super-panchromatic.

On the right is a photo of British explorers taken with orthochromatic film. Note the red on the flag is much darker than the blue area because the film isn’t receptive to red light. Looking at the spectral sensitivity (here) and (here), it’s clear the violet/blue area is the predominant band, extending into green/yellow as well.

Test Ideas
I took an image from Google, something with the British flag, and made different Channel Maps for it. The greyscaled versions are the output images.

I used the QSE tool in WavelengthPro to make a 400nm image (the violet/blue peak) and a 580nm image (the green/yellow peak) and made a 2to3 map of the peaks; the colour response worked well, but the data loss from interpolation was too much. Using the original [R,G,B] channels means I won’t lose any detail, so I tried using mainly those and came up with two maps that were pretty good: GBB and GBBV. Below are those maps and the RGB map for comparison.

- This is the RGB (panchromatic) version: the red cross on the flag is lighter than the blue parts and the sky is dark.
- This is a GBBtoRGB version: the cross appears darker. It’s not really true to the actual spectral response of the film.
- This is a GBBVtoRGB version: the V is the interpolated deep violet image. Getting there…

# Luminance Mapping: UV and Thermal

This is a very simple function of WavelengthPro, using the luminance of one image and the hue/saturation of another. All it does is convert from RGB-space to HSL-space, then use the L value (lightness) of a different image. In the table below I use a visible light image and an ultraviolet image (I got them from here) and map them in two ways. The first is not a luminance map but a GBU map; the second is a luminance map, which keeps the colours of the visible light image but shows them at the lightness of the ultraviolet image.
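A minimal per-pixel sketch of the idea using Python’s colorsys (WavelengthPro’s actual implementation may differ): keep hue and saturation from the visible pixel, take lightness from the UV pixel.

```python
import colorsys

def luminance_map_pixel(visible_rgb, uv_rgb):
    # Hue and saturation come from the visible pixel...
    h, _, s = colorsys.rgb_to_hls(*[v / 255 for v in visible_rgb])
    # ...lightness comes from the UV pixel.
    _, uv_l, _ = colorsys.rgb_to_hls(*[v / 255 for v in uv_rgb])
    r, g, b = colorsys.hls_to_rgb(h, uv_l, s)
    return tuple(round(v * 255) for v in (r, g, b))

# A saturated red re-lit at the darkness of a grey UV pixel:
print(luminance_map_pixel((255, 0, 0), (64, 64, 64)))  # (128, 0, 0)
```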

- Visible image: can’t see the graffiti well.
- Ultraviolet image: shows up the graffiti well.
- GBU map: quite a nice colour palette and shows the graffiti quite well.
- Luminance map: has a “Human hue” whilst showing up the graffiti really well.

It is great for showing the ultraviolet characteristics of light whilst keeping the ‘true-colour’ feel that we’ve evolved to love. This idea isn’t new, though: night vision sometimes incorporates visible and thermal bands fused together, and computer vision sometimes needs more than visible colour data to interpret a scene. One (less scientific) use is to make thermal imaging look a bit more lovely. Below are a visible image, a thermal image, and the Luminance map (I got them from here):

# Full Spectrum Photography: Mapping

This post shows three examples of full spectrum mapping methods for multispectral photography. I’ve used some quick shots I took in between rain clouds, so I apologise for the poor quality – especially of the infrared image. All shots were taken on a converted D70.

 Infrared (720nm filter) | Visible light (Hoya cut filter) | Ultraviolet (Baader-U filter)

The only map I see often is the classic 3to3 map. Its characteristics are such that vegetation stands out in a very prominent red, nectar guides are clear-cut, and clear skies are a strong blue. The next map is weighted, roughly, 5to3 by the proportion of the spectrum each [I,R,G,B,U] component covers as wavelengths. The output shows the nectar guide enough for it to be noticeable, but clearly less so. The last map is my favourite so far: a 5to3 map that distributes the [I,R,G,B,U] channels in equal proportions. It dulls the bright red vegetation caused by infrared in the red channel and shows the nectar guide a little better than the previous map.

| Map Type | Channels & Output |
|---|---|
| Classic IR-VIS-UV | R: IR; G: VIS; B: UV |
| Proportional IRGBU | R: (IR + IR + (IR * 0.33)) * 0.42; G: ((IR * 0.66) + R + (G * 0.66)) * 0.42; B: (UV + B + (G * 0.33)) * 0.42 |
| Equal IRGBU | R: (IR + (R * 0.66)) * 0.60; G: ((R * 0.33) + G + (B * 0.33)) * 0.60; B: (UV + (B * 0.66)) * 0.60 |
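For concreteness, here is a hedged Python sketch of the “Equal IRGBU” map applied to a single pixel (channel names and the 0–255 range are my assumptions; WavelengthPro’s internals may differ):

```python
# Equal IRGBU: each output channel mixes 1.66 channels' worth of
# signal, and the 0.60 factor scales the sum back into range
# (1/1.66 is roughly 0.60).
def equal_irgbu(ir, r, g, b, uv):
    R = (ir + r * 0.66) * 0.60
    G = (r * 0.33 + g + b * 0.33) * 0.60
    B = (uv + b * 0.66) * 0.60
    return (R, G, B)

# White in every band maps to (almost exactly) white:
print(equal_irgbu(255, 255, 255, 255, 255))
```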