Accuracy of Generated Fractals

Note: I refer to the Mandelbrot set in general as the M-set for short.

When I was writing the post on Rough Mandelbrot Sets I tried out some variations on the rough set. One variation was to measure the generated M-set against a previously calculated master M-set of high precision (100000 iterations of z = z^2 + C). In the image below the master M-set is in white and the generated M-sets are in green (increasing in accuracy):

(Image: 50 Against Master)

Here, instead of approximating with tiles, I measured the accuracy of the generated sets against the master set by pixel count. Where P = \{ \text{set of all pixels} \}, the ratio P_{master} / P_{generated} produced something that threw me: the generated sets made sudden but periodic jumps in accuracy:

(Image: Graph One)

Looking at the data I saw the jumps were, very roughly, at multiples of 256. The image being generated was 256 by 256 pixels, so I changed it to N by N for N = {120, 360, 680}, and the increment was still every ~256. I'm not really sure why; it might be obvious. If you know, tell me in the comments!

I am reminded of the images generated from Fractal Binary and other Complex Bases where large geometric entities can be represented on a plane by iteration through a number system. I’d really like to know what the Mandelbrot Number System is…

Below is a table of the jumps and their iteration index:

Iterations    Accuracy measure
255           0.241929
256           0.397073
510           0.395135
511           0.510806
765           0.510157
766           0.579283
1020          0.578861
1021          0.644919
1275          0.644919
1276          0.679819
1530          0.679696
1531          0.718911

Rough Mandelbrot Sets

I’ve been reading up on Zdzisław Pawlak’s Rough Set Theory recently and wanted to play with it. Rough sets are used to address vagueness in data, so fractals seem like a good subject.

Super Quick Intro to Rough Sets:
A rough set is a tuple (ordered pair) of sets R(S) = \langle R_*, R^* \rangle which is used to model some target set S. The set R_* contains every element definitely in S, and the set R^* contains every element possibly in S. Its roughness can be measured by the accuracy function \alpha(S) = \frac{|R_*|}{|R^*|}. So when |R_*| = |R^*| the set is known as crisp (not vague), with an accuracy of 1.

A more formal example can be found on the wiki page but we’ll move on to the Mandelbrot example because it is visually intuitive:

The tiles are 36×36 pixels, the Mandelbrot set is marked in yellow. The green and white tiles are possibly in the Mandelbrot set, but the white tiles are also definitely in it.

Here the target set S contains all the pixels inside the Mandelbrot set, but we are going to construct this set in terms of tiles. Let T_1, T_2, T_3, \dots, T_n be the tile sets that contain the pixels. R^* is the set of all tiles T_x that contain at least one pixel inside the Mandelbrot set; R_* is the set of all tiles T_x that contain only Mandelbrot pixels. So in the above example there are 28 tiles possibly in the set, including the 7 tiles definitely in the set, giving R(S) an accuracy of 7/28 = 0.25.

Tile widths: 90, 72, 60, 45, 40, 36, 30, 24, 20, 18, 15, 12, 10, 9, 8, 6, 5, 4. There seems to be a lack of symmetry, but it’s probably from computational precision loss.

Obviously the smaller the tiles the better the approximation of the set. Here the largest tiles (90×90 pixels) are so big that there are no tiles definitely inside the target set and 10 tiles possibly in the set, making the accuracy 0. On the other hand, the 4×4 tiles give us |R_*| = 1211 and |R^*| = 1506 making a much nicer:

\alpha(S) = 0.8 \overline{04116865869853917662682602921646746347941567065073}

For much more useful applications of Rough Sets see this extensive paper by Pawlak covering the short history of Rough Sets, comparing them to Fuzzy Sets and showing uses in data analysis and Artificial Intelligence.

Random post about random stuff

Recently I was given the task of generating six random lottery numbers. Simple enough: I just used the C++ rand() function. But it got me thinking, how would I write my own random number function?

Background info:
Chaos and randomness are two ideas that imply complete disorder, but they can actually arise out of extremely controlled systems. Marginal differences in the starting values of a system grow into seemingly inexplicable anarchy. For example, imagine two sets of cogs connected in two separate lines: if the first line's starting position was moved by just a millimetre, then over time the two lines would fall out of sync.

Math example:
We will take two numbers {x = 1, y = 1.0001} and the recurring equation z = z^2 + C, where C = x and C = y. In both cases z begins as 0.

Iterations    z, C = x    z, C = y    Abs. diff    Rel. diff
1             1           1.0001      0.0001       0.0001
2             2           2.0003      0.0003       0.00015
3             5           5.0013      0.0013       0.00026
4             26          26.0131     0.0131       0.0005
5             677         677.682     0.682        0.001
6             458330      459253      923          0.002

If the system outputs integers then x and y are indistinguishable until the sixth iteration, where suddenly an extra 923 comes out of ‘nowhere’. Another way to look at it: even if we can’t perceive the data properly, that doesn’t stop the unseen changes from taking place.

The Idea:
Generate a long sequence x = f(x) that starts with a pseudorandom number; as the sequence runs we change the number of decimal places available, therefore changing the outcome.

For the pseudorandom number, the [computer] system time would be used. If we use the least significant digit (LSD) of the seconds, the value will be between 0 and 9 and constantly changing.

  1. Run ten iterations of x = f(x)
  2. Clock LSD of the time as N
  3. Cap x at N decimal places
  4. Go to step 1

By the end the number would be seemingly random.