# Soul & Simulation

For a good while now I’ve been reading The Computer and the Brain, and it has had me thinking about consciousness and, ultimately, the universe. In it, von Neumann discusses the relationships between computer and brain functions. For example, the all-or-nothing electrical nature of neurons gives rise to an obvious parallel with binary systems. Because of such similarities a neuron can be modelled by computation; the first artificial neuron was modelled in 1943 by McCulloch (a neuroscientist) and Pitts (a logician). Since then extensive work has been done on artificial neural networks, but as development continues, can consciousness be recreated?
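That original McCulloch–Pitts neuron is simple enough to sketch in a few lines of Python. This is just my own minimal illustration of the idea (binary inputs, fixed weights, a hard threshold), not anything from the book:

```python
# Minimal sketch of a McCulloch-Pitts neuron: binary inputs, fixed
# weights, and a threshold. It "fires" (outputs 1) only when the
# weighted sum of its inputs reaches the threshold.

def mcculloch_pitts(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2 the neuron computes logical AND.
print(mcculloch_pitts([1, 1], [1, 1], 2))  # fires: 1
print(mcculloch_pitts([1, 0], [1, 1], 2))  # does not fire: 0
```

The binary fire/don’t-fire output is exactly the neuron/bit parallel von Neumann points at.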

## Artificial Intelligence

As it is 2012, the Year of Alan Turing, this post has found itself at just the right time! Artificial Intelligence is a branch of computer science that tries to emulate intellect in computers. There are four main approaches:

- Systems that think like humans.
- Systems that think rationally.
- Systems that act like humans.
- Systems that act rationally.

Back at university I had the idea that cryptography was the key (pun intended) to simulating consciousness. I thought about how private our thoughts are and how disconnected they seem from the outside world, and imagined that some sort of crypto-intelligent agent could act as the conscious mind. I later dropped this hypothesis; it sounded cool but didn’t actually make much sense.

I define intelligence as the application of knowledge: the greater the intelligence, the better prior knowledge is applied. This leads me to a conclusion: the ‘like a human’ systems may be intelligent, but they are not the pinnacle. Intelligence transcends human cognition.

Turing’s approach, act like a human, was initially the most successful: it led to natural language processing and then to chatbots like ELIZA and PARRY, and it also raises the question of consciousness! His Turing Test has yet to be passed by any AI; a system that did would be deemed Strong AI. One of the traits of a strong AI system is its ability to infer from an uncertain situation, and there is a lot of hope in incorporating quantum computing, which would allow decisions to be made on a much more probabilistic basis. Google has been testing a quantum computer’s ability to recognise faces, and it is a vast improvement on classical computers.

My new idea for how a conscious AI might be created is to go for a hybrid approach, mimicking both human instinct and thought processes: an act-human interface, with a think-rationally system assessing the incoming ‘thoughts’. Back-and-forths could happen on uncertain results using some sort of reasoning system. I have a feeling incompleteness and imprecision are the only actual problems with forming strong AI, but for now that’s just a voice in my head. Talking about voices in my head, what about free will?
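Before getting to that: the hybrid idea could be caricatured as a loop. Everything here (names, scores, thresholds) is my own invention, purely to make the shape of the architecture concrete:

```python
# Toy sketch of the hybrid idea: an "act human" front end proposes a
# response, a "think rationally" back end scores its confidence, and
# uncertain results are bounced back for another pass.
# All functions, scores and thresholds here are hypothetical.

def act_human(prompt):
    # stand-in for an instinctive, human-like response generator
    return f"gut reaction to {prompt!r}"

def think_rationally(thought):
    # stand-in for a reasoning system returning a confidence in [0, 1];
    # here it simply trusts thoughts that have been reconsidered
    return 0.9 if thought.startswith("reconsidered") else 0.4

def hybrid_agent(prompt, threshold=0.8, max_passes=3):
    thought = act_human(prompt)
    for _ in range(max_passes):
        if think_rationally(thought) >= threshold:
            return thought
        # uncertain: send the thought back for another pass
        thought = f"reconsidered: {thought}"
    return thought

print(hybrid_agent("stripes in the grass"))
```

The interesting (and entirely unsolved) part is, of course, what goes inside the two stand-in functions.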

## Free Will

> “You see, there is only one constant. One universal. It is the only real truth. Causality. Action, reaction. Cause and effect.” – The Merovingian, The Matrix Reloaded

My view on free will is a reductionist’s; a clean explanation can be found in Sam Harris’ video on the lack of free will. Michio Kaku, on the other hand, uses quantum mechanics as a gateway to free will. But using uncertainty as an argument for free will just changes the will-type: if uncertainty is affecting your choices, you are exhibiting uncertain will. They are not your choices. They are not free. To believe in free will is to believe in dualism. Some people, like Daniel Dennett, would disagree, but compatibilists are just lay-dualists and wishful determinists: if the choice can be reduced to the efforts of the brain at any point, there is no choice, just action-reaction.

I can’t buy into dualism. A soul, for example, would need inputs and outputs to be anything it’s claimed to be: it would need to interact with a fundamental force to affect our physical choices, or to record our ethics for later judging. If that were the case, it would be detectable. Supernatural entities acting on natural entities must themselves be at least partly natural.

## Determinism

My ultimate view is a deterministic one:

$\mbox{ The universe is } \boldsymbol{f(x) = 42} \mbox{, we're just the working out. }$

I like to think that philosophy is trying to understand the answer, cosmology is trying to figure out the input, and physics is learning about the function. More than this, as long as x is the same, the answer to life, the universe and everything is always 42. By this I mean that if we were to reset the universe, the exact same thing would happen, without any fluctuation of difference at all.

My deterministic approach to thinking stems directly from my scepticism of randomness. In chaos theory the apparent randomness is known to come from a lack of exact measurements, but in quantum physics the idea of probability is introduced. I refute the claim that probability is proof of true randomness: again, unpredictability is a lack of exact measurements. Unfortunately, simply recreating the test parameters isn’t the same as exactly re-running the test itself, so this is a non-testable hypothesis, but a rational and logical one. All evidence so far points towards a deterministic universe; science itself is based on the idea of laws predicting events. Bertrand Russell puts it well when he says “Where determinism fails, science fails” (Determinism and Physics). He also talks about determinism as a scientific dogma, or closer to a prerequisite, in Science and Religion.
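Pseudo-random number generators make the same point in miniature: what looks random is fully determined by state you can’t see, and “resetting the universe” (reseeding with the same state) replays it exactly. A quick sketch using Python’s standard library (the seed value 42 is, naturally, arbitrary):

```python
import random

# Two generators started from the same hidden state produce
# identical "random" sequences -- the unpredictability was only
# ever our ignorance of the state, not true randomness.
rng1 = random.Random(42)
rng2 = random.Random(42)

run1 = [rng1.random() for _ in range(5)]
run2 = [rng2.random() for _ in range(5)]

print(run1 == run2)  # True: same input state, same "choices"
```

Whether the universe’s apparent randomness is of this kind is exactly the non-testable part of the hypothesis above.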

But yeah, The Computer and the Brain? Pretty good book so far.

## 4 thoughts on “Soul & Simulation”

Systems that think like humans. -> The human brain is designed (by nature or by God) to survive, not to think about any other stuff.

Systems that think rationally. -> These systems should be designed to think about non-survival-related stuff, to give us something extra and figure out what we simply cannot because of our design.

Is this a correct assumption, or a stupid question formed by a survival-designed brain?

• Well, actually the rational thought is closer to survival (logical solutions to a situation), while the human thought centres on emotions/interpretations/jokes/metacognition etc.

For example, the Turing test is used to determine the human-ness of an AI program. If it’s told a joke: “a man walks into a bar, ‘ouch!’ ”, chances are it will be ‘confused’, because it won’t interpret why a man walking into a bar would be funny. Of course, we know the irony(?) that jokes are usually set up with a man walking into a drinks bar, but the joke-teller took it literally to throw us off.

There are other examples, like when asked which is the best football team: a human is likely to be biased and go with their own team, but a rational agent would take the statistics and do the maths. Weighing probability is what happens in survival: “see stripes in grass, might be tiger; if it is, I might die; if it isn’t, then nothing – better to assume it is and run away.” <- definitely-not-dead is better than maybe-nothing.
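That stripes-in-the-grass reasoning is really just an expected-cost comparison. With completely made-up numbers, the arithmetic looks like this:

```python
# Expected-cost sketch of the "stripes in the grass" decision.
# All probabilities and costs here are made up for illustration.
p_tiger = 0.1        # chance the stripes really are a tiger
cost_eaten = 1000.0  # catastrophic cost of ignoring a real tiger
cost_run = 1.0       # small cost of running away, tiger or not

expected_cost_ignore = p_tiger * cost_eaten  # roughly 100.0
expected_cost_run = cost_run                 # always 1.0

# Even at 10% odds, fleeing is the rational (and the survival) choice.
print(expected_cost_run < expected_cost_ignore)  # True
```

The asymmetry of the costs is what makes “assume it is and run away” rational even when the tiger is unlikely.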

2. As Hofstadter said, one of the most important things we should make A.I. do is make errors. That, along with more independence, should bring quite interesting results in the field.