
Discussion #2 – Self-modifying Systems, “Non-Turing” Machines and King’s Quest I

Most computer programmers would say that a “good” programmer never writes self-modifying code, the main reason being that a self-modifying program is unstable and unpredictable. A program that changes itself is generally equated with a malicious one (e.g. a virus). Yet early researchers in computing worked with self-modifying code, driven partly by the scarcity of resources. Today, however, very little serious work is being done in the field of self-modifying systems. Is this because we are afraid of creating something over which we do not have absolute control?

The Gentlemen Scientists are fascinated with anything that other people fear. Since on this forum we are free from the pressure of being practical, our discussion veers randomly from self-modification (both human and machine) to genetic programming to artificial immune systems. Some day, one (or both) of us may write a novel about people who deliberately modify and mutilate themselves in the interests of improving the survival prospects of their species. Or maybe not.

As usual, we digress some more. We introduce the concept of “visual computing” – using visual objects directly to solve problems with a machine-human hybrid system – which then leads us on to computer games. Are computer games modifying us to make the world more conducive to computer games? Are they playing us or are we playing them?

[An aside: the Gentlemen Scientists have been influenced greatly by early PC computer games such as King’s Quest. We have vivid memories of loading King’s Quest I from a floppy disc and then being amazed.]

So why are we afraid? We love determinism. We pay good money for guaranteed results. But we and our world are not deterministic, and no matter how many plans you make for your child’s life, he or she will always subvert and transcend them. We are not Turing machines (we think). Let’s build a new generation of AI which embraces uncertainty, change and danger.

It’s a long discussion, but I think we sum it up nicely at 26:15 – “Self-modifying systems can be highly adaptive and robust as long as they don’t stab a hole in their own head.”

1. Core War is a programming game in which each program tries to terminate its competitors. A self-modifying program may be a viable strategy within the game. http://en.wikipedia.org/wiki/Core_War

2. Turing’s Halting Problem – no computer program can decide, for every program and input, whether that program will eventually terminate. This is an analogue of Gödel’s incompleteness theorem. http://www.cgl.uwaterloo.ca/~csk/halt/

3. The Ethics of Deep Self-Modification – http://www.goertzel.org/books/logic/chapter_seven.htm

4. Benja Fallenstein on the Löbian Obstacle to Self-Modifying Systems – http://intelligence.org/2013/08/04/benja-interview/

5. Genetic Programming – http://www.genetic-programming.org/

6. Artificial Immune Systems – http://www.artificial-immune-systems.org/people-new.shtml

7. Nassim Nicholas Taleb has introduced a powerful concept called “anti-fragility”: anti-fragile systems in the natural world are not only resilient and robust to unexpected changes and events, they actually need and crave a certain level of uncertainty.

8. Architecture for an Artificial Immune System – http://dl.acm.org/citation.cfm?id=1108862
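The Halting Problem in footnote 2 rests on a diagonal argument, and the same trick can be sketched in runnable form with a finite analogue: instead of asking whether a program halts, we ask a candidate “analyser” to predict a program’s boolean return value, and then build a program that does the opposite of whatever is predicted. All names here are ours, for illustration only.

```python
# Finite analogue of Turing's diagonal argument (illustrative only).
# We construct a program that defeats any predictor of its return value,
# just as Turing's construction defeats any would-be halting decider.

def contrarian(predictor):
    """Build a program that consults the predictor about itself,
    then does the opposite of whatever the predictor says."""
    def program():
        return not predictor(program)
    return program

def always_true(fn):
    # A candidate analyser: predicts that every program returns True.
    return True

p = contrarian(always_true)
print(always_true(p), p())  # the prediction and the actual result differ
```

Whatever predictor you plug in, the contrarian program contradicts it about itself – the same self-referential bite that makes self-modifying systems so hard to analyse.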