Complexity - A Guided Tour Part 8


Similarly, this kind of dual use of information is key to Turing's proof of the undecidability of the Halting problem. Remember H and H′ from chapter 4? Do you recall how H′ ran on its own code? That is, just like our self-reproducing computer program above, H′ was used in two ways: as an interpreted program and as the input for that program.

Self-Replication in DNA.

At this point you may be groaning that we're back in the abstract realm of logical headaches. But give me a minute to bring us back to reality. The really amazing thing is that this dual use of information is key to the way DNA replicates itself. As I described in chapter 6, DNA is made up of strings of nucleotides. Certain substrings (genes) encode amino acids making up proteins, including the enzymes (special kinds of proteins) that effect the splitting of the double helix and the copying of each strand via messenger RNA, transfer RNA, ribosomes, et cetera. In a very crude analogy, the DNA strings encoding the enzymes that perform the copying roughly correspond to the lines of code in the self-copying program. These "lines of code" in DNA are executed when the enzymes are created and act on the DNA itself, interpreting it as data to be split up and copied.

However, you may have noticed something I have so far swept under the rug. There is a major difference between my self-copying program and DNA self-reproduction. The self-copying program required an interpreter to execute it: an instruction pointer to move down the lines of computer code and a computer operating system to carry them out (e.g., actually perform the storing and retrieving of internal variables such as ip and L, actually print strings of characters, and so on). The interpreter is completely external to the program itself.

In contrast, in the case of DNA, the instructions for building the "interpreter" (the messenger RNA, transfer RNA, ribosomes, and all the other machinery of protein synthesis) are encoded along with everything else in the DNA. That is, DNA not only contains the code for its self-replicating "program" (i.e., the enzymes that perform the splitting and copying of DNA); it also encodes its own interpreter (the cellular machinery that translates DNA into those very enzymes).



Von Neumann's Self-Reproducing Automaton.

Von Neumann's original self-reproducing automaton (described mathematically but not actually built by von Neumann) similarly contained not only a self-copying program but also the machinery needed for its own interpretation. Thus it was truly a self-reproducing machine. This explains why von Neumann's construction was considerably more complicated than my simple self-copying program. That it was formulated in the 1950s, before the details of biological self-reproduction were well understood, is testament to von Neumann's insight. Von Neumann's design of this automaton and mathematical proof of its correctness were mostly completed when he died in 1957, at the age of 53, of cancer possibly caused by his exposure to radiation during his work on the atomic bomb. The proof was completed by von Neumann's colleague, Arthur Burks. The complete work was eventually published in 1966 as a book, Theory of Self-Reproducing Automata, edited by Burks.

John von Neumann, 1903–1957. (AIP Emilio Segrè Visual Archives)

Von Neumann's design for a self-reproducing automaton was one of the first real advances in the science of artificial life, demonstrating that self-reproduction by machine was indeed possible in principle, and providing a "logic" of self-reproduction that turned out to have some remarkable similarities to the one used by living systems.

Von Neumann recognized that these results could have profound consequences. He worried about public perception of the possibilities of self-reproducing machines, and said that he did not want any mention of the "reproductive potentialities of the machines of the future" made to the mass media. It took a while, but the mass media eventually caught up. In 1999, computer scientists Ray Kurzweil and Hans Moravec celebrated the possibility of super-intelligent self-reproducing robots, which they believe will be built in the near future, in their respective nonfiction (but rather far-fetched) books The Age of Spiritual Machines and Robot. In 2000 some of the possible perils of self-reproducing nano-machines were decried by Bill Joy, one of the founders of Sun Microsystems, in a now famous article in Wired magazine called "Why the Future Doesn't Need Us." So far none of these predictions has come to pass. However, complex self-reproducing machines may soon be a reality: some simple self-reproducing robots have already been constructed by roboticist Hod Lipson and his colleagues at Cornell University.

John von Neumann.

It is worth saying a few words about von Neumann himself, one of the most important and interesting figures in science and mathematics in the twentieth century. He is someone you should know about if you don't already. Von Neumann was, by anyone's measure, a true genius. During his relatively short life he made fundamental contributions to at least six fields: mathematics, physics, computer science, economics, biology, and neuroscience. He is the type of genius whom people tell stories about, shaking their heads and wondering whether someone that smart could really be a member of the human species. I like these stories so much, I want to retell a few of them here.

Unlike Einstein and Darwin, whose genius took a while to develop, Hungarian-born "Johnny" von Neumann was a child prodigy. He supposedly could divide eight-digit numbers in his head at the age of six. (It evidently took him a while to notice that not everyone could do this; as reported in one of his biographies, "When his mother once stared rather aimlessly in front of her, six-year-old Johnny asked: 'What are you calculating?' ") At the same age he also could converse with his father in ancient Greek.

At the age of eighteen von Neumann went to university, first in Budapest, then in Germany and Switzerland. He first took the "practical" course of studying chemical engineering but couldn't be kept away from mathematics. He received a doctorate in math at the age of twenty-three, after doing fundamental work in both mathematical logic and quantum mechanics. His work was so good that just five years later he was given the best academic job in the world: a professorship (with Einstein and Gödel) at the newly formed Institute for Advanced Study (IAS) in Princeton.

The Institute didn't go wrong in their bet on von Neumann. During the next ten years, von Neumann went on to invent the field of game theory (producing what has been called "the greatest paper on mathematical economics ever written"), design the conceptual framework of one of the first programmable computers (the EDVAC, for which he wrote what has been called "the most important document ever written about computing and computers"), and make central contributions to the development of the first atomic and hydrogen bombs. This was all before his work on self-reproducing automata and his exploration of the relationships between the logic of computers and the workings of the brain. Von Neumann also was active in politics (his positions were very conservative, driven by strong anti-communist views) and eventually became a member of the Atomic Energy Commission, which advised the U.S. president on nuclear weapons policy.

Von Neumann was part of what has been called the "Hungarian phenomenon," a group of several Hungarians of similar age who went on to become world-famous scientists. This group also included Leo Szilard, whom we heard about in chapter 3, the physicists Eugene Wigner, Edward Teller, and Dennis Gabor, and the mathematicians Paul Erdős, John Kemeny, and Peter Lax. Many people have speculated on the causes of this improbable cluster of incredible talent. But as related by von Neumann biographer Norman Macrae, "Five of Hungary's six Nobel Prize winners were Jews born between 1875 and 1905, and one was asked why Hungary in his generation had brought forth so many geniuses. Nobel laureate Wigner replied that he did not understand the question. Hungary in that time had produced only one genius, Johnny von Neumann."

Von Neumann was in many ways ahead of his time. His goal was, like Turing's, to develop a general theory of information processing that would encompass both biology and technology. His work on self-reproducing automata was part of this program. Von Neumann also was closely linked to the so-called cybernetics community: an interdisciplinary group of scientists and engineers seeking commonalities among complex, adaptive systems in both natural and artificial realms. What we now call "complex systems" can trace its ancestry to cybernetics and the related field of systems science. I explore these connections further in the final chapter.

Von Neumann's interest in computation was not always well received at the elite Institute for Advanced Study. After completing his work on the EDVAC, von Neumann brought several computing experts to the IAS to work with him on designing and building an improved successor to EDVAC. This system was called the "IAS computer"; its design was a basis for the early computers built by IBM. Some of the "pure" scientists and mathematicians at IAS were uncomfortable with so practical a project taking place in their ivory tower, and perhaps even more uncomfortable with von Neumann's first application of this computer, namely weather prediction, for which he brought a team of meteorologists to the IAS. Some of the purists didn't think this kind of activity fit in with the institute's theoretical charter. As IAS physicist Freeman Dyson put it, "The [IAS] School of Mathematics has a permanent establishment which is divided into three groups, one consisting of pure mathematics, one consisting of theoretical physicists, and one consisting of Professor von Neumann." After von Neumann's death, the IAS computer project was shut down, and the IAS faculty passed a motion "to have no experimental science, no laboratories of any kind at the Institute." Freeman Dyson described this as "The snobs took revenge."

CHAPTER 9.

Genetic Algorithms.

AFTER HE ANSWERED THE QUESTION "Can a machine reproduce itself?" in the affirmative, von Neumann wanted to take the next logical step and have computers (or computer programs) reproduce themselves with mutations and compete for resources to survive in some environment. This would counter the "survival instinct" and "evolution and adaptation" arguments mentioned above. However, von Neumann died before he was able to work on the evolution problem.

Others quickly took up where he left off. By the early 1960s, several groups of researchers were experimenting with evolution in computers. Such work has come to be known collectively as evolutionary computation. The most widely known of these efforts today is the work on genetic algorithms done by John Holland and his students and colleagues at the University of Michigan.

John Holland is, in some sense, the academic grandchild of John von Neumann. Holland's own Ph.D. advisor was Arthur Burks, the philosopher, logician, and computer engineer who assisted von Neumann on the EDVAC computer and who completed von Neumann's unfinished work on self-reproducing automata. After his work on the EDVAC, Burks obtained a faculty position in philosophy at the University of Michigan and started the Logic of Computers group, a loose-knit collection of faculty and students who were interested in the foundations of computers and of information processing in general. Holland joined the University of Michigan as a Ph.D. student, starting in mathematics and later switching to a brand-new program called "communication sciences" (later "computer and communication sciences"), which was arguably the first real computer science department in the world. A few years later, Holland became the program's first Ph.D. recipient, giving him the distinction of having received the world's first Ph.D. in computer science. He was quickly hired as a professor in that same department.

John Holland. (Photograph copyright by the Santa Fe Institute. Reprinted by permission.)

Holland got hooked on Darwinian evolution when he read Ronald Fisher's famous book, The Genetical Theory of Natural Selection. Like Fisher (and Darwin), Holland was struck by analogies between evolution and animal breeding. But he looked at the analogy from his own computer science perspective: "That's where genetic algorithms came from. I began to wonder if you could breed programs the way people would say, breed good horses and breed good corn."

Holland's major interest was in the phenomenon of adaptation: how living systems evolve or otherwise change in response to other organisms or to a changing environment, and how computer systems might use similar principles to be adaptive as well. His 1975 book, Adaptation in Natural and Artificial Systems, laid out a set of general principles for adaptation, including a proposal for genetic algorithms.

My own first exposure to genetic algorithms was in graduate school at Michigan, when I took a class taught by Holland that was based on his book. I was instantly enthralled by the idea of "evolving" computer programs. (Like Thomas Huxley, my reaction was, "How extremely stupid not to have thought of that!")

A Recipe for a Genetic Algorithm.

The term algorithm is used these days to mean what Turing meant by definite procedure and what cooks mean by recipe: a series of steps by which an input is transformed to an output.

In a genetic algorithm (GA), the desired output is a solution to some problem. Say, for example, that you are assigned to write a computer program that controls a robot janitor that picks up trash around your office building. You decide that this assignment will take up too much of your time, so you want to employ a genetic algorithm to evolve the program for you. Thus, the desired output from the GA is a robot-janitor control program that allows the robot to do a good job of collecting trash.

The input to the GA has two parts: a population of candidate programs, and a fitness function that takes a candidate program and assigns to it a fitness value that measures how well that program works on the desired task.

Candidate programs can be represented as strings of bits, numbers, or symbols. Later in this chapter I give an example of representing a robot-control program as a string of numbers.

In the case of the robot janitor, the fitness of a candidate program could be defined as the square footage of the building that is covered by the robot, when controlled by that program, in a set amount of time. The more the better.

Here is the recipe for the GA.

Repeat the following steps for some number of generations (a sketch of this loop in C appears after step 5):

1. Generate an initial population of candidate solutions. The simplest way to create the initial population is just to generate a bunch of random programs (strings), called "individuals."

2. Calculate the fitness of each individual in the current population.

3. Select some number of the individuals with highest fitness to be the parents of the next generation.

4. Pair up the selected parents. Each pair produces offspring by recombining parts of the parents, with some chance of random mutations, and the offspring enter the new population. The selected parents continue creating offspring until the new population is full (i.e., has the same number of individuals as the initial population). The new population now becomes the current population.

5. Go to step 2.
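To make the recipe concrete, here is a minimal sketch of the whole loop in C (the language the genetic algorithm later in this chapter is written in). Every specific choice in it, such as the population size, genome length, gene alphabet, number of parents kept, and mutation rate, is an illustrative assumption rather than a description of any particular GA:

    /* A minimal GA in C, following steps 1-5 above.  All specifics here
       (population size, genome length, gene alphabet, top-20 selection,
       1% mutation) are illustrative assumptions. */
    #include <stdlib.h>
    #include <string.h>

    #define POP_SIZE   100   /* individuals per generation (assumed)       */
    #define GENOME_LEN  64   /* genes per candidate solution (assumed)     */
    #define N_SYMBOLS    7   /* size of the gene alphabet (assumed)        */
    #define N_PARENTS   20   /* highest-fitness individuals kept (assumed) */
    #define MUT_RATE  0.01   /* per-gene mutation probability (assumed)    */

    typedef struct { int genes[GENOME_LEN]; double fitness; } Individual;

    static double frand(void) { return rand() / (RAND_MAX + 1.0); }

    /* qsort comparator: highest fitness first. */
    static int by_fitness_desc(const void *a, const void *b)
    {
        double fa = ((const Individual *)a)->fitness;
        double fb = ((const Individual *)b)->fitness;
        return (fa < fb) - (fa > fb);
    }

    void evolve(Individual pop[POP_SIZE], int generations,
                double (*fitness)(const Individual *))
    {
        /* Step 1: a bunch of random individuals. */
        for (int i = 0; i < POP_SIZE; i++)
            for (int g = 0; g < GENOME_LEN; g++)
                pop[i].genes[g] = rand() % N_SYMBOLS;

        for (int gen = 0; gen < generations; gen++) {
            /* Step 2: calculate everyone's fitness. */
            for (int i = 0; i < POP_SIZE; i++)
                pop[i].fitness = fitness(&pop[i]);

            /* Step 3: sort so the N_PARENTS fittest come first. */
            qsort(pop, POP_SIZE, sizeof(Individual), by_fitness_desc);

            /* Step 4: pair parents, recombine with mutation, refill. */
            Individual next[POP_SIZE];
            for (int i = 0; i < POP_SIZE; i += 2) {
                const Individual *pa = &pop[rand() % N_PARENTS];
                const Individual *pb = &pop[rand() % N_PARENTS];
                int cut = rand() % GENOME_LEN;      /* one crossover point */
                for (int g = 0; g < GENOME_LEN; g++) {
                    next[i].genes[g]     = (g < cut ? pa : pb)->genes[g];
                    next[i + 1].genes[g] = (g < cut ? pb : pa)->genes[g];
                    if (frand() < MUT_RATE) next[i].genes[g]     = rand() % N_SYMBOLS;
                    if (frand() < MUT_RATE) next[i + 1].genes[g] = rand() % N_SYMBOLS;
                }
            }
            /* Step 5: the new population becomes the current population. */
            memcpy(pop, next, sizeof next);
        }
    }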

Genetic Algorithms in the Real World.

The GA described above is simple indeed, but versions of it have been used to solve hard problems in many scientific and engineering areas, as well as in art, architecture, and music.

Just to give you a flavor of these problems: GAs have been used at the General Electric Company for automating parts of aircraft design, Los Alamos National Lab for analyzing satellite images, the John Deere company for automating assembly line scheduling, and Texas Instruments for computer chip design. GAs were used for generating realistic computer-animated horses in the 2003 movie The Lord of the Rings: The Return of the King, and realistic computer-animated stunt doubles for actors in the movie Troy. A number of pharmaceutical companies are using GAs to aid in the discovery of new drugs. GAs have been used by several financial organizations for various tasks: detecting fraudulent trades (London Stock Exchange), analysis of credit card data (Capital One), and forecasting financial markets and portfolio optimization (First Quadrant). In the 1990s, collections of artwork created by an interactive genetic algorithm were exhibited at several museums, including the Georges Pompidou Center in Paris. These examples are just a small sampling of ways in which GAs are being used.

Evolving Robby, the Soda-Can-Collecting Robot.

To introduce you in more detail to the main ideas of GAs, I take you through a simple extended example. I have a robot named "Robby" who lives in a (computer-simulated, but messy) two-dimensional world that is strewn with empty soda cans. I am going to use a genetic algorithm to evolve a "brain" (that is, a control strategy) for Robby.

Robby's job is to clean up his world by collecting the empty soda cans. Robby's world, illustrated in figure 9.1, consists of 100 squares (sites) laid out in a 10 x 10 grid. You can see Robby in site 0,0. Let's imagine that there is a wall around the boundary of the entire grid. Various sites have been littered with soda cans (but with no more than one can per site).

Robby isn't very intelligent, and his eyesight isn't that great. From wherever he is, he can see the contents of one adjacent site in the north, south, east, and west directions, as well as the contents of the site he occupies. A site can be empty, contain a can, or be a wall. For example, in figure 9.1, Robby, at site 0,0, sees that his current site is empty (i.e., contains no soda cans), the "sites" to the north and west are walls, the site to the south is empty, and the site to the east contains a can.

FIGURE 9.1. Robby's world. A 10 x 10 array, strewn with soda cans.
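Before going further, it may help to see how such a world could be represented in a program. Here is one possible sketch in C; the names (Site, Percept, look, sense) and the convention that row 0 is the north edge (consistent with Robby at site 0,0 seeing walls to the north and west) are assumptions for illustration, not details taken from the actual simulation:

    /* A sketch of Robby's world in C.  The names (Site, Percept, look,
       sense) are illustrative choices, not code from the book's program. */
    enum Site { EMPTY, CAN, WALL };

    #define GRID 10

    enum Site world[GRID][GRID];    /* the 10 x 10 array of sites */

    /* Anything off the grid looks like a wall to Robby. */
    static enum Site look(int row, int col)
    {
        if (row < 0 || row >= GRID || col < 0 || col >= GRID)
            return WALL;
        return world[row][col];
    }

    /* Robby's entire view of the world: five sites. */
    struct Percept { enum Site n, s, e, w, cur; };

    static struct Percept sense(int row, int col)
    {
        struct Percept p;
        p.n   = look(row - 1, col);     /* north */
        p.s   = look(row + 1, col);     /* south */
        p.e   = look(row, col + 1);     /* east  */
        p.w   = look(row, col - 1);     /* west  */
        p.cur = look(row, col);         /* where Robby stands */
        return p;
    }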

For each cleaning session, Robby can perform exactly 200 actions. Each action consists of one of the following seven choices: move to the north, move to the south, move to the east, move to the west, choose a random direction to move in, stay put, or bend down to pick up a can. Each action may generate a reward or a punishment. If Robby is in the same site as a can and picks it up, he gets a reward of ten points. However, if he bends down to pick up a can in a site where there is no can, he is fined one point. If he crashes into a wall, he is fined five points and bounces back into the current site.

Clearly, Robby's reward is maximized when he picks up as many cans as possible, without crashing into any walls or bending down to pick up a can if no can is there.
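In code, the actions and scoring rules might be declared as follows. The identifier names are illustrative; the numeric order of the actions happens to match the 0 to 6 encoding introduced later in the chapter, and only the point values (+10, -1, -5) come from the text:

    /* Robby's seven actions and the point values from the text.  The
       identifier names are assumptions; the numeric order matches the
       0-6 action encoding given later in the chapter. */
    enum Action {
        MOVE_NORTH,   /* 0 */
        MOVE_SOUTH,   /* 1 */
        MOVE_EAST,    /* 2 */
        MOVE_WEST,    /* 3 */
        STAY_PUT,     /* 4 */
        PICK_UP,      /* 5 */
        MOVE_RANDOM   /* 6 */
    };

    enum {
        REWARD_PICKUP     = 10,  /* picked up a can                  */
        FINE_EMPTY_PICKUP = -1,  /* bent down where there was no can */
        FINE_WALL_CRASH   = -5   /* bounced off a wall               */
    };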

Since this is a simple problem, it would probably be pretty easy for a human to figure out a good strategy for Robby to follow. However, the point of genetic algorithms is that humans, being intrinsically lazy, don't have to figure out anything; we just let computer evolution figure it out for us. Let's use a genetic algorithm to evolve a good strategy for Robby.

The first step is to figure out exactly what we are evolving; that is, what exactly constitutes a strategy? In general, a strategy is a set of rules that gives, for any situation, the action you should take in that situation. For Robby, a "situation" is simply what he can see: the contents of his current site plus the contents of the north, south, east, and west sites. For the question "what to do in each situation," Robby has seven possible things he can do: move north, south, east, or west; move in a random direction; stay put; or pick up a can.

Therefore, a strategy for Robby can be written simply as a list of all the possible situations he could encounter, and for each possible situation, which of the seven possible actions he should perform.

How many possible situations are there? Robby looks at five different sites (current, north, south, east, west), and each of those sites can be labeled as empty, contains can, or wall. With three possibilities for each of five sites, that makes 3 × 3 × 3 × 3 × 3 = 243 different possible situations (see the notes for more detail). Actually, there aren't really that many, since Robby will never face a situation in which his current site is a wall, or one in which north, south, east, and west are all walls. There are other "impossible" situations as well. Again, being lazy, we don't want to figure out what all the impossible situations are, so we'll just list all 243 situations, and know that some of them will never be encountered.

Table 9.1 is an example of a strategy (actually, only part of a strategy, since an entire strategy would be too long to list here).

Robby's situation in figure 9.1 is: North: Wall; South: Empty; East: Can; West: Wall; Current: Empty.

To decide what to do next, Robby simply looks up this situation in his strategy table, and finds that the corresponding action is MoveWest. So he moves west. And crashes into a wall.

I never said this was a good strategy. Finding a good strategy isn't our job; it's the job of the genetic algorithm.

TABLE 9.1. Part of a strategy for Robby. Each row gives the action Robby should perform for one possible situation, that is, one combination of contents (Empty, Can, or Wall) of the North, South, East, West, and Current sites.

I wrote the code for a genetic algorithm to evolve strategies for Robby. In my GA, each individual in the population is a strategy: a listing of the actions that correspond to each possible situation. That is, given a strategy such as the one in table 9.1, an individual to be evolved by the GA is just a listing of the 243 actions in the rightmost column, in the order given:

MoveNorth MoveEast MoveRandom PickUpCan ... MoveWest ... StayPut

The GA remembers that the first action in the string (here MoveNorth) goes with the first situation ("Empty Empty Empty Empty Empty"), the second action (here MoveEast) goes with the second situation ("Empty Empty Empty Empty Can"), and so on. In other words, I don't have to explicitly list the situations corresponding to these actions; instead the GA remembers the order in which they are listed. For example, suppose Robby happened to observe that he was in the second of those situations ("Empty Empty Empty Empty Can"). I build into the GA the knowledge that this is situation number 2. It would look at the strategy table and see that the action in position 2 is MoveEast. Robby moves east, and then observes his next situation; the GA again looks up the corresponding action in the table, and so forth.
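For illustration, here is one plausible C version of that built-in knowledge, reusing the Site encoding from the earlier world sketch (Empty = 0, Can = 1, Wall = 2). Reading the five sites as the digits of a base-3 number makes the all-Empty situation come first and "Empty Empty Empty Empty Can" come second, matching the ordering just described; everything beyond that is an assumption:

    /* One plausible version of the situation lookup.  Reading the five
       sites as the digits of a base-3 number gives the all-Empty
       situation index 0 (situation number 1) and "Empty Empty Empty
       Empty Can" index 1 (situation number 2), matching the text; the
       rest of the ordering is an assumption. */
    static int situation_index(enum Site n, enum Site s, enum Site e,
                               enum Site w, enum Site cur)
    {
        return (((n * 3 + s) * 3 + e) * 3 + w) * 3 + cur;   /* 0..242 */
    }

    /* Robby's next move is then simply strategy[situation_index(...)]. */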

My GA is written in the programming language C. I won't include the actual program here, but this is how it works.

1. Generate the initial population. The GA starts with an initial population of 200 random individuals (strategies).

A random population is illustrated in figure 9.2. Each individual strategy is a list of 243 "genes." Each gene is a number between 0 and 6, which stands for an action (0 = MoveNorth, 1 = MoveSouth, 2 = MoveEast, 3 = MoveWest, 4 = StayPut, 5 = PickUp, and 6 = RandomMove). In the initial population, these numbers are filled in at random. For this (and all other probabilistic or random choices), the GA uses a pseudo-random-number generator.

Repeat the following for 1,000 generations:

2. Calculate the fitness of each individual in the population. In my program, the fitness of a strategy is determined by seeing how well the strategy lets Robby do on 100 different cleaning sessions. A cleaning session consists of putting Robby at site 0,0, and throwing down a bunch of cans at random (each site can contain at most one can; the probability of a given site containing a can is 50%). Robby then follows the strategy for 200 actions in each session. The score of the strategy in each session is the number of reward points Robby accumulates minus the total fines he incurs. The strategy's fitness is its average score over 100 different cleaning sessions, each of which has a different configuration of cans.

FIGURE 9.2. Random initial population. Each individual consists of 243 numbers, each of which is between 0 and 6, and each of which encodes an action. The location of a number in a string indicates to which situation the action corresponds.

3. Apply evolution to the current population of strategies to create a new population. That is, repeat the following until the new population has 200 individuals:

(a) Choose two parent individuals from the current population probabilistically based on fitness. That is, the higher a strategy's fitness, the more chance it has to be chosen as a parent. (One way to implement this choice is sketched in C after step 4.)

(b) Mate the two parents to create two children. That is, randomly choose a position at which to split the two number strings; form one child by taking the numbers before that position from parent A and after that position from parent B, and vice versa to form the second child.

(c) With a small probability, mutate numbers in each child. That is, with a small probability, choose one or more numbers and replace them each with a randomly generated number between 0 and 6.

(d) Put the two children in the new population.

4. Once the new population has 200 individuals, return to step 2 with this new generation.
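Step (a) deserves a closer look. One standard way to choose parents "probabilistically based on fitness" is roulette-wheel selection, sketched below in C using the Individual type from the earlier GA sketch. The text does not say that this exact scheme was used, so treat it as one reasonable implementation:

    /* Roulette-wheel selection for step (a): each strategy's chance of
       being picked is its share of the population's total fitness.  The
       text doesn't say this exact scheme was used, and Robby's scores
       can be negative, so a real implementation would first shift all
       fitnesses so the minimum is zero. */
    #include <stdlib.h>

    static const Individual *select_parent(const Individual pop[], int n)
    {
        double total = 0.0;
        for (int i = 0; i < n; i++)
            total += pop[i].fitness;

        double spin = (rand() / (RAND_MAX + 1.0)) * total;  /* random point */
        for (int i = 0; i < n; i++) {
            spin -= pop[i].fitness;
            if (spin <= 0.0)
                return &pop[i];
        }
        return &pop[n - 1];   /* guard against floating-point round-off */
    }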

The magic is that, starting from a set of 200 random strategies, the genetic algorithm creates strategies that allow Robby to perform very well on his cleaning task.

The numbers I used for the population size (200), the number of generations (1,000), the number of actions Robby can take in a session (200), and the number of cleaning sessions to calculate fitness (100) were chosen by me, somewhat arbitrarily. Other numbers can be used and can also produce good strategies.

I'm sure you are now on the edge of your seat waiting to find out what happened when I ran this genetic algorithm. But first, I have to admit that before I ran it, I overcame my laziness and constructed my own "smart" strategy, so I could see how well the GA could do compared with me. My strategy for Robby is: "If there is a can in the current site, pick it up. Otherwise, if there is a can in one of the adjacent sites, move to that site. (If there are multiple adjacent sites with cans, I just specify the one to which Robby moves.) Otherwise, choose a random direction to move in."

This strategy actually isn't as smart as it could be; in fact, it can make Robby get stuck cycling around empty sites and never making it to some of the sites with cans.
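Written out in C (reusing the Percept, Site, and Action definitions from the earlier sketches), this hand-built strategy might look like the sketch below; the fixed north-first tie-breaking order is an arbitrary choice, since the text says only that one adjacent site is specified:

    /* A sketch of the hand-built benchmark strategy (called M later in
       the text).  The north-first tie-breaking order is an arbitrary
       assumption. */
    static enum Action strategy_m(struct Percept p)
    {
        if (p.cur == CAN) return PICK_UP;     /* a can right here: grab it */
        if (p.n == CAN)   return MOVE_NORTH;  /* otherwise head for an     */
        if (p.s == CAN)   return MOVE_SOUTH;  /* adjacent can, breaking    */
        if (p.e == CAN)   return MOVE_EAST;   /* ties in a fixed order     */
        if (p.w == CAN)   return MOVE_WEST;
        return MOVE_RANDOM;                   /* no can in sight: wander   */
    }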

I tested my strategy on 10,000 cleaning sessions, and found that its average (per-session) score was approximately 346. Given that at the beginning of each session, about 50%, or 50, of the sites contain cans, the maximum possible score for any strategy is approximately 500, so my strategy is not very close to optimal.

Can the GA do as well or better than this? I ran it to see. I took the highest-fitness individual in the final generation, and also tested it on 10,000 new and different cleaning sessions. Its average (per-session) score was approximately 483; that is, nearly optimal!

How Does the GA-Evolved Strategy Solve the Problem?

Now the question is, what is this strategy doing, and why does it do better than my strategy? Also, how did the GA evolve it?

Let's call my strategy M and the GA's strategy G. Below is each strategy's genome.

M: 65635365625235325265635365615135315125235325215135315165635365 62523532526563536560503530502523532520503530501513531512523532 5215135315105035305025235325205035305065635356252353252656353 656151353151252353252151353151656353656252353252656353454.
