Complexity - A Guided Tour Part 12


This is, of course, not to say that computers are dumb about everything. In selected, narrow domains they have become quite intelligent. Computer-controlled vehicles can now drive by themselves across rugged desert terrain. Computer programs can beat human doctors at diagnosing certain diseases, human mathematicians at solving complex equations, and human grand masters at chess. These are only a few examples of a surge of recent successes in artificial intelligence (AI) that have brought a new sense of optimism to the field. Computer scientist Eric Horvitz noted, "At conferences you are hearing the phrase 'human-level AI,' and people are saying that without blushing."

Well, some people, perhaps. There are a few minor "human-level" things computers still can't do, such as understand human language, describe the content of a photograph, and more generally use common sense as in the preceding examples. Marvin Minsky, a founder of the field of artificial intelligence, concisely described this paradox of AI as, "Easy things are hard." Computers can do many things that we humans consider to require high intelligence, but at the same time they are unable to perform tasks that any three-year-old child could do with ease.

Making Analogies.

An important missing piece for current-day computers is the ability to make analogies.

The term analogy often conjures up people's bad memories of standardized test questions, such as "Shoe is to foot as glove is to _____?" However, what I mean by analogy-making is much broader: analogy-making is the ability to perceive abstract similarity between two things in the face of superficial differences. This ability pervades almost every aspect of what we call intelligence.



Consider the following examples:

A child learns that dogs in picture books, photographs, and real life are all instances of the same concept.

A person is easily able to recognize the letter A in a vast variety of printed typefaces and handwriting.

Jean says to Simone, "I call my parents once a week." Simone replies "I do that too," meaning, of course, not that she calls Jean's parents once a week, but that she calls her own parents.

A woman says to her male colleague, "I've been working so hard lately, I haven't been able to spend enough time with my husband." He replies, "Same here"-meaning not that he is too busy to spend enough time with the woman's husband, but that he has little time to spend with his girlfriend.

An advertisement describes Perrier as "the Cadillac of bottled waters." A newspaper article describes teaching as "the Beirut of professions." The war in Iraq is called "another Vietnam."

Britain and Argentina go to war over the Falklands (or las Malvinas), a set of small islands located near the coast of Argentina and populated by British settlers. Greece sides with Britain because of its own conflict with Turkey over Cyprus, an island near the coast of Turkey, the majority of whose population is ethnically Greek.

A classical music lover hears an unfamiliar piece on the radio and knows instantly that it is by Bach. An early-music enthusiast hears a piece for baroque orchestra and can easily identify which country the composer was from. A supermarket shopper recognizes the music being piped in as a Muzak version of the Beatles' "Hey Jude."

The physicist Hideki Yukawa explains the nuclear force by using an analogy with the electromagnetic force, on which basis he postulates a mediating particle for the nuclear force with properties analogous to the photon. The particle is subsequently discovered, and its predicted properties are verified. Yukawa wins a Nobel prize.

This list is a small sampling of analogies ranging from the mundane everyday kind to the once-in-a-lifetime-discovery kind. Each of these examples demonstrates, at different levels of impressiveness, how good humans are at perceiving abstract similarity between two entities or situations by letting concepts "slip" from situation to situation in a fluid way. The list taken as a whole illustrates the ubiquity of this ability in human thought. As the nineteenth-century philosopher Henry David Thoreau put it, "All perception of truth is the detection of an analogy."

Perceiving abstract similarities is something computers are notoriously bad at. That's why I can't simply show the computer a picture, say, of a dog swimming in a pool, and ask it to find "other pictures like this" in my online photo collection.

My Own Route to Analogy.

In the early 1980s, after I had graduated from college and didn't quite know what to do with my life, I got a job as a high-school math teacher in New York City. The job provided me with very little money, and New York is an expensive city, so I cut down on unnecessary purchases. But one purchase I did make was a relatively new book written by a computer science professor at Indiana University, with the odd title Gödel, Escher, Bach: an Eternal Golden Braid. Having majored in math and having visited a lot of museums, I knew who Gödel and Escher were, and being a fan of classical music, I knew very well who Bach was. But putting their names together in a book title didn't make sense to me, and my curiosity was piqued.

Reading the book, written by Douglas Hofstadter, turned out to be one of those life-changing events that one can never anticipate. The title didn't let on that the book was fundamentally about how thinking and consciousness emerge from the brain via the decentralized interactions of large numbers of simple neurons, analogous to the emergent behavior of systems such as cells, ant colonies, and the immune system. In short, the book was my introduction to some of the main ideas of complex systems.

It was clear that Hofstadter's passionate goal was to use similar principles to construct intelligent and "self-aware" computer programs. These ideas quickly became my passion as well, and I decided that I wanted to study artificial intelligence with Hofstadter.

Douglas Hofstadter. (Photograph courtesy of Indiana University.)

The problem was, I was a young nobody right out of college and Hofstadter was a famous writer of a best-selling book that had won both a Pulitzer Prize and a National Book Award. I wrote him a letter saying I wanted to come work with him as a graduate student. Naturally, he never responded. So I settled for biding my time and learning a bit more about AI.

A year later I had moved to Boston with a new job and was taking classes in computer science to prepare for my new career. One day I happened to see a poster advertising a talk by Hofstadter at MIT. Excited, I went to the talk, and afterward mingled among the throng of fans waiting to meet their hero (I wasn't the only one whose life was changed by Hofstadter's book). I finally got to the front of the line, shook Hofstadter's hand, and told him that I wanted to work in AI on ideas like his and that I was interested in applying to Indiana University. I asked if I could visit him sometime at Indiana to talk more. He told me that he was actually living in Boston, visiting the MIT Artificial Intelligence Lab for the year. He didn't invite me to come talk to him at the AI Lab; rather he handed me off to talk to a former student of his who was hanging around, and quickly went on to the next person in line.

I was disappointed, but not deterred. I managed to find Hofstadter's phone number at the MIT AI Lab, and called several times. Each time the phone was answered by a secretary who told me that Hofstadter was not in, but she would be glad to leave a message. I left several messages but received no response.

Then, one night, I was lying in bed pondering what to do next, when a crazy idea hit me. All my calls to Hofstadter had been in the daytime, and he was never there. If he was never there during the day, then when was he there? It must be at night! It was 11:00 p.m., but I got up and dialed the familiar number. Hofstadter answered on the first ring.

He seemed to be in a much better mood than he was at the lecture. We chatted for a while, and he invited me to come by his office the next day to talk about how I could get involved in his group's research. I showed up as requested, and we talked about Hofstadter's current project-writing a computer program that could make analogies.

Sometimes, having the personality of a bulldog can pay off.

Simplifying Analogy.

One of Hofstadter's great intellectual gifts is the ability to take a complex problem and simplify it in such a way that it becomes easier to address but still retains its essence, the part that made it interesting in the first place. In this case, Hofstadter took the problem of analogy-making and created a microworld that retained many of the problem's most interesting features. The microworld consists of analogies to be made between strings of letters.

For example, consider the following problem: if abc changes to abd, what is the analogous change to ijk? Most people describe the change as something like "Replace the rightmost letter by its alphabetic successor," and answer ijl. But clearly there are many other possible answers, among them: ijd ("Replace the rightmost letter by a d"-similar to Jake putting his socks "on").

ijk ("Replace all c's by d's; there are no c's in ijk "), and abd ("Replace any string by abd ").

There are, of course, an infinity of other, even less plausible answers, such as ijxx ("Replace all c's by d's and each k by two x's"), but almost everyone immediately views ijl as the best answer. This being an abstract domain with no practical consequences, I may not be able to convince you that ijl is a better answer than, say, ijd if you really believe the latter is better. However, it seems that humans have evolved in such a way as to make analogies in the real world that affect their survival and reproduction, and their analogy-making ability seems to carry over into abstract domains as well. This means that almost all of us will, at heart, agree that there is a certain level of abstraction that is "most appropriate," and here it yields the answer ijl. Those people who truly believe that ijd is a better answer would probably, if alive during the Pleistocene, have been eaten by tigers, which explains why there are not many such people around today.
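To make the literal rule concrete, here is a minimal sketch in Python (purely illustrative; Copycat itself, described later in this chapter, does not work by applying hand-coded rules like this, and the function name is invented for this example). It simply implements "Replace the rightmost letter by its alphabetic successor" over a non-circular alphabet.

def rightmost_successor(s):
    # "Replace the rightmost letter by its alphabetic successor."
    last = s[-1]
    if last == 'z':
        raise ValueError("z has no successor: the alphabet is not circular")
    return s[:-1] + chr(ord(last) + 1)

print(rightmost_successor('ijk'))  # ijl

Applied rigidly, a rule like this already hints at the trouble ahead: it knows nothing about groups of letters, reversed strings, or what to do when the rightmost letter is z.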

Here is a second problem: if abc changes to abd, what is the analogous change to iijjkk? The abc → abd change can again be described as "Replace the rightmost letter by its alphabetic successor," but if this rule is applied literally to iijjkk it yields answer iijjkl, which doesn't take into account the double-letter structure of iijjkk. Most people will answer iijjll, implicitly using the rule "Replace the rightmost group of letters by its alphabetic successor," letting the concept letter of abc slip into the concept group of letters for iijjkk.
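The slipped rule can be sketched in the same illustrative spirit, by first parsing the string into groups of identical letters (again, this hand-coded version only shows what "Replace the rightmost group of letters by its alphabetic successor" does; it is not how Copycat perceives groups).

from itertools import groupby

def rightmost_group_successor(s):
    # Parse the string into runs of identical letters, then replace the
    # rightmost run by the same-length run of its alphabetic successor.
    groups = [''.join(g) for _, g in groupby(s)]    # 'iijjkk' -> ['ii', 'jj', 'kk']
    last = groups[-1]
    groups[-1] = chr(ord(last[0]) + 1) * len(last)  # 'kk' -> 'll'
    return ''.join(groups)

print(rightmost_group_successor('iijjkk'))  # iijjll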

Another kind of conceptual slippage can be seen in the problem

abc → abd
kji → ?

A literal application of the rule "Replace the rightmost letter by its alphabetic successor" yields answer kjj, but this ignores the reverse structure of kji, in which the increasing alphabetic sequence goes from right to left rather than from left to right. This puts pressure on the concept rightmost in abc to slip to leftmost in kji, which makes the new rule "Replace the leftmost letter by its alphabetic successor," yielding answer lji. This is the answer given by most people. Some people prefer the answer kjh, in which the sequence kji is seen as going from left to right but decreasing in the alphabet. This entails a slippage from "alphabetic successor" to "alphabetic predecessor," and the new rule is "Replace the rightmost letter by its alphabetic predecessor."

Consider

abc → abd
mrrjjj → ?

You want to make use of the salient fact that abc is an alphabetically increasing sequence, but how? This internal "fabric" of abc is a very appealing and seemingly central aspect of the string, but at first glance no such fabric seems to weave mrrjjj together. So either (like most people) you settle for mrrkkk (or possibly mrrjjk), or you look more deeply. The interesting thing about this problem is that there happens to be an aspect of mrrjjj lurking beneath the surface that, once recognized, yields what many people feel is a more satisfying answer. If you ignore the letters in mrrjjj and look instead at group lengths, the desired successorship fabric is found: the lengths of groups increase as "1-2-3." Once this connection between abc and mrrjjj is discovered, the rule describing abc → abd can be adapted to mrrjjj as "Replace the rightmost group of letters by its length successor," which yields "1-2-4" at the abstract level, or, more concretely, mrrjjjj.
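The group-length view can be sketched the same way: ignore the letters, read off the run lengths, and take the length successor of the rightmost group. As before, this is a hand-written illustration of one particular way of seeing the string; discovering that this view is the appropriate one is the hard part, and that is what Copycat itself has to do.

from itertools import groupby

def rightmost_length_successor(s):
    # "Replace the rightmost group of letters by its length successor."
    groups = [''.join(g) for _, g in groupby(s)]         # 'mrrjjj' -> ['m', 'rr', 'jjj']
    groups[-1] = groups[-1][0] * (len(groups[-1]) + 1)   # lengths 1-2-3 -> 1-2-4
    return ''.join(groups)

print(rightmost_length_successor('mrrjjj'))  # mrrjjjj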

Finally, consider

abc → abd
xyz → ?

At first glance this problem is essentially the same as the problem with target string ijk given previously, but there is a snag: z has no successor. Most people answer xya, but in Hofstadter's microworld the alphabet is not circular and therefore this answer is excluded. This problem forces an impasse that requires analogy-makers to restructure their initial view, possibly making conceptual slippages that were not initially considered, and thus to discover a different way of understanding the situation.

People give a number of different responses to this problem, including xy ("Replace the z by nothing at all"), xyd ("Replace the rightmost letter by a d"; given the impasse, this answer seems less rigid and more reasonable than did ijd for the first problem above), xyy ("If you can't take the z's successor, then the next best thing is to take its predecessor"), and several other answers. However, there is one particular way of viewing this problem that, to many people, seems like a genuine insight, whether or not they come up with it themselves. The essential idea is that abc and xyz are "mirror images"-xyz is wedged against the end of the alphabet, and abc is similarly wedged against the beginning. Thus the z in xyz and the a in abc can be seen to correspond, and then one naturally feels that the x and the c correspond as well. Underlying these object correspondences is a set of slippages that are conceptually parallel: alphabetic-first → alphabetic-last, rightmost → leftmost, and successor → predecessor. Taken together, these slippages convert the original rule into a rule adapted to the target string xyz: "Replace the leftmost letter by its predecessor." This yields a surprising but strong answer: wyz.
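The mirror-image insight can also be written down after the fact. The sketch below simply applies the slipped rule "Replace the leftmost letter by its alphabetic predecessor" that results from the parallel slippages just described; like the earlier sketches, it illustrates the outcome of the insight, not a mechanism for arriving at it.

def leftmost_predecessor(s):
    # The mirror-image rule produced by the slippages rightmost -> leftmost
    # and successor -> predecessor. (The literal rule fails on xyz, since
    # z has no successor in this non-circular alphabet.)
    first = s[0]
    return chr(ord(first) - 1) + s[1:]

print(leftmost_predecessor('xyz'))  # wyz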

It should be clear by now that the key to analogy-making in this microworld (as well as in the real world) is what I am calling conceptual slippage. Finding appropriate conceptual slippages given the context at hand is the essence of finding a good analogy.

Being a Copycat.

Doug Hofstadter's plan was for me to write a computer program that could make analogies in the letter-string world by employing the same kinds of mechanisms that he believed are responsible for human analogy-making in general. He already had a name for this (as yet nonexistent) program: "Copycat." The idea is that analogy-making is a subtle form of imitation-for example, ijk needs to imitate what happened when abc changed to abd, using concepts relevant in its own context. Thus the program's job was to be a clever and creative copycat.

I began working on this project at MIT in the summer of 1984. That fall, Hofstadter started a new faculty position at the University of Michigan in Ann Arbor. I also moved there and enrolled as a Ph.D. student. It took a total of six years of working closely with Doug for me to construct the program he envisioned-the devil, of course, is in the details. Two results came out of this: a program that could make human-like analogies in its microworld, and (finally) my Ph.D.

How to Do the Right Thing.

To be an intelligent copycat, you first have to make sense of the object, event, or situation that you are "copycatting." When presented with a situation with many components and potential relations among components, be it a visual scene, a friend's story, or a scientific problem, how does a person (or how might a computer program) mentally explore the typically intractably huge number of possible ways of understanding what is going on and possible similarities to other situations?

The following are two opposite and equally implausible strategies, both to be rejected.

The first strategy: some possibilities are a priori absolutely excluded from being explored. For example, after an initial scan of mrrjjj, make a list of candidate concepts to explore (e.g., letter, group of letters, successor, predecessor, rightmost) and rigidly stick to it. The problem with this strategy, of course, is that it gives up flexibility. One or more concepts not immediately apparent as relevant to the situation (e.g., group length) might emerge later as being central.

The second strategy: all possibilities are equally available and easy to explore, so one can do an exhaustive search through all concepts and possible relationships that would ever be relevant in any situation. The problem with this strategy is that in real life there are always too many possibilities, and it's not even clear ahead of time what might constitute a possible concept for a given situation. If you hear a funny clacking noise in your engine and then your car won't start, you might give equal weight to the possibilities that (a) the timing belt has accidentally come off its bearings or (b) the timing belt is old and has broken. If for no special reason you give equal weight to the third possibility that your next-door neighbor has furtively cut your timing belt, you are a bit paranoid. If for no special reason you also give equal weight to the fourth possibility that the atoms making up your timing belt have quantum-tunneled into a parallel universe, you are a bit of a crackpot. If you continue and give equal weight to every other possibility ... well, you just can't, not with a finite brain. However, there is some chance you might be right about the malicious neighbor, and the quantum-tunneling possibility shouldn't be forever excluded from your cognitive capacities or you risk missing a Nobel prize.

The upshot is that all possibilities have to be potentially available, but they can't all be equally available. Counterintuitive possibilities (e.g., your malicious neighbor; quantum-tunneling) must be potentially available but must require significant pressure to be considered (e.g., you've heard complaints about your neighbor; you've just installed a quantum-tunneling device in your car; every other possibility that you have explored has turned out to be wrong).

The problem of finding an exploration strategy that achieves this goal has been solved many times in nature. For example, we saw this in chapter 12 in the way ant colonies forage for food: the shortest trails leading to the best food sources attain the strongest pheromone scent, and increasing numbers of ants follow these trails. However, at any given time, some ants are still following weaker, less plausible trails, and some ants are still foraging randomly, allowing for the possibility of new food sources to be found.

This is an example of needing to keep a balance between exploration and exploitation, which I mentioned in chapter 12. When promising possibilities are identified, they should be exploited at a rate and intensity related to their estimated promise, which is being continually updated. But at all times exploration for new possibilities should continue. The problem is how to allocate limited resources-be they ants, lymphocytes, enzymes, or thoughts-to different possibilities in a dynamic way that takes new information into account as it is obtained. Ant colonies have solved this problem by having large numbers of ants follow a combination of two strategies: continual random foraging combined with a simple feedback mechanism of preferentially following trails scented with pheromones and laying down additional pheromone while doing so.

The immune system also seems to maintain a near optimal balance between exploration and exploitation. We saw in chapter 12 how the immune system uses randomness to attain the potential for responding to virtually any pathogen it encounters. This potential is realized when an antigen activates a particular B cell and triggers the proliferation of that cell and the production of antibodies with increasing specificity for the antigen in question. Thus the immune system exploits the information it encounters in the form of antigens by allocating much of its resources toward targeting those antigens that are actually found to be present. But it always continues to explore additional possibilities that it might encounter by maintaining its huge repertoire of different B cells. Like ant colonies, the immune system combines randomness with highly directed behavior based on feedback.

Hofstadter proposed a scheme for exploring uncertain environments: the "parallel terraced scan," which I referred to in chapter 12. In this scheme many possibilities are explored in parallel, each being allocated resources according to feedback about its current promise, whose estimation is updated continually as new information is obtained. Like in an ant colony or the immune system, all possibilities have the potential to be explored, but at any given time only some are being actively explored, and not with equal resources. When a person (or ant colony or immune system) has little information about the situation facing it, the exploration of possibilities starts out being very random, highly parallel (many possibilities being considered at once) and unfocused: there is no pressure to explore any particular possibility more strongly than any other. As more and more information is obtained, exploration gradually becomes more focused (increasing resources are concentrated on a smaller number of possibilities) and less random: possibilities that have already been identified as promising are exploited. As in ant colonies and the immune system, in Copycat such an exploration strategy emerges from myriad interactions among simple components.
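The resource-allocation idea behind the parallel terraced scan can be sketched as follows. Every candidate remains available, but exploration steps are handed out in proportion to a continually re-estimated promise value, with a small floor that keeps even implausible candidates from being shut out completely. The names, numbers, and update rule here are invented for illustration; they are not Copycat's actual formulas.

import random

def terraced_scan(candidates, evaluate, rounds=100, floor=0.05):
    # Allocate exploration to candidates in proportion to their estimated
    # promise, which is updated by feedback as exploration proceeds.
    promise = {c: 1.0 for c in candidates}  # start unfocused: all equally promising
    for _ in range(rounds):
        total = sum(promise.values())
        weights = [max(promise[c] / total, floor) for c in candidates]
        chosen = random.choices(candidates, weights=weights, k=1)[0]
        promise[chosen] = evaluate(chosen, promise[chosen])  # feedback
    return promise

# Toy feedback: candidate "b" looks better and better the more it is explored.
def evaluate(candidate, current_promise):
    return current_promise * (1.2 if candidate == "b" else 0.95)

print(terraced_scan(["a", "b", "c"], evaluate))

Early on, the three candidates are explored almost evenly; as feedback accumulates, most of the steps go to "b", while "a" and "c" continue to receive occasional attention.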

Overview of the Copycat Program.

Copycat's task is to use the concepts it possesses to build perceptual structures-descriptions of objects, links between objects in the same string, groupings of objects in a string, and correspondences between objects in different strings-on top of the three "raw," unprocessed letter strings given to it in each problem. The structures the program builds represent its understanding of the problem and allow it to formulate a solution. Since for every problem the program starts out from exactly the same state with exactly the same set of concepts, its concepts have to be adaptable, in terms of their relevance and their associations with one another, to different situations. In a given problem, as the representation of a situation is constructed, associations arise and are considered in a probabilistic fashion according to a parallel terraced scan in which many routes toward understanding the situation are tested in parallel, each at a rate and to a depth reflecting ongoing evaluations of its promise.

Copycat's solution of letter-string analogy problems involves the interaction of the following components.

The Slipnet: A network of concepts, each of which consists of a central node surrounded by potential associations and slippages. A picture of some of the concepts and relationships in the current version of the program is given in figure 13.1. Each node in the Slipnet has a dynamic activation value that gives its current perceived relevance to the analogy problem at hand, which therefore changes as the program runs. Activation also spreads from a node to its conceptual neighbors and decays if not reinforced. Each link has a dynamic resistance value that gives its current resistance to slippage. This also changes as the program runs. The resistance of a link is inversely proportional to the activation of the node naming the link. For example, when opposite is highly active, the resistance to slippage between nodes linked by opposite links (e.g., successor and predecessor) is lowered, and the probability of such slippages is increased. (A rough code sketch of this activation-and-slippage bookkeeping is given after the description of the remaining components below.)

FIGURE 13.1. Part of Copycat's Slipnet. Each node is labeled with the concept it represents (e.g., A, Z, rightmost, successor). Some links between nodes (e.g., rightmost-leftmost) are connected to a label node giving the link's relationship (e.g., opposite). Each node has a dynamic activation value (not shown) and spreads activation to neighboring nodes. Activation decays if not reinforced. Each link has an intrinsic resistance to slippage, which decreases when the label node is activated.

The Workspace: A working area in which the letters composing the analogy problem reside and in which perceptual structures are built on top of the letters.

Codelets: Agents that continually explore possibilities for perceptual structures to build in the Workspace and, based on their findings, attempt to instantiate such structures. (The term codelet is meant to evoke the notion of a "small piece of code," just as the later term applet in Java is meant to evoke the notion of a small application program.) Teams of codelets cooperate and compete to construct perceptual structures defining relationships between objects (e.g., "b is the successor of a in abc," or "the two i's in iijjkk form a group," or "the b in abc corresponds to the group of j's in iijjkk," or "the c in abc corresponds to the k in kji"). Each team considers a particular possibility for structuring part of the world, and the resources (codelet time) allocated to each team depend on the promise of the structure it is trying to build, as assessed dynamically as exploration proceeds. In this way, a parallel terraced scan of possibilities emerges as the teams of codelets, via competition and cooperation, gradually build up a hierarchy of structures that defines the program's "understanding" of the situation with which it is faced.

Temperature: A measure of the amount of perceptual organization in the system. As in the physical world, high temperature corresponds to disorganization, and low temperature corresponds to a high degree of organization. In Copycat, temperature both measures organization and feeds back to control the degree of randomness with which codelets make decisions. When the temperature is high, reflecting little perceptual organization and little information on which to base decisions, codelets make their decisions more randomly. As perceptual structures are built and more information is obtained about what concepts are relevant and how to structure the perception of objects and relationships in the world, the temperature decreases, reflecting the presence of more information to guide decisions, and codelets make their decisions more deterministically.
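To make the Slipnet bookkeeping slightly more concrete, here is a drastically simplified sketch of a Slipnet-like node: nodes carry an activation that spreads to neighbors and decays if not reinforced, and a slippage along a link becomes more probable when the link's label node (such as opposite) is active. Every class name, constant, and formula below is invented for illustration; Copycat's real Slipnet is considerably more elaborate.

import random

class SlipnetNode:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0   # current perceived relevance, between 0 and 1
        self.neighbors = []     # (neighbor_node, link_label_node) pairs

    def activate(self, amount):
        self.activation = min(1.0, self.activation + amount)

    def spread_and_decay(self, spread=0.2, decay=0.1):
        # Spread a fraction of this node's activation to its conceptual
        # neighbors, then decay if not reinforced.
        for neighbor, _label in self.neighbors:
            neighbor.activate(spread * self.activation)
        self.activation = max(0.0, self.activation - decay)

def slippage_occurs(label_node):
    # Resistance to a slippage is taken here to fall as the activation of the
    # link's label node rises (e.g., an active "opposite" node makes the
    # successor-predecessor slippage more probable).
    resistance = 1.0 - label_node.activation
    return random.random() > resistance

opposite = SlipnetNode("opposite")
successor = SlipnetNode("successor")
predecessor = SlipnetNode("predecessor")
successor.neighbors.append((predecessor, opposite))

opposite.activate(0.8)            # "opposite" becomes highly active...
print(slippage_occurs(opposite))  # ...so this slippage is now fairly likely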

A Run of Copycat.

The best way to describe how these different components interact in Copycat is to display graphics from an actual run of the program. These graphics are produced in real time as the program runs. This section displays snapshots from a run of the program on abc → abd, mrrjjj → ?

Figure 13.2: The problem is presented. Displayed are: the Workspace (here, the as-yet unstructured letters of the analogy problem); a "thermometer" on the left that gives the current temperature (initially set at 100, its maximum value, reflecting the lack of any perceptual structures); and the number of codelets that have run so far (zero).

FIGURE 13.2.

FIGURE 13.3.

Figure 13.3: Thirty codelets have run and have investigated a variety of possible structures. Conceptually, codelets can be thought of as antlike agents, each one probabilistically following a path to explore but being guided by the paths laid down by other codelets. In this case the "paths" correspond to candidate perceptual structures. Candidate structures are proposed by codelets looking around at random for plausible descriptions, relationships, and groupings within strings, and correspondences between strings. A proposed structure becomes stronger as more and more codelets consider it and find it worthwhile. After a certain threshold of strength, the structure is considered to be "built" and can then influence subsequent structure building.

In figure 13.3, dotted lines and arcs represent structures in early stages of consideration; dashed lines and arcs represent structures in more serious stages of consideration; finally, solid lines and arcs represent structures that have been built. The speed at which proposed structures are considered depends on codelets' assessments of the promise of the structure. For example, the codelet that proposed the a-m correspondence rated it as highly promising because both objects are leftmost in their respective strings: identity relationships such as leftmost → leftmost are always strong. The codelet that proposed the a-j correspondence rated it much more weakly, since the mapping it is based on, leftmost → rightmost, is much weaker, especially given that opposite is not currently active. Thus the a-m correspondence is likely to be investigated more quickly than the less plausible a-j correspondence.

The temperature has gone down from 100 to 94 in response to the single built structure, the "sameness" link between the rightmost two j's in mrrjjj. This sameness link activated the node same in the Slipnet (not shown), which creates focused pressure in the form of specifically targeted codelets to look for instances of sameness elsewhere.

FIGURE 13.4.

Figure 13.4: Ninety-six codelets have run. The successorship fabric of abc has been built. Note that the proposed c-to-b predecessor link of figure 13.3 has been out-competed by a successor link. The two successor links in abc support each other: each is viewed as stronger due to the presence of the other, making rival predecessor links much less likely to destroy the successor links.

Two rival groups based on successors.h.i.+p links between letters are being considered: bc and abc (a whole-string group). These are represented by dotted or dashed rectangles around the letters in figure 13.4. Although bc got off to an early lead (it is dashed while the latter is only dotted), the group abc covers more objects in the string. This makes it stronger than bc-codelets will likely get around to testing it more quickly and will be more likely to build it than to build bc. A strong group, jjj, based on sameness is being considered in the bottom string.

Exploration of the crosswise a-j correspondence (dotted line in figure 13.3) has been aborted, since codelets that further investigated it found it too weak to be built. A c-j correspondence has been built (jagged vertical line); the mapping on which it is based (namely, both letters are rightmost in their respective strings) is given beneath it.

Since successor and sameness links have been built, along with an identity mapping (rightmost → rightmost), these nodes are highly active in the Slipnet and are creating focused pressure in the form of codelets to search explicitly for other instances of these concepts. For example, an identity mapping between the two leftmost letters is being considered.

FIGURE 13.5.

In response to the structures that have been built, the temperature has decreased to 76. The lower the temperature, the less random are the decisions made by codelets, so unlikely structures such as the bc group are even more unlikely to be built.
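The effect of temperature on codelet decisions can be sketched as a standard temperature-controlled choice: at high temperature a decision between rival structures is close to a coin flip, while at low temperature it leans heavily toward the structure that currently looks strongest. The softmax-style formula below is my own illustration of the idea, assumed for this sketch; it is not a claim about Copycat's exact decision rule, and the strength values are made up.

import math
import random

def codelet_choice(options, temperature):
    # Choose among (name, strength) pairs; higher temperature -> more random.
    t = max(temperature, 1e-6)   # temperature on the program's 0-100 scale
    weights = [math.exp(strength / t) for _, strength in options]
    return random.choices([name for name, _ in options], weights=weights, k=1)[0]

options = [("build the abc group", 80.0), ("build the bc group", 40.0)]
print(codelet_choice(options, temperature=100))  # close to a toss-up
print(codelet_choice(options, temperature=10))   # almost always the stronger group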

Figure 13.5: The abc and jjj groups have been built, represented by solid rectangles around the letters. For graphical clarity, the links between letters in a group are not displayed. The existence of these groups creates additional pressure to find new successorship and sameness groups, such as the rr sameness group that is being strongly considered. Groups, such as the jjj sameness group, become new objects in the string and can have their own descriptions, as well as links and correspondences to other objects. The capital J represents the object consisting of the jjj group; the abc group likewise is a new object but for clarity a single letter representing it is not displayed. Note that the length of a group is not automatically noticed by the program; it has to be noticed by codelets, just like other attributes of an object. Every time a group node (e.g., successor group, sameness group) is activated in the Slipnet, it spreads some activation to the node length. Thus length is now weakly activated and creating codelets to notice lengths, but these codelets are not urgent compared with others and none so far have run and noticed the lengths of groups.
