
7.

HOMEODYNAMICS.

Nothing endures but change.

-HERACLITUS.

WHY THINGS CHANGE.



What causes things to change? Is there any difference between things that are forced to change and things that change on their own? Do we need to think of causality differently in these two conditions? Since ancient times, people have assumed that when things change, they must have been induced to do so, and that lack of change, that is, stasis or stability, requires no explanation. Of course, the reality is not quite this simple. For the most part it does require intervention to change things, but this is because it is far more common to find things in a stable low-energy state or at rest than in an unstable changing state. So we normally think of causes as disturbances of otherwise stable states of affairs. When asked: "What caused X to happen?" we naturally assume that the state prior to X would have remained unchanged were it not for some perturbation, which we understand as the cause. Nevertheless, some changes happen spontaneously and may need to be actively or passively prevented from occurring. For example, objects suspended above the surface of the Earth will tend to fall unless supported, and even if propelled away from the Earth by the force of muscle or chemical explosion, unless they are accelerated to escape velocity-just under seven miles per second off the surface of the Earth-they will eventually reverse direction, fall back, and finally come to rest when stopped by the Earth's surface. We can prevent this only by providing constant propulsion or by erecting some support that halts the fall short of the Earth's surface.
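
For readers who want to check the escape velocity figure, here is a minimal sketch; the values for Newton's constant and the Earth's mass and radius are standard published figures, not given in the text:

```python
import math

# Escape velocity from the Earth's surface: v = sqrt(2*G*M/R)
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # mean radius of the Earth, m

v_escape = math.sqrt(2 * G * M / R)                 # in m/s
print(f"{v_escape:.0f} m/s = {v_escape / 1609.34:.2f} miles/s")
# -> about 11186 m/s, i.e. just under seven miles per second
```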

Not all spontaneous changes are movements. Most organic material that is no longer part of a living body, such as food left out in warm open air, will spontaneously decay. Of course, in the case of organic decay, the process is actively supported by the influence of bacteria and mold. But even without this active decomposition, aided by living organic processes, molecular structure and chemical composition will eventually degrade spontaneously due to the breakdown of unstable chemical bonds in warm, wet conditions. Spontaneous chemical breakdown of physical and molecular structure is nevertheless a function of micro-scale movement-the incessant jostling of molecules in air and water that allows chemical bonds to be broken and rearranged. This is why freezing can slow this process, as well as halt the actions of microorganisms.

Reflecting on the movement of physical bodies, Aristotle came to the commonsense conclusion that a moving object will persist in movement only so long as it is constantly pushed or pulled; otherwise, it will eventually stop moving of its own accord. So, from what he could see, it appeared that the natural state of things was to be at rest. The problem with this view, as was subsequently demonstrated, is that not all forms of change require continuous intervention and not all forms of stability are changeless. While objects flying through the air or pushed along the ground do tend to come to rest if not continually pushed, as every modern schoolchild soon learns, this is because they are slowed by the friction of the comparatively stable medium they are in contact with and through or over which they must move. Although even shortly after Aristotle some of his own students began to question his theory of persistent movement, the final refutation of this view can probably be traced to the medieval scholar John Buridan, who conceived of the concept of impetus-that attribute of a moving mass that intrinsically tends to perpetuate its movement unless resisted. Following the further refinement of this idea at the hands of such later geniuses as Galileo and Newton, modern textbooks inform us that if an object is not in motion, it takes a push to get it moving, but once set in motion it will continue until it meets some resistance. Does this mean that simple movement should not be considered a form of change? Or does this mean that some forms of change are not caused, at least in the colloquial sense of that word?

There is a partial analogue of this property of resistance to change in thermodynamics as well: the resistance of a thermodynamic system (a solid, liquid, or gas) to a change in state. Of course, the resistance of a physical object to any change in trajectory or velocity is not a statistical phenomenon. It is a simple single-value property, defined as its inertial mass-and precisely correlated with the pull that gravity has on it. To be inert is to be unchanging, and so in one sense inertial mass is merely a way of naming the extent to which an object resists being moved or altered in movement. The analogue in a thermodynamic system is also a single global property, which also is a function of motion, and likewise can be assessed in terms of resistance to change, or inertness. Thus what we might call thermodynamic inertia is exemplified by how difficult it is to modify a spontaneous trajectory of thermodynamic change toward equilibrium, or to induce change away from thermodynamic equilibrium.

A thermodynamic system in its equilibrium state is not at rest microscopically, only in terms of its global distribution of macroscopic features (e.g., temperature); but to cause this collective motion state to diverge from equi-distribution requires that this system interact with another that is in a different thermodynamic state (e.g., a hotter or colder system). Thus the interaction of two systems with different collective motion values modifies these values. This is the case whether both, one, or neither is in thermodynamic equilibrium. If one or both are in the process of spontaneous change toward equilibrium, interaction will alter these global rates and/or initiate change with respect to equilibrium. The case of perturbation from equilibrium can thus be analogized to the initiation of movement of an object from a state of rest, and the case of change in the rate at which a system changes toward equilibrium can be analogized to the change of velocity of a moving object. Each is due to collision or some other energetic interaction. Both a system in spontaneous change toward equilibrium and a system resting stably at equilibrium are in some sense non-perturbed frames of reference. They are in this way analogous to Galilean reference frames in constant unperturbed movement or at rest, respectively. So there is, in fact, a deep commonality, which derives from the fact that thermodynamic processes are themselves reflections of underlying microscopic dynamical processes.

In summary, we can draw a number of rough analogies between Newtonian dynamics and thermodynamics. First, the equilibrium state can be crudely analogized to a mass moving at constant velocity in a straight line, in the sense that a system in equilibrium is dynamically active, changing from state to state, and yet exhibiting no change in global (distributional) properties. Second, like a moving mass, a thermodynamic system at equilibrium will tend to maintain its dynamics within the same distributional parameters until perturbed; these parameters define its maximum entropy state. Third, like the inertia of a massive body, a thermodynamic system at equilibrium will resist being modified, with a degree of resistance proportional to the size of the collection of molecules that constitute it.

As with Aristotle's conception of constant motion, it is not surprising that there is a tendency to conceive of spontaneous thermodynamic change toward equilibrium as also being in some sense "pushed" to change. So, despite what we have been taught about the slowing of a projectile's velocity due to friction, Aristotle's commonsense interpretation still influences thinking about causality in other domains and in more abstract ways. It is still a tacit intuition that lurks behind many of the difficulties with notions of causality. For example, when schoolchildren are told that atoms are composed of particles (electrons) continually in motion "orbiting" a nucleus, this intuition prompts many students to question why these little dynamos don't eventually run down. What keeps the electrons orbiting forever? Or for that matter, what keeps planets orbiting forever (at least in the ideal case of not being influenced by other objects), or what would keep a top spinning in space once set in motion?

The reason that these phenomena are a bit counterintuitive is that we tend to confuse spontaneous change, which happens irrespective of any other influence, with non-spontaneous change, which requires something from outside to perturb things. If we want to say that any change must have a cause, then spontaneous change (e.g., constant linear motion, or the settling of an unequally heated gas into an equilibrium state) and non-spontaneous change (e.g., altering the velocity or direction of movement of a massive object, or unevenly heating the air in a room) cannot have the same kind of cause.

Although Aristotle's physics is often blamed for this fallacious denial of the spontaneity of continuous movement, this is only partly accurate. On careful reflection, there is another way to interpret Aristotle that is a bit more charitable-and suggestive. Since the Enlightenment, it has become the doctrine of Western science that there is only one form of causality in the world: that driven by energy and force. But, as we have already seen in chapter 1, Aristotle's view of causality was more pluralistic, involving four distinct conceptions of causality (material, efficient, formal, and final causality). The modern sense of what might best be called "motive" cause, implicit in Newton's mechanics, captures only one sense of Aristotle's way of thinking about change and causality. Most would equate it with his notion of efficient cause. To understand the other three, it is necessary to imagine ourselves living in his time before most of the discoveries of science that we now take for granted.

Aristotle's conception of causality assumed a cosmology that is quite different from our modern view. It was based on the four kinds of basic substance in the cosmology of his day-earth, water, air, and fire. Each form of matter was thought to have the intrinsic property of sinking or rising until it reached its natural level in his cosmic hierarchy (i.e., layered in the above order from the Earth upward). As a result, each was thought to exhibit a tendency to change position relative to other forms of matter until it eventually reached this locus of stability. By virtue of being in mixtures with other forms of matter, and being impeded in numerous other ways, these basic tendencies could produce intermediate effects, or remain potential and able to express themselves later, when less constrained. Though for modern science this whole fanciful cosmology is of merely historical interest, this very different method of approaching the question of why things change suggests a way to partially reconcile the Aristotelean pluralistic view of cause with the modern view, and to give his notion of causality a more charitable (though perhaps too charitable) reading.

For Aristotle, although any change could be attributed to a cause, this cause did not have to be efficient cause, in the modern sense of a force. Consider again Aristotle's cosmology. His explanation for why fire tended to rise upward through the air, and rain tended to fall down through it, did not require some efficient push or pull compelling these motions. These tendencies were, according to him, intrinsic to each substance with respect to its location in the cosmic hierarchy. Each of the four prime elements of matter had its own natural position in this hierarchy in comparison to the others, and each substance naturally tended to settle in a position that was most consistent with its composition. Each possessed an intrinsic disposition to change position, until it reached its natural position of balance relative to other forms of matter. It was their natural tendency, their spontaneous nature. If one wanted to mix things in ways that deviated from these natural tendencies, or move things in directions opposite to these natural tendencies-for example, raising stones upward on a hill or forcing air downward into water under a bowl-an efficient cause would be required. Otherwise, the cause was intrinsic to the material and its relation to the hierarchic geometry of the world.

In Newton's mechanics, however, all causes were defined in external terms, with the possible exception of gravitation. In this system, the explanation for persistent linear motion became effectively "no cause." It was a spontaneous tendency, no different than being at rest-a state that would persist until it was perturbed by an externally originating force. The reduction of causality to only one type eliminated the need for a causal theory of rectilinear movement. Thus, whereas Aristotle's notion of causality distinguished causes of spontaneous change from causes of forced change, modern thinkers since Galileo and Newton have assumed that spontaneous change, such as maintaining constant movement in a straight line, is not caused. Aristotle was, of course, conceiving of "cause" in a very different and more general sense. He probably would not have been satisfied with the response that this persistence needs no explanation, even if the effects of friction were to be explained. He would not have been satisfied merely to learn that in the absence of friction, there need be no push or pull. He would have wanted to know why.

To ask what kind of "cause" is responsible for the persistence of linear movement might better be phrased: What explains why linear motion persists when there is no interference? To reply, as Galileo and Newton reasoned, that all internal relationships within that frame of reference can't be distinguished from what would be the case at rest doesn't quite answer Aristotle's question. What would answer it?

Notice that what Galileo recognized was that the geometry of the trajectories and dynamical interactions (like collisions) of events that take place within such an inertial system is the same, irrespective of this movement. The ball tossed up from your seat in a moving jet plane drops back down into your lap as if you were sitting still. In fact, to sit still on the surface of the Earth (near the equator) is also to be rotating with it at faster than the speed of sound. So, to speculatively extrapolate from Aristotle's analysis in light of this fact, we might describe the cause of this indistinguishability in something like geometric terms, as being formally caused.

Indeed, it wasn't until Einstein's theory of relativity that modern science was able to more precisely explain why a constantly moving inertial frame is effectively no different from one that is at rest. Specifically, Einstein's account of gravitation in his General Theory of Relativity can be seen as resolving both issues in geometric terms. The geometry of space-time is effectively "curved" in an accelerated frame of reference, whether by gravitation or the application of a force, but linear otherwise. Thus in Einstein's famous analogy of standing in an elevator in space that is being constantly accelerated at 32 feet per second every second, throwing a ball would cause it to behave as though in the Earth's gravitational field. Watching the ball's movement from outside, one would see it moving in a straight line, while observed from inside it would follow a parabolic path. In these frames of reference, the motion is described as following a geodesic trajectory. This means that it follows the minimal path between two points, whether the space is linear or warped in some way. So, observing events in an accelerated frame of reference transforms the geometry of space-time from a Euclidean (linear) to a non-Euclidean (curved) form. The patterns of change within that frame are systematically curved, as though the geometry is warped. This makes them clearly distinguishable from what would occur at rest or in constant linear motion. Toss up a ball in an accelerating car and it won't simply fall back into your lap, because at the point it was tossed you were going slower than when it falls back down.
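
To see this geometry in miniature, here is a minimal kinematic sketch; the launch velocities and time steps are illustrative assumptions, not values from the text:

```python
# A ball is thrown inside an elevator accelerating "upward" at a = 32 ft/s^2.
# In the inertial frame the free ball moves in a straight line; subtracting
# the elevator's motion reveals the parabolic path seen from inside.
a = 32.0              # elevator acceleration, ft/s^2
vx, vy = 10.0, 16.0   # ball's launch velocity in the inertial frame, ft/s

for step in range(5):
    t = step * 0.25                    # seconds since the throw
    x = vx * t                         # straight-line motion in the inertial frame
    y = vy * t
    y_inside = y - 0.5 * a * t * t     # height relative to the accelerating floor
    print(f"t={t:.2f}s  inertial y={y:6.2f} ft  seen from inside y={y_inside:6.2f} ft")
# The "inside" heights rise and fall along a parabola, exactly as for a ball
# tossed in a gravitational field with g of about 32 ft/s^2.
```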

The problem of explaining gravity was key to Einstein's insight because, unlike the acceleration of an object by the constant application of energy, a spacecraft orbiting the Earth is following a curved path, and yet inside it things behave as though the craft were at rest in empty space. Even though the orbit follows a curved trajectory around the Earth, there is no sensation of being accelerated off a linear path, as would be experienced if the same curved trajectory were to be generated by rocket propulsion. As Einstein reasoned, if one considers the geometry of the Earth's gravitational frame to be warped, this curvilinear motion is equivalent to linearity in a Euclidean frame.

Whereas the straight-trajectory case could be ignored, this could not. It demanded a theory of what causes curved trajectories of change to persist spontaneously in this context. Newton's explanation, that it was a force that acted at a distance, was troublesome because mechanical forces are generated at the cost of work, but gravity just is, so to speak.1 The explanation that Einstein provided was essentially geometric: space-time is itself warped near a gravitational mass.2 According to Einstein's account, falling in a gravitational field is motion that is linear with respect to the curved space of unforced trajectories. So, following a curved path (such as an orbit) due to gravity is not intrinsically distinguishable from moving in a straight line unperturbed by any force. On reflection, then, the same can be said of linear motion in a non-warped (Euclidean) space of possibilities for change. It is in this sense (taking advantage of a bit of revisionist license) that we might say that Einstein provided something very much like a formal causal account of spontaneous dynamical tendencies.

For Aristotle, fire doesn't need to be efficiently pushed upward, because that is where it belongs in the geometry of his cosmos, and it possesses an intrinsic tendency to change location until it settles into this natural position. For Einstein, no application of force is needed to bend the space vehicle's path around the Earth, because that too is consistent with the local geometry of the cosmos. In ancient Chinese philosophy, this might be described as following the Tao, the natural path, an unforced trajectory.

But when things change in non-spontaneous ways, they must be caused to do so extrinsically. When objects are accelerated, or bumped off course by collision, or heated to a different temperature, or broken up from previously stable configurations, they have been acted upon from the outside. All forms of non-spontaneous change require a non-geometric form of causality-one that is more familiar to modern minds. This is the realm of force, energy, and work. Surprisingly, these concepts are often misunderstood, and to some extent are even mysterious (as you can discover by trying to find a precise account of what energy actually is, as opposed to what it does or how it can be transformed; see below). This is not problematic for physics or chemistry, but it becomes an impediment to explaining ententional phenomena, precisely because this requires reintegrating concepts of forced and spontaneous change into a more general logic of change. Before we can lay the groundwork for a more subtle approach to the emergence of the distinctive twists of causal logic that appear in the world with living and thinking beings, we need to unravel some of these potential sources of confusion.

A BRIEF HISTORY OF ENERGY.

In hindsight, it might seem surprising that it took two centuries after Isaac Newton had described the laws of mechanical interactions in precise mathematical terms to finally make sense of the concept of energy. After all, it was implicit in the concepts of force and work, and these could be precisely measured and computed from simple equations involving masses being accelerated or stopped or moved against a resistance. And it was the taming of energy to do work through water, wind, and the heat of combustion that characterized the beginnings of the Industrial Revolution during those centuries. But as ubiquitous as its use has become today, the term energy wasn't even coined as a name for this familiar scientific concept until 1807. So why was it so counterintuitive?

Actually, despite a preoccupation with energy in our present age-as we consume it, pay for it, drill into the Earth and split atoms in search of it-it is still a difficult concept to grasp in a precise sense. Mostly, we think of energy in the context of the heat and pressure produced by fuel being burned to power machines to move or manufacture things, or else as a kind of invisible substance that flows through electrical wires from machines powered by burning fuels, the weight of falling water, or the pressure of wind, and is used to illuminate dark rooms or animate computers. We find that it can be stored in elevated water or in the potential chemical reactions within electric batteries. But the problem with these conceptions is that they are not quite consistent with one of the most basic laws of physics: energy cannot be created or destroyed in the processes we are talking about. When we burn fossil fuels to propel our automobiles or cook our meals, we are "using up" the fuel, breaking its chemical bonds, and often then just dumping the exhaust into the surrounding air; but the energy is not used up, it is merely transformed from one form to another. What is depleted or lost is the ability to do more work with that energy subsequently. As the chemical bonds are broken and the heat of combustion is dissipated into the surroundings, it becomes less available for another use. The energy is still there. It's just more widely distributed than before, with the result that our ability to use it to do further work has been diminished or lost altogether.

Our contemporary folk understanding of energy is in many ways still held hostage to late eighteenth- and early nineteenth-century misconceptions that it is something that can be stored and used up, even though we no longer view it as an invisible substance that makes flammable things volatile and hot things hot. In the mid-eighteenth century, it was thought that things that could burst into flames contained an invisible substance called "phlogiston." Then, in 1783, the French chemist Antoine Lavoisier demonstrated that a substance in the air, oxygen, was critical for combustion: rather than phlogiston being released from the burning material, oxygen was being used up, combining with the substance as it was heated to combustion. What still needed explaining, he argued, was the heat of this process.

But heat was even more troublesome. Since it could be transferred from solid object to solid object simply by contact, it could not merely be an invisible substance in air in the way that oxygen was. Analogous to the passing of static electric charge from object to object by contact, many scientists argued that the transfer of heat occurred due to the transfer of an intangible ether, and they gave this intangible substance the name "caloric." However, there was a basic problem with this way of thinking. This supposedly invisible, intangible substance could be transformed into phenomena that didn't quite fit the characterization-such as movement. For example, it was apparent that while moving objects could exchange their momentum, as Newton had precisely calculated, their momentum could also be turned into heat, and vice versa. The heat of combustion could be harnessed to produce motion, and the friction produced by the motion of one object against another could produce heat. This troublesome fact was driven home by Benjamin Thompson in 1798. Thompson was curious about the reason that so much heat was generated during the boring of cannons. In measuring this heat, he noticed that there was a correspondence between the amount of boring and the amount of heat. This led him to conclude that heat must therefore be a form of mechanical motion as well.

By the middle of the nineteenth century it was clear that moving masses, heat, chemical reactions, light, and even electricity could all be transformed one into another. There needed to be a way to describe what they all shared in common. It was the invention of the steam engine that provided scholars interested in this problem with a machine that systematically transformed the heat of combustion into mechanical forces that could be precisely measured. As this new general-purpose engine began to fuel the Industrial Revolution, it became a practical matter to understand the relationship that linked these physical quantities. A number of theorists concerned with this issue at the beginning of the century had speculated that such transformations must be at least partially reversible. This suggested that nothing might actually be lost in such transformations: heat might be transformed into motion and then the motion back into heat, and so on. But it was Sadi Carnot's analysis of the cyclic transformation of heat to mechanical energy in heat engine operation, in 1824, that set the stage for pulling apart the problems of the conservation and loss of the capacity to do work in these transformations. He showed that despite this theoretical intertransformability of heat and mechanical motion, in the real world there would always be an inevitable failure to achieve lossless transformation. This was the beginning of the realization that a perpetual motion machine would be impossible.

The critical step was showing how this transformation process could be precisely quantified. The man who did this was the English physicist James Prescott Joule. Joule began his studies by exploring the relative economics of steam engines and the newly invented electric motor as potential alternatives for industrial applications, specifically with respect to his family's brewing business. This analysis led him to quickly recognize that deriving mechanical power from burning coal was far more economical than deriving it from the chemistry of batteries. His interest in precisely measuring electrical, chemical, mechanical, and heating processes led him to build a simple device to study how the stirring of a liquid could raise its temperature. In 1843, he showed that this increase was precisely proportional to the amount of time and effort put into the stirring, which he measured by having the stirrer driven by a string unwound by a falling suspended weight.

He thus identified the precise correspondence for converting units of heat into units of mechanical work: the distance a weight is moved against gravity, which, since gravity exerts a constant acceleration, translates into a constant force times a distance-Newton's measure of mechanical work. Thus Joule was able to measure what was transformed from movement into heat. This could be extrapolated into a conversion that yielded a universal unit of "economical duty," as he called it: the "foot-pound," the work it takes to raise one pound against the force of gravity to a height of one foot. The standard unit of energy now bears his name, defined in metric terms as the work done by a force of one newton acting through one meter; one foot-pound comes to roughly 1.36 joules.
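
Since the text compares these units, a small arithmetic sketch may help; the conversion factors are standard values, not from the text:

```python
# Converting Joule's "foot-pound" unit of work into modern SI joules.
POUND_KG = 0.45359237   # mass of one pound, in kg
G = 9.80665             # standard gravitational acceleration, m/s^2
FOOT_M = 0.3048         # one foot, in meters

foot_pound = POUND_KG * G * FOOT_M            # work = force x distance
print(f"1 foot-pound = {foot_pound:.4f} J")   # about 1.3558 J

# Joule's paddle-wheel result, restated in modern units: about 4.19 J of
# mechanical work raises the temperature of one gram of water by 1 degree C.
print(f"1 calorie = {4.186 / foot_pound:.2f} foot-pounds of work")
```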

Identifying a common unit of measure opened the door to definitively answering the question: What gets preserved and what gets used up in these transformations? The surprising answer turned out to be that nothing gets eliminated, but not everything is retrievable. What made this so counterintuitive was that while in every case there was a physical substance involved-falling water, spontaneous chemical reactions, heated gases, masses in motion-these were not being depleted in the process of being transformed one into the other. So the ability of these processes to change the state of things-to do work-must be due to something other than just the materials involved. For example, unlike the movement of pressurized gas from one container to another, heat transferred from one solid structure to another does not require the movement of some substance (e.g., caloric) from one to the other, and yet both the movement of a gas and the passive transmission of heat from place to place can turn factory wheels and move masses. What is common to both the spontaneous movement of materials from place to place and the spontaneous transfer of heat between substances is that both processes involve the reduction of a difference in the distribution of some quantity of some physical property. This property seemed always to be something dynamical, like movement, or potentially dynamical, like the tension of a compressed spring. We know now that in the case of heat being transferred without any material transfer, it is the redistribution of the translational and vibrational movements of the molecules of the one body to the molecules of the other, like the vibrations of a tuning fork being transferred to the body of a violin by holding them in tight contact.

So, although something does indeed "move" in these cases, it needn't be a substance, or an intangible ether. What does move? Two important hints to the answer are: (1) that it moves of its own accord; and (2) that it involves a difference in some feature of the materials (more generally, of some medium, such as an electromagnetic field) at one locus compared to the corresponding feature at a neighboring locus. If this difference involves something dynamic and changeable in the one, it will tend to spontaneously redistribute, becoming more evenly distributed into the neighboring vicinity to the extent that it is not impeded from doing so. This difference can take many forms: differences in the average speed of molecules, differences in the average density of molecules, differences in the concentrations of different kinds of molecules, differences in electric potential, and so on. When things aren't equal, and aren't prevented from distributing or propagating into neighboring spaces, they will tend to spontaneously even out, through whatever pathways present themselves. You might say that nature is ultimately unbiased.

We often tend to describe the process of energy transformation in overly concrete terms. It is common to describe such transformations as involving "a flow of energy" from one place to another, as when describing electricity flowing from the power plant to one's home. And we talk about resources that we use as "energy sources," such as the chemical bond energy liberated by burning fossil fuels, as though it is something intrinsic to that material. This language tends to perpetuate the substance misconception, which was implicit in the concept of caloric. What "flows downhill," so to speak, in these transformations, is basically just a "difference." If it is heat in one subregion of a liquid or gas, then a change in the average velocity and vibration of molecules propagates along the gradient of difference until that gradient is eliminated. If it is a difference in pressure, it is the average distance between molecules that propagates. If it is electricity, it is a difference in relative charge that propagates along a conductor. And although there can be material that embodies these differences which may move as well, such as in the flow of water or wind, it is a difference in some attribute they embody that matters (e.g., elevation or pressure).

The term energy was introduced into these discussions in 1807 by Thomas Young, and it began to be favored over such terms as caloric by the mid-nineteenth century because of its more generic application beyond heat. The term was already in use in English, for example, in reference to "agitated" speech, and was originally derived from the Greek energeia, combining the root ergon (for "activity" or "work") and the prefix en- (for "in" or "at"). Although the modern technical sense of the term reentered physics in the nineteenth century, Aristotle is often credited with the first use of this Greek word in a scientific sense to mean something like "vigor." So the etymology of "energy" effectively defines it with respect to work-as its intrinsic source. Although in a slightly metaphoric sense we can say that the capacity to do work flows across a gradient, it might be more accurate to say that the capacity to do work is a gradient across which there is a tendency to even out and dissipate. Energy is more accurately, then, a relationship of difference or asymmetry, embodied in some substrate, which is spontaneously unstable and self-eliminating-a tendency described by the second law.

In line with the figure-background shift of emphasis that I have been advocating, I suggest that the key to understanding what energy is lies in no longer focusing on the stuff that embodies it, and instead considering the form that is embodied. In the most abstract sense, energy is a relationship of difference that tends to eliminate itself. It can be more accurately described as a relationship of difference distributed in some substrate that will spontaneously tend to even out if unimpeded. The substrate can be anything from a heated solid object to a perturbed electromagnetic field. The difference can be as trivial as the concentration difference between the water molecules in a glass and those of a drop of ink placed in it. In a typical thermodynamic system this tendency to even out, described by the second law of thermodynamics, derives from the statistical asymmetry implicit in the details of spontaneous interactions of component particles (e.g., ink molecules). Iterated over time, this generates a colossally asymmetric tendency that is exhausted only when the average motions and positions of the molecules have become sufficiently uncorrelated with respect to each other that neither an increase nor a decrease in the correlations is more likely.

More important, any differences that tend to eliminate themselves can be a source of energy. But only if this spontaneous tendency is in some way constrained.

FALLING AND FORCING.

The difference between spontaneous and non-spontaneous change is one of the most common features of everyday experience. It is also the basis of what is often described as "broken symmetry" in the world. Some changes are symmetric in the sense that they are just as natural run forwards as backwards. This is approximately the case with the collision of two billiard balls in the absence of friction. Play the movie of a collision both forwards and backwards and it will be difficult to discern which is the "real" version. This is because Newton's laws of motion are symmetric. But if the movie involves fifteen balls colliding with one another, it will often be quite obvious which sequence is shown forwards and which is shown backwards. And this doesn't depend on energy being added or friction slowing the velocities. Even on an imaginary frictionless billiard table, this can be discerned so long as one begins with an asymmetrically organized arrangement. Thus the breaking up of a symmetrically organized triangular array of balls (as in a pool "break"), so that they become scattered and careen around the table, will be the obvious forward direction, whereas a reversal of this movie will appear quite unnatural. However, if we begin the movie at progressively later and later points, after each ball has had an opportunity to haphazardly interact dozens of times with different balls at different positions around the table, it will become progressively more difficult to discern forward- from backward-running movies (Figure 7.1).

The early phases of such a process are familiar to our everyday experience. Mixing things up is far easier, and far more likely, than unmixing them. And once mixed up, things tend to stay that way. This spontaneous asymmetry in the pattern of dynamical change is the essence of the second law of thermodynamics. And as we have also seen, it is with respect to this spontaneous asymmetry of change that emergent processes are typically defined. So before we can make complete sense of the dynamics that produces emergent phenomena, it is important to be more precise about what makes this form of spontaneous asymmetric change similar to and different from the kind of spontaneous change that is exhibited by a body in constant linear motion.

FIGURE 7.1: A cartoon characterization of the asymmetry implicit in thermodynamic change from a constrained ("ordered") state to a less constrained ("disordered") state, which tends to occur spontaneously (an orthograde process), contrasted with the reversed direction of change, which does not tend to occur spontaneously (a contragrade process), and so only tends to occur in response to the imposition of highly constrained external work (arrows in the image on the right).

The conundrum that heat posed for nineteenth-century physicists was explaining how the dynamics of individually colliding billiard balls could be symmetric with respect to time, and thus theoretically reversible, and yet their collective behavior is not. The answer was first glimpsed by James Clerk Maxwell and later applied to thermodynamic processes by Ludwig Boltzmann in the last half of the nineteenth century. Basically, each collision results in changing the velocity and direction of each object (billiard ball or molecule), and as more and more collisions ensue, the velocities and directions of movement will become more and more divergent from original values, analogous to the way that shuffling a deck of cards causes any two cards that were once close together to progressively get more and more separated, until they vary around the average degree of separation for any two cards. Similarly, as each billiard ball or molecule interacts again and again with others, local correlations of position and movement get progressively redistributed. As the collection continues to interact (assuming no loss due to friction), it will vacillate around the average distribution; but it will be astronomically unlikely to pass through a highly regularized state (like a stationary triangular arrangement) ever again.
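
To make the card-shuffling analogy concrete, here is a minimal simulation; the riffle-style shuffle model, deck size, and trial counts are illustrative assumptions:

```python
import random

# Track how far apart two initially adjacent cards drift as a deck of 52 is
# repeatedly shuffled with a crude riffle: cut the deck, then interleave the
# two halves in random order.
DECK, TRIALS = 52, 2000

def avg_separation(shuffles):
    total = 0
    for _ in range(TRIALS):
        deck = list(range(DECK))          # cards 0 and 1 start out adjacent
        for _ in range(shuffles):
            cut = random.randint(20, 32)
            left, right = deck[:cut], deck[cut:]
            merged = []
            while left or right:
                pile = left if (left and (not right or random.random() < 0.5)) else right
                merged.append(pile.pop(0))
            deck = merged
        total += abs(deck.index(0) - deck.index(1))
    return total / TRIALS

for n in (0, 1, 2, 4, 8):
    print(f"{n} shuffles: average separation = {avg_separation(n):.1f}")
# The separation climbs toward the average for two randomly placed cards
# (about 17.7 in a 52-card deck) and then merely fluctuates around it:
# correlations, once lost, stay lost.
```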

This too is a geometric effect, but in this case it involves the geometry of the probable paths of change, not the geometry of s.p.a.ce-time. It exemplifies the fact that the universe has a deeply asymmetric predisposition when it comes to any process involving many components. The probability that any interaction will reflect this asymmetric bias is proportional to the number of interacting components or features.

The Newtonian collision between two objects is thus the limiting case, not the rule. It is generally possible to discern which way the movie of a dynamical interaction between many objects is being played, because some large-scale distributional features become very much more likely than others. This becomes evident even with only a handful of objects, such as the balls on a billiard table. As we add more and more objects and interactions, this asymmetry grows rapidly, quickly reaching the point where the probabilities are effectively indistinguishable from 0 and 1 (impossibility and certainty). This is the case for most thermodynamic systems, since in even a minuscule volume of liquid or gas we may be dealing with billions of molecules, interacting with each other billions of times each second. So any human-scale thermodynamic process reflects this bias with essential certainty. These large numbers guarantee that "more-is-different" with near certainty in any microscopic-to-macroscopic comparison. This more-is-different effect will also turn out to be a key factor in the explanation of emergent phenomena, which in all cases involve significant increases in scale and a corresponding compounding of lower-level interaction effects.
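
A minimal counting sketch can show how quickly these probabilities saturate; the box-and-molecule setup and the 45-55 percent window are my illustrative choices:

```python
from math import comb, ceil, floor

# N molecules in a box, each independently on the left or right side.
# Count the fraction of all 2**N equally likely microstates whose left-side
# occupancy lies within 45-55% of N: near-even macrostates come to dominate.
for n in (10, 100, 1000):
    near_even = sum(comb(n, k) for k in range(ceil(0.45 * n), floor(0.55 * n) + 1))
    print(f"N={n}: fraction of microstates near 50/50 = {near_even / 2**n:.4f}")
# N=10 -> ~0.25, N=100 -> ~0.73, N=1000 -> ~0.998: as components multiply,
# departures from the even distribution slide from unlikely toward impossible.
```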

Consider one of the most familiar of thermodynamic processes, dissolving some solid-say, a cube of sugar-in a container of water. At normal temperatures, a small cube of sugar dissolves naturally and effortlessly with a little patience. It is a spontaneous process. Separating it out again, however, can be incredibly complicated, laborious, and time-consuming. Even employing the most sophisticated of physical-chemical purification processes, you will never fully retrieve the amount and structure that you began with. We basically don't have to intervene in the dissolving of sugar in water, unless we want to make the process go faster (by stirring or heating) than it would occur spontaneously, but any process that exactly reverses the original mixing will be decidedly non-spontaneous, often requiring considerable outside intervention using highly contrived means. The partial exception to this takes advantage of another spontaneous process-evaporation-though it only separates the water from the sugar and doesn't produce tiny crystals organized into a cube.
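
As a rough quantitative gloss on why the reverse process never happens on its own, consider the chance that all the dissolved molecules wander back at once; this half-glass model is a deliberate oversimplification of my own, not the text's:

```python
# If each dissolved molecule independently has a 1-in-2 chance of being in
# the half of the glass where the cube sat, the probability that ALL are
# there simultaneously shrinks geometrically with the number of molecules.
for n in (10, 100, 1000):
    print(f"{n} molecules: probability = 2**-{n} ~ {2.0 ** -n:.2e}")
# A real sugar cube contains on the order of 10**21 molecules, so even this
# crude bound puts the probability around 10**(-3e20): never, in practice.
```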

We take this sort of causal asymmetry for granted, recognizing that some forms of change are spontaneous and resistant to intervention, while others require intervention to force them to occur because they are resistant to change. Both kinds of change occur because of certain forms of interactions and how they are constrained and biased by the conditions within which they occur. So it shouldn't abuse the meaning of "cause-to-happen" to say that both are caused, even though their consequences are quite dichotomous. This is made explicit in the classic thermodynamic model system: a gas in a container that can be isolated from all outside influences. Asymmetrically heat the container, using an external heat source, and the majority of molecules in one region are forced to move more rapidly than the majority in some other region. But as soon as the external influence is removed, the gas will begin an inevitable transition back to equilibrium, redistributing these local differences. In the one case the cause must be imposed from without for change to occur; in the other the cause is intrinsic: change will happen unless it is impeded by outside intervention. So, in commonsense terms, we say that some things happen "naturally," while other things don't.

For general purposes, then, it would be useful to distinguish between changes that must be forced to occur through extrinsic intervention and those that require intervention to prevent them from occurring. Surprisingly, there are no terms that characterize this difference. In order to facilitate the discussion of how they contribute to the range of dynamical processes we will encounter in the effort to explain emergent phenomena, I offer two neologisms: I will call changes in the state of a system that are consistent with the spontaneous, "natural" tendency to change, irrespective of external interference, orthograde changes. The term literally refers to going with the grade or tilt or tendency of things, as in falling, or "going along with the flow." In contrast, I will call changes in the state of a system that must be extrinsically forced, because they run counter to orthograde tendencies, contragrade changes.

The usefulness of this distinction may appear to be minimal at this stage of the analysis, since I have merely renamed things that are already familiar. Nevertheless, its value will become clear later, once we begin dealing with processes that no longer exhibit simple thermodynamic tendencies.

Because we tend to consider orthograde changes "natural," we might be tempted to describe contragrade changes as somehow unnatural. This turns out to be a misleading dichotomy because, as we'll see, even though contragrade changes are not spontaneous and intrinsic, they are in no way unnatural or artificial. They are merely the result of the interaction between contrasting orthograde processes. Because the world is structured and not uniform, and because there are many distinct dimensions of orthograde change possible (involving different properties of things, such as temperature, mass, movement, electric charge, structural form, etc.), certain of these tendencies can interact in relative isolation from others.3 Contragrade change is the natural consequence of one orthograde process influencing a different orthograde process-for example, via some intervening medium. This implies that in one sense all change ultimately originates from spontaneous processes. It is simply because the world is highly heterogeneous that there can be contragrade processes. Thus, although orthograde processes are the basis for all change, the ortho/contra distinction is not artificial.

More precisely, then, we can also distinguish orthograde and contragrade processes in terms of relative causal isolation, which also distinguishes intrinsic from extrinsic causal influences. A composite system that is isolated from outside interactions will intrinsically exhibit orthograde change, but not contragrade change. Contragrade change is only possible with respect to extrinsic relationships between systems with different orthograde tendencies. This is just a restatement of the isolation conditions for the second law of thermodynamics. But as we will see subsequently, reframing thermodynamic processes in orthograde/contragrade terms provides a much more general distinction, one that will be useful beyond both Newtonian and thermodynamic analyses.

How might we apply this distinction to more classical approaches to dynamical processes? Consider Newton's laws of motion. A mass moving in a straight line with a constant velocity can be described as undergoing orthograde change (of position), whereas a mass acted on by a force and altered in velocity and direction can be described as exhibiting contragrade change (for the duration of the period in which that force is acting). In a simple thermodynamic system, change toward equilibrium is orthograde, while change away from equilibrium is contragrade. So the case of sugar spontaneously dissolving in a container of water exemplifies an orthograde change, whereas the processes that a chemist might employ to extract this sugar again would involve contragrade processes.

The more general value of designating terms for these contrary "orientations" of change is that it can help us to distinguish the different ways we tend to use the concept of causality. While both the spontaneous dissolving of sugar in water and its extraction by chemical means are changes that are caused in a generic sense, what we mean by "cause" is quite different in these two situations. Both are consistent with the laws of physics and chemistry; they nevertheless differ radically in how these causal influences are organized.

In the case of processes like the dissolving and diffusion of sugar molecules in water, this distinction also is relative to scale. The dislodging of a sugar molecule from its crystal lattice is the result of microscopic contragrade processes. The molecular collisions and electrochemical interactions between water and sugar molecules are contragrade, because the interaction among these molecules changes their spontaneous motions and ranges of movement. The continually diverging diffusion of sugar molecules into the surrounding water is also the result of contragrade dynamics. Each of the innumerable collisions that results in changes in a molecule's velocity and direction of movement is a contragrade event.

The trajectories of sugar molecules as they interact with neighboring water molecules are more likely to expand into new territories. This tendency effectively reflects the geometry of the situation: there are more ways for molecules to diverge than to converge in their relative locations. As every molecule is bumped nanosecond-to-nanosecond onto a new path, each molecule's new position gets more and more superimposed on the others' former positions, while their velocities and directions of motion also sample values once exhibited by others. This tendency of interacting molecules to wander into each other's spatial positions and dynamical values is responsible for the orthograde dynamic that characterizes the global change toward equilibrium. In this way, contragrade dynamics at one level produce orthograde dynamics at the higher level.

Another merit of describing change in these complementary terms is that it gives new meaning to the defining property of matter-a resistance to change-and the defining property of energy-that which is required to overcome resistance to change. Since orthograde processes ensue spontaneously, they are ubiquitously present, even during processes of contragrade (forced) change. A contragrade change must therefore derive from two or more orthograde processes, each in some way undoing the other's effects. To put this in the terms introduced in the previous chapter, each must constrain the other. The tendency of one orthograde process to realize the full range of its degrees of freedom (e.g., the diffusion into all potential locations) must diminish the tendency of another orthograde process to realize all its potential degrees of freedom. This is easily demonstrated for interacting thermodynamic systems otherwise isolated from outside influences. Thus, for example, placing an object in contact with another object that has a different specific heat will result in their combined development toward a different maximum entropy value than either would reach had they remained isolated from one another. This is because there will be a net asymmetric redistribution of the molecular motions in the two materials, in which the rate of orthograde change of one will accelerate and the other will decelerate with respect to their prior rates of change, since the one is now further from and the other closer to the maximum entropy state than before. One is thereby relatively deconstrained and the other relatively constrained in its domain of possible states. Resistance to change is thus a signature of the additive and canceling effects of interacting orthograde dynamics. In this respect, it is again roughly analogous to the composition of momenta of colliding objects in Newtonian mechanics.4

Using this insight, we can now redefine the concept of constraint in orthograde and contragrade terms. Constraints are defined with respect to orthograde maxima, that is, the point at which an orthograde dynamic change is no longer asymmetrical. A constrained orthograde process is thus one in which certain dimensions of change are not available. This can be the result of extrinsic bounds on these values, such as might be imposed by the walls of a container, or the result of contragrade processes countering an orthograde change. Interestingly, as we will see in the next chapter, this means that contragrade processes at one level can generate the conditions for a higher-level orthograde process of constraint generation.
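
To make the specific-heat case concrete, here is a small worked sketch; the materials and numbers are illustrative assumptions, not the author's:

```python
# Two bodies in contact exchange heat until they share one temperature.
# Heat lost by the hot body equals heat gained by the cold one, so
#   T_final = (m1*c1*T1 + m2*c2*T2) / (m1*c1 + m2*c2)
m1, c1, T1 = 1.0, 385.0, 90.0     # 1 kg of copper at 90 C (c in J/(kg*K))
m2, c2, T2 = 1.0, 4186.0, 20.0    # 1 kg of water at 20 C

T_final = (m1 * c1 * T1 + m2 * c2 * T2) / (m1 * c1 + m2 * c2)
print(f"shared equilibrium temperature: {T_final:.1f} C")   # about 25.9 C
# The water (large heat capacity) barely warms while the copper cools a lot:
# each body's orthograde drift is reshaped by contact with the other.
```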

One benefit of articulating this orthograde/contragrade distinction is that it provides a language for describing the dynamical relationships that link different levels of a process. In classical emergentist terms, we could even say that in the case of close-to-equilibrium thermodynamics, the orthograde increase in entropy is supervenient on the cumulative effects of the contragrade dynamics of incessant molecular interactions. This suggests that the orthograde/contragrade distinction may offer a useful way to reframe the emergence problem. Indeed, ultimately we will discover that a fundamental reversal of orthograde processes is a defining attribute of an emergent transition.

Under some circumstances (which will be the focus of much of our subsequent analysis), an extrinsically perturbing dynamical influence can be highly stable, thus providing an incessant contragrade influence. This is, for example, the condition of the Earth as a whole, where the perturbations provided by constant low-level solar radiation over billions of years have made life a possibility. The constancy of this source of contragrade influence is a critical factor, since its stability has allowed the formation of a vast web of dissipative pathways through which energy is released. While in transit through terrestrial substrates, this persistent perturbation has been available to drive chemical reactions in a myriad of contragrade directions. The result is that otherwise unlikely molecular structures are constantly being synthesized even while orthograde thermodynamic tendencies tend to break them down.

Following the Nobel laureate theoretical chemist Ilya Prigogine, who revolutionized how we think about such systems, we call the Earth a dissipative system, because it is constantly both taking in and dissipating energy. During the time that this dissipative relationship has been stable-roughly the four and a half billion years since the solar system formed-it has made numerous other contragrade processes possible, including nearly all of the contragrade chemistry that constitutes life. Indeed, every living organism is a constellation of contragrade processes, which continually hold local molecular orthograde processes at bay.

Of course, the history of human technology involves the discovery of ways to utilize certain orthograde processes in order to drive desired contragrade processes. This is the general logic characterizing all the many types of machines that we employ to bend the world to our wishes. Reliably being able to produce unlikely outcomes, such as the purification of minerals, can provide the conditions for producing yet other, even more unlikely outcomes, such as the assembly of refined materials to form the circuits of my computer. But as we will see shortly, systems formed by stable contragrade processes can sometimes exhibit what amount to higher-order orthograde processes that are quite different from those from which they arise.

REFRAMING THERMODYNAMICS.

Although the use of the terms orthograde and contragrade to describe both Newtonian and thermodynamic processes might at first appear to be merely a convenient way to group together diverse processes that have one or the other of two superficial features in common, the contrast it identifies is fundamental. Understanding how this underlying dynamical principle is exemplified in these most basic classes of processes is the first step toward a theory of what I will call emergent dynamics.

In classical thermodynamic theory there is, in effect, only one orthograde tendency: the increase in entropy of a compositional system. Entropy is a technical term introduced in the mid-nineteenth century by Rudolf Clausius. It can be defined in a number of ways. Generally, following the work of a later theoretician, Ludwig Boltzmann, it is treated as a measure of the disorder among the parameters defining the state of a collection of components, such as the momenta of the molecules of a gas. However, this way of describing entropy is easily misunderstood. As we saw in the last chapter, order understood as merely some state of things that we prefer-such as the alphabetical order of names in a list, or a particular symmetry of shape-is subjective and epiphenomenal. It has no independent role to play in physical interactions. Thermodynamic processes occur indifferently to our observation or analysis of them, and indeed are essential to our being alive and able to make such assessments. To divorce the concept of order from an epiphenomenal, model-based, subjective notion of form, and instead consider only the intrinsic, model-independent, and objective aspects of form, it is necessary to consider it in inverse, with respect to constraint.

Rather than order or disorder, then, I suggest that we begin to think of entropy as a measure of constraint. An increase in entropy is a decrease in constraint, and vice versa. When gas in a container is far from equilibrium, local correlations among molecular velocities and molecular types are higher than at equilibrium. They are not thoroughly mixed with respect to each other and with respect to the properties of other molecules. Their distinctive properties, relative spatial positions, and momenta occupy only a subset of what is possible in this context. So, for example, there are vastly more fast-moving (warm) CO2 molecules near my mouth and more slow-moving (cooler) oxygen molecules distributed elsewhere in the room. But after I leave, this asymmetry of correlated movements and molecular concentrations will eventually even out. The equilibrium state of maximum entropy is one in which correlations are minimized, and the likelihood of measuring any given property, such as molecular concentration or temperature, is the same irrespective of where it is measured. The distribution of attributes is no longer constrained. We can thus describe the increase in entropy as a decrease in constraints, and the second law can be restated as follows: In any given interaction, the global level of constraint can only decrease.

Achieving a state of maximum entropy and minimum dynamical constraint-the state of equilibrium, where a system is resistant to further global change-does not bring that system to rest microscopically. Whereas a gas at equilibrium is exemplified by an absence of change in the global distribution of properties-no change in constraint-the component molecules are incessantly moving, constantly bumping into one another, and thereby contributing to the stable pressure and temperature of the whole volume. Stability at this higher level is only a reflection of the statistical smoothing and central tendency of the dynamics at the lower, molecular interaction level. What is stable at equilibrium is the level of dynamical constraint.

In principle, from a Newtonian perspective, each individual molecular interaction in a liquid or gas is exactly reversible. If a movie of each molecular collision were run in reverse, like the reversed movie of a billiard ball collision, it would appear as lawful and physically realistic as when run forwards. And yet, as the work of Maxwell and Boltzmann demonstrated, this temporal symmetry quickly breaks down for larger and larger groups of interacting components, because each interaction tends to produce values that diverge progressively from the previous state, ultimately sampling more and more of the possible values distributed around the mean for the entire collection. The dissociation of macro from micro processes that characterizes thermodynamic systems, like gases, means that we cannot attribute the asymmetry of thermodynamic change to the properties of individual molecular collisions alone. Molecular collisions are necessary for any change of state of a typical thermodynamic system, but they are not sufficient to determine its asymmetric direction of change. This asymmetry is ultimately due to the highly asymmetric "geometry" of the possible distributions of molecular properties and trajectories of their movements.
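The reversibility of the individual interaction can be checked directly. Here is a minimal sketch in Python, assuming two point masses in one dimension with arbitrarily chosen masses and velocities: a single elastic collision is computed forward, then recomputed with all velocities negated, and the reversed "movie" proves to be an equally lawful collision that exactly restores the original motion, with momentum and kinetic energy conserved in both directions.

```python
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a one-dimensional elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, m2 = 1.0, 3.0      # arbitrary masses
v1, v2 = 2.0, -1.0     # arbitrary initial velocities

u1, u2 = elastic_1d(m1, v1, m2, v2)    # the collision, run "forwards"
r1, r2 = elastic_1d(m1, -u1, m2, -u2)  # negate velocities and collide again

# The reversed collision restores the (negated) initial velocities exactly.
assert abs(r1 - -v1) < 1e-12 and abs(r2 - -v2) < 1e-12

# Momentum and kinetic energy are conserved in both directions.
assert abs((m1 * v1 + m2 * v2) - (m1 * u1 + m2 * u2)) < 1e-12
assert abs((m1 * v1**2 + m2 * v2**2) - (m1 * u1**2 + m2 * u2**2)) < 1e-12

print("forward :", (u1, u2))
print("reversed:", (r1, r2), "restores negated initial state", (-v1, -v2))
```

Nothing in these equations distinguishes past from future; the asymmetry appears only at the level of large collections.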

It is common to represent each possible distribution of molecular properties within such a system as a point in an abstract phase space,5 or state space. Changes in state of the system can then be represented as a continuous line within this abstract space of possibilities. One way to conceive of the asymmetry of the geometry of thermodynamic change is to imagine it in a "warped" phase space, as though at one point the plane of possible trajectories is pulled like a rubber sheet to create a deep dimple. In such a space, all trajectories will tend to bend into the region in which there are the most adjacent options, that is, toward the region of maximum curvature. It is as though that region of alternatives is somehow more dense and contains more positions for the same "volume."

As we saw earlier, in the terms of complexity theory and dynamical systems theory, such a region of values toward which trajectories of dynamical change are biased is called an attractor.6 Attraction in this sense is not an active process or the result of a force. It is merely the result of the statistical asymmetry of optional states. The apparent tendency to transition to states that are closer to an attractor-e.g., toward an equilibrium state in thermodynamics-is a function of this biased probability. This might be thought of as a "warping" of the space of probable configurations. Each individual micro interaction is unbiased with respect to any other, and so it accounts only for change, not for its direction. It is only when the repeated interactions of many components are considered that this curious geometry of chance becomes evident. It is a relational property, defined with respect to the whole space of possible configurations, that determines the asymmetric directionality of the change of the whole collection. Systems of interacting components tend to change toward equilibrium states because there are vastly more trajectories of change that lead from less symmetric to more symmetric distributions than the other way around.
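"Vastly more" can be given a number. A minimal counting sketch in Python, assuming a toy system of just one hundred molecules distributed between the two halves of a box (real molecular numbers would only make the disproportion more extreme): the microstates compatible with a near-even macrostate utterly dwarf those compatible with a segregated one, so unbiased micro-changes are overwhelmingly likely to carry the system toward the even distribution.

```python
from math import comb

# Toy system (arbitrary size): N molecules, each in the left or right half
# of a box. A "macrostate" is the count k on the left; the number of
# microstates compatible with it is the binomial coefficient C(N, k).
N = 100
TOTAL = 2 ** N  # all possible microstates

for k in (0, 10, 25, 40, 50):
    w = comb(N, k)
    print(f"{k:3d} left / {N - k:3d} right: W = {w:.3e} "
          f"({w / TOTAL:.2e} of all microstates)")

# The evenly mixed macrostate outnumbers the fully segregated one by ~10^29.
print(f"W(50) / W(0) = {comb(N, 50):.3e}")
```

The "attractor" at the even distribution is nothing but this disproportion; no force draws the system toward it.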

A FORMAL CAUSE OF EFFICIENT CAUSES?

This way of understanding the microdynamics underlying the second law of thermodynamics complicates how we need to think about physical causality, but it also provides a more precise way to formulate the concepts of orthograde and contragrade change that can help unravel the tangle of confusions surrounding the nature of emergent phenomena. An intuitive sense of how this reframes the notion of causality can be gained by a comparison to two of Aristotle's notions of cause: formal and efficient causes. Orthograde thermodynamic change occurs because it is an unperturbed reflection of the space of possible trajectories of change for that system. It is in this sense a consequence of the geometric properties of this probability space. An orthograde change just happens, irrespective of anything else, so long as there is any change occurring at all. I take this to be a reasonable way to reinterpret Aristotle's notion of a formal cause in a modern scientific framework, because the source of the asymmetry is ultimately a formal or geometric principle.

Contragrade change, however, does not tend to occur in the absence of intervention. It is extrinsically imposed on a system that doesn't tend to change in that direction spontaneously. The source of a contragrade change is what we typically understand as the mechanical cause of something. In this sense, it is the result of efficient means for forcing change away from what is stable and resistant to modification. So this provides a somewhat more precise reframing of Aristotle's efficient cause, appropriate to a modern scientific framework.7

The value of fractionating the concept of physical cause in this way is that it helps us to avoid two of the most serious pitfalls of both crude reductionism and naive emergentism.

The first pitfall is an effort to define emergent relationships in terms of "top-down" or "downward" causality. This is basically the assumption that emergent novel causal properties at a higher level of organization can impose an influence on the lower-level processes that gave rise to them. Although top-down causality sounds as though it could be guilty of vicious regress if understood synchronically, there is at least a general sense in which the concept can be rendered unproblematic from a process perspective. Consider again the dynamically maintained biomolecules reciprocally synthesized and constituting a living cell. They are not components independent of this metabolic network, but are created by being enmeshed in that network of chemical reactions. In this sense, they are both constituents and products of the larger dynamical network. One could say that component molecules generate the whole via a bottom-up causal logic, and the whole generates these molecules via an independent top-down causal logic. Of course, each molecule is generated via interactions among other molecules, and it is only the constraints on these patterns of interaction that are relevant. These constraints don't arise from any contragrade forcing from the whole. Rather, they arise reciprocally, from the constellation of relationships that makes the synthesis of each type of biomolecule in an organism indirectly dependent on every other. It is the systemic "geometric" position within this whole dynamical network that is the source of these constraints. In this sense, I have to agree with Claus Emmeche and his colleagues, who argue that "downward causation cannot be interpreted as any kind of efficient causation. Downward causation must be interpreted as a case of formal causation, an organizing principle."8

The second pitfall is the assumption that what is absent cannot be a causal influence on real events. The geometry of possible trajectories of change from state to state is not anything material. While it is accurate to say that only the interactions among components that are actually present can do physical work, the distribution of possibilities that biases the direction of their collective change is not itself present in any of them.
