The Hidden Reality Part 7
A similar statistical approach can be applied to a multiverse. Imagine we are investigating a multiverse theory that allows for a wide range of different universes-different values of force strengths, particle properties, cosmological constant values, and so on. Imagine further that the cosmological process by which these universes form (such as the creation of bubble universes in the Landscape Multiverse) is sufficiently well understood that we can determine the distribution of universes, with various properties, across the multiverse. This information has the capacity to yield significant insights.
To illustrate the possibilities, suppose our calculations yield a particularly simple distribution: some physical features vary widely from universe to universe, but others are unchanging. For example, imagine the math reveals that there's a collection of particles, common to all the universes in the multiverse, whose masses and charges have the same values in each universe. A distribution like this generates absolutely firm predictions. If experiments undertaken in our single lone universe don't find the predicted collection of particles, we'd rule out the theory, multiverse and all. Knowledge of the distribution thus makes this multiverse proposal falsifiable. Conversely, if our experiments were to find the predicted particles, that would increase our confidence that the theory is right.4 For another example, imagine a multiverse in which the cosmological constant varies across a huge range of values, but it does so in a highly nonuniform manner, as illustrated schematically in Figure 7.1. The graph denotes the fraction of universes within the multiverse (vertical axis) that have a given value of the cosmological constant (horizontal axis). If we were part of such a multiverse, the mystery of the cosmological constant would take on a decidedly different character. Most universes in this scenario have a cosmological constant close to what we've measured in our universe, so while the range of possible values would be huge, the skewed distribution implies that the value we've observed is nothing special. For such a multiverse, you should be no more mystified by our universe's having a cosmological constant value of 10^-123 than you should be surprised by encountering a sixty-two-pound Labrador retriever during your next stroll around the neighborhood. Given the relevant distributions, each is the most likely thing that could happen.
Figure 7.1 A possible distribution of cosmological constant values across a hypothetical multiverse, illustrating that highly skewed distributions can make otherwise puzzling observations understandable.
Here's a variation on the theme. Imagine that, in a given multiverse proposal, the cosmological constant's value varies widely, but unlike in the previous example, it varies uniformly; the number of universes that have a given value of the cosmological constant is on a par with the number of universes that have any other value of the cosmological constant. But imagine further that a close mathematical study of the proposed multiverse theory reveals an unexpected feature in the distribution. For those universes in which the cosmological constant is in the range we've observed, the math shows there's always a species of particle whose mass is, say, five thousand times that of the proton-too heavy to have been observed in accelerators built in the twentieth century, but right within the range of those built in the twenty-first. Because of the tight correlation between these two physical features, this multiverse theory is also falsifiable. If we fail to find the predicted heavy species of particle, we would disprove this proposed multiverse; discovery of the particle would strengthen our confidence that the proposal is correct.
Let me underscore that these scenarios are hypothetical. I invoke them because they illuminate a possible profile for scientific insight and verification in the context of a multiverse. I suggested earlier that if a multiverse theory gives rise to testable features beyond the prediction of other universes, it's possible-in principle-to assemble a supporting case even if the other universes are inaccessible. The examples just given make this suggestion explicit. For these kinds of multiverse proposals, the answer to the question heading this section would unequivocally be yes.
The essential feature of such "predictive multiverses" is that they're not composed from a grab-bag of constituent universes. Instead, the capacity to make predictions emerges from the multiverse evincing an underlying mathematical pattern: physical properties are distributed across the constituent universes in a sharply skewed or highly correlated manner.
How might this happen? And, leaving the realm of "in principle," does it happen in the multiverse theories we've encountered?
Predictions in a Multiverse II: So much for principle; where do we stand in practice?
The distribution of dogs in a given area depends on a range of influences, among them cultural and financial factors and plain old happenstance. Because of this complexity, if you were intent on making statistical predictions your best bet would be to bypass considerations of how a given dog distribution came to be and simply use the relevant data from the local dog licensing authority. Unfortunately, multiverse scenarios don't have comparable census bureaus, so the analogous option isn't available. We're forced to rely on our theoretical understanding of how a given multiverse might arise to determine the distribution of the universes it would contain.
The Landscape Multiverse, relying on eternal inflation and string theory, provides a good case study. In this scenario, the twin engines driving the production of new universes are inflationary expansion and quantum tunneling. Remember how this goes: An inflating universe, corresponding to one or another valley in the string landscape, quantum-tunnels through one of the surrounding mountains and settles down in another valley. The first universe-with definite features such as force strengths, particle properties, value of the cosmological constant, and so forth-acquires an expanding bubble of the new universe (see Figure 6.7), with a new set of physical features, and the process continues.
Now, being a quantum process, such tunneling events have a probabilistic character. You can't predict when or where they will happen. But you can predict the probability that a tunneling event will happen in any given interval of time and burrow in any given direction-probabilities that depend on detailed features of the string landscape, such as the altitude of the various mountain peaks and valleys (the value, that is, of their respective cosmological constants). The more probable tunneling events will happen more often, and the resulting distribution of universes will reflect this. The strategy, then, is to use the mathematics of inflationary cosmology and string theory to calculate the distribution of universes, with various physical features, across the Landscape Multiverse.
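To see how a census of this kind could work in practice, here is a deliberately toy sketch in Python (entirely my own construction, not the actual string-landscape calculation, which no one has yet carried out): a handful of hypothetical valleys, each tagged with a made-up cosmological constant, and made-up tunneling probabilities between them. Repeatedly simulating tunneling events then yields a statistical distribution of universe types, the kind of distribution the strategy above is after.

import random
from collections import Counter

# Hypothetical valleys and made-up cosmological constants (in Planck units); purely illustrative.
cc_value = {"A": 1e-121, "B": 1e-123, "C": -1e-122}

# Assumed tunneling probabilities from each valley to the others (each row sums to 1).
tunnel_prob = {
    "A": {"A": 0.0, "B": 0.7, "C": 0.3},
    "B": {"A": 0.2, "B": 0.0, "C": 0.8},
    "C": {"A": 0.5, "B": 0.5, "C": 0.0},
}

def spawn_bubbles(start="A", n_bubbles=100_000, seed=0):
    """Follow a chain of tunneling events and tally which valley each new bubble lands in."""
    rng = random.Random(seed)
    census = Counter()
    current = start
    for _ in range(n_bubbles):
        destinations, weights = zip(*tunnel_prob[current].items())
        current = rng.choices(destinations, weights=weights)[0]
        census[current] += 1
    return census

if __name__ == "__main__":
    census = spawn_bubbles()
    total = sum(census.values())
    for valley, count in census.most_common():
        print(f"valley {valley}: fraction {count / total:.3f}, "
              f"cosmological constant ~ {cc_value[valley]:+.0e}")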
The rub is that so far no one has been able to do so. Our current understanding suggests a lush string landscape with a gargantuan number of mountains and valleys, which makes it a ferociously difficult mathematical challenge to work out the details of the resulting multiverse. Pioneering work by cosmologists and string theorists has contributed significantly to our understanding, but the investigations are still rudimentary.5 To go further, multiverse proponents advocate introducing one more important element into the mix, consideration of the selection effects introduced in the previous chapter: anthropic reasoning.
Predictions in a Multiverse III: Anthropic reasoning.
Many of the universes in a given multiverse are bound to be lifeless. The reason, as we've seen, is that changes to nature's fundamental parameters from their known values tend to disrupt the conditions favorable for life to emerge.6 Our very existence implies that we could never find ourselves in any of the lifeless domains, and so there's nothing further to explain about why we don't see their particular combination of properties. If a given multiverse proposal implied a unique life-supporting universe, we'd be golden. We would work out that special universe's properties mathematically; if they differed from what we've measured in our own universe, we could rule out that multiverse proposal. If the properties agreed with ours, we'd have an impressive vindication of anthropic multiverse theorizing-and reason to vastly expand our picture of reality.
In the more plausible case that there is not a unique life-supporting universe, a number of theorists (they include Steven Weinberg, Andrei Linde, Alex Vilenkin, George Efstathiou, and many others) have advocated an enhanced statistical approach. Rather than calculate the relative preponderance, within the multiverse, of various kinds of universes, they propose that we calculate the number of inhabitants-physicists usually call them observers-who would find themselves in various kinds of universes. In some universes, conditions might barely be compatible with life, so observers would be rare, like the occasional cactus in a harsh desert; other universes, with more hospitable conditions, would teem with observers. The idea is that, just as canine census data let us predict what kinds of dogs we can expect to encounter, so observer census data let us predict the properties that a typical inhabitant living somewhere in the multiverse-you and I, according to the reasoning of this approach-should expect to see.
A concrete example was worked out in 1997 by Weinberg and his collaborators Hugo Martel and Paul Shapiro. For a multiverse in which the cosmological constant varies from universe to universe, they calculated how abundant life would be in each. This difficult task was made feasible by invoking the Weinberg proxy (Chapter 6): instead of life proper, they considered the formation of galaxies. More galaxies means more planetary systems and hence, the underlying assumption goes, a greater likelihood of life, intelligent life in particular. Now, as Weinberg had found in 1987, even a modest cosmological constant generates enough repulsive gravity to disrupt galaxy formation, so only domains of the multiverse that have sufficiently small cosmological constants need be considered. A cosmological constant that's negative results in a universe that collapses well before galaxies form, so these realms of the multiverse can be omitted from the analysis, too. Anthropic reasoning thus focuses our attention on the portion of the multiverse in which the cosmological constant lies in a narrow window; as discussed in Chapter 6, the calculations show that for a given universe to contain galaxies, its cosmological constant needs to be less than about 200 times the critical density (a mass equivalent of about 10^-27 grams in each cubic centimeter of space, or about 10^-121 in Planck units).7 For universes whose cosmological constant is in this range, Weinberg, Martel, and Shapiro then undertook a more refined calculation. They determined the fraction of matter in each such universe that would clump together over the course of cosmological evolution, a pivotal step on the road to galaxy formation. They found that if the cosmological constant is very near the window's upper limit, relatively few clumps would form, because the outward push of the cosmological constant acts like a strong wind, blowing most dust accumulations apart. If the cosmological constant's value is near the window's lower limit, zero, they found that many clumps form, because the disrupting influence of the cosmological constant is minimized. Which means there's a large chance you'll be in a universe whose cosmological constant is near zero, since such universes have an abundance of galaxies and, by the reasoning of this approach, life. There's a small chance you'll be in a universe whose cosmological constant is near the window's upper limit, about 10^-121, because such universes are endowed with far fewer galaxies. And there's a modest chance you'll be in a universe whose cosmological constant lies at a value between these extremes.
Using the quantitative version of these results, Weinberg and his collaborators calculated the cosmic analog of encountering a sixty-two-pound Labrador on an average walk around the neighborhood-the cosmological constant value, that is, witnessed by an average observer in the multiverse. The answer? Somewhat larger than what the subsequent supernova measurements revealed, but definitely in the same ballpark. They found that roughly 1 in 10 to 1 in 20 inhabitants of the multiverse would have an experience comparable to ours, measuring the cosmological constant's value in their universe to be about 10^-123.
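To make the logic of such observer-weighting concrete, here is a schematic Python sketch (my own illustration, with an invented galaxy-formation curve and none of Weinberg, Martel, and Shapiro's actual machinery): universes are spread uniformly across the anthropic window, each value of the cosmological constant is weighted by an assumed galaxy fraction that falls as the constant grows, and the weighted distribution then says what a typical observer would be likely to see.

import numpy as np

upper_limit = 1e-121   # rough upper edge of the anthropic window (Planck units), as quoted above
observed = 1e-123      # roughly the value we measure, as quoted above

# Uniform spread of universes over the window, and an assumed, purely illustrative
# galaxy fraction that decreases as the cosmological constant grows.
lam = np.linspace(0.0, upper_limit, 200_000)
galaxy_fraction = (1.0 - np.sqrt(lam / upper_limit)) ** 3

# Weight each universe by its galaxies (the proxy for observers).
weights = galaxy_fraction / galaxy_fraction.sum()
p_small = weights[lam <= 10 * observed].sum()   # observers seeing a value within ~10x of ours
typical = (weights * lam).sum()                 # observer-weighted average value

print(f"fraction of observers seeing a value within ~10x of ours: {p_small:.2f}")
print(f"observer-weighted typical value: {typical:.1e} (observed ~ {observed:.0e})")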
While a higher percentage would be more satisfying, the result is impressive, nonetheless. Prior to this calculation, physics faced a mismatch between theory and observation of more than 120 orders of magnitude, suggesting strongly that something was profoundly amiss with our understanding. The multiverse approach of Weinberg and his collaborators, however, showed that finding yourself in a universe whose cosmological constant is on a par with the value we've measured is roughly as surprising as running into that Shih Tzu in a neighborhood dominated by Labs. Which is to say, not that surprising at all. Certainly, when viewed from this multiverse perspective, the observed value of the cosmological constant doesn't suggest a profound lack of understanding, and that's an encouraging step forward.
Subsequent analyses, though, emphasized an interesting facet that some interpret as weakening the result. For simplicity's sake, Weinberg and his collaborators imagined that across their multiverse only the cosmological constant's value varied from universe to universe; other physical parameters were assumed fixed. Max Tegmark and Martin Rees noted that if both the cosmological constant's value and, say, the size of the early universe quantum jitters were imagined to vary from universe to universe, the conclusion would change. Recall that the jitters are the primordial seeds of galaxy formation: tiny quantum fluctuations, stretched by inflationary expansion, yield a random assortment of regions where the density of matter is a little higher or a little lower than average. The higher-density regions exert a greater gravitational pull on nearby matter and so grow yet larger, ultimately coalescing into galaxies. Tegmark and Rees pointed out that much as bigger piles of leaves can better withstand a brisk breeze, so larger primordial seeds can better withstand the disruptive outward push of a cosmological constant. A multiverse in which both the seed size and the value of the cosmological constant vary would therefore contain universes where larger cosmological constants were offset by larger seeds; that combination would be compatible with galaxy formation-and hence with life. A multiverse of this sort increases the cosmological constant value that a typical observer would see and so results in a decrease-potentially a sharp one-of the fraction of observers who would find their cosmological constant to have as small a value as we've measured.
Staunch multiverse proponents are fond of pointing to the analysis of Weinberg and his collaborators as a success of anthropic reasoning. Detractors are fond of pointing to the issues raised by Tegmark and Rees as making the anthropic result less convincing. In reality, the debate is premature. These are all highly exploratory, first-pass calculations, best viewed as providing insight into the general domain of anthropic reasoning. Under certain restrictive assumptions, they show that the anthropic framework can take us within the ballpark of the measured cosmological constant; relax those assumptions somewhat, and the calculations show that the size of the ballpark grows substantially. Such sensitivity implies that a refined multiverse calculation will require a precise understanding of the detailed properties that characterize the constituent universes, and how they vary, thus replacing arbitrary assumptions with theoretical directives. This is essential if a multiverse is to stand a chance of yielding definitive conclusions.
Researchers are working hard to achieve this goal, but as of today, they have yet to reach it.8
Predictions in a Multiverse IV: What will it take?
What hurdles, then, will we need to clear before we can extract predictions from a given multiverse? There are three that figure most prominently.
First, as pointedly illustrated by the example just discussed, a multiverse proposal must allow us to determine which physical features vary from universe to universe, and for those features that do vary, we must be able to calculate their statistical distribution across the multiverse. Essential for doing so is an understanding of the cosmological mechanism by which the proposed multiverse is populated by universes (such as the creation of bubble universes in the Landscape Multiverse). It is this mechanism that determines how prevalent one kind of universe is relative to another, and so it is this mechanism that determines the statistical distribution of physical features. If we're fortunate, the resulting distributions, either across the entire multiverse or across those universes supporting life, will be sufficiently skewed to yield definitive predictions.
A second challenge, if we do need to invoke anthropic reasoning, comes from the central assumption that we humans are garden-variety average. Life might be rare in the multiverse; intelligent life might be rarer still. But among all intelligent beings, the anthropic assumption goes, we are so thoroughly typical that our observations should be the average of what intelligent beings inhabiting the multiverse would see. (Alexander Vilenkin has called this the principle of mediocrity.) If we know the distribution of physical features across life-supporting universes, we can calculate such averages. But typicality is a thorny assumption. If future work shows that our observations fall into the range of calculated averages in a particular multiverse, confidence in our typicality-and in the multiverse proposal-would grow. That would be exciting. But if our observations fall outside the averages, that could be evidence that the multiverse proposal is wrong, or it could mean that we are just not typical. Even in a neighborhood that has 99 percent Labs, you can still run into a Doberman, an atypical dog. Distinguishing between a failed multiverse proposal and a successful one in which our universe is atypical may prove difficult.9 Progress on this issue will likely require a better understanding of how intelligent life arises in a given multiverse; with that knowledge, we could at least clarify how typical our own evolutionary history has so far been. This, of course, is a major challenge. To date, most anthropic reasoning has completely skirted the issue by invoking Weinberg's assumption-that the number of intelligent life-forms in a given universe is proportional to the number of galaxies it contains. As far as we know, intelligent life needs a warm planet, which requires a star, which is generally part of a galaxy, and so there's reason to believe Weinberg's approach holds water. But since we have only the most rudimentary understanding of even our own genesis, the assumption remains tentative. To refine our calculations, the development of intelligent life needs to be far better understood.
The third hurdle is simple to explain but in the long run may well be the one that's last standing. It has to do with dividing up infinity.
Dividing Up Infinity.
To understand the problem, return to dogs. If you live in a neighborhood populated with three Labs and one dachshund, then, ignoring complications such as how often the dogs are walked, you're three times more likely to run into a Lab. The same would apply if there were 300 Labs and 100 dachshunds; 3,000 Labs and 1,000 dachshunds; 3 million Labs and 1 million dachshunds, and so on. But what if these numbers were infinitely large? How do you compare an infinity of dachshunds to three times infinity of Labradors? Although this sounds like the tortured math of one-upping seven-year-olds, there's a real question here. Is three times infinity larger than plain old infinity? If so, is it three times as large?
Comparisons involving infinitely large numbers are notoriously tricky. For dogs on earth, of course, the difficulty doesn't arise, because the populations are finite. But for universes constituting particular multiverses, the problem can be very real. Take the Inflationary Multiverse. Looking at the entire block of Swiss cheese from an imaginary outsider's perspective, we would see it continue to grow and produce new universes endlessly. That's what the "eternal" in "eternal inflation" means. Moreover, taking an insider's perspective, we've seen that each bubble universe itself harbors an infinite number of separate domains, filling out a Quilted Multiverse. In making predictions we necessarily confront an infinity of universes.
To grasp the mathematical challenge, imagine that you're a contestant on Let's Make a Deal and you've won an unusual prize: an infinite collection of envelopes, the first containing $1, the second $2, the third $3, and so on. As the crowd cheers, Monty chimes in to make you an offer. Either keep your prize as is, or elect to have him double the contents of each envelope. At first it seems obvious that you should take the deal. "Each envelope will contain more money than it previously did," you think, "so this has to be the right move." And if you had only a finite number of envelopes, it would be the right move. To exchange five envelopes containing $1, $2, $3, $4, and $5 for envelopes with $2, $4, $6, $8, and $10 makes unassailable sense. But after another moment's thought, you start to waver, because you realize that the infinite case is less clear-cut. "If I take the deal," you think, "I'll wind up with envelopes containing $2, $4, $6, and so on, running through all the even numbers. But as things currently stand, my envelopes run through all whole numbers, the evens as well as the odds. So it seems that by taking the deal I'll be removing the odd dollar amounts from my total tally. That doesn't sound like a smart thing to do." Your head starts to spin. Compared envelope by envelope, the deal looks good. Compared collection to collection, the deal looks bad.
Your dilemma illustrates the kind of mathematical pitfall that makes it so hard to compare infinite collections. The crowd is growing antsy, you have to make a decision, but your assessment of the deal depends on the way you compare the two outcomes.
A similar ambiguity afflicts comparisons of a yet more basic characteristic of such collections: the number of members each contains. The Let's Make a Deal example illustrates this, too. Which are more plentiful, whole numbers or even numbers? Most people would say whole numbers, since only half of the whole numbers are even. But your experience with Monty gives you sharper insight. Imagine that you take Monty's deal and wind up with all even dollar amounts. In doing so, you wouldn't return any envelopes nor would you require any new ones, since Monty would simply double the amount of money in each. You conclude, therefore, that the number of envelopes required to accommodate all whole numbers is the same as the number of envelopes required to accommodate all even numbers-which suggests that the populations of each category are equal (Table 7.1). And that's weird. By one method of comparison-considering the even numbers as a subset of the whole numbers-you conclude that there are more whole numbers. By a different method of comparison-considering how many envelopes are needed to contain the members of each group-you conclude that the set of whole numbers and the set of even numbers have equal populations.
Table 7.1 Every whole number is paired with an even number, and vice versa, suggesting that the quantity of each is the same.
You can even convince yourself that there are more even numbers than there are whole numbers. Imagine that Monty offered to quadruple the money in each of the envelopes you initially had, so there would be $4 in the first, $8 in the second, $12 in the third, and so on. Since, again, the number of envelopes involved in the deal stays the same, this suggests that the quantity of whole numbers, where the deal began, is equal to that of numbers divisible by four (Table 7.2), where the deal wound up. But such a pairing, marrying off each whole number to a number that's divisible by 4, leaves an infinite set of even bachelors-the numbers 2, 6, 10, and so on-and thus seems to imply that the evens are more plentiful than the wholes.
Table 7.2 Every whole number is paired with every other even number, leaving an infinite set of even bachelors, suggesting that there are more evens than wholes.
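For readers who like to see the pairings spelled out, here is a small Python sketch of the two match-ups behind Tables 7.1 and 7.2, using the first few whole numbers to stand in for the infinite collections.

# Pairing 1 (Table 7.1 style): whole number n <-> even number 2n.
# Every even number gets matched, suggesting the two collections are equally big.
wholes = range(1, 11)
pairing_one = [(n, 2 * n) for n in wholes]

# Pairing 2 (Table 7.2 style): whole number n <-> multiple of four 4n.
# The evens 2, 6, 10, ... are left unmatched ("even bachelors"), suggesting,
# by this comparison, that the evens are more plentiful.
pairing_two = [(n, 4 * n) for n in wholes]
bachelors = [m for m in range(2, 4 * len(wholes) + 1, 2) if m % 4 != 0]

print("Table 7.1 style pairing:", pairing_one)
print("Table 7.2 style pairing:", pairing_two)
print("even numbers left unmatched by the second pairing:", bachelors)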
From one perspective, the population of even numbers is less than that of whole numbers. From another, the populations are equal. From another still, the population of even numbers is greater than that of the whole numbers. And it's not that one conclusion is right and the others wrong. There simply is no absolute answer to the question of which of these kinds of infinite collections are larger. The result you find depends on the manner in which you do the comparison.10 That raises a puzzle for multiverse theories. How do we determine whether galaxies and life are more abundant in one or another type of universe when the number of universes involved is infinite? The very same ambiguity we've just encountered will afflict us just as severely, unless physics picks out a precise basis on which to make the comparisons. Theorists have put forward proposals, various analogs of the pairings given in the tables, that emerge from one or another physical consideration-but a definitive procedure has yet to be derived and agreed upon. And, just as in the case of infinite collections of numbers, different approaches yield different results. According to one way of comparing, universes with one array of properties preponderate; according to an alternative way, universes with different properties do.
The ambiguity has a dramatic impact on what we conclude are typical or average properties in a given multiverse. Physicists call this the measure problem, a mathematical term whose meaning is well suggested by its name. We need a means for measuring the sizes of different infinite collections of universes. It is this information that we need in order to make predictions. It is this information that we need in order to work out how likely it is that we reside in one type of universe rather than another. Until we find a fundamental dictum for how we should compare infinite collections of universes, we won't be able to foretell mathematically what typical multiverse dwellers-us-should see in experiments and observations. Solving the measure problem is imperative.
A Further Contrarian Concern.
I've called out the measure problem in its own section not only because it is a formidable impediment to prediction, but also because it may entail another, disquieting consequence. In Chapter 3, I explained why the inflationary theory has become the de facto cosmological paradigm. A brief burst of rapid expansion during our universe's first moments would have allowed today's distant regions to have communicated early on, which explains the common temperature that measurements have found; rapid expansion also irons out any spatial curvature, rendering the shape of space flat, in line with observations; and finally, such expansion turns quantum jitters into tiny temperature variations across space that are both measurable in the microwave background radiation and essential to galaxy formation. These successes yield a strong case.11 But the eternal version of inflation has the capacity to undermine the conclusion.
Whenever quantum processes are relevant, the best you can do is predict the likelihood of one outcome relative to another. Experimental physicists, taking this to heart, perform experiments over and over again, acquiring reams of data on which statistical analyses can be run. If quantum mechanics predicts that one outcome is ten times as likely as another, then the data should very nearly reflect this ratio. The cosmic microwave background calculations, whose match to observations is the most convincing evidence for the inflationary theory, rely on quantum field jitters, so they are also probabilistic. But, unlike laboratory experiments, they can't be checked by running the big bang over and over again. So how are they interpreted?
Well, if theoretical considerations conclude, say, that there's a 99 percent probability that the microwave data should take one form and not another, and if the more probable outcome is what we observers see, the data are taken as strongly supporting the theory. The rationale is that if a collection of universes were all produced by this same underlying physics, the theory predicts that about 99 percent of them should look much like what we observe and about 1 percent should deviate significantly.
Now, if the Inflationary Multiverse had a finite population of universes, we could straightforwardly conclude that the number of oddball universes where quantum processes result in data not matching expectations remains, comparatively speaking, very small. But if, as in the Inflationary Multiverse, the population of universes is not finite, it is far more challenging to interpret the numbers. What's 99 percent of infinity? Infinity. What's 1 percent of infinity? Infinity. Which is bigger? The answer requires us to compare two infinite collections. And as we've seen, even when it seems plain that one infinite collection is larger than another, the conclusion you reach depends on your method of comparison.
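A toy illustration, of my own devising, makes the point sharper: take one infinite collection of "expected" universes and one infinite collection of "oddball" universes, and count the oddball fraction below a finite cutoff. The answer depends entirely on the order in which the two collections are interleaved, even though the collections themselves never change.

from itertools import islice

def ordering_a():
    """List 99 expected universes for every 1 oddball."""
    while True:
        for _ in range(99):
            yield "expected"
        yield "oddball"

def ordering_b():
    """Alternate the very same two (infinite) kinds of universes one for one."""
    while True:
        yield "expected"
        yield "oddball"

def oddball_fraction(ordering, cutoff=100_000):
    sample = list(islice(ordering(), cutoff))
    return sample.count("oddball") / cutoff

print("fraction of oddballs under ordering A:", oddball_fraction(ordering_a))  # about 0.01
print("fraction of oddballs under ordering B:", oddball_fraction(ordering_b))  # about 0.50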
The contrarian concludes that when inflation is eternal, the very predictions that we use to build our confidence in the theory are compromised. Every possible outcome allowed by the quantum calculations, however unlikely-a .1 percent quantum probability, a .0001 percent quantum probability, a .0000000001 percent quantum probability-would be realized in infinitely many universes simply because any of these numbers times infinity equals infinity. Without a fundamental prescription for comparing infinite collections, we can't possibly say that one collection of universes is larger than the rest and is thus the most likely kind of universe for us to witness; we lose the capacity to make definite predictions.
The optimist concludes that the spectacular agreement between quantum calculations in inflationary cosmology and data, as in Figure 3.5, must reflect a deep truth. With a finite number of universes and observers, the deep truth is that universes in which the data deviate from quantum predictions-those with a .1 percent quantum probability, or a .0001 percent quantum probability, or a .0000000001 percent quantum probability-are indeed rare, and that's why garden-variety multiverse inhabitants like us don't find ourselves living inside one of them. With an infinite number of universes, the optimist concludes, the deep truth must be that the rarity of anomalous universes, in some yet to be established manner, still holds. The expectation is that we will one day derive a measure, a definite means for comparing the various infinite collections of universes, and that those universes emerging from rare quantum aberrations will have a tiny measure compared with those emerging from the likely quantum outcomes. To accomplish this remains an immense challenge, but the majority of researchers in the field are convinced that the agreement in Figure 3.5 means that we will one day succeed.12
Mysteries and Multiverses: Can a multiverse provide explanatory power of which we'd otherwise be deprived?
No doubt you've noticed that even the most sanguine projections suggest that predictions emerging from a multiverse framework will have a different character from those we traditionally expect from physics. The precession of the perihelion of Mercury, the magnetic dipole moment of the electron, the energy released when a nucleus of uranium splits into barium and krypton: these are predictions. They result from detailed mathematical calculations based on solid physical theory and produce precise, testable numbers. And the numbers have been verified experimentally. For example, calculations establish that the electron's magnetic moment is 2.0023193043628; measurements reveal it to be 2.0023193043622. Within the tiny margins of error inherent to each, experiment thus confirms theory to better than 1 part in 10 billion.
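For the skeptical reader, the quoted agreement is easy to verify with the two numbers above; a couple of lines of Python suffice.

theory = 2.0023193043628
experiment = 2.0023193043622
print(abs(theory - experiment) / theory)   # about 3e-13, comfortably better than 1 part in 10 billion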
From where we now stand, it seems that multiverse predictions will never reach this standard of precision. In the most refined scenarios, we might be able to predict that it's "highly likely" that the cosmological constant, or the strength of the electromagnetic force, or the mass of the up-quark lies within some range of values. But to do better, we'll need extraordinarily good fortune. In addition to solving the measure problem, we'll need to discover a convincing multiverse theory with profoundly skewed probabilities (such as a 99.9999 percent probability that an observer will find himself in a universe with a cosmological constant equal to the value we measure) or astonishingly tight correlations (such as that electrons exist only in universes with a cosmological constant equal to 10^-123). If a multiverse proposal doesn't have such favorable features, it will lack the precision that for so long has distinguished physics from other disciplines. To some researchers, that's an unacceptable price to pay.
For quite a while, I took that position too, but my view has gradually shifted. Like every other physicist, I prefer sharp, precise, and unequivocal predictions. But I and many others have come to realize that although some fundamental features of the universe are suited for such precise mathematical predictions, others are not-or, at the very least, it's logically possible that there may be features that stand beyond precise prediction. From the mid-1980s, when I was a young graduate student working on string theory, there was broad expectation that the theory would one day explain the values of particle masses, force strengths, the number of spatial dimensions, and just about every other fundamental physical feature. I remain hopeful that this is a goal we will one day reach. But I also recognize that it is a tall order for a theory's equations to churn away and produce a number like the electron's mass (.000000000000000000000091095 in units of the Planck mass) or the top quark's mass (.0000000000000000632, in units of the Planck mass). And when it comes to the cosmological constant, the challenge appears herculean. A calculation that after pages of manipulations and megawatts of computer-crunching results in the very number that highlights the first paragraph of Chapter 6-well, it's not impossible but it does strain even the optimist's optimism. Certainly, string theory seems no closer to calculating any of these numbers today than it did when I first started working on it. This doesn't mean that it, or some future theory, won't one day succeed. Maybe the optimist needs to be yet more imaginative. But given the physics of today, it makes sense to consider new approaches. That's what the multiverse does.
In a well-developed multiverse proposal, there's a clear delineation of the physical features that need to be approached differently from standard practice: those that vary from universe to universe. And that's the power of the approach. What you can absolutely count on from a multiverse theory is a sharp vetting of which single-universe mysteries persist in the many-universe setting, and which do not.
The cosmological constant is a prime example. If the cosmological constant's value varies across a given multiverse, and does so in sufficiently fine increments, what was once mysterious-its value-would now be prosaic. Just as a well-stocked shoe store surely has your shoe size, an expansive multiverse surely has universes with the value of the cosmological constant we've measured. What generations of scientists might have struggled valiantly to explain, the multiverse would have explained away. The multiverse would have shown that a seemingly deep and perplexing issue emerged from the misguided assumption that the cosmological constant has a unique value. It is in this sense that a multiverse theory has the capacity to offer significant explanatory power, and it has the potential to profoundly influence the course of scientific inquiry.
Such reasoning must be wielded with care. What if Newton, after the apple fell, reasoned that we're part of a multiverse in which apples fall down in some universes, up in others, and so the falling apple simply tells us which kind of universe we inhabit, with no need for further investigation? Or, what if he'd concluded that in each universe some apples fall down while others fall up, and the reason we see the falling-down variety is simply the environmental fact that, in our universe, apples that fall up have already done so and have thus long since departed for deep space? This is a fatuous example, of course-there's never been any reason, theoretical or otherwise, for such thinking-but the point is serious. By invoking a multiverse, science could weaken the impetus to clarify particular mysteries, even though some of those mysteries might be ripe for standard, nonmultiverse explanations. When all that was really called for was harder work and deeper thinking, we might instead fail to resist the lure of multiverse temptation and prematurely abandon conventional approaches.
This potential danger explains why some scientists shudder at multiverse reasoning. It's why a multiverse proposal that's taken seriously needs to be strongly motivated from theoretical results, and it must articulate with precision the universes of which it's composed. We must tread carefully and systematically. But to turn away from a multiverse because it could lead us down a blind alley is equally dangerous. In doing so, we might well be turning a blind eye to reality.
*Because there are differing perspectives regarding the role of scientific theory in the quest to understand nature, the points I'm making are subject to a range of interpretations. Two prominent positions are realists, who hold that mathematical theories can provide direct insight into the nature of reality, and instrumentalists, who believe that theory provides a means for predicting what our measuring devices should register but tells us nothing about an underlying reality. Over decades of exacting argument, philosophers of science have developed numerous refinements of these and related positions. As no doubt is clear, my perspective, and the approach I take in this book, is decidedly in the realist camp. This chapter in particular, examining the scientific validity of certain types of theories, and assessing what those theories might imply for the nature of reality, is one in which various philosophical orientations would approach the topic with considerable differences.
*In a multiverse containing an enormous number of different universes, a reasonable concern is that regardless of what experiments and observations reveal, there is some universe in the theory's gargantuan collection that's compatible with the results. If so, there'd be no experimental evidence that could prove the theory wrong; in turn, no data could be properly interpreted as evidence supporting the theory. I will consider this issue shortly.
CHAPTER 8.
The Many Worlds of Quantum Measurement.
The Quantum Multiverse.
The most reasonable assessment of the parallel universe theories we've so far encountered is that the jury is out. An infinite spatial expanse, eternal inflation, braneworlds, cyclical cosmology, string theory's landscape-these intriguing ideas have emerged from a range of scientific developments. But each remains tentative, as do the multiverse proposals each has spawned. While many physicists are willing to offer their opinions, pro and con, regarding these multiverse schemes, most recognize that future insights-theoretical, experimental, and observational-will determine whether any become part of the scientific canon.
The multiverse we'll now take up, emerging from quantum mechanics, is viewed very differently. Many physicists have already reached a final verdict on this particular multiverse. The thing is, they haven't all reached the same verdict. The differences come down to the deep and as yet unresolved problem of navigating from the probabilistic framework of quantum mechanics to the definite reality of common experience.
Quantum Reality.
In 1954, nearly thirty years after the foundations of quantum theory had been set down by luminaries like Niels Bohr, Werner Heisenberg, and Erwin Schrodinger, an unknown graduate student from Princeton University named Hugh Everett III came to a startling realization. His analysis, which focused on a gaping hole that Bohr, the grand master of quantum mechanics, had danced around but failed to fill, revealed that a proper understanding of the theory might require a vast network of parallel universes. Everett's was one of the earliest mathematically motivated insights suggesting that we might be part of a multiverse.
Everett's approach, which in time would be called the Many Worlds interpretation of quantum mechanics, has had a checkered history. In January 1956, having worked out the mathematical consequences of his new proposal, Everett submitted a draft of his thesis to John Wheeler, his doctoral adviser. Wheeler, one of twentieth-century physics' most celebrated thinkers, was thoroughly impressed. But that May, when Wheeler visited Bohr in Copenhagen and discussed Everett's ideas, the reception was icy. Bohr and his followers had spent decades refining their view of quantum mechanics. To them, the questions Everett raised, and the outlandish ways in which he thought they should be addressed, were of little merit.
Wheeler held Bohr in the highest regard, and so placed particular value on appeasing his elder colleague. In response to the criticisms, Wheeler delayed granting Everett his Ph.D. and compelled him to modify the thesis substantially. Everett was to cut out those parts blatantly critical of Bohr's methodology and emphasize that his results were meant to clarify and extend the conventional formulation of quantum theory. Everett resisted, but he had already accepted a job in the Defense Department (where he would soon play an important behind-the-scenes role in the Eisenhower and Kennedy administrations' nuclear-weapons policy) that required a doctorate, so he reluctantly acquiesced. In March of 1957, Everett submitted a substantially trimmed-down version of his original thesis; by April it was accepted by Princeton as fulfilling his remaining requirements, and in July it was published in the Reviews of Modern Physics.1 But with Everett's approach to quantum theory having already been dismissed by Bohr and his entourage, and with the muting of the grander vision articulated in the original thesis, the paper was ignored.2 Ten years later, the renowned physicist Bryce DeWitt plucked Everett's work from obscurity. DeWitt, who was inspired by the results of his graduate student Neill Graham that further developed Everett's mathematics, became a vocal proponent of the Everettian rethinking of quantum theory. Besides publishing a number of technical papers that brought Everett's insights to a small but influential community of specialists, in 1970 DeWitt wrote a general level summary for Physics Today that reached a much broader scientific audience. And unlike Everett's 1957 paper, which shied away from talk of other worlds, DeWitt underscored this feature, highlighting it with an unusually candid reflection regarding his "shock" on learning Everett's conclusion that we are part of an enormous "multiworld." The article generated a significant response in a physics community that had become more receptive to tampering with orthodox quantum ideology and ignited a debate, still going on, that concerns the nature of reality when, as we believe they do, quantum laws hold sway.
Let me set the stage.
The upheaval in understanding that took place between roughly 1900 and 1930 resulted in a ferocious assault on intuition, common sense, and the well-accepted laws that the new vanguard soon began calling "classical physics"-a term that carries the weight and respect given to a picture of reality that is at once venerable, immediate, satisfying, and predictive. Tell me how things are now, and I'll use the laws of classical physics to predict how things will be at any moment in the future, or how they were at any moment in the past. Subtleties such as chaos (in the technical sense: slight changes in how things are now can result in huge errors in the predictions) and the complexity of the equations challenge the practicality of this program in all but the simplest situations, but the laws themselves are unwavering in their viselike grip on a definitive past and future.
The quantum revolution required that we give up the classical perspective because new results established that it was demonstrably wrong. For the motion of big objects like the earth and the moon, or of everyday objects like rocks and balls, the classical laws do a fine job of prediction and description. But pass into the microworld of molecules, atoms, and subatomic particles and the classical laws fail. In contradiction of the very heart of classical reasoning, if you run identical experiments on identical particles that have been set up identically, you will generally not get identical results.
Imagine, for example, that you have 100 identical boxes, each containing one electron, set up according to an identical laboratory procedure. After exactly 10 minutes, you and 99 cohorts measure the positions of each of the 100 electrons. Despite what Newton, Maxwell, or even a young Einstein would have anticipated-would likely have been willing to bet their lives on-the 100 measurements won't yield the same result. In fact, at first blush the results will look random, with some electrons found near their box's front lower left corner, some near the back upper right, some around the middle, and so on.
The regularities and patterns that make physics a rigorous and predictive discipline become apparent only if you run this same experiment, with 100 boxed electrons, over and over again. Were you to do so, here's what you'd find. If your first batch of 100 measurements found 27 percent of the electrons near the lower left corner, 48 percent near the upper right corner, and 25 percent near the middle, then the second batch will yield a very similar distribution. So will the third batch, the fourth, and those that follow. The regularity, therefore, isn't evident in any single measurement; you can't predict where any given electron will be. Instead, the regularity is found in the statistical distribution of many measurements. The regularity, that is, speaks to the likelihood, or probability, of finding an electron at any particular location.
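A quick simulation conveys the idea; the sketch below (my own, in Python, with the 27/48/25 percent figures taken from the illustrative example above rather than from any real experiment) draws repeated batches of 100 outcomes from a fixed probability distribution. Individual outcomes look random, but the batch-by-batch fractions come out nearly the same.

import random

locations = ["front lower left", "back upper right", "middle"]
probabilities = [0.27, 0.48, 0.25]   # illustrative probabilities from the example above
rng = random.Random(42)

for batch in range(1, 4):
    # One batch: 100 boxed electrons, each measured once.
    outcomes = rng.choices(locations, weights=probabilities, k=100)
    fractions = {loc: outcomes.count(loc) / 100 for loc in locations}
    print(f"batch {batch}:", fractions)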
The breathtaking achievement of quantum mechanics' founders was to develop a mathematical formalism that dispensed with the absolute predictions intrinsic to classical physics and instead predicted such probabilities. Working from an equation Schrodinger published in 1926 (and an equivalent though somewhat more awkward equation Heisenberg wrote down in 1925), physicists can input the details of how things are now, and then calculate the probability that they will be one way, or another, or another still, at any moment in the future.
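For reference, the standard textbook form of Schrodinger's equation (not written out in the passage itself) for a single particle of mass m moving in one dimension in a potential V(x) is

i\hbar \frac{\partial \Psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi(x,t)}{\partial x^2} + V(x)\,\Psi(x,t),

and the link to the probabilities just described is that |\Psi(x,t)|^2 serves as the probability density for finding the particle at position x at time t.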
But don't be misled by the simplicity of my little electron example. Quantum mechanics applies not just to electrons but to all types of particles, and it tells us not only about their positions but also about their velocities, their angular momenta, their energies, and how they behave in a wide range of situations, from the barrage of neutrinos now wafting through your body, to the frenzied atomic fusions taking place in the cores of distant stars. Across such a broad sweep, the probabilistic predictions of quantum mechanics match experimental data. Always. In the more than eighty years since these ideas were developed, there has not been a single verifiable experiment or astrophysical observation whose results conflict with quantum mechanical predictions.
For a generation of physicists to have confronted such a radical departure from the intuitions formed out of thousands of years of collective experience, and in response to have recast reality within a wholly new framework based on probabilities, is a virtually unmatched intellectual achievement. Yet one uncomfortable detail has been hovering over quantum mechanics since its inception-a detail that eventually opened a pathway to parallel universes. To understand it, we need to look a little more closely at the quantum formalism.
The Puzzle of Alternatives.
In April 1925, during an experiment at Bell Labs undertaken by two American physicists, Clinton Davisson and Lester Germer, a glass tube containing a hot chunk of nickel suddenly exploded. Davisson and Germer had been spending their days firing beams of electrons at specimens of nickel to investigate various aspects of the metal's atomic properties; the equipment failure was a nuisance, albeit one all too familiar in experimental work. On cleaning up the glass shards, Davisson and Germer noticed that the nickel had been tarnished during the explosion. Not a big deal, of course. All they had to do was heat the sample, vaporize the contaminant, and start again. And so they did. But that choice, to clean the sample instead of opting for a new one, proved fortuitous. When they directed the electron beam at the newly cleaned nickel, the results were completely different from any they or anyone else had ever encountered. By 1927, it was clear that Davisson and Germer had established a vital feature of the rapidly developing quantum theory. And within a decade, their serendipitous discovery would be honored with the Nobel Prize.
Although Davisson and Germer's demonstration predates talking movies and the Great Depression, it's still the most widely used method for introducing quantum theory's essential ideas. Here's how to think about it. When Davisson and Germer heated the tarnished sample, they caused numerous small nickel crystals to meld into fewer larger ones. In turn, their electron beam no longer reflected off a highly uniform surface of nickel but instead bounced back from a few concentrated locations where the larger nickel crystals were centered. A simplified version of their experiment, the setup of Figure 8.1, in which electrons are fired at a barrier containing two narrow slits, highlights the essential physics. Electrons emanating from one slit or the other are like electrons bouncing back from one nickel crystal or its neighbor. Modeled in this way, Davisson and Germer were carrying out the first version of what's now called the double-slit experiment.
To grasp Davisson and Germer's startling result, imagine closing off either the left or the right slit and capturing the electrons that pass through, one by one, on a detector screen. After many such electrons are fired, the detector screens will look like those in Figure 8.2a and Figure 8.2b. A rational, nonquantum-trained mind would therefore expect that when both slits are open, the data would be an amalgam of these two results. But the astounding fact is that this is not what happens. Instead, Davisson and Germer found data, much like those illustrated in Figure 8.2c, consisting of light and dark bands indicating a series of positions where electrons do and do not land.
Figure 8.1 The essence of the Davisson and Germer experiment is captured by the "double-slit" setup in which electrons are fired at a barrier that has two narrow slits. In the Davisson and Germer experiment, two streams of electrons are produced when incident electrons bounce off neighboring nickel crystals; in the double-slit experiment, two similar streams are produced by electrons that pass through the neighboring slits.
These results deviate from expectations in a way that's particularly peculiar. The dark bands are locations where electrons are copiously detected if only the left slit or only the right slit is open (the corresponding regions in Figures 8.2a and 8.2b are bright), but which are apparently unreachable when both slits are available. The presence of the left slit thus changes the possible landing locations of electrons passing through the right slit, and vice versa. Which is thoroughly perplexing. On the scale of a tiny particle like an electron, the distance between the slits is huge. So when the electron passes through one slit, how could the presence or absence of the other have any effect, let alone the dramatic influence evident in the data? It's as if for many years you happily enter an office building using one door, but when the management finally adds a second door on the building's other side, you can no longer reach your office.
What are we to make of this? The double-slit experiment leads us inescapably to a conclusion hard to fathom. Regardless of which slit it passes through, each individual electron somehow "knows" about both. There's something associated with, or connected to, or part of each individual electron that is affected by both slits.
But what could that something be?
Figure 8.2 (a) The data obtained when electrons are fired and only the left slit is open. (b) The data obtained when electrons are fired and only the right slit is open. (c) The data obtained when electrons are fired and both slits are open.
Quantum Waves.
For a clue as to how an electron traveling through one slit "knows" about the other, look more closely at the data in Figure 8.2c. The light-dark-light-dark pattern is as recognizable to a physicist as a mother's face is to her baby. The pattern says-no, it screams-waves. If you've ever dropped two pebbles into a pond and watched as the resulting ripples spread and overlap, you know what I mean. Where the peak of one wave crosses the peak of another, the combined wave height is big; where the trough of one crosses the trough of another, the combined wave depression is deep; and most important of all, where the peak of one crosses the trough of the other, the waves cancel and the water remains level. This is illustrated in Figure 8.3. If you were to insert a detector screen across the top of the figure that recorded the water's agitation at each location-the larger the agitation, the brighter the reading-the result would be a series of alternating bright and dark regions on the screen. Bright regions would be where the waves reinforce each other, yielding much agitation; dark regions would be where the waves cancel, yielding no agitation. Physicists say the overlapping waves interfere with one another, and call the bright-dark-bright data they produce an interference pattern.
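As a rough illustration of this reinforcement and cancellation, here is a small Python sketch of standard two-source interference (it is not a model of the actual Davisson and Germer apparatus; the slit separation, wavelength, and screen distance are made-up numbers in arbitrary units). It adds two equal-amplitude waves arriving from the two slits and reports the resulting intensity at points along a detector screen:

```python
import numpy as np

# Illustrative, assumed parameters: slit separation d, wavelength lam,
# distance from the slits to the screen L (arbitrary but consistent units).
d, lam, L = 4.0, 1.0, 100.0

y = np.linspace(-30, 30, 13)              # positions along the detector screen
r1 = np.sqrt(L**2 + (y - d / 2) ** 2)     # path length from the left slit
r2 = np.sqrt(L**2 + (y + d / 2) ** 2)     # path length from the right slit

# For two coherent, equal-amplitude waves the time-averaged intensity depends
# only on their phase difference: it peaks where the paths differ by a whole
# number of wavelengths and vanishes where they differ by a half wavelength.
phase_difference = 2 * np.pi * (r2 - r1) / lam
intensity = 4 * np.cos(phase_difference / 2) ** 2

for yi, I in zip(y, intensity):
    print(f"y = {yi:6.1f}   relative intensity = {I:4.2f}")
```

The alternating large and near-zero readings are the code's version of the bright and dark bands in Figure 8.2c; cover either slit and the phase difference disappears, and with it the bands.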
The similarity to Figure 8.2c is unmistakable, so in trying to explain the electron data we're led to think about waves. Good. That's a start. But the details are still murky. What kind of waves? Where are they? And what have they to do with particles such as electrons?