
On the Interstellar Travel

A book on Information Physics

By Sergio Michelson © 2014-2016. http://spacetravelscience.com/

 

Because a fact seems strange to you, you conclude that it is not one. … All science, however, commences by being strange. Science is successive. It goes from one wonder to another. It mounts by a ladder. The science of today would seem extravagant to the science of a former time. Ptolemy would believe Newton mad. -Victor Hugo

[][] Introduction

In this book we will develop a new approach to fundamental physics. It presents a different explanation for experimental findings that lie at its core. We call it Information Physics. As its name suggests, it is built around the concept of information. The notion of information is applied to fundamental physics in a way that is different from the current information theory, also known as Shannon’s theory.

Information Physics is a scientific theory because it explains experimental results stretching back hundreds of years. It also gives new predictions that can be tested. It is an alternative to a Relativistic point of view.

*Overview *

Information Physics starts by intentionally ignoring Relativity, Quantum Mechanics and Newtonian physics, yet it reduces to each of them as a special case. The idea is that information plays a more fundamental role in Nature than we currently suspect.

One prediction of Information Physics is the physical possibility of faster-than-light motion in deep space, under conditions that cannot be achieved near a large mass such as Earth or the Sun.

Because Information Physics starts before the first principles, which include Relativity, Einstein’s work is not debated, other than as a historical frame of reference. For example, equations that look similar to those of Relativity are derived without it.

Information Physics uses only three-dimensional space and linear time. More complex notions are not needed to explain relativistic phenomena.

This book isn’t about philosophical aspects of information in Nature, but rather about its inner workings. The main topic is the throughput of information use in physical systems. In the latter part of the book, we will focus on its formal mathematical results.

*What is in this book? *

We will start with the basic idea of Information Physics, introduced by questions and analogies. This includes “Getting started” and “Information” chapters.

Next, we’ll discuss some of the relevant theories, in these chapters: “Shannon and the concept of information”, “Einstein’s Relativity” and “Quantum Mechanics”. We’ll talk only in limited terms about these theories, as much as we need for our purposes.

The following chapter, “Information Physics”, introduces the essential concepts in an informal manner.

In the chapter “Why does space have three dimensions?”, we’ll mathematically derive that the number of dimensions should be three.

In “Unification by information” we’ll deduce the basics of modern physics in a qualitative way. We’ll also talk about some of the predictions of Information Physics.

Following this, in “The Math: Proof of concept”, we will show a simplified mathematical proof of Einstein’s kinematic time dilation, without Relativity or a notion of light. We’ll also generalize the math to show that Faster Than Light motion is possible.

The “Speed of light” chapter will touch on the concept of maximum speed in Nature, and its relation to the speed of light.

The “Mass” chapter will talk about gravitational and inertial mass and their common origin, from the informational perspective.

In “Information Physics and the Principle of Uncertainty”, the notions of uncertainty and quantizing are explained from a different standpoint.

“De Sitter effect without Relativity” qualitatively explains the effect that’s considered one of the principal proofs of Relativity, by using the informational approach only.

In the following chapter, “The Proof: Beyond Michelson-Morley”, the pillar of Special Relativity is examined, including its flaws. Experiments to prove Information Physics are proposed.

In “FTL (Faster Than Light) Motion” and “Artificial Gravity”, the reasons and circumstances of these predictions are explained.

We will lean toward informal narrative here. For the formal theory of Information Physics, please see the full paper on the Web.

[][] Getting Started

*What would Einstein say? *

One of the most striking effects of Relativity is time dilation. It means that clocks tick slower for a body in motion. For example, time for a space ship moving close to the speed of light would slow down to a crawl.

Usually time dilation is depicted as an upward curve, where the value for time dilation approaches infinity as the speed approaches c (or the speed of light, 300,000 km/s). Here’s a typical diagram that shows why nothing can travel faster than light:

As you can see, time dilation would effectively slow down the passage of time to a standstill as the speed of light is approached. Other effects happen as well, such as mass increase, but as an illustration, let’s stick with time dilation.
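For reference, the standard formula behind that upward curve is the kinematic time dilation factor of Special Relativity. Here is a minimal sketch, purely for illustration, of how quickly it blows up as the speed approaches c (speeds in km/s, with c rounded to 300,000 km/s as in the text):

```python
import math

C = 300_000.0  # speed of light in km/s, as used in the text

def time_dilation_factor(speed_kms):
    """Relativistic factor by which a moving clock appears to tick slower."""
    return 1.0 / math.sqrt(1.0 - (speed_kms / C) ** 2)

for v in (30.0, 150_000.0, 270_000.0, 299_700.0):
    print(f"{v:>10.1f} km/s -> clocks slowed by a factor of {time_dilation_factor(v):.3f}")
```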

Here is a diagram that exemplifies how Information Physics generalizes the results of Relativity. It shows the circumstances of practical faster-than-light motion for a ship moving away from Earth:

We will derive it mathematically and explain the circumstances under which it holds. Einstein’s physics becomes the left-most quadrant on the diagram. This quadrant represents our world. The other quadrants represent the deep space outside the world of massive bodies, like Earth.

The factor f in the above diagram is called “information influence” and, for everything we have done so far, it has a value of nearly 1. This includes all the experiments on or near large bodies like Earth. However, this factor f becomes smaller and smaller the farther away from Earth the ship gets, and the larger the departing mass is.

For example, the above diagram shows what happens to a large spaceship. Close to Earth, its maximum speed is limited to the speed of light (300,000 km/s). A good distance from it, the factor f becomes one-half, and the maximum speed becomes double the 300,000 km/s. Still farther away, the factor f becomes one-third and the maximum speed becomes triple the 300,000 km/s, and so on.
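A tiny sketch of the relation this example implies, namely a maximum speed that scales as the speed of light divided by the information influence f. The function below is just the arithmetic of this example, not the book’s own definition of f:

```python
C = 300_000.0  # speed of light in km/s

def max_speed(information_influence):
    """Maximum speed implied by the example: c divided by the factor f."""
    return C / information_influence

for f in (1.0, 0.5, 1.0 / 3.0):
    print(f"f = {f:.3f} -> maximum speed = {max_speed(f):,.0f} km/s")
```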

While this is not in accordance with Einstein’s Relativity, it doesn’t have to be. Why?

The theory presented here is a generalization of Relativity. As such, it complies with experimental data, while suggesting new experiments. Einstein’s theory, when it comes to scenarios like this in deep space, is only a theory and has not been proven. In those scenarios, Einstein’s theory is a conjecture based on indirect experiments, such as those performed on tiny particles here on Earth. In other words, we assume that Relativity will hold, but we don’t know for sure. Our theory says we’re in for a surprise.

*The problem with laws of physics *

Consider a train of thought in the form of Q&A that depicts the current view:

Step 1: Why do we have laws of physics?

Step 2: To explain non-random behavior in Nature.

Step 3: Why is there non-random behavior in Nature?

Step 4: It’s because of laws of physics.

Step 5: Go to Step 1.

The above is, of course, circular reasoning. Unfortunately, today’s physics has nothing better to offer. The question of why everything in Nature isn’t random remains.

Physical laws do not explain that. They describe the behavior we find in Nature, but they do not explain why there would be any behavior that requires explaining.

*Information Physics, Relativity, QM and “It from Bit” *

Information Physics brings together the two disparate views of modern physics: Relativity and Quantum Mechanics. It replaces the foundations of Relativity and Quantum Mechanics with a new concept of information.

For that reason, the theories of Einstein and Heisenberg are not argued, but rather are derived as special cases of Information Physics.

This makes Information Physics different from any other attempt to place information at the core of physical reality, including attempts made before and after the It From Bit paradigm.

It’s different because no other such theory can account for Relativity. None accounts for Quantum Mechanics. And none produces experimentally verifiable results that diverge from both. That’s what’s needed for something to be called new physics.

*What is Information Physics? *

To begin with, the usual concepts you’d expect are missing. There is no mass, light, gravity, energy or force, and no principles of Relativity and Quantum Mechanics.

If you think about all these, the complexity is staggering. Is it likely that the Universe would start off with all of them? Or would it likely start with a much simpler foundation?

Modern physics rests on a general premise that physical laws govern everything.

All the while, the question remains: why would there be physical laws? What is it that enforces the laws? How would a particle such as an electron know how to behave?

The answer is that only the use of information can produce non-random behavior. In plain language, a decision to act in a non-random way cannot be made without information use.

The paradigm shift proposed is to say that even elementary particles, whatever they may be, use information to act the way they do.

Usage of information by elementary particles, whatever they may be, is why the non-random behavior is present in Nature.

*How do we examine the role of information in the physical world? *

What is the exact physical embodiment of information? How exactly does information use happen?

We won’t answer these questions. That’s because we don’t have to. We are only concerned with a generic model of information use. Assuming that such a model is the simplest possible, we can avoid being trapped in speculation about these details.

We can assume there is a physical method of information use, but what it is precisely, is not something we care about. That sort of abstraction is a good thing, because then, our conclusions will hold regardless of the actual underlying physical reality.

We certainly don’t claim that the Universe is made out of information. It’s worth reiterating the central premise: non-random behavior we classify as laws of physics can only exist as a result of information use.

Between that, and saying that everything is made out of information, there’s a proverbial Grand Canyon.

Information Physics derives the notion of matter to be the foundational entity on which information use is based. However, the origin and the meaning of the concept of matter are different in Information Physics.

*Why is interstellar travel possible? *

Information Physics predicts that, far from stellar bodies, time does not crawl to a standstill and mass does not exponentially increase as the speed of a large object increases.

This can be tested fairly easily with the level of technology we have today. That’s the good news. However, it involves outer space, which is expensive. That’s the bad news.

We currently think that nothing can move faster than 300,000 km/s. This is because today’s theories suggest it. This is also because we generalized the outcome of experiments we performed with tiny particles here on Earth.

That is a lot of suggesting and generalizing without experimental backing. Information Physics says that we’re wrong about that.

At the same time, all the experiments we have performed to date are in accordance with Information Physics, just as they are with current physics.

Accelerating a tiny particle here on Earth will generally not produce superluminal speeds; in deep space, however, accelerating large objects can do so, according to Information Physics. That’s the difference that has never been thought of, let alone tried experimentally, because current theories do not predict it.

*How to achieve interstellar travel? *

Information Physics predicts that pull-based artificial gravity is possible. Ultra-fast rotational motion of heavy microscopic matter (not necessarily around the common center of rotation) is predicted to cause the same gravity as that of a massive body.

A craft can be made to “fall” in a given direction without experiencing inertial effects (just as with natural gravity), even though there is no massive body present, towards which it is falling.

*Why are the predictions of Information Physics different? *

Information Physics views physical reality as an information system, of a kind that has never been explored before.

It intentionally does not take present-day physics as its foundation. Despite that, it arrives at the same conclusions where strong experimental verification exists, but at other times the conclusions are different and lead to new physics.

Because new predictions are reported, along with proposed experiments to verify them, Information Physics cannot be a tautology (a tautology is a derivation of a premise that starts from that very premise).

*Going forward *

In the course of this book, we’ll talk about:

… how the concept of information fits into the very foundations of reality

… how to derive Einstein’s equations, like time dilation, without Einstein

… how to procure a quantum basis for reality without postulating it

… how to derive Newton’s Law of Gravitation without Newton

… how to deduce that mass, light and gravity have to exist, without knowing that they do

… why the maximum speed in Nature is local, with the speed of light being the slowest of them all

… why near Earth it’s impossible to break the light barrier

… why we don’t need Einstein’s Relativity

… why interstellar travel is possible

… why true antigravity is possible

… why our experiments see none of this

… how to test these claims.

[][] Information

*Virtual worlds *

Imagine a world of virtual reality living inside your computer. There are people in it, together with houses, streets, trees and the sky. Imagine that your computer is advanced enough that the virtual reality is as good as the reality we live in. Imagine that the laws of physics are the same in the virtual world as they are in ours. People in this reality are intelligent and self-aware, but they don’t know they all live in your computer.

The following conceptual diagram depicts virtual reality, based on computing resources.

What is the difference between our world and the virtual world? For all intents and purposes of living in the world, there is none. For the intents and purposes of understanding the world, there is a difference, and it’s a very important one.

Imagine if the people in virtual reality at some point learned they are not real, but are actually the product of information use in your computer.

Now, the virtual people could use the laws of information science to understand their world. For example, if they knew that everything that happens is a result of computation, they could use that to their advantage. How could they do that? After all, they only know they are the result of information use, but they don’t know how it’s done.

The virtual people are in luck. There are generic rules of information use, regardless of how it’s done. It’s important to stress that the generic rules we are referring to do not depend on the first principles of physics, such as Relativity or Quantum Mechanics.

And if their world of information is made in the simplest possible way, then those generic rules become even more specific. The virtual people can imagine what kind of information framework they live in. This framework is true regardless of the physical reality of their world – in this case, inside your computer, but in the general case, by any possible means.

For example, in such a framework, a concept of change can exist only if there is a basic mechanism of memory. In such a world, a specific action can happen only if there is information used for it to happen. These truisms apply to any kind of information use. Once you properly take them into account, you can understand the information-based world a whole lot better.

*Back to reality: our reality isn’t virtual *

Let’s step back. The inhabitants of virtual reality can understand their world better, because their world is based on information (after all, it runs as a program on your computer). Our reality isn’t a program on someone’s computer. So how does all this deliberation help us?

In Information Physics, we say that physical matter operates by means of information use. But, there is no computer on someone’s lap that runs our reality. Our reality is naturally informational, because it has to be so. We know of no other method of producing non-random behavior other than through the use of information. We will talk more about this in the following chapter.

A naturally informational system is in some ways similar to the virtual world we described. But this naturally informational system doesn’t run on a computer. It runs in physical space and in a constant flow of time moving forward. This is to say, it runs in the simplest version of space and time.

If this is so, then we can apply the basic tenets of information science to our world as well. By knowing our reality is naturally informational, we can also apply the apparent facts about physical space and constantly forward-moving time. For example, we know that all directions in empty space must be equal, because there is no reason for any direction to be preferred. By using facts like that, and combining them with information science, we can learn even more than the virtual people can. This is all possible if we know that our reality is a naturally informational one.

But why would this be so?

*Information drives reality *

How do elementary particles work? The question isn’t posed in the context of what they do; there is a good chunk of physics dealing with that. We know that particles do specific things; for example, electrons attract protons and repel other electrons. The question is, how do particles do whatever it is they do?

To answer that, think of the world of virtual reality. There, all that happens in a specific way happens because there’s information to guide it. Whatever happens without information has to be random.

A specific behavior cannot be achieved without the use of information. This fact shouldn’t be lost on us. We know that’s true in our own reality, and we know it’s true for everything in virtual reality. It is considered axiomatic, i.e. true on its face. Yet somehow, in fundamental physics, we assume that’s not the case. In our example, we presume that particles follow laws that apply to them. We do not think of particles as using information so that they, too, can achieve specific behavior.

The idea of Information Physics is that particles do use information. They too, cannot escape the conundrum of having to use information to act the way they do. It is a tenet of elementary logic that without the use of information, the resulting action is always random.

Any other way of explaining the behavior of particles reduces to magic at one point or another. This is regardless of how advanced the method of explanation is, or how good that method is in predicting the behavior of particles. Remember, the question we asked isn’t about predicting the behavior of particles. The question is about why they would behave in a way that requires predicting.

If information is responsible for the behavior of elementary particles, then there is the question of what a true elementary particle is made out of. It functions solely by using information. To function by using information, there must be a mechanism that stores, shares and processes information. We will deduce a great deal about what this mechanism does and what basic characteristics it has. However, this mechanism, whatever its actual shape or form, isn’t something we can observe. Here is why:

If we could observe this mechanism, it means we could obtain information about it beyond the information it serves. But then, the information it serves wouldn’t be really fundamental. It would also mean that this mechanism is made out of other entities that, conceptually, do the same job. It would be a duplication of methods and means without any purpose. It would also lead to an infinite amount of information held in a single particle, because we could repeat the above dissection forever.

We will call this mechanism a physical particle. The name is obviously already used in physics to denote tiny specks of matter, classified by their behavior and qualities. While that is all fine, we think of physical particles in more general terms: they can store, share and process information. Everything else you can say about them is just reducing this information to what we can observe. We focus on how the particles work – which is by using information – instead of on the notion that they follow laws, which amounts to anthropomorphism borrowed from our sense of world order.

The reality unfolds by usage of information. The method of usage is given form as a physical particle.

What is the world made out of now? It’s made out of particles that possess and use information. Thus we say that the usage of information is a foundational layer of reality, one that comes before the physics as we know it today.

This is formation-by-information. We ourselves are the result of information use, and so is the Universe around us.

A conceptual diagram depicting formation-by-information is given below.

The fact that we can build computers, or that our own minds work like that in many ways, isn’t a coincidence. We live in the informational Universe, and we, and our creations, are a reflection of that, and not the other way around.

The computational qualities arising in Nature aren’t emergent, but rather are ultimately foundational.

*What is matter? *

In Information Physics, physical matter is the basis for the use of information. Information does not exist on its own. Physical matter is the enabler of information use. We won’t get into how it enables information use, because we don’t need to. We can figure out much without getting into those details.

*It’s all very practical in the end *

Consider an analogy about why there’s kinematic time dilation, which is the slowing down of time for objects in motion. This is a pivotal result of Einstein’s Relativity. Let’s figure it out without any notions of relativity or light, by using an informational approach.

Every particle in Nature works by processing information. This information comes from all particles. This drives the reality we see. Think about the throughput of processing information. To do that, step back to our everyday lives for a second.

Imagine you’re in a fast train, looking out of the window. If you are thinking about something, you will now think slower compared to when the train was at rest. Why? This is because there are many more details to process about the surroundings outside the train, if the train moves. The faster the train, the more details there are for you to process, and the slower your mind is.

What do we conclude here? Your mind is an information system. It has a limited capacity. When there’s more information to process, it slows down. When you’re in motion, your mind becomes slower because there’s more to take in. It’s as if time itself slows down. In reality though, it is only the throughput of processing information that has declined.

Now, consider if everything in Nature works this way, including elementary particles. If they are information systems of limited capacity, then relative motion will influence them just as it influences you: they will act slower. This conclusion can be formalized, and when we do so, it will match Einstein’s results for time dilation in special cases. In other cases, the equation we get becomes a more generalized version of Einstein’s results.

Keep in mind that we will often use analogies to help visualize the concepts. For example, we will use analogies with shooting a video, just as we already used an analogy with looking out of a moving train. If you take those analogies literally, and think they are used to prove or disprove anything, do so at your own risk. Most every analogy used in modern physics carries in it somewhat of a circular reasoning. That’s understood, but it doesn’t diminish their value. The actual ideas and the formal reasoning in the full paper do not rely on any analogy.

*Where do we go from here? *

We’ll first touch on the work of Shannon and modern information theory. We’ll continue with some insight into the works of Einstein and Heisenberg. Others may be mentioned as well. The physics revolution of the early 20th century (of which Einstein and Heisenberg were an integral part) was an effort by many people. Partly for reasons of clarity and partly for reasons of fame and familiarity, we’ll focus on their work and perhaps that of a few others.

Keep in mind that the overview of the works of Einstein and Heisenberg is not given in order to argue their points. Information Physics takes a route that doesn’t intersect with their work. There is nothing to argue. The overview is given only as a historical perspective into what we’re talking about here and why.

Skipping to the very end, it will be demonstrated that Einstein’s equations hold only in some special situations, such as when on Earth or nearby, or in general near a large mass. At a good distance from Earth and the Solar system, those very equations take a different form. Achieving speeds greater than the speed of light is no longer prohibited. Mass will not increase infinitely and time will not slow down to zero, as Einstein’s equations predict. The flow of time will not reverse, as is sometimes pointed out in the popular press, and you will not end up in the past, having a chance to ruin the date that brought your parents together.

[][] Shannon and the concept of information

In this chapter we will discuss the notion of information used by physical particles in the present-day context.

In the late 1940s Claude Shannon developed his information theory. Shannon’s theory is the basis for many aspects of modern computer science, with applications in other areas as well.

Shannon’s theory introduced the concept of information as a measure of entropy, which in broader terms means the uncertainty in predicting the value of something. For example, a tossed coin will end up heads or tails. If we predict it to be heads, there is obviously some uncertainty, because we don’t know for sure which one it will be. If we could predict the toss every time, then the result of tossing would never be news to us, and would carry no information.
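As a small numeric illustration of Shannon’s measure (the standard formula H = −Σ p·log2 p, not anything specific to this book), a fair coin carries one bit per toss, while a fully predictable toss carries none:

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping zero probabilities."""
    return sum(-p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit of uncertainty per toss
print(entropy([0.9, 0.1]))   # biased coin: about 0.47 bits
print(entropy([1.0, 0.0]))   # fully predictable toss: 0.0 bits, no information
```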

Information is generally defined as an observable quantity in Shannon’s theory. The same notion is used in modern physics as well. This definition essentially treats a physical system as a black box we can study for information.

Besides Shannon’s approach, there are others that tackle the concept of information. The ways to do so are many and some pose complex questions. However, so far, they have one thing in common: information is observed, no matter the underlying model for it. In Quantum theory though, qubits may contain much more information than we observe. The question of whether such information really exists is open to this day. Regardless, we still consider information only as something accessible to us, and nothing more.

Here we consider that the origin of physical processes may lie in the usage of information. This information is accessible to foundational entities, analogous to the way observable information is accessible to us. But, the information used by physical particles isn’t accessible to us. Only the result of its use is accessible to us – the result of its use being the behavior of physical particles.

In essence, we are broadening Shannon’s definition of information, as well as all others available to us today.

[][] Einstein’s Relativity

Before Einstein, relativity generally referred to the notion that the laws of mechanics are the same for all inertial (i.e. un-accelerated) frames of reference. In other words, an experiment with a ball bouncing off the walls will end up the same way, regardless of whether you are conducting it in your house or on a train moving uniformly. Einstein expanded this notion with the postulate that the speed of light is the same in all such frames of reference. He showed that as a result, time must slow down for moving objects and that mass increases as well.

The Special theory of Relativity was mostly grounded in Maxwell’s theory of light and dealt with systems that do not accelerate. In 1916 Einstein published the General theory of Relativity. This was a geometric theory of gravitation which generalized his Special Relativity, hence the name. Einstein said, and later described it as the happiest thought of his life, that a person falling in a gravitational field would not know that he is falling (setting aside the fact of everyday life that falling off a roof is something you would know, but not in the sense Einstein meant).

For example, if you were in an enclosed area with no windows falling towards Earth, you could not tell if you are stationary or actually falling. This is meant to say that you could not perform any experiment that could conclusively differentiate the two scenarios. This is also called the Equivalence Principle.

From here, Einstein managed to develop General Relativity and show that gravitation affects the passage of time, among other things.

*Einstein and the nature of time *

The idea that time can slow down is an ambiguous proposal. Consider two people moving fast toward one another. According to Einstein’s Relativity, each person’s clock will tick slower relative to the other’s. Think about this again.

If clock A ticks slower than clock B, then clock B ticks faster than clock A. That’s common sense. The idea that A ticks slower than B, and B ticks slower than A, seems contradictory (a popular version of this is called the “twin paradox”). Einstein’s Relativity solves this issue in General Relativity in a way that is not easy to describe in a few words. For those who wish to delve into specifics, take a look at Daniel F. Styer’s paper in American Journal of Physics, aptly named “How do two moving clocks fall out of sync? A tale of trucks, threads and twins” published in 2007.

The reason why seemingly paradoxical statements (such as the above) get resolved in Relativity is because Einstein’s work is well-defined and is a self-consistent theory. The self-consistency is important because it means that conclusions will hold, assuming that the initial assumptions are true. Of course, self-consistency is not a euphemism for the truth. And finally, if the initial assumptions are not always true, then the theory is not always true either. According to Information Physics, this is exactly what happens in certain situations.

We will show in Information Physics that the informational premise can be used to explain the same extraordinary experimental results. The results of Relativity will emerge as a special case.

*Balls and photons *

To understand what it means that “the speed of light is the same for all observers”, consider the following analogy.

Imagine throwing a ball forward from a moving car. Assuming there is no air resistance, the speed of the ball will equal the speed of the car plus the speed at which it is thrown.

Now imagine this ball being a photon of light. For example, you can turn the headlights on, and now the car is effectively “throwing” photons forward. Unlike any other ball, the photon’s speed will not be the sum of the car’s speed and the photon’s own speed. That is extraordinary and difficult to conceptualize.

A photon will move at the same speed no matter how fast the car moves.

This is hard to imagine because no ball will behave this way. But photons do, and that is what is meant by “the speed of light being the same for all observers”.
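For reference, this is the difference between the everyday rule for adding speeds and the velocity-addition formula of Special Relativity, (u + v)/(1 + uv/c²), under which light from the moving car still arrives at exactly c. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def classical_addition(u, v):
    """Everyday intuition: speeds simply add."""
    return u + v

def relativistic_addition(u, v, c=C):
    """Special Relativity's velocity-addition formula."""
    return (u + v) / (1 + (u * v) / c**2)

car = 50_000.0  # a very fast car, 50 km/s
print(classical_addition(car, C))      # naive sum exceeds c
print(relativistic_addition(car, C))   # stays exactly at c
```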

This assumption of Special Relativity is now accepted at face value in the modern scientific community. But we really do not know why the speed of light would be the same for all observers, even if our experiments seem to confirm it. When it comes to General Relativity, we don’t know why the Equivalence Principle should hold true, either.

It can be said that Einstein’s postulate is a natural extension of the Galilean principle of Relativity, which says that mechanical laws are the same in all inertial frames of reference. It can be said that Einstein extended the principle of Relativity to the electromagnetic realm.

Even though the original principle of Relativity has been accepted for centuries, dating back to Galileo, we don’t know why it is true, either. It was also Galileo’s “hunch”, just like Einstein’s postulate. Saying that it makes sense to “extend it” with another hunch is compounding hunches on top of each other. All the while, the true reason why it is so remains unknown.

We will not use any of Einstein’s postulates or principles in Information Physics. They are not needed to derive its results.

[][] Quantum Mechanics

Some two decades after Special Relativity, Werner Heisenberg published his Uncertainty principle. The essence of it is that we cannot know for certain all aspects of motion.

For example, the position and momentum of an object can only be known simultaneously to a certain degree. This is in stark contrast with the long-held beliefs of Newtonian mechanics, where laws of motion can describe all there is to know about an object. This was the dawn of Quantum Mechanics, a branch of physics that fundamentally established probabilities (and not certainties) as a way to look at the physical world. Suddenly, and especially in the microscopic realm, we could no longer apply the same logic as we did before.
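For reference, the usual formal statement of Heisenberg’s relation (not used in the book’s own derivations) bounds the product of the uncertainty in position, Δx, and the uncertainty in momentum, Δp, by the reduced Planck constant ħ:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```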

While prior to Quantum Mechanics we would think of these microscopic particles akin to billiard balls, now we would think of them as waves of probability. Quantum Mechanics encompasses much more than this (especially in its later iterations), however we will stick with its fundamentals.

Heisenberg’s Principle of Uncertainty remains the central idea of Quantum Mechanics even with all the development of the theory the ensuing decades saw. It embodies the notion that predictive determinism is not possible in Nature. It put the concept of causality in a very uncomfortable place in the minds of many physicists.

*Uncertainty uncertain *

At the same time, no fundamental explanation has ever been offered for the uncertainty principle. Why is there inherent uncertainty when it comes to determining both the position and momentum of an object? It is often said that it is an innate quality of Nature.

It is also often said, and known as the Copenhagen interpretation, that there is no true reality of a physical system, but rather only an observational reality. It is somewhat akin to saying that if something isn’t observed then it does not truly exist for us, in the sense that we cannot say anything precise about it.

A popular thought experiment known as “Schrödinger’s cat” envisions a cat in a box. There is a deadly poison in the box, and whether it gets released or not depends on whether some small particle behaves one way or the other. Because we are not observing this particle, we don’t know how it behaves. So we don’t know if the poison is released or not, and if the cat is dead or alive. This is easy to understand. What Quantum Mechanics is saying, however, is that unless we observe this particle, it can really be in any state (with different probabilities) and, more specifically, in different states at once, so each state exists at the same time. The cat could be alive and dead at the same time!

Only when we perform a measurement do we change the system so that it becomes one or the other. This is very different from just not knowing what state it is in, while in reality the state of the system is always one or the other. This is saying that the state of the system is many things at once. This is proposing that the act of observation declares one of those states a “winner”, and so we observe the cat to be either alive or dead. This is called “collapsing the wave function”.

Quantum Mechanics says that reality is borne out of interaction and observation and that it does not uniquely exist otherwise. This is the clash of “realism” with Quantum Mechanics. In the “realism” view, we assume that reality exists independent of us, and that we just may not know enough about it to be able to predict everything.

When it comes to the effects common to Quantum Mechanics, Information Physics says that there is a real and logical mechanism behind them. Information is what powers reality through its usage. We will deduce that uncertainty exists, without the premises of Quantum Mechanics.

[][] Information Physics

*Spatial and Observable information *

In Information Physics there is only one kind of information, and that is the information used by physical particles, which we call spatial information. It is the information possessed and shared by fundamental physical entities that, as we will see, also exists in physical space, hence the name. By using this information, the fundamental entities create the reality we live in.

Conceptually then, the information we can possess is a subset of spatial information, and this is the usual observable information. This is the information we can measure about the fundamental physical entities. It is the information about how they behave. The observable information is the only notion of information present in modern physics – this is the notion of information we’re accustomed to.

In short, the spatial information is the kind of information used by the fundamental entities so they can behave the way they do. The observable information is the kind of information used by us to describe the behavior of fundamental entities.

The depiction below illustrates this. The spatial information is used by A and B – this is how A and B know how to behave. The observable information describes A and B’s behavior.

If you think about an analogy with virtual reality, the spatial information would be what’s in the computers that run the virtual reality. The observable information would be the information available to the characters in virtual reality. Obviously, spatial information is greater than observable information, i.e. it is a superset.

We can take the analogy of virtual reality and apply it to our own reality. Electrons and protons move around to form an electric current, or get together to form atoms and molecules. To do that, they need to possess, use and share the spatial information, in order to know how to move around. Once electrons and protons form us and our tools, then we can go about measuring things. This means we can observe the actions electrons and protons take, but we cannot observe the exchange of information between them. The former is observable information, and the latter is spatial information.

*A generic concept of information *

In the context of this book, we will consider information to be a set of facts that cannot be further divided. Further division leads to new facts in an endless cycle, which leads to unlimited information content. We avoid infinite values in the essential concepts we use.

Information use means that two facts can produce a new fact. In general, information use means that two sets of facts combined can produce some result.

*Information exists in physical space *

We can start by thinking of a single point in space. By “space” we mean flat N-dimensional space (where N can be any positive integer), not the 4-D Minkowski space-time usually associated with Relativity.

How does information relate to the space in which it exists? Where would the information be? How would it be exchanged and used?

We’ll start with a simple Universe made up of two tiny particles at some distance. The meaning of a “tiny particle” is to say that it is so small it is practically a dot. The particles contain some information.

We can assume that the information is present on the surface of the particles, since they are so small.

*All scales are equal *

Imagine the particles have doubled in size, and the distance between them has doubled too. As far as the particles are concerned, nothing has really changed. The Universe still looks the same because there is nothing else by which to compare this new (inflated) reality with the old one.

You can imagine an arbitrary point in space far from the particles, and then by “blowing up” everything in scale, this point in space can now be part of either particle.

We can now draw an interesting conclusion.

A “tiny particle” isn’t really tiny. It can be arbitrarily large or arbitrarily small, and there is no way to say which one is the “right size”. We conclude that the information of a “tiny particle” exists on an arbitrary sphere around it. This is because a “tiny particle” could be inflated to match an arbitrary sphere. Thus, all spheres around a “tiny particle” have the same information.

We say that information, originating in a given point in space, exists on any surface around it.

Since the information itself does not change, it means the density of information declines with the distance from the “tiny particle”. This is because the surface of a sphere grows with its radius.

The depiction below illustrates this. The dot in the middle is the information source. Its information exists everywhere, but its density declines with distance.

We can also put this in more formal terms: for information in N-dimensional space, there is no preference for scale. You can deflate or inflate the system however small or however big you want, or change the scale if you wish to think of it that way, without any difference.

*All directions are equal *

The picture we have now is of a particle centered in a dot in space, and of information density declining away from it. You can imagine a cloud of facts around the particle, with density of a cloud declining with distance.

The facts that comprise the information content can be anywhere on any sphere around a particle. This is the consequence of all directions being equal. We say that for information in N-dimensional space, there is no preference for direction. It means the facts are randomly scattered on any sphere around a particle.
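As a concrete picture of “no preferred direction”, here is a minimal sketch (using a standard sampling trick, not anything from the book) that scatters facts uniformly over a sphere by drawing Gaussian components and normalizing them:

```python
import random
import math

def random_direction(n_dims=3):
    """Return a random unit vector: a point on a sphere with no preferred direction."""
    components = [random.gauss(0.0, 1.0) for _ in range(n_dims)]
    norm = math.sqrt(sum(c * c for c in components))
    return [c / norm for c in components]

# Scatter a few "facts" on a unit sphere around a particle; each batch is
# independent of the last, mimicking the periodic randomizing described below.
facts = [random_direction() for _ in range(5)]
for fact in facts:
    print(fact)
```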

However, these facts cannot remain so for long. If each fact remains where it is, as random as their positions may be, it would mean their present positions are somehow preferred. Why would these present positions be preferred?

There is no reason, since all directions should have an equal probability to contain a fact. Because of that, the positions of the facts must change periodically, so that each position has an equal chance to possess a fact. And then they must randomly change again, for the same reason. And again, and so forth it goes forever. This way every point on a given sphere around the particle will have an equal chance to have any fact. In effect, the facts “randomize” periodically. This is illustrated below:

*Randomizing of information *

How frequent are these “randomizations” of information? If they were too slow, then some directions in space would be preferred from time to time. It would appear, for example, that physical effects are present in one spot, but not the other: say, the electric field would be present around an electron only to the left of it, but not to the right of it.

What if the randomizing of information is too fast? You can think of it as a change of scenery. Each time scenery changes, there is new information to process. To use an everyday analogy, this is why shooting a video is more involved (in terms of information processing) than taking a picture, because in a video, scenery changes all the time. In the same way, randomizing of information would mean more information to process, and it would slow down the processing.

If the randomizing were too slow, we would notice some directions in space to be preferred every now and then. If it were too fast, the use of information would be too slow and the reality wouldn’t be smooth in its workings.

Because we generally don’t see either problem, it means the frequency of randomizations is somewhere in between, such that our reality may emerge.

If you wish you can think of randomizing as rain drops hitting the window. Suppose you live in a place where it rains a lot, like Seattle. Will rain drops always hit the exact same spots on the window? Every fresh batch of rain drops will likely be random compared to others because there are no “preferred” spots on your window. You can think that the positions on the window where the rain drops hit, “randomize” with each new wave of drops.

*Sharing of information *

We said that each particle has a “cloud” of information that spreads out from it and exists everywhere. When a particle moves, so does the cloud. For a particle to share in the information of other particles, it needs to do nothing. The partial information from all particles is already there, available from their “clouds”. Different particles will offer varying numbers of facts to share, since the density of facts declines with the distance from each one. This is illustrated below:

The obvious method of sharing information in full fidelity is by moving particles that contain them. A particle that moves from point A to point B has transferred its information in its entirety from one point to the other.

In essence, we have concluded there are two ways to communicate information: one is the instant sharing, where the information density declines with distance. The other is the actual moving of particles, where the entirety of information travels with the movement of a particle.

We will see that the actual moving of particles is subject to a speed limit. This speed limit in many important cases is what we call the speed of light. We will prove this point without using any notion of light to begin with.

*Available information *

Because the information of each particle declines with distance, the exact information available to any given particle depends on where it is. As a particle moves around, the amount of this information changes too.

For obvious reasons, we will call this information the “available information”.

The available information at any point is essentially the sum of all the particles’ information available at that point.

Some of the particles contribute very little information, for example because they don’t have much to begin with, or because they are just too far away. Others contribute a lot, because they have lots of information, or because they are close. We use the concept of “information influence” to express how much information a given particle contributes.

The concept of information influence is illustrated in the following diagram. fAC, fBC and fCC are the information influences of A, B and C on a particle C, hence the indexes AC, BC and CC. All information available at C has an equal chance to be used, including C’s own. fAC, fBC and fCC represent the percentage of each of A, B and C’s information used by C.

Available information in N-dimensional space is also called the information field, because to an extent, it fits into a notion of field in modern physics. Because information is, by definition, a finite set of facts, this field is scalar, i.e. it has a finite value everywhere in space, but it has no direction.
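A small numeric sketch of information influence. The inverse-square fall-off below is our own illustrative assumption (echoing the three-dimensional density decline discussed later); the book itself leaves the exact form to the formal theory:

```python
def information_influence(distance, scale=1.0):
    """Toy information influence: the fraction of a particle's information
    that is available at a point `distance` away.

    Assumes, for illustration only, that the fraction falls off with the
    square of the distance and is 1 at the particle's own location.
    """
    if distance == 0:
        return 1.0
    return min(1.0, scale / distance ** 2)

# Influences on particle C of A, B and C itself (distances chosen arbitrarily).
f_AC = information_influence(distance=2.0)  # A is 2 units away from C
f_BC = information_influence(distance=5.0)  # B is 5 units away from C
f_CC = information_influence(distance=0.0)  # C's own information
print(f_AC, f_BC, f_CC)                     # 0.25 0.04 1.0
```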

*The resources *

A particle uses the information available (from all particles, including itself) and determines what action it will take. Since the information is always available, it can always make that choice. What exactly the facts contained in the information are, or what logic the particles use, is not relevant in Information Physics and is left abstract. This is good, because it provides for a more generic model.

In order for particles to be able to use information, they need to collect the information available and store it. We will assume that however the use of information happens, it takes a bit of time to do it. We will also assume the information storage capacity of a particle is finite. We will assume the number of particles in the Universe to be finite. These assumptions are reasonable because by using them, we avoid infinite values to describe physical matter.

*Information use and acceleration in space *

If particles never moved, then reality would be static and we wouldn’t be here to talk about it.

The very act of relative movement implies acceleration. Before any movement, there has to be acceleration to achieve that movement. The simplest of all actions, acceleration is assumed to be the sole result of use of information. So whenever a particle uses information, the end result of this use is acceleration. This includes zero acceleration, i.e. uniform movement.

We will assume that the resources for information use and for acceleration are limited and equal for all particles. If so, then the very act of existence can perpetuate itself as far along as possible based on those resources. Once all those resources are exhausted, the use of information will stop and the Universe will eventually and gradually cease. The use of information will thus be considered to be such that it minimizes the consumption of resources.

*Preservation of resources *

The concepts of information use and acceleration are causal, but not equivalent. It is reasonable to assume that information is being used constantly at all times. This is because new information can become available at any random moment in time, and if information weren’t used continuously, such information would be missed. It would mean that particles would ignore some information and not other information.

Imagine yourself watching TV. There is a constant stream of information coming to you. If you choose not to accept some information, then you may not be able to follow the story on TV. Just as you would ideally choose to absorb all the information, so would the particles.

This is to say that information use has no preference for time, meaning that all moments in time are equal with respect to computation. A particle will always process information because it’s always available – from itself, and from all other particles that make up the Universe. Thus, in terms of resources for information use, there is no need for any sort of preservation because information use has to happen continuously.

Unlike processing of information, the acceleration happens only when the use of information necessitates it. Because of that, as far as acceleration goes, there is a need for preservation of those resources. Whether acceleration happens or not at any point in time, depends on the result of information use at that particular time and place.

If you’re watching a comedy on TV, you will laugh when it’s funny. The exact moments when you laugh are determined by how funny the moment was. In other words, based on the information you get by watching a comedy, at some moments you will laugh, and at others, you won’t.

In comparison, watching a comedy is like using information, and laughing is just like acceleration. Even though you watch the comedy at all times, you laugh only sometimes. Similarly, use of information (by the particles) happens at all times, but acceleration only happens when such use warrants it.

If you had limited “laughing” resources, you’d try to preserve them, so you could laugh as much as it suits you. In the same fashion, preservation of acceleration resources by the particles would be the simplest way of prolonging the time in which a reality can exist.

If resources are limited, then a particle would always attempt to minimize the use of acceleration resources if it is possible to do so on a permanent basis. The meaning of “permanent basis” is simple: a particle may spend some acceleration resources now, in order to reduce their expenditure permanently. If it weren’t permanent, then a particle might end up spending resources forever and eventually exhaust them faster than if no attempt had been made in the first place.

Think of it in terms of refinancing a mortgage on your home. You can invest some money into doing it now, in order to reduce your expenses permanently.

*Memory and motion *

How can you in principle tell that something is in motion? In general, you have to remember the position of an object as it was a few moments earlier, and then compare it with its position now.

What we have described here is a concept we all know as “memory”, or being able to use information stored in the past.

Another way to describe “having a memory” is to say that a system has “states”. A state is a snapshot that holds information from some moment in time.

If we have a current state (meaning “now”) and a previous state (meaning “just a moment ago”) then we have the simplest, most elementary memory. Such a system is said to “have a state”.

Without some form of memory on a foundational level, nothing that came before would influence what happens now. For example, consider an electron being attracted by a proton. How would an electron know in which direction to move? It can only do so by using the available information that comes from a proton. This information varies in space, but as we said, it is scalar, i.e. has no direction. Only by comparing the available information in different points in space, can a direction of motion be ascertained. The comparison is impossible without a rudimentary memory, because the information about the different points in space, and consequently from different moments in time, must be present at the same time.

In other words, particles must have some way to “remember”, which means they must have some way to take into account the information that happened prior to this moment.

The simplest form of such “memory” is “previous” and “current” state.

A state is effectively a snapshot of available information. It comes from all the particles that make up the Universe. We already talked about available information. So in a way, the state of a particle is its window into the world, i.e. it represents how a particle “sees” the Universe at different moments in time.

We could also suppose that a particle has more than two states, effectively remembering further into the past. However, the simplest and minimal requirement for a particle is to remember just the two sets of facts (current and previous). We will accept the simplest premise.

*Past + Present = Future *

When we observe the motion of an object, we use the information about the past and present location of an object in order to create new information, which is a realization that an object is moving.

This new information is the new state (or the future state), one that we reached by using the information from the past and the present. In a very real sense, the past and the present combine to create the future.

In the same fashion, a particle must function by combining the previous and current state to reach a future state. This “combining” will be called “processing” of information, or a “processing cycle”. “Combining” facts is also called the interaction of facts, a term we will use in the formal theory.

The processing cycle is the basis for unfolding of reality. It is founded on information use alone. It creates new information and accompanying actions that we observe as physical effects.

Because time goes forward constantly, the current state becomes the previous state in the next moment.

The following represents a logical mock information flow during the processing cycle. Steps 1, 2 and 3 repeat. We call this a mock flow because we don’t consider the timing of the steps.

Step 1: previous and current information are used to create new information, i.e. future information. This is now available to all particles.

 

Step 2: current information becomes previous information in the next moment. What was previous information is now gone.

 

Step 3: the information presently available is now the particle’s current information. Go back to Step 1.

The “states” are called “sets” for easier presentation and understanding, so the current state and the current set are the same thing.

A processing cycle combines the previous and the current set (Step 1). The result is the new information and an action caused by this information use, which is acceleration. What was the current set just a moment ago is now a previous set (Step 2). The new information then becomes a current set (Step 3). The cycle of combining and swapping the previous and current sets continues, and the reality unfolds.
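A minimal sketch of this mock flow as code. The combine step is left as a plain merge placeholder, since the book deliberately does not specify what the facts are or how they interact:

```python
class Particle:
    """Toy model of the processing cycle: previous + current -> future."""

    def __init__(self, previous, current):
        self.previous = set(previous)  # the simplest memory: just two sets of facts
        self.current = set(current)

    def combine(self, available):
        # Placeholder for the unspecified interaction of facts; here we simply
        # merge the two sets with whatever information is newly available.
        return self.previous | self.current | set(available)

    def processing_cycle(self, available):
        future = self.combine(available)   # Step 1: create new (future) information
        self.previous = self.current       # Step 2: current becomes previous; the old previous is gone
        self.current = future              # Step 3: the new information becomes current
        return future


p = Particle(previous={"fact_a"}, current={"fact_b"})
p.processing_cycle(available={"fact_c"})
print(p.previous, p.current)
```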

It is interesting to ponder the nature of time in the context of information sets. We only call those sets “previous” and “current” to allude to their meaning in the processing cycle, but both of them are processed at the same time. Only the fact that there can be two processing cycles with different information sets proves that some time passed in between, regardless of how much time has actually passed.

*Dark particles *

We said that acceleration resources are used when needed. Depending on usage, the acceleration resources for any given particle may get exhausted with the current processing cycle. If this happens, a particle would no longer be able to accelerate. However, the information of a particle would still exist and would still be available to other particles. We say that such a particle would go “dark”, meaning it would affect other particles, but itself could not accelerate. Eventually, all particles will become dark particles.

*Additional information *

During the processing cycle, new information can become available, in addition to what is present at the very beginning of the processing cycle. This information is called additional available information, or just additional information.

Relative motion is one reason for additional information. When a particle moves relative to another particle, it will visit more locations, and be exposed to more of its available information. Another example is randomization. When information “randomizes”, new information that didn’t exist before in some point in space may become available now.

During the processing cycle, previous and current information are used. During this time, available information and additional information are collected for the next processing cycle.

The following diagram illustrates this. A0 is the available information (collected at the beginning of the processing cycle) and A’1, A’2, A’3 etc. are the additional information (collected during the processing cycle, if any).

This concludes the introduction of the basic concepts in Information Physics. We will now talk about how, conceptually, the informational premise can explain the postulates of Relativity and Quantum Mechanics.

[][] Why space has three dimensions?

So far we have presumed that space has N dimensions. We can, however, deduce that space has to be three-dimensional. The discussion we had so far wouldn’t change much, except that the available information would decline differently with radius, following the growth of the surface of a sphere. In three-dimensional space it declines with the square of the radius, in four-dimensional space with the cube of the radius, etc.

In general, in N-dimensional space, it declines with the power N-1 of the radius. For example, in one-dimensional space it does not decline at all, while in two-dimensional space it declines in proportion to the radius. So at a radius x, the available information declines as x^N-1^.

Consider two particles moving away from one another, from some finite distance, out to infinity. Let’s call them A and B. During this journey, A and B will obtain a certain amount of available information from each other. The amount of information that A collects from B must be finite; otherwise A may never get to use its own information, regardless of the distance. The same goes for B.

At the same time, the amount of information either A or B collects must be the maximum possible. This stems from the starting premise of Information Physics that use of information is the cause of all physical processes.

Thus we arrive at the conclusion that when A and B move away from one another (or towards each other), the amount of information they use should be the maximum finite amount possible.

To express this mathematically, we would look for a finite solution of the following expression, where N (the number of spatial dimensions), is the variable:
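One way to write that expression, labeling the collected amount I(N) (a label introduced here for convenience):

I(N) ∝ ∫~R~^∞^ (1/x^N-1^) dx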

You don’t need to know calculus to understand the above equation. It approximates the amount of information either particle would collect moving from distance R to infinity (∞ sign). The number of spatial dimensions is N, and the expression under the integral sign represents the way a surface of the sphere declines.

The equation above is easy to solve, and the solution is N=3. Here is why: for N equal to 1 or 2, the amount of information collected by A and B is infinite. For N>2, the maximum finite amount of information is collected for N=3. For N>3, the amount of information collected declines. Therefore, the number of spatial dimensions must be 3.
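For completeness, the evaluation behind this statement: for N=1 or N=2 the integral diverges, while for N>2 it equals 1/((N−2) × R^N-2^), which (for R larger than the unit of distance) is largest at N=3 and shrinks as N grows.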

[][] Unification by information

*Einstein in Information Physics *

When particles move faster, everything they do slows down. The same is true when they get closer to one another. Why? It is not difficult to picture this. Imagine you are a particle, i.e. a fundamental physical entity:

The information of a particle exists all around it. If you move relative to it, you will now visit more locations in space than if you don’t move. So you will see more information in the same period of time, kind of like catching more droplets of rain if you run.

And what about the effect when the distance changes? When you move a bit closer, you see more information. How much more? If you’re very close, the step-up would be large. If you’re far away, it’s negligible.

Either way, you see more information, either due to motion, or the proximity to a source of information. If there is more information to process than the storage available, for either reason, a particle has to discard some information it already has. In the process, less information will be processed in the unit of time. The end result is that everything a particle does will slow down.

What we intuited here is what Einstein called “time dilation”, which is a phenomenon where physical processes slow down when in motion, or when close to other masses. Einstein said that time itself slows down.

In view of Information Physics, time does not slow down. Rather, it is the “inner workings” of all physical processes that slow down.

*Heisenberg in Information Physics *

The consequence of information use is acceleration, which determines the motion of a particle. When some information is lost, the motion has to be unpredictable.

We can deduce that the motion of physical particles is not deterministic, which is what Heisenberg’s Uncertainty Principle postulated, in one form or another.

Our reasoning so far gives us a simple qualitative derivation of the time dilation in both of Einstein’s theories of Relativity, and of uncertainty in general. Another source of uncertainty lies in physical particles forming and interacting with waves of other particles.

*Acceleration *

When a particle moves faster, it is exposed to more and more information that needs to be processed. This continuously changes the rate of its usage.

In uniform movement, this doesn’t change, and the use of information proceeds at the same pace.

This exemplifies why physical processes behave differently in accelerated systems. The special place the uniform movement holds in the postulates of physics comes out as a consequence of information use. This means relativity is a consequence of Information Physics and not a postulate.

*Speed of light *

Think about a particle moving faster and faster.

More and more information causes processing to be slower and slower. At some speed, the throughput of processing will go down to practically zero. When that happens, a particle can’t accelerate any more. There must be a speed limit, and it cannot depend on the initial speed of a particle.

This is a qualitative deduction of Einstein’s light speed postulate. We will provide a quantitative one soon.

*Mass *

The time needed to process the same information can be used as a measure of a particle. If one particle takes 10 seconds to process information, and the other takes 20 seconds, we could say the latter is twice as massive as the former. The slower a particle processes information, the more mass it has. We will show that mass is nothing but the time needed to process information.

This is how we perceive inertial mass too. Something that’s massive takes longer to bend to our will, assuming the power we apply is the same.

*Speed limit *

We’ve said that the speed limit exists because there’s more and more information to process as a particle moves faster.

However, if the influence of other particles changes, then the information to process from them, changes too. In this case, the speed limit will be different in different situations. We’ve said that the influence of particles changes due to relative movement and varying distance.

What this implies is that the speed limit varies and can be higher than c (the speed of light: 300,000 kilometers per second). But near Earth, or any other massive body, the influence of such bodies is so high that a particle cannot move faster than 300,000 km/s. That’s the reason experiments with accelerators can’t detect superluminal motion.

But a tiny electron near a large mass is not the same as a 1,000,000 kg machine far from massive celestial bodies. A tiny electron’s information use is overwhelmed by the presence of Earth. But a million-kilogram machine far from Earth may not be overwhelmed. So, surprisingly, the big machine far from Earth has a better chance to break the so-called light barrier than a tiny electron here on Earth.

*Frames of reference and Magic *

Think of the thought experiments of Relativity dealing with relative motion. In them, ever since Galileo, physicists have strived to make all the different frames of reference appear the same, in terms of physical laws. They did this in order to explain how Nature works. From the standpoint of an informational Universe, we don’t have to do this. There is a simpler approach:

A particle processes information the same way regardless of where it is or how it moves. This is easy to accept, without the need to translate between frames of reference and to postulate their equality, which is how Relativity works.

In Information Physics, relative motion means that more information is gathered, and that change in motion will be slower and less precise. In other words, relative movement does affect the information use of physical particles, and by extension, physical effects.

Modern physics says that different frames of reference obey the same laws. From there, through translating the coordinates between such frames of reference, certain physical effects appear as a consequence. However, there is no need to start with such an assumption, which is that Nature will make every observer see the same laws no matter the state of motion.

That’s like believing in a “judge” in Nature that gives out the same cookie to everyone, so no one gets more cookies than anyone else. Relativity has a good deal of notions like this borrowed from our own sense of social justice. While certainly worthy in other respects, those notions are clearly anthropomorphizing Nature. They are difficult to explain, and thus likely not the final truth.

*Two clocks walk into a bar… *

Suppose you and I are driving on a highway in opposite directions. What does your clock look like to me? And mine to you? The question is obviously posed in the context of Einstein’s Relativity where time is said to slow down for moving observers.

The answer: after we turn around and meet at the roadside bar, both our clocks will be lagging behind the bar’s clock. The two clocks have processed information slower due to movement relative to everything else that has any influence on them, but mostly Earth, having the highest influence.

Both clocks will be slower relative to a place where there is no additional information to process. This place would be far away from anything else, because at that location there’s nothing to add to processing of information. Hence, clocks slow down relative to the same baseline throughput of ticking, and not relative to any other object. This is conceptually different from Special Relativity where clocks slow down relative to each other which is difficult to comprehend.

How much of a slow-down does each clock experience exactly? It depends on the information influences of everything else on each clock. This influence depends on the distances and masses of everything else, most notably Earth itself, being the largest and closest mass.

[*Quantum physics: nothing weird *]

When more information is available than there is storage to hold it, some information must be lost. If some information is lost, we can never truly figure out a particle’s movements; we can only figure them out to some degree. The result of information use, and by extension reality itself, must be fundamentally uncertain. This is a deduction, not a declaration of a principle.

If a particle has limited storage, all of its results must come out as integers. There cannot be such a thing in the Universe as an irrational number. That’s where the “quantum” behavior comes from.

To put it in the context of practical mathematics, it means there is no need to calculate π forever. The full π exists only in the minds of intelligent beings that can imagine such things. In reality, there actually is a last digit of any consequence.

*The nature of mathematics *

Laws of physics exist in the same fashion as economic, political and societal laws. Each of us makes our own decisions, based on information available. All those individual decisions, taken as a whole, often fit certain patterns. We summarize such patterns in the form of societal laws. Physical “laws” are laws in the same sense, meaning they only approximate reality.

This is because it’s not a mathematical reality but an informational one.

A reason why people like mathematics is that it provides a shortcut to describe complex systems. But mathematics is only an emergent quality, not a foundational one.

Think of it this way: you can predict the results of information use by being a math wizard. But being a math wizard doesn’t absolve reality from using an entirely separate physical process to do what you predicted.

For example, consider this analogy: if you write a C program that adds 1 to number 10 in a loop of 20 iterations, the result will be 10+20=30. You can say that without actually doing the additions one by one. That’s math. But the additions will be actually performed one by one. That’s reality. In this trivial case they match and that’s why we love math. But think about computations where information is lost: so is the love.

Modern physics can’t explain why the laws of physics are “augmented” by accidental behavior to make up reality. Information Physics can explain it without postulation.

[][] The Math: Proof of Concept

Kinematic time dilation in Relativity is the slowing down of physical processes, when in relative motion. This has been observed aboard GPS satellites that carry extremely precise clocks, and in other experiments as well. It was explained in Relativity as the slowing down of time.

*A video camera analogy *

To put the question of “slowing down” in perspective, let’s go back to the computing analogy.

For instance, if a mobile device uses its camera to capture imagery, the device will slow down when it is moving, in order to capture a movie of similar quality. Why is this? Because there is more information to process when in motion. In other words, a camera in motion has to handle changing imagery, which contains more information.

What this analogy points to, is that the slowing down of physical processes is the result of the use of information. The currently accepted view, i.e. Relativity, rests on promoting experimental facts to be the laws of Nature. In comparison, usage of information as a cause of the slow-down seems more satisfactory.

[*Time: Information Physics versus Relativity *]

Information Physics uses an ordinary, linear concept of time. We show that it is the throughput of information use that can slow down or speed up, not time itself. Because of this, what’s known as “time dilation” in Relativity is known in Information Physics as process lag.

A simple derivation of kinematic time dilation will be shown in this chapter.

Ever since the original Einstein paper in 1905, the equation for time dilation has been derived by using Special Relativity. To my knowledge, no other way was thought possible.

*The basics *

Let’s start with the summary of what we have so far:

Any physical effect occurs only due to possession and use of spatial information, the kind of information used by the basic physical constituents. We will call such constituents that possess and use information “particles” and the usage itself “processing”.

A particle has fixed processing throughput and memory storage. A particle has memory in the form of previous and current information sets. Each computation combines previous and current sets to produce new information.

A particle’s information exists in space around it and is available instantly to any particle. If there is more information than storage, then some information is lost.

After current information is processed, it becomes previous information:
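In the notation used in this chapter, one consistent way to write this is:

i~previous~(t) = i~current~(t−Δt)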

where i~current~ is the current information set and i~previous~ is the previous information set. This equation means that what was present a moment ago (at time t-Δt) is now past at time t.

The current and previous information fill the entire storage:
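One consistent way to write this, treating the total storage as a fixed constant:

i~current~ + i~previous~ = constant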

This means there is a limited and finite storage that information can be stored into.

If available-information increases by Δi, the storage for previous information must decrease:
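With i denoting the storage each set normally occupies (as in the analogy below), this can be written as:

i~current~ = i + Δi and i~previous~ = i − Δi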

Think of an analogy with a desktop computer: if you have 1000 bytes to store previous and current information, you’d end up having 500 bytes for current and 500 bytes for previous information. If you need to store more of current information, say 100 bytes extra, which is Δi, you’ll have to use 600 bytes for the current information (500 + 100 = 600) and what’s left can be used for previous information (and that is 500-100=400 bytes). The point is, if you need more storage for one of the sets, the other set will have to use less storage, and some information in it will have to be lost.

This is illustrated in the following diagram:

*Two particles in relative motion *

Let us analyze a case of two isolated particles: C moves with speed v relative to M. Suppose that C is small enough for virtually all of its available-information to originate from M. In other words M is much larger than C, and M’s information is overwhelming.

Particle C would “see” more information from M compared to when at rest.

This is easy to understand. If the relative speed doubles, C would visit twice as many locations and see twice as much information from M. It is somewhat akin to running in the rain. The faster you run, the more droplets you will catch.

The following mock diagram illustrates the additional information Δi due to relative movement.

Thus the change of available-information at C is proportional to speed v :
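One consistent way to write this proportionality, using the symbols defined just below:

Δi = s × v × i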

where s is a constant of proportion, i is the storage of C and Δi is the additional information due to movement. The additional information cannot be greater than the storage C has:
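In symbols:

Δi ≤ i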


*Time to process information *

The number of fact pairs from the previous and current sets, when there is no additional available-information (i.e. Δi=0), is proportional to
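In symbols:

i × i = i^2^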

This is because each fact from the previous set will pair with each fact from the current set. This is combining past and present information to create future information. The total storage available is i+i, meaning i for both current and previous sets. The two are combined to produce new information.

Think of two groups of people as being two information sets (for instance 10 in each group). Two groups have never met before and now they need to work together. As a first step towards that goal, the people from each group need to get to know the people from the other group. In order to do that, each person from one group will meet and greet each person from the other. There will be a total of 10 × 10 = 100 introductions. This is the minimum number of “get-to-know-you” interactions needed in order not to have gaps in knowledge about all the people involved.

In terms of what’s known as “big O” notation, the complexity of this problem would be O(n^2^), or quadratic complexity.

When there is additional available-information (i.e. Δi > 0) the number of fact pairs is proportional to:
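In symbols, with the shifted storage described below:

(i − Δi) × (i + Δi) = i^2^ − Δi^2^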

In above equation, we combine the previous set (now having only i-Δi of storage) and the current set (now expanded to have i+Δi storage). The total storage information stays the same: i-Δi+i+Δi=i+i.

The following diagram illustrates combining facts to use the information from two sets (depicted as black and white):

*Time and physical processes *

Combining a pair of facts takes some small period of time. In this period of time, any other pair of facts can combine as well. All facts used are independent of each other, and so there is no need to wait for one pair to complete before the other starts. In other words, all pairs of facts are combined at the same time. This means that using 100 pairs of facts takes the same time as using a single pair, or using 100,000 pairs. We can say that processing cycle always takes the same time.

Particle spends the same period of time using any amount of information. It is only the amount of information used that changes.

To quantify this, we should ask what is the throughput of information processing when there is no loss? It is apparently i/t~0~ where t~0~ is the time it takes a processing cycle to complete.

In the case where there is a loss, we can say that the throughput is i~lossy~/t~0~. In this case the equivalent amount of useful information processed can be obtained from:
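One consistent reading is to equate the lossy pair count with that of an equivalent loss-free set of size i~lossy~:

i~lossy~ × i~lossy~ = (i − Δi) × (i + Δi)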

The result is:
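Solving the relation above for i~lossy~ gives:

i~lossy~ = √(i^2^ − Δi^2^) = i × √(1 − (Δi/i)^2^)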

This result is conceptually easy to justify. When some information is lost, then less information is processed in a unit of time.

We can also write our conclusions in this form:
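Dividing by the cycle time t~0~, the throughput can be written as:

i~lossy~/t~0~ = (i/t~0~) × √(1 − (Δi/i)^2^)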

What this simple reorganization of equations shows is, when there is more information to process, the throughput is lower.

*Putting it all together *

Imagine that our particle C is a clock. This clock is made of fundamental entities that process information. So the rate at which a clock ticks will vary. If the fundamental entities process the information slower, a clock will be slower too.

Consider the additional-information when a clock changes speed: when the relative speed of a clock is v~1~ the additional-information is Δi~1~ and when relative speed is v~2~ the additional information is Δi~2~. As we just explained, the throughput of computation varies with the additional information:
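Writing throughput~1~ and throughput~2~ for the two cases (labels introduced here), one consistent form is:

throughput~1~ / throughput~2~ = √(i^2^ − Δi~1~^2^) / √(i^2^ − Δi~2~^2^)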

This means if a clock moves with different speeds, it will show different times.

Imagine that a clock does something every 5 ticks (for example, it beeps). As far as a clock is concerned, 5 ticks are equal to 5 seconds. However, according to a stationary observer, this will be different. It could be 7 seconds, or 8 seconds, depending on how fast a clock is moving. This is what above equations refer to. Regardless though, from a clock’s standpoint, a beep happens every 5 seconds, no more, no less. So, when an event is measured in time that a clock is showing, it’s always after the same period of time.

Thus, if we introduce t~1~ and t~2~ as such times measured by a clock (we call these “physical-time” t~1~ and t~2~, versus the “absolute-time” t), then it must be:
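One consistent reading, with physical-time elapsing in proportion to throughput over the same absolute-time t:

t~1~ = t × √(i^2^ − Δi~1~^2^)/i and t~2~ = t × √(i^2^ − Δi~2~^2^)/i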

Now we can establish the relationship between physical times, as measured by a clock moving at different speeds:
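Dividing the two expressions above, the absolute-time t cancels out:

t~1~/t~2~ = √(i^2^ − Δi~1~^2^) / √(i^2^ − Δi~2~^2^)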

Physical-times t~1~ and t~2~ effectively measure how much slower or faster a clock ticks.

How can we express the above equation in terms of relative speed? We have already established the correlation between the additional-information Δi and the relative speed:
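Namely, for the two speeds:

Δi~1~ = s × v~1~ × i and Δi~2~ = s × v~2~ × i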

What is the meaning of the constant s? To answer that, consider what happens when the relative speed is so high that additional-information fills all the storage (i.e. Δi=i). In this case, the throughput of information processing converges to zero, because most of the information is lost. At this speed, the throughput of useful processing is practically zero, and the acceleration stops.

This speed is then the maximum attainable speed of a particle in the scenario we’re considering. If we denote this maximum speed as c, then from the previous (if Δi=i then v=c) it follows s=1/c, i.e. constant s is the reciprocal of the maximum attainable local speed.

The following mock diagram illustrates the maximum local relative speed. At this speed, the additional information equals the storage available and the most information is lost.

We have deduced there has to be a maximum local speed. Remember though, in Information Physics there is no concept of light to begin with.

So the previous equation becomes:
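Substituting Δi = s × v × i into the ratio of physical-times:

t~1~/t~2~ = √(1 − s^2^ × v~1~^2^) / √(1 − s^2^ × v~2~^2^)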

Knowing that s=1/c, as we have deduced above, with c being the maximum local speed, it is:
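In this notation:

t~1~/t~2~ = √(1 − v~1~^2^/c^2^) / √(1 − v~2~^2^/c^2^)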

A well-known case is a comparison of relative motion to a state of rest (v~1~=0, v~2~=v≠0), thus we have from above:
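With those values, the ratio becomes:

t~1~/t~2~ = 1 / √(1 − v^2^/c^2^), i.e. t~1~ = t~2~ / √(1 − v^2^/c^2^)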

Amazingly, this is the same equation that Einstein derived in 1905, except that in the informational Universe light was never assumed to exist, let alone Einstein’s postulate about the constancy of its speed. Not only that, but we have at the same time deduced that there has to be a maximum local speed, i.e. we have deduced that something like light has to exist!

As we can see, time does not slow down and only the rate of processing information, and thus the rate at which a clock ticks, varies.

[*Faster Than Light? Yes, and here’s why and when *]

Note that the above equation is derived under special circumstances. In this case, it is the two isolated particles where one is much larger than the other. We will now show a more generic derivation.

Our original equation for additional information Δi, earlier in this chapter, was:
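namely, in the form used there:

Δi = s × v × i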

As we mentioned, this was written under the assumption that particle C is small enough for virtually all of its available-information to originate from M. In other words M is much larger than C.

What would happen if C were far from M, or was larger? In this case, a significant part of available-information of C would come not just from M but from itself as well.

We’ve said that available-information declines with the square of distance from a source of information, and we can write:
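In symbols, for a source holding information i:

i~R~ ∝ i / R^2^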

Here, i~R~ denotes the information content at the distance R from the source i. In the case of two isolated particles C and M, in some point in space at the distance R~C~ and R~M~ (from C and M, respectively) the total available information would be proportional to:
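that is, to the sum of the two contributions:

i~C~/R~C~^2^ + i~M~/R~M~^2^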

If this point in space is at C, then this available-information would be stored in its limited storage. So, the percentage of C’s storage that would be used for information from M would be:
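One consistent way to write this fraction, evaluated at C’s location:

f~MC~ = (i~M~/R~M~^2^) / (i~C~/R~C~^2^ + i~M~/R~M~^2^)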

The quantity f~MC~ is called information influence of M at C. It represents the percentage of C’s storage used to hold information from M. The higher this number, the higher is M’s information influence on C, hence the name.

We can now say that our original assumption was that the information influence of M at C was 1 (or 100%). If it is less than 1, then there is less additional information for C to “see”, and so the starting equation for additional-information would be, if we write just f instead of f~MC~:
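One consistent form is:

Δi = f × s × v × i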

You can think of the above equation this way: when C is far from M, it will gather less additional-information from M. This is because M’s information declines the farther away from it you are. As a result, C would have to move at a higher speed to “see” the same additional-information from M.

Hence, our final equation will look like this, the derivation of which is the same as before:
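One consistent form, carrying f through the same steps as before:

t~1~/t~2~ = √(1 − (f × v~1~/c)^2^) / √(1 − (f × v~2~/c)^2^)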

It’s clear that when f is 1, this reduces to Einstein’s well-known equation for kinematic time dilation. However, when f is for example ½, then the net result is as if speed v is half of what it really is.

When f is close to zero, then the kinematic time dilation practically vanishes. Thus, under those circumstances, the maximum local speed can become arbitrarily high. In a picture from earlier in this book:

When an object is at a certain distance from a large mass, its maximum local speed can greatly exceed that of light (300,000 km/s)! Interestingly, the larger the object, the smaller the distance needed. The above diagram also shows that the speed of light is the lowest of all possible speed limits.

Smaller objects, such as elementary particles, cannot break the light barrier in a practical sense, the way we’re trying to do it here on Earth in large accelerators. This is because the information influence on them is always practically 1 (i.e. f=1).

From the above equation it is clear that the maximum speed of C when information influence f is ½, would be 2 × c, or about 600,000 km/s. It means that faster than light travel is practically possible.
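As a quick illustration of these numbers, here is a small sketch that evaluates the dilation factor and the implied maximum local speed from the expression above; the function names and the rounded value of c are choices made here for the example, not part of the theory.

```python
import math

C = 300_000.0  # approximate speed of light, km/s

def dilation_factor(v, f):
    # sqrt(1 - (f*v/c)^2) from the expression above; f is the information influence
    x = (f * v) / C
    if x >= 1.0:
        raise ValueError("v is at or beyond the maximum local speed c/f")
    return math.sqrt(1.0 - x * x)

def max_local_speed(f):
    # the speed at which the factor above reaches zero
    return C / f

print(max_local_speed(1.0))        # ~300,000 km/s near a dominant mass (f = 1)
print(max_local_speed(0.5))        # ~600,000 km/s when the influence drops to 1/2
print(dilation_factor(200_000, 0.5))
```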

*Time dilation isn’t symmetrical *

Another important consequence is that kinematic time dilation is not symmetrical. In Special Relativity, time dilation is symmetrical. This means (according to Relativity), time would slow down for both C and M equally, relative to one another. This is hard to imagine, but it is a defining characteristic of Relativity in general.

In Information Physics, that is not the case. Let us consider our original assumption, when M is much larger than C, i.e. f=1. In this case, the time dilation at C is, as we have shown:
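Comparing a clock at C moving at speed v with one at rest (f~MC~=1, v~1~=0, v~2~=v), the expression above gives:

t~1~ = t~2~ / √(1 − v^2^/c^2^)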

This means the clock at C would slow down. This is the same prediction by both Information Physics and Special Relativity.

However, if the information influence of M at C is practically 1, then the information influence of C at M is practically 0. This is easy to derive mathematically; we’ll skip that for brevity here (please see the paper further in this book). But it is also trivial to understand: if the information of M overwhelms C, then the information of C is underwhelmed by that of M. So if f~MC~ is 1, then f~CM~ is 0 (note the reversed indexes!).

Because of this, the time dilation of a clock at M is:
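With f~CM~ ≈ 0, the same expression gives:

t~1~ = t~2~ / √(1 − (0 × v/c)^2^) = t~2~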

In other words, there will be virtually no time dilation at M.

In order to visualize this, you can think of M as Earth, which is a large particle, and of C as a clock, which is a small particle. Information Physics states that the clock will slow down, while Earth will not. If you think about the experiments conducted to date, this is exactly how we see things.

Even though we only derived the generalization of kinematic time dilation in this chapter, in the paper it will be done for both kinematic and gravitational effects, as a single phenomenon. A single underlying cause suggests that the informational approach may be a better explanation than Relativity.

[][] Speed of light

In the early days of physics, light had meaning only as a method of seeing objects around us. It was purely a matter of study in optics. Over the centuries it became apparent that its speed is important across a whole spectrum of phenomena, such as electromagnetism.

With the advent of Special and General Relativity, the speed of light took on an even bigger role. The rate of all physical processes, i.e. “time dilation” phenomena, depends on it.

This indicates the pervasiveness of the speed of light, as a concept. However, the reason why it is so remained unexplained.

Information Physics takes a different approach when it comes to the speed of light: it ignores the fact that light exists, let alone that its speed has any special meaning. The necessity for its existence is then deduced, as are some other qualities of light we know.

In Information Physics, as we’ve shown, the speed of light turns out to be the minimum of all possible maximum speeds.

The speed of light had never before been thought of as the minimum of all possible maximum speeds, simply because it is the highest speed we know of at the moment.

*Addition of velocities, up to a maximum local speed *

The maximum speed in Information Physics depends on where you are. If you are near a large object, such as Earth, then the maximum speed relative to it is about 300,000 km/s.

To exemplify, if your car moves at 200,000 km/s relative to Earth and you try to throw a ball forward at 200,000 km/s relative to the car, then the speed of the ball relative to Earth won’t be 200,000 + 200,000 = 400,000 km/s for the very same reason:

At about 300,000 km/s relative to Earth, all resources of the ball are tied up in processing the information collected during a single processing cycle. Because the ball moves so fast, it crosses a great deal of distance during a single processing cycle. So much distance is crossed that a lot of additional information, present along the way, is collected. Much of that information is lost because there is now too much of it, and the ball’s useful information processing slows down to a crawl. The ball simply becomes too slow to make any further acceleration, and so is stuck at 300,000 km/s.

We can apply this insight to photons as well:

The simple conclusion is that no matter what, the speed of a photon near Earth will always be 300,000 km/s relative to Earth. This is why we observe a photon emitted from a moving electron always departing with this speed relative to Earth.

*Maximum speed is local, not universal *

We’ve seen in the derivation of kinematic time dilation, that the point where a particle reaches the maximum speed depends on how much information it has relative to other particles. It also depends on how far it is from other particles. In general, it depends on how much information there is to process.

The end result is that the maximum speed of a particle depends on a locale. What is the maximum speed of a particle relative to?

The maximum speed is relative to what’s known as a “constraint group”.

A constraint group is a set of masses that have dominant information influence on an object. When we’re near Earth, the constraint group is made up of Earth alone and the situation is very simple, and so are Einstein’s equations that Information Physics reduces to.

However, out in space, many objects may comprise the constraint group. Now the maximum speed is different relative to each object from the constraint group.

For example, the maximum speed relative to Earth may change depending on where you are relative to other objects, such as the Sun or other massive bodies. Information Physics provides the exact mathematical equation to derive the maximum speed relative to each object.

Since a constraint group may have many objects in it, we cannot make the sweeping generalizations that are often possible in Relativity. In some circumstances, Information Physics and Relativity diverge. In other cases, which include all reliable experiments to date, Information Physics reduces to Relativity, and as such they are in agreement.

We have provided a simple explanation as to why the speed of light appears to always be 300,000 km/s. It turns out the speed of light is constant only relative to the dominant information source, which in our case happens to be Earth.

One can speculate that Einstein had learned of an experiment that demonstrates this (Michelson-Morley) and thought that if photons always move at 300,000 km/s relative to Earth, they move at that speed relative to everything. Einstein may have been wrong about that. In addition, most of our experiments are conducted on Earth, making it difficult to spot the difference. To test Information Physics, experiments involving outer space are proposed later in this book.

[][] Mass

In physics we distinguish between two kinds of mass based on some of its qualities. We speak of inertial mass when we think in terms of its resistance to acceleration. Large mass is more difficult to accelerate or decelerate. The other kind of mass is gravitational mass. We all know gravity works and Earth’s mass is the cause of it.

We measure inertial mass by examining its resistance to change of velocity (i.e. resistance to acceleration). Gravitational mass is measured by how much it is attracted by other mass, or how much it attracts other masses. As best as we can tell, the two are identical.

This is rather tricky, because we use the same term “mass” for both, so that implies the two are already the same thing. The reason why they would be equal is unknown in modern physics. If you think about resistance to acceleration (on one hand) and ability to attract other masses (on the other), there’s really no obvious connection between them. Einstein said they are the same. He did not offer any proof of it, though. We call his proposition the Equivalence Principle, and it goes something like this:

If you stand in the elevator that is accelerating upwards in outer space (where there is no gravity), the acceleration will push you down towards the floor. If the elevator has no windows, can you tell it’s not gravity that’s pushing you down? Einstein’s answer is no, you can’t tell the difference. However, Einstein’s statement isn’t obvious, rather it is an empirical observation. In this line of reasoning, gravity is not a force, but a bending of space-time. This was how General Relativity started.

*Mass is a measure of information content *

Information Physics does not have a concept of mass to begin with. In Information Physics, the concept of mass is deduced from scratch. We can also deduce the connection between the gravitational and inertial mass.

The central concept we’re considering is the throughput of the use of information. We will show that the more information there is in an object, the slower this object will process any external information. It is fairly straightforward to show mathematically that this fits the concept of inertial mass.

The concept of gravity in Information Physics emerges as a second order consequence of information use. We’ve already said a particle will randomize its information periodically.

As a result of randomizing, even when two particles are at rest, there is constantly new additional-information. This additional-information slows down processing of information. The closer the two particles are, the slower the processing and consequently, the information throughput is lower.

It is not difficult to calculate the rate of this slowdown, and it matches Einstein’s result in General Relativity. Deducing it from scratch without Relativity, to my knowledge, has never been done before.

An interesting consequence emerges from this derivation. As we mentioned, in the case of two particles, the closer they are, the less the information throughput. As a result, less acceleration resources are expended. We’ve said that particles will try to permanently reduce their use of acceleration resources if that’s possible. Because of this, the end result is that they will move closer to one another in order to spend less of those resources. This can be expressed mathematically and the resulting approximation is the exact form of Newton’s Law of Gravitation.

Note that we didn’t start from the premise that mass and gravitation exist but rather we have deduced them. This is unlike in General Relativity, where concepts of mass and gravity exist a priori.

[][] Information Physics and the Principle of Uncertainty

Nature seems to prefer probabilities rather than certainties. When an electron is hit by a photon, we won’t be able to predict exactly the position and momentum of scattered particles. It would certainly be simpler if there was some sort of definitive rule that didn’t involve probabilities. However, no such rule has been found.

It has been shown experimentally that the more we’re able to narrow down the position of a particle, the less we can do the same for its momentum (and vice versa).

Heisenberg noted this behavior and formalized it, without a meaningful explanation. It could be said that uncertainty is connected to wave-particle duality, which is itself essentially postulated. Regardless, there are non-trivial assumptions underneath it that only shift around the fundamental lack of understanding. As well tested and formalized as Quantum Mechanics is, its core principles have been willed into existence in order to match experimental results.

In Information Physics, however, limited resources for information use are given as the fundamental reason for uncertainty.

Imagine an analogy where your personal computer has two tasks to perform. If more memory is given to one task, less memory is available for the other. This is a simple consequence of finite and fixed memory storage.

Imagine that the tasks running are computing π. If the same amount of memory is given to both tasks they will calculate π to the same precision. However, if one task has more memory than the other, then it will compute π to a higher precision. The other task, having less memory, will compute it to a lower precision. This evokes similarities with the nature of the Uncertainty Principle. There is more to it, and it has to do with waves of physical matter, but we won’t get to it here.

*Quantizing the outcomes *

Another consequence of Information Physics is that the outcome of information use must be an integer. Limited information storage means that only a limited number of facts can be used and that only a limited number of facts will come out as a result.

This means, conceptually, the result of computation is ultimately an integer number because it consists of a limited number of facts.

Hence, the change in motion can only have a certain finite number of possible outcomes.

The elementary need for Nature to be quantized is a direct consequence of using information to produce all physical effects. Information is by definition a set of facts. It is difficult to imagine half of a fact. While we can have 3 facts, we can’t have π facts. This by itself does not necessitate quantum behavior. The limited storage of fundamental particles does. Assuming any information can eventually be expressed as a number, then a number with limited storage is always an integer in an appropriate system of measurements.

The empirical foundations of Quantum Mechanics emerge as a consequence of Information Physics.

[][] De Sitter effect without Relativity

In 1913 Willem de Sitter studied double stars and the light emitted from them. Imagine two stars orbiting one another at fairly high speeds. Because of that, most of the time these stars move in opposite directions relative to us: one of the stars is moving away from us while the other moves toward us. This is an elementary consequence of their rotation around one another.

Here is the premise, according to Newtonian physics. If a photon is emitted towards us from a star moving towards us as well, then such a photon should move faster, because the star’s velocity and that of a photon would combine. Similarly, if a photon is emitted towards us when a star is moving away from us, then this photon should move slower relative to us.

This sounds reasonable, except that no such effect was observed.

This was one of the important tests of Relativity that contributed to its acceptance. In Information Physics, this effect is predicted and explained without postulating the constancy of the light speed.

Note that Relativity’s explanation simply says that it is so, by postulating that the speed of light is constant for all observers.

Let’s consider how a photon moves through space. Suppose it leaves one large body, then passes through space near other large bodies, and so on. A photon always attains the maximum speed allowed by how much information there is in the space through which it moves.

For example, if a photon moves from planet A to planet B to planet C, then a photon’s maximum speed would be determined by how much information there is in any given point in space. In this case the information would come from all of the planets (A, B and C combined) and the maximum speed would differ depending on a locale. As the total amount of information, coming from planets, changes during a photon’s movement, so would a photon’s maximum speed relative to each.

The set of planetary masses in our example that affect a photon is called the constraint group. The name is self-explanatory because the constraint group is a set of masses that constrain the maximum possible speed in any given point in space. We already discussed the notion of constraint group.

Let’s consider how Information Physics explains the de Sitter effect. Imagine a photon being thrown off a star. The photon will move at the maximum possible speed relative to that star. This speed (relative to us), however, will be less or greater than 300,000 km/s, because the stars are moving away from or toward us.

However, once the photons are sufficiently far away from the stars, and that happens fairly quickly given how fast they are moving, the speed of each photon will change. This speed is now determined not just by the information influence of the two stars (which by this time appear equidistant) but also by other masses it passes by, i.e. it is relative to its constraint group.

In deep space this constraint group is not only the stars left behind but also the mass of the Galaxy through which the photons move. So, very quickly, the two photons that started with different speeds will start moving at the same maximum speed because in deep space they will have the same constraint group. This is because the original stars are now far away and at approximately the same distance from both photons.

The photons will continue moving at the same speed for a long time, and the initial difference in speed becomes by far negligible by the time these photons reach Earth. This is why the photons from the two stars will appear to have moved at the same speed.

The photons emitted from distant stars do indeed start off with different speeds (relative to us) for a short period of time. However, those speeds soon become equal. Because the photons spend a very long period of time moving with equal speed, they would appear, as far as our instruments can tell, to have moved at the same speed.

[][] The Proof: Beyond Michelson-Morley

[*Pivotal moment: the Michelson-Morley experiment *]

The stalwart pillar of Special Relativity is the Michelson-Morley experiment. Performed in 1887 by the two scientists it’s named after, it has been repeated many times since, each time with greater precision, to the point where its accuracy is now beyond reproach. It is considered a perfect confirmation of Einstein’s work. We show in Information Physics that its applicability is limited by the fortunate selection of the locale where it is performed.

The idea for this experiment came from trying to prove the long-held belief in ether. Ether was thought to be the substance that fills the empty space in which light waves propagate. So when Earth moves through it, the light passing through ether would travel with different speeds in different directions, relative to Earth.

The result, however, was that speed of light was the same in all directions (relative to Earth at least, but that part was unfortunately neglected!). It is ironic that Michelson and Morley tried to prove the existence of ether and they ended up disproving it. This is the crucial moment in history.

*From jumping to conclusions to suspect conclusions *

In 1905 Einstein came along with Special Relativity. What Einstein proposed was to postulate that the speed of light is equal for all observers.

In case of the Michelson-Morley experiment it would mean that no matter the direction of movement, the speed of light must be the same.

This can be very confusing. The physicists were trying to figure out why the speed of light is the same in all directions. The emphasis is on the question of why. Einstein said that the constancy of light speed is something that we should not explain, but rather accept as a fact (i.e. call it a postulate).

If you want to understand why this proposal was eventually accepted, you have to consider it in its historical context. The notion of light comes from Maxwell’s theory of electromagnetic waves. The mathematical formulas of Maxwell describe the behavior of light, much like Newton’s mathematical formulas describe the behavior of other objects. Now, Newton’s laws work regardless of whether you are stationary or moving with some uniform speed. Einstein added the concept of light to this, by saying that light is truly fundamental, and as such, it should travel at the same speed in all inertial frames of reference. This sounded good and solid to many physicists, even if they didn’t, and still don’t, have an inkling as to why.

*Taking a different path *

Information Physics offers a different take on this story. We show that there exists a speed limit, and that near massive bodies, it is the speed of light. Away from massive bodies, the maximum speed varies. The reason is that the amount of information to process varies with distance.

When on Earth, the maximum speed is 300,000 km/s relative to Earth. The important part is “relative to Earth”. This is because Earth is a dominant information influence on any object near it. We already talked about the constraint group and the fact that Earth is practically a sole member of the constraint group anywhere near Earth. This is why light moves at the same 300,000 km/s relative to Earth, in whatever direction. This is the explanation for the Michelson-Morley experiment in Information Physics.

It could be that Einstein took the fact that light moves at the same speed in all directions in the stationary setup on Earth, as a reason to believe it would behave that way [_in general. _]This may not be so.

*Experiments to prove Information Physics is right *

A suitable experiment is to send a probe into deep space with a clock on board. The probe should take a path out of the Solar system as far away from large masses as possible. Its speed should become substantial, so that kinematic time dilation can be measured. The path and distance need to be such that, according to equations of Information Physics, the kinematic time dilation will decrease enough to be measured. Upon returning to Earth, the probe should show less time dilation than anticipated by Einstein’s Relativity.

If so, then Information Physics is right and acceleration past the speed of light is possible away from large masses (i.e. in deep space). Because this is the simplest and most direct experiment, it is probably the best.

Another way to test Information Physics is to have measuring equipment in space, near Earth, but at rest relative to the Sun. Every time Earth passes by, the speed of light will be about 300,000 km/s (mostly) relative to Earth. In all other cases, it will be 300,000 km/s (mostly) relative to the Sun. When Earth is the dominant mass, it will constitute most of the constraint group. When the Sun is the dominant mass, it will be the constraint group. The maximum local speed is always determined by the local constraint group.

By accounting for time dilation and possible length contraction (depending on the setup), the speed of light can be shown to be relative to a constraint group, and not relative to every inertial observer. This kind of experiment, though, would be fraught with possibly significant errors, so the first method is preferable.

[][] FTL (Faster Than Light) Motion

In Information Physics, one of the consequences predicted by the theory is the possibility of FTL, or Faster-Than-Light, motion under certain circumstances. Most of the consequences of Information Physics are by far the same as those predicted by Special and General Relativity. Some are different, though.

In Relativity, practical Faster-Than-Light travel cannot be achieved. This is a direct consequence of Einstein’s postulates.

In Information Physics, the speed beyond that of light can be achieved away from large masses. This is a consequence of the informational approach.

If other large objects in the Solar system were ignored, then for an object with a mass of 60,000 kg (m~0~=6×10^4^ kg), assuming as an approximation the mass of Earth to be m~E~=6×10^24^ kg, at a distance of 10 million km (D=10^10^ m) the information influence of Earth at the object is, as we have shown:
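Evaluated with these numbers (the detailed expression is given in the paper further in this book), the value used in the discussion below is approximately:

f ≈ ½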

This means that when the object moves relative to Earth, it will have to process half as much information from Earth as it would if it were on Earth. This is the meaning of information influence. On Earth, the information influence on this object is 1. At a distance of 10 million kilometers, it is ½. Farther still, it will be smaller, until it becomes negligible.

In Information Physics, the central equation describing the rate of physical processes is:
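One consistent way to write it, with the symbols explained just below:

dt~1~/dt~2~ = √(1 − V~1~^2^/c^2^) / √(1 − V~2~^2^/c^2^)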

Here, V~1~ is the information speed of an object at the moment of a small physical-time interval dt~1~ and V~2~ is the information speed at the moment of a small physical-time interval dt~2~. Recall that physical-time is time as measured by a clock. The proof of the above equation is in the paper further in this book.

This looks similar to Einstein’s relativistic equation regarding time dilation, and indeed it reduces to it in many cases. But this has nothing to do with Relativity at all. In the paper, you will find that the “information speed” is a generalization of the spatial speed, the kind of speed we’re all familiar with. It includes the spatial speeds relative to other objects in the vicinity, as well as their masses and distances.

In Relativity, we speak of relative speeds between two objects. In Information Physics, we speak of speeds relative to all other objects in the Universe. We account for speeds relative to all objects, and each relative speed has a weighted factor assigned to it. This weighted factor is information influence, and it depends on how close and how massive the other object is. This is the meaning of information speed and information velocity (as a vector). Interestingly, information speed is shown to be limited by the speed of light. Spatial speed, however, isn’t. More on this is in the paper further in the book.

For example, on Earth, where its information-influence on an object is 1, the information speed practically reduces to the spatial speed relative to Earth. When an object is far from Earth, as in our previous example where the information-influence was ½ at a distance of 10 million kilometers, the information speed is one-half of the spatial speed. Farther still, the information speed declines to practically zero.

In general with two isolated masses, information speed declines with distance, as in the following illustration diagram:

To apply this to the above equation, this means the information speed would eventually reach zero, thus the above equation becomes:
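With the information speeds going to zero:

dt~1~/dt~2~ → 1, i.e. dt~1~ = dt~2~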

This means the effect known as kinematic time dilation vanishes. Similarly, the mass increase would vanish too. This means that in Information Physics superluminal motion far from large masses is possible and is not subject to the asymptotic slowing of clocks and infinite increase in mass. In short, practical interstellar space travel is possible and does not require warping of space. It merely requires a better acceleration mechanism.

In Information Physics, collisions at superluminal speed are always resolved to the speed of light. For example, two objects moving towards each other at superluminal speed would change their speed so that at the moment of collision their relative speed is never superluminal. A smaller mass would change its speed more significantly than a larger mass. This is discussed in the formal theory in a more precise manner. I only mention it here to suggest that seemingly many questions arising out of the comparison of Information Physics and the current Relativistic view of the world can be addressed.

A slew of questions could be posed about FTL motion. For example, how would the problem of collisions be solved? Even a grain of sand, when hit at speeds near 300,000 km/s, would normally release a lot of energy. To address this issue, artificial gravity, which is discussed in the following chapter, could theoretically be used both as a propulsion mechanism and as a collision avoidance system.

[][] Artificial Gravity

Before General Relativity, gravity was simply given a mathematical expression for its force, in the form of Newton’s law of gravitation. In General Relativity, gravity is thought to be a curvature of space-time and not a force.

In Information Physics, gravity is a result of the preservation of resources. Particles move to a location in space where the expenditure of information resources is lower. Because of the randomizing effect, which we explained earlier in the book, a particle causes the information throughput to be lower near it. For that reason, all physical processes slow down, the more so the closer they are to the particle. This means the use of information resources declines in the same fashion. As a result, particles move toward each other, in order to permanently reduce the use of their resources.

You can think of an analogy with paying bills. Imagine you live in a place where you have to pay bills every month (bear with me as I understand imagination is not required for that!). Then you hear about another place where you can get the same service but you pay the same bills every other month. All other things being equal, you will move in the direction of the other place because the pace of paying bills is slower there.

The interesting aspect of gravity in Information Physics is that it fundamentally arises as a reaction to a change in information throughput. However, a change in information throughput can also be produced by relative motion.

In Information Physics, gravitational and kinematic time dilation are a single phenomenon, caused by a change in information throughput. Information Physics says that gravity is produced by time dilation, not by the presence of mass per se. Hence the conclusion that relative motion can create gravity, since it produces time dilation as well. In terms of what causes what, this is the opposite of contemporary physics; in terms of experimental results, both views are correct with respect to the experiments performed so far. We have already proposed an experiment to prove Information Physics correct.

This means that gravity can be created by using relative motion as a tool. We have already discussed how kinematic time dilation declines with distance and with the masses involved. A setup could be engineered in which the decline of information throughput is steep; an object would then move along this gradient, toward lower throughput.
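A minimal sketch of such gradient-following motion, assuming an arbitrary scalar “throughput” field of my own choosing (the actual dynamics are a matter for the formal theory):

# Sketch only: "throughput" is an arbitrary scalar field of position,
# and the update rule is a plain numerical gradient step toward lower
# throughput, mimicking the "falling" described in the text.

def throughput_gradient(throughput, pos, eps=1e-3):
    """Central-difference gradient of throughput(x, y, z) at pos."""
    x, y, z = pos
    return (
        (throughput(x + eps, y, z) - throughput(x - eps, y, z)) / (2 * eps),
        (throughput(x, y + eps, z) - throughput(x, y - eps, z)) / (2 * eps),
        (throughput(x, y, z + eps) - throughput(x, y, z - eps)) / (2 * eps),
    )

def step_toward_lower_throughput(throughput, pos, step=0.01):
    """Take a small step in the direction of decreasing throughput."""
    gx, gy, gz = throughput_gradient(throughput, pos)
    x, y, z = pos
    return (x - step * gx, y - step * gy, z - step * gz)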

The simplest form of such motion would be rotational motion, since it is self-sustaining in a confined region of space. The motion does not necessarily have to be rotation around a fixed point in space; it could also be rotation on a smaller scale, such as that induced by electromagnetic fields. The only requirement would be a much greater degree of rotation of matter, in whatever shape or form. In that case, the resulting directional change in information throughput would be exactly the same as gravity.

The acceleration produced could allow an object to effectively float, or to accelerate in a chosen direction. Needless to say, this is entirely different from using centripetal force to simulate gravity in an enclosed area, or from levitation using magnetic fields. What is described here is a way to produce gravity, fundamentally the same gravity that exists on Earth, but in an arbitrary direction away from a departure point.

The implication is that in deep space we could make an object accelerate as if it were falling towards a large body, except that no such body is present. We could also avoid collisions by having debris move in a desired direction.

This suggests that the better method of interstellar acceleration would be pull-based artificial gravity rather than push-based combustion. It offers a theoretical method of rapid acceleration without inertial consequences (i.e. without bones being crushed against the back wall).

Given the information influence of Earth as a massive body, any such technology would be more effective away from Earth or other massive bodies. Since, as noted earlier, a large mass could accelerate to speeds higher than 300,000 km/s away from massive bodies, a technology based on the principles of Information Physics would allow a large object to achieve practical interstellar flight, in spite of what Relativity predicts and without the detrimental side effects it implies, such as an unbounded increase in mass or the slowing down of time.

For the full paper, go to http://spacetravelscience.com/


On the Interstellar Travel


  • ISBN: 9781311362766
  • Author: Sergio Michelson
  • Published: 2016-05-03 07:50:12
  • Words: 19732