

FACTS AND SPECULATIONS OF SCIENCE

by Manjunath.R

 

Copyright Manjunath. R 2015

This free ebook may be copied, distributed, reposted, reprinted and shared, provided it appears in its entirety without alteration, and the reader is not charged to access it.

If you wish to contact the author you can send e-mail to:

[email protected]

Web addresses where you can find my work:

http://www.Shakespir.com/profile/view/manjunath

 

“There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.”

Lord Kelvin

 

I

 

Popular opinion − which has followed science on a journey from the time when Aristotle and the world of that era believed that Earth was the center of the universe, supported on the back of a giant tortoise, to our contemporary age when we know better − regards the resulting body of knowledge as finished truth. It is nothing of the kind. Science has weighty limitations; it is a journey, not a destination, and the advance of knowledge is an endless progression towards a goal that forever recedes. Yet it is our main instrument for understanding − a means of accepting what we have learned, challenging what we think, and acknowledging that some of the things we believe may yet have to be modified or changed.

 

II

After sleeping through a hundred million years in wisps, ashes and smoke, we − the rational beings shaped by Darwin's principle of natural selection − have finally opened our eyes on a cooled cinder, sparkling with color and bountiful with life, reciting an African creation myth (that in the beginning there was only darkness, water, and the great god Bumba; one day Bumba, in pain from a stomach ache, vomited up the sun; the sun dried up some of the water, leaving land; still in pain, Bumba vomited up the moon, the stars, and then some animals − the reptiles, the mammals, and ultimately the human race) and rapidly moving on to big questions such as this: if the big bang was perfectly symmetrical, then we should expect equal amounts of matter and antimatter to have formed. In other words, if matter and antimatter can be made or destroyed only in matching amounts, and the laws of physics are exactly the same for both, how can it be that the universe contains so much matter but so little antimatter? Why do we now see only matter, except for the tiny amounts of antimatter that we make in the lab and observe in cosmic rays? Is it that the original big bang was not perfectly symmetrical after all?

We humans, a curious species, are given to inquiry. The question is not "do we know everything?" nor "do we know enough?" but "how well do we know the things we think we know?" For many people this might sound like a startling claim, but scientific knowledge is often transitory: some of it (though not all) is unquestionably fraught with misinterpretation. This is not a weakness but a strength, for it drives a better understanding of the events around us and of our own existence. All that we can really say is how far we are from the truth − the reciprocal of our uncertainty. Certainty itself is far more elusive than it seems, even if we begin by thinking it pretty elusive in the first place. Moreover, the very expression "certainly proven" is a contradiction in terms: nothing is certainly proven. The deep core of science is the deep awareness that we have wrong ideas and misinterpretations. And the remarkable fact is that we human beings − ourselves mere collections of fundamental particles arranged in a truly elegant fashion, still facing the question "What is truth?", or rather "Who is truth?" − have been able to live with doubt and uncertainty. We think it is much more interesting to live not knowing than to have answers that might be wrong.

Ever since the beginning of human civilization, we have not been content to watch events as incoherent and unexplainable. While we have been pondering whether the universe began at the big bang singularity and will come to an end at a big crunch singularity, we have converted at least a thousand joules of energy into thoughts. This has decreased the disorder of the human brain by a few million units. Thus, in a sense, the evolution of human civilization in understanding the universe has established a small corner of order in the human brain. However, the burning questions remain unresolved, and they tempt the human race to shy away from such issues. Many early naive postulates have fallen, or are falling, aside − and there are now alternative substitutes. In short, while we do not have an answer, we now have a whisper of the grandeur of the problem. With our limited brains and tiny store of knowledge, we cannot hope, by unlimited speculation, to form a complete picture of the gigantic universe we live in. For lack of other theories, we cling to theories like the big bang, which posits that at the beginning of cosmic evolution all the observable galaxies and every speck of energy in the universe were jammed into a very tiny, mathematically indefinable entity called the singularity (or the primeval atom, as it was named by the Catholic priest Georges Lemaitre, who was the first to investigate the origin of the universe that we now call the big bang). This extremely dense point exploded with unimaginable force, creating matter and propelling it outward to make the billions of galaxies of our vast universe. It seems a fair conclusion that the prediction of a mathematically indefinable entity by a scientific theory implies that the theory has broken down: it would mean that the usual approach of science, that of building a mathematical model, could predict that the universe must have had a beginning, but could not predict how it began. Between the 1920s and 1940s there were several attempts, most notably by the British physicist Sir Fred Hoyle and his co-workers Hermann Bondi and Thomas Gold, to avoid the cosmic singularity with an elegant model in which, as the universe expanded, new matter was continually created so as to keep the density constant on average. On this view the universe did not have a beginning and would continue to exist eternally, much as it is today. The idea was initially given serious weight, but a mountain of inconsistencies began to appear in the mid-1960s, when observational discoveries produced evidence against it. Hoyle and his supporters put forward increasingly contrived explanations of the observations, but the final blow came with the discovery of a faint background of microwaves throughout space by Penzias and Wilson in 1965 − the final nail in the coffin of the steady state theory: the discovery and confirmation of the cosmic microwave background radiation secured the Big Bang as the best theory of the origin and evolution of the universe. Though Hoyle and Narlikar tried desperately to save it, the steady state theory was abandoned. With many bizarre twists and turns, superstrings − a generalized extension of string theory which predicts that all matter consists of tiny vibrating strings, and that there is a precise number of dimensions: ten (the usual three dimensions of space − length, width and height − and one of time, extended by six more spatial dimensions) − blinked into existence.
The best candidate we have at the moment is superstrings, but no one has seen a superstring, the theory has not been found to agree with experiment, and there is no direct evidence that it is the correct description of the universe. Are there only four dimensions, or could there be more: (x, y, z, t) + w, v, …? Can we experimentally observe evidence of higher dimensions? What are their shapes and sizes? Are they classical or quantum? Are dimensions a fundamental property of the universe or an emergent outcome of chaos produced by the mere laws of nature? And if they exist, could they provide the key that unlocks the deepest secrets of nature and of creation itself? We humans look around and see only four dimensions − three of space and one of time; to say that space has three dimensions means that it takes three numbers (length, breadth and height) to specify a point, and adding time to our description turns space into four-dimensional space-time. Why four? Where are the other dimensions? Are they rolled up into a space of very small size, something like a million million million million millionth of an inch − so small that not even our most powerful instruments can probe them? Up until now we have found no evidence of signatures of extra dimensions; but no evidence does not mean that extra dimensions do not exist. That we may live in more dimensions than we see is a striking prediction of theoretical physics, and also something almost futile even to try to imagine.

For n spatial dimensions, the gravitational force between two massive bodies is F~G~ = GMm / r^n−1^, where G is the gravitational constant, M and m are the masses of the two bodies and r is the distance between them; and the electrostatic force between two charges is F~E~ = Qq / (4πε~0~ r^n−1^), where ε~0~ is the absolute permittivity of free space, Q and q are the charges and r is again the distance between them. What do we notice about both of these forces? Both are proportional to 1/r^n−1^. So in a four-dimensional universe (three spatial dimensions plus one time dimension) the forces are proportional to 1/r^2^; in a ten-dimensional universe (nine spatial dimensions plus one time dimension) they are proportional to 1/r^8^. Not surprisingly, no experiment at present is sensitive enough to settle whether the universe exists in ten dimensions or more − that is, to prove or disprove that these forces fall off as 1/r^8^ or even faster. Mathematically we can imagine any number of spatial dimensions, but the possibility that they might be realized in nature is a profound thing. So far, we presume that the universe has extra dimensions only because the mathematics of superstrings requires ten distinct dimensions, or because a standard four-dimensional theory is too small to jam all the forces into one mathematical framework. What we know about the spatial dimensions we live in is limited by our own abilities to think them through, and of the many approaches we can develop, the most satisfying are scientific. The best-known and most widely believed theory at present is the standard four-dimensional one; yet theories develop and change, and many questions about the universe we live in remain open. If space were two-dimensional, the force of gravitation between two bodies would be GMm/r − far greater, at large separations, than its present value − and the rate of emission of gravitational radiation would have been high enough to cause the Earth to spiral onto the Sun long before the Sun could become a black hole and swallow the Earth. If space were one-dimensional, the force of gravitation between two bodies would be GMm, independent of the distance between them. Such a world, by the selection principle that we live in a region of the universe suitable for intelligent life − the anthropic principle, introduced by Carter in 1974 − would not have allowed for the development of complicated beings like us. The universe would have been vastly different from the way it is now and, no doubt, life as we know it would not have existed. And if there were more than three spatial dimensions, the gravitational force between two bodies would decrease more rapidly with distance than it does in three dimensions. (In three dimensions, the gravitational force drops to 1/4 if one doubles the distance; in four dimensions it would drop to 1/8, and in five dimensions to 1/16, and so on.) The significance of this is that the orbits of planets like the Earth around the Sun would be too unstable to allow for the existence of any form of life, and there would have been no intelligent beings to observe the effects of the extra dimensions.
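To make the scaling concrete, here is a minimal Python sketch of our own (not part of the original text) showing how an inverse-power force of the form F = GMm / r^n−1^ falls off when the distance is doubled for different numbers of spatial dimensions n; the function name and the use of the ordinary three-dimensional value of G are assumptions made only for the illustration.

```python
# A minimal sketch (illustrative, not from the text) of how a force obeying
# F = G*M*m / r**(n-1) weakens with distance in n spatial dimensions.
G = 6.674e-11   # gravitational constant in SI units (strictly valid for n = 3)

def gravitational_force(M, m, r, n=3):
    """Force between masses M and m at separation r in n spatial dimensions."""
    return G * M * m / r ** (n - 1)

# How much the force falls when the distance is doubled, for n = 3, 4, 5:
for n in (3, 4, 5):
    ratio = gravitational_force(1.0, 1.0, 2.0, n) / gravitational_force(1.0, 1.0, 1.0, n)
    print(f"n = {n}: doubling the distance leaves 1/{round(1 / ratio)} of the force")
# Prints 1/4, 1/8 and 1/16, matching the discussion above.
```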

Although the proponents of string theory claim that absolutely everything is built out of strings (described as patterns of vibration that have length but no height or width − like infinitely thin pieces of string), the theory cannot tell us what the strings themselves are made of. And one model of potential multiple universes, M-theory − which has eleven dimensions, ten of space and one of time, and which is thought of as an explanation of the laws governing our universe; it is currently the only viable candidate for a "theory of everything", the unified theory that Einstein was looking for, and if confirmed it would represent the ultimate triumph of human reason − predicts that our universe is not the only giant hologram. Like bubbles of steam forming in boiling water, a great many holograms of every possible shape and inner dimensionality were created, starting off in every possible way, simply because of an uncaused accident called spontaneous creation. Our universe was one among zillions of holograms that simply happened to have the right properties − with particular values of the physical constants right for stars, galaxies and planetary systems to form and for intelligent beings to emerge through random physical processes, develop, and ask questions: Who or what governs the laws and constants of physics? Are such laws the product of chance, a mere cosmic accident, or have they been designed? How do the laws and constants of physics relate to the support and development of life forms? Is there any knowable existence beyond the apparently observed dimensions of our own? However, M-theory sounds so bizarre and unrealistic that there is as yet no experiment that can test its validity, and nature has not been quick to give us any hints. That is the fact of it: group together everything we know about the world and ourselves, and it is still nothing more than a tiny dip in the vast cosmic ocean.

And as more space comes into existence, more dark energy (an invisible and unexpected cosmological ingredient that hides in empty space and works against the slowing of the universe's expansion) appears along with it. Unfortunately, no one knows exactly what it is. Is it a pure cosmological constant (an arbitrary parameter from general relativity, taken to be zero for most of the twentieth century for the simple and adequate reason that this value was consistent with the data), or is it a sign of extra dimensions? What is the cause of the dark energy? Why does it exist at all? Why is it so different from the other forms of energy? Why does dark energy make up so large a share of the universe (about 73 per cent, while we ourselves make up only about 0.03 per cent)? String theory gives us a clue, but there is no definitive answer. All we know is that it acts as a sort of cosmic accelerator pedal, an invisible energy of the kind that made the universe bang; and if we held it in our hand, we could not take hold of it. It would pass right through our fingers, right through the rock beneath our feet, all the way to the Moon; it would then reverse direction and come back from the Moon all the way to Earth, and go back and forth. How near are we to understanding dark energy? The question lingers; the answer complicates and challenges everyone who yearns to resolve it. And once we understand dark energy, can we then also understand the birth and the death of the universe?

The entire universe is getting more disordered and chaotic with time. This observation has been elevated to the status of a law, the so-called second law of thermodynamics (given its statistical formulation by the Austrian physicist Ludwig Boltzmann): the total amount of disorder in the universe, measured by a quantity called entropy, always increases with time, and there is nothing we can do about it. No matter how advanced the conditions for generating thoughts that predict things more or less accurately, even if not in the simplest way, we can never squash the impending threat of the second law of thermodynamics, nor does it bring us any closer to the answer of why the entropy was ever low in the first place.

It has been an endeavor of science to find a single theory which could explain everything, in which every partial theory we have learned so far (in school) appears as a special case of the one cogent theory under particular circumstances. Even for a skeptic of mysteries, the unified field theory presents a seemingly endless problem, and this is embarrassing: we now realize that before we can arrive at a theory of everything, we have to find the ultimate laws of nature, and at present we are clueless as to what those ultimate laws really are. Are there new laws beyond the apparently observed dimensions of our universe? Do all the fundamental laws of nature unify? At what scale? It is likely that answers to these questions, in the form of a unified field theory, may be found over the next few years; or by the end of the century we shall know whether there can really be a complete unified theory that would presumably solve our problems − or whether we are just chasing a mirage. Is the ultimate unified theory so compelling that it brings about its own existence? If we − puny and insignificant on the scale of the cosmos − do discover a unified field theory, it should in time be understandable in broad principle by everyone, not just a few people. Then we shall all be able to take part in the discussion of the questions: How and when did the universe begin? Was the universe created? Has this universe been here forever, or did it have a beginning at the Big Bang? If the universe was not created, how did it get here? If the Big Bang is the reason there is something rather than nothing − if before the Big Bang there was NOTHING and then suddenly we got A HUGE AMOUNT OF ENERGY − where did it come from? What powered the Big Bang? What is the fate of the universe? Is the universe heading towards a Big Freeze, a Big Rip, a Big Crunch, or a Big Bounce? Or is it part of an infinitely recurring cyclic model? Is inflation a law of nature? Why did the universe start off very hot and cool as it expanded? Is the standard Big Bang model right? Or is it merely the most satisfactory explanation of the evidence we have, and therefore merits our provisional acceptance? Is our universe finite or infinite in size and content? What lies beyond the existing space and time? What was before the event of creation? Why is the universe so uniform on a large scale (even though the uncertainty principle implies that it cannot be completely uniform, because there are always some uncertainties or fluctuations in the positions and velocities of particles)? Why does it look the same at all points of space and in all directions? In particular, why is the temperature of the cosmic microwave background radiation so nearly the same when we look in different directions? Why are the galaxies distributed in clumps and filaments? When were the first stars formed, and what were they like? Why is most of the matter in the universe dark? Is the anthropic principle a natural coincidence? If we find the answers to these questions, it would be the ultimate triumph of human reason − we might hold the key to illuminating the eternal conundrum of why we exist. It would bring to an end a long and glorious chapter in the history of mankind's intellectual struggle to understand the universe. For then we would know whether or not the laws of physics started off the universe in such an incomprehensible way.

Even now we do not know the exact mechanism by which the implosion of a dying star becomes a particular kind of explosion called a supernova. All that we know is this: when a massive star runs out of nuclear fuel, gravitational contraction keeps increasing the density of matter. Since the internal pressure rises with the density, the internal pressure continually increases as the star contracts. At a certain point of the contraction, the internal pressure greatly exceeds the gravitational binding pressure and becomes high enough to cause the star, of mass M and radius r, to explode, releasing its energy over a very short time and spraying the manufactured elements into space. This material is flung back into the gas of the galaxy and provides some of the raw material for the next generation of stars, and for bodies that now orbit the Sun as planets, like the Earth. The total energy released − roughly the total energy of the star minus its gravitational binding energy, of the order of 10^42^ joules − would briefly outshine all the other stars in the galaxy, approaching the luminosity of a whole galaxy.
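For a rough sense of the energies involved, the following Python sketch (our own, with illustrative numbers only) estimates the gravitational binding energy of a uniform-density star, E ≈ 3GM^2^/(5R), for an assumed ten-solar-mass star about five solar radii across; the result comes out at a few times 10^42^ joules, the order of magnitude quoted above.

```python
# Rough, illustrative estimate (not from the text) of the gravitational binding
# energy of a uniform-density star, E ~ 3*G*M**2 / (5*R).
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def binding_energy(M, R):
    """Gravitational binding energy (in joules) of a uniform-density sphere."""
    return 3 * G * M ** 2 / (5 * R)

# Assumed example: a 10-solar-mass star with a radius of about 5 solar radii.
E = binding_energy(10 * M_SUN, 5 * R_SUN)
print(f"Binding energy ~ {E:.1e} J")   # a few times 10^42 J
```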

Why are there atoms, molecules, solar systems, and galaxies?

What powered them into existence?

How accurate are the physical laws and equations that control them?

The answers have always seemed well beyond the reach of Dr. Science since the dawn of humanity − until now. The questions still linger in the minds of many scientists today, who do not spend most of their time worrying about them, but do worry about them some of the time. All that science could say was this: the universe is as it is now. It could not explain why it was as it was just after the Big Bang. That would be a disaster for science: it would mean that science alone could not predict how the universe began. Every attempt is made to connect theoretical predictions with experimental results, yet some experimental results throw cold water on the theoretical predictions. Back in the 1700s, people thought that the stars of our galaxy made up the whole universe, that the galaxy was nearly static, and that the universe was essentially unexpanding, with neither a beginning nor an end to time. One difficulty with the idea of a static and unchanging universe was that, according to the Newtonian theory of gravitation, each star in the universe ought to be pulled towards every other star, with a force that was weaker the less massive the stars and the farther apart they were. This force should cause all the stars to fall together at some point. So how could they remain static? Wouldn't they all collapse in on themselves? Something was required to balance the predominant attractive effect of the stars and keep them at a constant distance from each other. Einstein was aware of this problem. He introduced a term, the so-called cosmological constant, in order to hold static a universe in which gravity is a predominantly attractive force. It had the effect of a repulsive force, which could balance the attraction, and in this way a static cosmic solution became possible. Enter the American astronomer Edwin Hubble. In the 1920s he began to make observations with the hundred-inch telescope on Mount Wilson, and he found that stars were not uniformly distributed throughout space but were gathered together in vast collections called galaxies, and that nearly all the galaxies were moving away from us, with recessional velocities roughly proportional to their distance from us. He reinforced this argument with the formulation of his well-known Hubble's law. The observational discovery that space itself is stretching, carrying the galaxies with it, completely shattered the previous image of a static and unchanging cosmos; the motivation for adding a term to the equations disappeared, and Einstein later rejected the cosmological constant as his greatest mistake.
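As a small worked example of the relation Hubble found (our own sketch, assuming a round modern value of the Hubble constant of about 70 km/s per megaparsec), the recession velocity simply scales with distance, v = H~0~D:

```python
# Minimal sketch (illustrative, not from the text) of Hubble's law, v = H0 * D.
H0 = 70.0   # assumed Hubble constant, km/s per megaparsec (a round modern value)

def recession_velocity(distance_mpc):
    """Recession velocity in km/s of a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

for d in (10, 100, 1000):                      # distances in megaparsecs
    print(f"D = {d:4d} Mpc  ->  v ~ {recession_velocity(d):7.0f} km/s")
```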

We story-telling animals often claim that we now know so much more about the universe. But we must beware of overconfidence; we have had false dawns before. It was once thought, for example, that the Earth was a perfect sphere, but later measurements of the variation of g over the Earth's surface confirmed that it is not. Today there is almost universal agreement that space itself is stretching, carrying the galaxies with it, though whether it will continue to stretch forever is still in question. Personally, we are sure that the expansion began with a hot Big Bang. But will the universe expand forever, or is there a limit beyond which gravity pulls everything back in, or are expansion and contraction evenly balanced? We are less sure about that, because events cannot be predicted with complete accuracy; there is always a degree of uncertainty.

The Standard Model picture of the forces of nature is in good agreement with all the observational evidence we have today. Nevertheless, it leaves a number of important questions unanswered, rather like the unanswered questions in The Hitchhiker's Guide to the Galaxy: Why are the strengths of the fundamental forces (electromagnetism, the weak and strong forces, and gravity) what they are? Why do the force particles have the precise masses they do? Do these forces really become unified at sufficiently high energy? If so, how? Are there unobserved fundamental forces that explain other unsolved problems in physics? Why is gravity so weak − perhaps because of hidden extra dimensions? Very likely we are missing something important, something that may one day seem as obvious to us as the Earth orbiting the Sun − or perhaps as ridiculous as a tower of tortoises. Only time (whatever that may be) will tell.

The theory of evolution lined up pictures of apes and humans and claimed that humans descended from apes (the chimpanzee and the human share about 99.5 per cent of their evolutionary history). This spilled out of the corridors of the academy and absolutely rocked Victorian England, where voices contradicting the biblical account of creation could barely be raised in the lecture hall. And despite more than a century of digging down through the fossil layers, the fossil record remains maddeningly sparse and, it is argued, provides no clear evidence of the evolutionary transition of one species into another. However, we are convinced that the theory of evolution − especially the extent to which it has been believed with blind faith − may turn out to be one of the great fairy tales for adults in the history books of the future. Like raisins in expanding dough, galaxies that are farther apart increase their separation faster than nearer ones, and as a result the light emitted from distant galaxies and stars is shifted towards the red end of the spectrum. Observations of galaxies indicate that the universe is expanding: the distance D between almost any pair of galaxies increases at a rate V = HD, beautifully described by Hubble's law. The law is not without exceptions, however: Andromeda, for example, is moving towards us, and the Hubble relation does not apply to it. And quantum theory (the revolutionary theory of the last century, which clashed with everyday experience yet has proved enormously successful, passing with flying colors the many stringent laboratory tests to which it has been subjected for almost a hundred years) suggests, in some approaches to quantum gravity, that space is not continuous and infinitely divisible but quantized, measured in units of a quantity called the Planck length: space would be divided into cells of volume equal to the Planck length cubed, the smallest definable volume (the Planck volume), and of area equal to the Planck length squared, the smallest definable area (the Planck area), with time coming in units of the Planck time. Each cell would possess an energy equal to the Planck energy, and the energy density of each cell would be the Planck energy divided by the Planck volume. At present, however, there is no conclusive evidence in favor of the quantization of space and time, and nobody knows why no spatial or temporal interval shorter than the Planck values should exist.

For length: the Planck length ∼ 1.6 × 10^−33^ centimeters.

For time: the Planck time ∼ 5 × 10^−44^ seconds.
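The quoted values can be checked directly from the three fundamental constants ħ, G and c; the short Python sketch below (our own addition, not part of the original text) computes the Planck length, time and energy, and the enormous energy density obtained by dividing the Planck energy by the Planck volume mentioned above.

```python
# Minimal sketch (not from the text): Planck units from hbar, G and c.
import math

hbar = 1.0545718e-34    # reduced Planck constant, J s
G    = 6.674e-11        # gravitational constant, N m^2 / kg^2
c    = 2.99792458e8     # speed of light, m/s

l_p = math.sqrt(hbar * G / c ** 3)   # Planck length, m
t_p = math.sqrt(hbar * G / c ** 5)   # Planck time, s
m_p = math.sqrt(hbar * c / G)        # Planck mass, kg
E_p = m_p * c ** 2                   # Planck energy, J

print(f"Planck length         ~ {l_p * 100:.2e} cm")     # ~1.6e-33 cm
print(f"Planck time           ~ {t_p:.2e} s")            # ~5.4e-44 s
print(f"Planck energy         ~ {E_p:.2e} J")
print(f"Planck energy density ~ {E_p / l_p ** 3:.2e} J/m^3")
```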

On the other hand, there is no evidence against what the quantum model tells us about the true nature of reality; and in order to unify general relativity with the quantum physics that describes fundamental particles and forces, it is necessary to quantize space, and perhaps time as well. For a universe to be created out of nothing, the positive energy of motion should exactly cancel the negative energy of gravitational attraction, i.e., the net energy of the universe should be zero. If that is the case, the spatial curvature of the universe, Ω~k~, should be exactly 0.0000 (perfect flatness). But the Wilkinson Microwave Anisotropy Probe (WMAP) satellite has constrained the spatial curvature of the universe, Ω~k~, to lie between −0.0174 and +0.0051. How, then, can it cost nothing to create a universe; how can a whole universe be created from nothing? There is a claim that the sum of the energy of matter and the gravitational energy is equal to zero, and hence that a universe could appear from nothing: the universe can double the amount of positive matter energy and also double the negative gravitational energy without violating the conservation of energy. However, "energy of matter + gravitational energy = zero" is only a claim based on Big Bang implications. No human being can possibly know the precise energy content of the entire universe. To verify the claim that the total energy content of the universe is exactly zero, one would have to account for all the forms of energy of matter in the universe, add them to the gravitational energy, and then show that the sum really is exactly zero. That is not an easy task; we would need precision experiments to know for sure.
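As a crude, purely Newtonian illustration of the bookkeeping behind the "zero total energy" claim (a toy sketch of our own, not a real cosmological calculation), one can compare the positive rest energy Mc^2^ of a uniform sphere of matter with its negative gravitational self-energy −3GM^2^/(5R); for an assumed density near the critical density and a radius comparable to the observable universe, the two come out to be of the same order of magnitude.

```python
# Toy sketch (illustrative): positive rest energy versus (negative)
# Newtonian gravitational self-energy for a uniform sphere of matter.
import math

G   = 6.674e-11      # gravitational constant
c   = 2.998e8        # speed of light, m/s
rho = 9.0e-27        # assumed mean density, kg/m^3 (of order the critical density)
R   = 4.4e26         # assumed radius of the observable universe, m

M = rho * (4.0 / 3.0) * math.pi * R ** 3
E_matter  = M * c ** 2                    # positive rest energy
E_gravity = -3 * G * M ** 2 / (5 * R)     # negative gravitational self-energy

print(f"M               ~ {M:.1e} kg")
print(f"+M c^2          ~ {E_matter:.1e} J")
print(f"-3 G M^2 / 5 R  ~ {E_gravity:.1e} J")
# In this crude picture the two energies are of comparable magnitude, which is
# the sense in which the total could plausibly be close to zero.
```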

Gazing at the blazing celestial beauty of the night sky and asking a multitude of questions that have puzzled and intrigued humanity since our beginning − WE'VE DISCOVERED a lot about our celestial home; yet we still stand at a critical crossroads of knowledge, where the choice is between spirituality and science to uncover the hidden truth behind the early evolution of the universe. In order to throw light on the questions that have so long occupied the minds of philosophers − Where did we and the universe come from? Where are we and the universe going? What makes us and the universe exist? Why are we born? Why do we die? Did the universe have a beginning, and if so, why did it wait an infinite time before it began? What was before the beginning? − we must build a sound, balanced, effective and extremely imaginative body of knowledge that reaches beyond our present limits. Many theories have been put forth by scientists to look into the early evolution of the universe, but none has fully succeeded so far. And if, like me, you have gazed at a star and tried to make sense of what makes it shine the way it does − did it shine forever, or is there a limit beyond which it cannot or may not shine? − then you have also wondered where the matter that created it all came from. Did matter have a beginning in time? Or had matter existed forever, without a beginning? In other words, what cause made matter exist, and what made that cause exist? Some would claim the answer is that matter could have popped into existence 13.8 billion years ago as a result of just the eminent physical laws and constants being there. This might sound as though physicists are pulling your leg; in truth it is the product of years of anxious searching in the dark, with intense longing, alternations of confidence and exhaustion, and a final emergence into the light: because there is a law such as gravity, matter can and will create itself out of nothing. But how can matter come out of nothing? This apparently violates the conservation of matter. There is, however, a simple answer. Matter, of course, is what makes up a hot star, a sun, a planet − anything you can think of that occupies space. And if you divide matter up, what do you get? Tiny masses. Because E = mc^2^, each tiny mass locks up a tremendous amount of positive energy. And according to a model sometimes called the exchange theory of gravity, there is a continuous exchange of a massless particle of spin 2, called the graviton, between one mass and another; this results in an exchange force − gravity − which keeps them bound together. Now, if you add the sum total positive energy of the masses to the sum total negative energy of gravity, what do you get? Zero: the net energy of the matter is zero. And because the net energy of the matter is zero, the argument goes, matter can and will create itself from literally nothing. The thought that nothing must somehow have turned into something is interesting, significant, worth writing a note about, and certainly one of the possibilities. However, if this admittedly speculative hypothesis is correct, then the obvious question is: shouldn't we see at least some spontaneous creation of matter in our observable universe every now and then? No one has ever observed matter popping into existence.
This means that any "meta" or "hyper" laws of physics that would allow (even in principle) matter to pop into existence are completely outside our experience. The eminent laws of physics, as we know them, simply do not apply here. Invoking the laws of physics does not quite do the trick; the laws of physics are simply human-invented ingredients of the models we introduce to describe observations, and in a sense they remain fictitious until we find a reference frame in which they are observed. The question of the genesis of matter is clear, and deceptively simple. It is as old as the question of what was going on before the Big Bang. Usually we tell the story of matter by starting at the Big Bang and then talking about what happened after. The answer has always seemed well beyond the reach of science − until now. Over the decades there have been several attempts to explain the origin of matter, all of them proven wrong. One was the so-called Steady State theory. The idea was that, as the galaxies moved apart from each other, new galaxies would form in the spaces in between, from matter that was spontaneously being created, so that the matter density of the universe would continue to exist, forever, in more or less the same state as it is today. In a sense the disagreement was a credit to the model: every attempt was made to connect the theoretical predictions with experimental results, and the Steady State theory was disproved even with the limited observational evidence available. The theory was therefore abandoned, and the idea of the spontaneous creation of matter was doomed to fade away into mere shadows. As crazy as it might seem, the matter may nonetheless have come out of nothing. The meaning of nothing is somewhat ambiguous here: it might be pre-existing space and time, or it could be nothing at all. After all, no one was around when the matter began, so who can say what really happened? The best that we can do is work out the most imaginative − some would say vain and foolish − theories, backed up by as many lines of scientific observation of the universe as possible.

Cats are alive and dead at the same time. But some of the most incredible mysteries of the quantum realm get far less attention than Schrödinger's famous cat. Due to the fuzziness of quantum theory (which implies that the cosmos does not have just a single existence or history), and specifically Heisenberg's uncertainty principle, one can think of vacuum fluctuations as virtual matter-antimatter pairs that appear together at some moment, move apart, then come together, annihilate one another and revert back to energy. The spontaneous births and deaths of these so-called virtual matter-antimatter pairs, occurring everywhere and all the time, are evidence that mass and energy are interconvertible: they are two forms of the same thing. Suppose one argues that matter was the result of such a fluctuation. The next question is then what cause provided enough energy to make the virtual matter-antimatter pairs materialize in real space. And if we assume that some unknown cause tore the pair apart and boosted the separated virtual matter and antimatter into the materialized state, the question becomes what created that cause. In other words, what factor created that cause − and what created that factor? Or perhaps the cause, or the factor that created it, existed forever and did not need to be created. The argument leads to a never-ending chain that always leaves us short of the ultimate answer. Unfortunately, Dr. Science cannot answer these questions, and so the problem remains. However, the quantum origin and separation of matter − which still delights theoretical physicists but boggles the minds of mere mortals − is the subject of my thought: have the quantum laws found a genuinely convincing way to explain the existence of matter apart from divine intervention? If we find the answer to that, it would be the ultimate triumph of human reason, for then we would know the ultimate cause of matter. Over the decades we have been trying to understand how matter began, and we are also trying to understand all the other things that go along with it. This is very much the beginning of the story, and there could be surprises that no one has even thought of. Something eternal can neither be created nor destroyed. The first law of thermodynamics asserts that matter or energy can neither be created nor destroyed; it can only be converted from one form to another. The overwhelming experience of experimental science confirms this first law to be a fact. But if matter lies at the boundary of understanding, in that it neither starts nor ends, it would simply be. What place, then, for the evidence showing that we live in a finite expanding universe which has not existed forever, and that all matter was once squeezed into an infinitesimally small volume that erupted in the cataclysmic explosion we have come to call the Big Bang? What we believe about the origin of matter is not only sketchy but uncertain, and based purely on human perception. There is no reliable and genuine evidence to testify to how matter began or to what may have existed before it. The laws of physics tell us that matter had a beginning, but they do not tell us how it began. Mystery runs the universe from a hidden hole and corner, but one day it may wind up the clockwork with might and main. Physical science can explain what happened after the big bang, but it fails to explain what came before.
We know that matter can be created out of energy, and energy can be created out of matter. But this does not resolve the dilemma, because we must still ask where the original energy came from.

The electrostatic and gravitational forces, according to Coulomb's and Newton's laws, are both inverse-square forces, so if one takes the ratio of the forces, the distances cancel. For the electron and proton, the ratio of the forces is given by F~E~ / F~G~ = e^2^ / (4πε~0~GMm), where e is the elementary charge = 1.602 × 10^−19^ coulombs, G is the gravitational constant, ε~0~ is the absolute permittivity of free space = 8.85 × 10^−12^ F/m, M is the mass of the proton = 1.672 × 10^−27^ kg and m is the mass of the electron = 9.1 × 10^−31^ kg. Plugging in the values, we get F~E~ / F~G~ ≈ 10^39^, which means F~E~ is vastly greater than F~G~. So, it was argued by a German mathematician, theoretical physicist and philosopher (some say it was Hermann Weyl), if the gravitational force between the proton and electron were not much smaller than the electrostatic force between them, the hydrogen atom would have collapsed to a neutron long before there was a chance for stars to form and life to evolve; F~E~ > F~G~ must have been numerically fine-tuned for the existence of life. Taking F~E~ / F~G~ = 10^39^ as an example, in most physics literature we will find the statement that gravity is the weakest of all forces, many orders of magnitude weaker than electromagnetism. But that statement is not true always and in all cases. Note that the ratio F~E~ / F~G~ is not a universal constant; it is a number that depends on the particles used in the calculation. For two particles each of Planck mass and Planck charge, for example, the ratio of the forces is 1, i.e., F~E~ / F~G~ = 1. Moreover, when the relativistic variation of the electron's mass with velocity is taken into account, the ratio F~E~ / F~G~ becomes velocity dependent.
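The ratio quoted above can be reproduced with a few lines of Python (a check of our own, using standard values of the constants):

```python
# Minimal sketch (not from the text): the electron-proton force ratio
# F_E / F_G = e**2 / (4 * pi * eps0 * G * M * m).
import math

e    = 1.602e-19     # elementary charge, C
eps0 = 8.854e-12     # permittivity of free space, F/m
G    = 6.674e-11     # gravitational constant, N m^2 / kg^2
M_p  = 1.672e-27     # proton mass, kg
m_e  = 9.109e-31     # electron mass, kg

ratio = e ** 2 / (4 * math.pi * eps0 * G * M_p * m_e)
print(f"F_E / F_G ~ {ratio:.1e}")   # about 2 x 10^39
# The number depends on the particles chosen: for two hypothetical particles
# each carrying one Planck mass and one Planck charge the ratio would be 1.
```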

Does our universe exist inside a black hole of another universe? The question lingers, unanswered until now. Even if our universe did lie inside a black hole of another universe, we could not prove or disprove the conjecture in any way: the event horizon of a black hole is the boundary from within which nothing can escape, so how could one cross that boundary and testify whether or not our universe exists inside a black hole of another universe? Thus we cannot answer this central question of cosmology. And yet the fact that we − simply an advanced breed of talking monkeys surviving on a sumptuous planet − have, for at least the last hundred years, been turning unproved belief into working conviction through the power of perception, and spending our brief time in the sun working at understanding the deepest mysteries of nature by doing repeated calculations and getting answers that seem very likely, makes us feel something very special.

The physicist spends a month, as he or she does each year, sequestered with colleagues − fellow theoretical physicists − to discuss the great mysteries of the cosmos. But despite gravity's simple approximation as a force, and its beautifully subtle description as a property of space-time, which in turn can be summarized by Einstein's famous relation, which essentially states:

Matter-energy → curvature of space-time

we have come to realize over the past century that we still do not know what gravity actually is. It has been a closed book ever since the grand evolution of human understanding, and physicists hang this book on their wall and fret about it. Unhesitatingly you would yearn to know where this book comes from: is it related to metaphysics, or perhaps to the greatest puzzles of physics still to be discovered, like cosmic strings and magnetic monopoles? Nobody knows, and for the moment nature has not said yes in any sense. It is one more chapter of a puzzling cosmic story with a cracking title. You might say the laws of physics designed that book, and we do not know how they designed it. The elevated design of this book, an extract of which appears in the cosmic art gallery, rests on the belief that it must have been designed, since it could not have arisen out of chaos. In some sense, the origin of the cosmic problem today remains what it was in the time of Newton (who not only put forward a theory of how bodies move in space and time, but also developed the complicated mathematics needed to analyze those motions) − one of the greatest challenges of twenty-first-century science, and one that certainly keeps many an aficionado going. Yet we have made a bold and brilliant move: in less than a hundred years, we have found a new way to wonder what gravity is. The usual approach of science, constructing a set of rules and equations, cannot answer the question of why, if you could turn off gravity, space and time would also vanish. In short, we do not have an answer; we have only a whisper of the grandeur of the problem. We do not know exactly how gravity is intimately related to space and time. It is a mystery that we are going to chip away at from the side of quantum theory (the theory developed from Planck's quantum principle and Heisenberg's uncertainty principle, which deals with phenomena on extremely small scales, such as a millionth of a millionth of an inch). However, when we try to apply quantum theory to gravity, things become more complicated and confusing.
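For reference (our own addition, in standard notation rather than a quotation from the book), the schematic relation "matter-energy → curvature of space-time" quoted above is usually written as the Einstein field equations, which set the curvature of space-time on the left equal to its matter-energy content on the right:

G~μν~ + Λg~μν~ = (8πG / c^4^) T~μν~

where G~μν~ encodes the curvature of space-time, T~μν~ describes the density and flow of matter-energy, and Λ is the cosmological constant discussed earlier.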

Mankind's deepest desire for scientific understanding introduced a new idea: that of time. Most of the underlying assumptions of physics are concerned with time. Time may sound like a genre of fiction, but it is a well-defined, genuine concept. Some argue that time has not been discovered by us as an objective feature of the mundane world: even without considering time an intrinsic feature of that world, we can see that things in the physical world change − seasons change, and people adapt to those drastic changes. Physical change is an objective feature of the physical world, and it is independent of whatever circumstances led us to name it. Others think that time as we comprehend it does not endure beyond the bounds of our physical world; beyond it, perhaps, one could run forward in time or simply turn around and go back, which would probably mean that one could fall rapidly through one's former selves. In a bewildering world, the question of whether time never began and has always been ticking, or whether it had a beginning at the big bang, is a real concern for physicists − if science can account for such an inquiry at all. If we find the answer to it, it would be the ultimate triumph of human reason and a justification for our continuing quest; our goal of a complete description of the universe we live in would be self-justified. The understanding we have today is that time is not an illusion, as age-old philosophers had thought, but a well-defined mathematical ingredient of an inevitable methodical framework for systematizing our experiences. If one believed that time had a beginning, the obvious question was how it started. The problem of whether or not time had a beginning was a great concern to the German philosopher Immanuel Kant (who believed that every human concept is based on observations that are operated on by the mind, so that we have no access to a mind-independent reality). He considered the whole of human knowledge and came to the conclusion that time is not discovered by humans as an objective feature of the mundane world, but is part of an inevitable systematic framework for coordinating our experiences. How and when did time begin? No other scientific question is more fundamental or provokes such spirited debate among physicists. Since the early part of the 1900s, one explanation of the origin and fate of the universe, the Big Bang theory, has dominated the discussion. Although the singularity theorems predicted that time, space, and matter or energy itself had a beginning, they did not convey how they had a beginning. It would clearly be nice for the singularity theorems if these things had a beginning, but how can we tell whether they did? If time had a beginning at the Big Bang, that would have deep implications for the role of a divine creator in the grand design of creation. But if time persists within the bounds of reason, with neither beginning nor end, there is nothing for a Creator to do: what role could an ineffable, benevolent creator have in creation? Life could start, and new life forms could emerge on their own, randomly, sustaining themselves by reproducing in environments fitted to the functional roles they perform. Personally, we are sure that time began with a hot Big Bang. But will it go on ticking forever? If not, when will it wind up its clockwork of ticking? We are much less sure about that.
However, we are just a willful, gene-centered breed of talking monkeys on a minor planet of a very average galaxy. But we have found new ways to question ourselves, and we have learned how to pursue them. That makes us something very special. Moreover, everything we think we understand about the universe might one day need to be reassessed. Cosmology as every high school graduate knows it − the very way we think of things − would be forever altered. The distance to the stars and galaxies and the age of the universe (13.7 billion years) would be thrown into doubt. Even the expanding-universe picture, the Big Bang theory, and black holes would have to be re-examined. The Big Bang theory assumes that the present form of the universe originated from a hot fireball, the singularity, and it assumes that time did not exist before the Big Bang. But Erickcek, on the basis of NASA's Wilkinson Microwave Anisotropy Probe (WMAP), has argued that the existence of time and empty space before the Big Bang is possible.

But what would happen if you traveled back in time and killed your grandfather before he conceived your father? Would the arrow of time reverse? Because motion makes clocks tick more slowly, can we travel back in time and kill our grandfather before he conceives our father? If not, how does the universe avoid the paradox? Is time travel just science fiction? Taking the laws of physics, punching them in the stomach and throwing them down the stairs, it might be possible for you to break the universal speed limit − and it is mind-boggling to think that you would then actually be traveling backwards in time. What if you went back in time and prevented the big bang from happening? You would prevent yourself from ever having been born; but then, if you had never been born, you could not have gone back in time to prevent the big bang from happening. The concept of time travel may sound impressive and allow science-fiction-like possibilities for people who survive from the past, but it seems about as incredible as seeing broken teacups gather themselves together off the floor and jump back onto the table, putting cup manufacturers out of business. Still, traveling through time may not be far-fetched science fiction. At the same time, whether we can open a portal to the past or find a shortcut to the future and master time itself is still in question, and appears to be forbidden by the second law of thermodynamics (which states that in any closed system, such as the universe, randomness − entropy − never decreases with time). Of course, we have not seen anyone from the future (or have we?).

We asked how stars are powered and found the answer in the transformations of atomic nuclei. But there are still simple questions we can ask, and one is this: is our universe merely the by-product of a cosmic accident? If the universe were merely the by-product of a grand accident, it could have been a mere conglomeration of objects, each going its own way. But everything we see in the universe obeys rules governed by a set of equations, without exception − a fact to which philosophy gives rather more attention than science. However, this does not mean that the universe obeys rules because it exists according to a plan created and shaped by a guiding hand. Maybe the universe is a lucky coincidence, a grand accident that emerged with ingredients such as space, time, mass and energy standing in one-to-one correspondence with the elements of reality, and hence obeying a set of rational laws without exception. At this moment it seems as though Dr. Science will never be able to raise the curtain on the mystery of creation. Moreover, traditional philosophy is dead − it has not kept up with modern developments in science − and there is no way of justifying the guiding hand, because the idea of God used here is extremely limited and goes no further than the opening sentence of classical theology. Much is still in the speculative stage, and we must admit that there are as yet no empirical or observational tests that can be used to test the idea of an accidental origin. No evidence. No scientific observation. Just speculation. For those who have lived by their faith in the power of reason, the story may end like a bad dream if free will turns out to be just an illusion.

From the Big Bang to bodies such as stars and black holes, and down to basic facts such as particle masses and force strengths, the entire universe works because the laws of physics make things happen. But if meta- or hyper-laws of physics were whatever produced the universe, then what produced those laws? Or perhaps the laws, or the cause that created them, existed forever and did not need to be created. We must admit that there is ignorance on some issues: we do not have a complete set of laws, and we are not sure whether the existing laws hold everywhere and at all times. Dr. Science gives us a clue, but there is no definitive answer that provides a purely natural, non-causal explanation for the existence of the laws of physics and our place among them. So let us just leave it at the hypothetical laws of physics. The question, then, is why there are laws of physics at all. We could say that this required a biblical deity, who created these laws of physics and the spark that took us from the laws of physics to the notions of time and space. But if the laws of physics popped into existence 13.8 billion years ago with divine help, as theologians say, why do we not see at least one piece of evidence of an ineffable creator in our observable universe every now and then? The origin of the meta- or hyper-laws of physics remains a mystery for now. However, recent breakthroughs in physics, made possible in part by a revolutionary understanding of the true nature of mathematical quantities and physical theories, may suggest an answer that will one day seem as obvious to us as the Earth orbiting the Sun − or perhaps as ridiculous as the idea that the Earth is a perfect sphere. Whatever the answer may be, we do not yet know it, because the meta- or hyper-laws of physics are completely beyond our experience, beyond our imagination, and beyond our mathematics. This leaves us with a great mystery, one that awaits the next generation of high-energy experiments, which hope to shed light on an answer that might be found in the laws governing elementary particles.

Who are we? We find that we live on a fragile planet of a humdrum star − by sheer luck or by divine providence − lost in a galaxy tucked away in some forgotten corner of a universe in which there are far more galaxies than people.

Sending a Beatles song across the universe and pointing the telescopes of the Deep Space Network towards the North Star, Polaris, we seek intellectual beings like us among the sheer number of planets beyond our solar system and our own Milky Way galaxy. How awe-inspiring the hunt for them across the empty stretches of the universe would be, if it brought us even a bit of confirmation that we are alone in this universe − or that we are not. That we are not the only life-form in the universe is reasonable to expect, since we have no reason to assume that ours is the only possible form of life. Some sort of life could even have arisen in a universe of greatly different form, but

Where’s the evidence?

The burden of evidence lies with those who regard themselves as reliable witnesses and claim that sightings of UFOs are evidence that we are being visited by inhabitants of another galaxy, advanced enough to spread across some hundred thousand million galaxies and to visit the Earth.

The known forces of nature can be divided into four classes:

Gravity: This is the weakest of the four; it acts on everything in the universe as an attraction. And if not for this force, we would go zinging off into outer space and the sun would detonate like trillions upon trillions of hydrogen bombs.

Electromagnetism: This is much stronger than gravity; it acts only on particles with an electric charge, being repulsive between charges of the same sign and attractive between charges of the opposite sign. More than half the gross national product of the earth, representing the accumulated wealth of our planet, depends in some way on the electromagnetic force.

Weak nuclear force: This causes radioactivity and plays a vital role in the formation of the elements in stars. Had this force been slightly stronger, all the neutrons in the early universe would have decayed, leaving about 100 per cent hydrogen, with no deuterium for the later synthesis of elements in stars.

Strong nuclear force: This force holds together the protons and neutrons inside the nucleus of an atom. And it is this same force that holds together the quarks to form protons and neutrons. Unleashed in the hydrogen bomb, the strong nuclear force could one day end all life on earth.

The inherent goal of unification is to show that all of these forces are, in fact, manifestations of a single super force. We can’t perceive this unity at the low energies of our everyday lives, or even in our most powerful accelerators at CERN. But close to the Big Bang temperatures, at inconceivably high energies…

If the forces unify, the protons − which make up much of the mass of ordinary matter − can be unstable and eventually decay into lighter particles such as antielectrons. Indeed, several experiments have been performed in the Morton Salt Mine in Ohio to look for definite evidence of proton decay, but none have succeeded so far. However, the probability of a proton in the universe gaining sufficient energy to decay is so small that one would have to wait at least a million million million million million years − far longer than the time since the big bang, which is about ten thousand million years.

The strength of the gravitational force is measured by the dimensionless parameter α~G~ = Gm^2^/ħc (where m is the mass of the proton or the electron). The ratio α~G~/α is about 137 × (m/Planck mass)^2^. And since m is far smaller than the Planck mass (the fundamental unit of mass constructed solely out of the three fundamental constants ħ = h/2π, G and c − and far beyond anything our accelerators can produce at the present time), it is clear from this relation that α is greater than α~G~ (i.e., the strength of the electromagnetic force is greater than the strength of the gravitational force). But why? The answer lies at the heart of the basic questions of particle physics. The known laws do not tell us why the initial configuration was such as to produce what we observe. For what purpose? Must we turn to the anthropic principle for an explanation? Was it all just a lucky chance? That would seem a counsel of despair, a negation of all our hopes of understanding the unfathomable order of the universe. However, this is an extended metaphor for many puzzles in physics uncovered with painstaking labor, and it is especially relevant to particle physics. Still, particle physics remains unfathomable to many people: a bunch of scientists chasing after tiny invisible objects.
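To make these couplings concrete, here is a minimal numerical sketch in Python; the constant values are standard textbook figures assumed for illustration, not taken from the text.

G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34     # reduced Planck constant, J s
c    = 2.998e8        # speed of light, m/s
m_p  = 1.6726e-27     # proton mass, kg
alpha = 1 / 137.036   # fine structure constant (dimensionless)

alpha_G = G * m_p**2 / (hbar * c)         # gravitational coupling for the proton
m_planck = (hbar * c / G) ** 0.5          # Planck mass, kg

print(alpha_G)                            # ~5.9e-39
print(alpha_G / alpha)                    # ~8.1e-37
print((1 / alpha) * (m_p / m_planck)**2)  # the same number, via the ratio formula above

The last two lines print the same value, which is just the statement that α~G~/α = (1/α) × (m/Planck mass)^2^.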

If string theory is correct, then every particle is nothing but a tiny string. A string does something aside from moving: it oscillates in different ways, and each way represents a particular mode of vibration. Different modes of vibration make the string appear as one kind of particle or another, since different modes of vibration are seen as different masses or spins.

If Higgs theory is correct, then a new field called the Higgs field − analogous to the familiar electromagnetic field but with new kinds of properties − permeates all of space. The different masses of the particles are due to the different strengths of their interaction with the Higgs field (the stronger a particle's interaction with the Higgs field, the greater its mass). To make this easier to picture, call it cosmic high-fructose corn syrup: the more of it you take in as you move through it, the heavier you get.

If both theories are right, then the different masses of the particles are due to the different modes of vibration of the string plus the different strengths of interaction of the string with the Higgs field.

Which explanation is right?

Higgs theory runs rampant in the popular media, claiming that string theory is not the only game in town. However, by the end of the decade we will have our first glimpse of the new physics, whatever it may well be:

STRING or HIGGS

The new physics will point to even more discoveries at the TeV scale, open the door beyond the Standard Model and raise new questions, such as: if the Higgs field generates masses for the W and Z, and for the quarks and leptons, does it generate its own mass, and if so how? What is its mass?

As a remarkable consequence of the uncertainty principle of quantum mechanics (which implies that certain pairs of quantities, such as energy and time, cannot both be predicted with complete accuracy), empty space is filled with what is called vacuum energy − i.e., empty space has energy, and its energy density is constant and given by ρ = Λc^2^/8πG, where Λ is the cosmological constant associated with dark energy (which gives space-time an inbuilt tendency to expand), c is the speed of light in vacuum and G is the universal gravitational constant. Since c^2^/8πG is constant, ρ and Λ are in fact equivalent and interchangeable. And since c^2^ is greater than 8πG, Λ is smaller than ρ, which would mean that a very large amount of dark energy corresponds to a fairly small vacuum energy density. Moreover, since c is not just a physical constant but a fundamental feature of the way space and time are unified as space-time, does the equation ρ = Λc^2^/8πG mean that, as a consequence of the dominance of the unification of space and time over a force called gravity, a very large amount of dark energy corresponds to a fairly small vacuum energy density? And c^2^/8πG = 5.36 × 10^25^ kg/m. What does the value 5.36 × 10^25^ kg per meter imply? Dr. Science remains silent on these profound questions. Ultimately, however, one would hope to find complete, consistent answers that would include all the partial theories and mathematical techniques as approximations. The quest for such answers is the search for a unified theory combining the two basic partial theories: the general theory of relativity (which states that space and time are no longer absolute, no longer a fixed background to events; instead, they are dynamical quantities that are shaped by the matter and energy in the universe) and quantum mechanics (a theory of the microcosm, where subatomic particles are held together by particle-like forces dancing on the sterile stage of space-time, which is viewed as an empty arena, devoid of any content). Unfortunately, these two theories are inconsistent with each other − i.e., quantum mechanics and general relativity do not work together. How the ideas of general relativity can be reconciled with those of quantum theory remains an open question until we progress closer toward the laws that govern our universe.
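As a rough illustration of the numbers involved, here is a small Python sketch; the value of Λ used below (about 1.1 × 10^−52^ per square meter) is a commonly quoted observational estimate and is assumed here purely for illustration.

import math

c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
Lam = 1.1e-52      # cosmological constant, 1/m^2 (assumed observational value)

factor = c**2 / (8 * math.pi * G)   # ~5.36e25 kg/m, the constant discussed above
rho = Lam * factor                  # vacuum (dark) energy density, kg/m^3

print(factor)   # ~5.4e25 kg per meter
print(rho)      # ~6e-27 kg per cubic meter, of the order of the measured value discussed below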

The latest theory of subatomic particles (quantum theory) gives an estimated value of the vacuum energy density that is about 120 orders of magnitude larger than the measured value − meaning our best theory cannot calculate the value of the largest energy source in the entire universe. Dr. Science advances over the wreckage of its theories by continually putting its ideas to experimental test; no matter how beautiful an idea might be, it must be discarded or modified if it is at odds with experiment. It would clearly have been nice for quantum theory if the measured vacuum energy density were of the order of 10^96^ kg per cubic meter, but the measured value is of the order of 10^−27^ kg per cubic meter. Thus the best candidate we have at the moment, quantum theory, brought about its own downfall by predicting a value of the vacuum energy density about 120 orders of magnitude larger than the measured value.

We have a lot of exposure to darkness, disbelief and the state of not having an immediate conclusion, and this vulnerability is of great significance, I think. When we do not comprehend the mind of nature, we are in the middle of darkness. When we have an intuitive guess as to what the outcome is, we are unsealed. And even when we are fairly sure of what the final result is going to be, we are still in some uncertainty. And uncertainty being too complex to come about randomly is evidence of the human being's continuing quest for justification. Sometimes, very hard, impossible things just strike us, and we call them thoughts. In most self-reproducing organisms the conditions are not right for the generation of thoughts that predict things more or less, even if not in the simplest way; only in a few complex organisms like us do spontaneous thoughts arise − and what is it that breathes fire into a perception? Human perception is enormous, extensive and unlimited, and it is outrageous that we can ask simple questions, such as: What is the dark energy up to? What is it about? Why does this mysterious form of energy permeate all of space, blowing the galaxies farther and farther apart? How accurate are the physical laws which control it? Why did it make the universe bang? Unfortunately, the laws that we are using are not able to answer these questions, because of the prediction that the universe started off with infinite density at the big bang singularity (where all the known laws would break down). However, if one looks from a commonsense, realistic point of view, the laws and equations which are considered inherent ingredients of reality are simply man-made ingredients, introduced by rational beings − who are free to observe the universe as they want and to draw logical deductions from what they see − to describe the objective features of reality. That scientific data is fallible, changeable, and influenced by scientific understanding is refreshing. Here is an example of what I mean. In most physics textbooks we read that the strength of the electromagnetic force is measured by the dimensionless parameter α = e^2^/4πε~0~ħc (where e is the elementary charge = 1.602 × 10^−19^ coulombs, ε~0~ is the absolute permittivity of free space = 8.85 × 10^−12^ F/m, c is the speed of light in vacuum and ħ is the reduced Planck constant), called the fine structure constant; this quantity, long taught to be constant, became a variable when the standard model of elementary particles and forces revealed that α actually varies with energy.
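A quick Python check of the textbook formula, using standard SI values for the constants (assumed here), reproduces the familiar low-energy value of the fine structure constant.

import math

e    = 1.602e-19    # elementary charge, C
eps0 = 8.854e-12    # permittivity of free space, F/m
hbar = 1.0546e-34   # reduced Planck constant, J s
c    = 2.998e8      # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha)        # ~0.0073
print(1 / alpha)    # ~137, the familiar dimensionless number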

The quantum theory of electrodynamics (which seems to govern everything small) and general relativity (which dominates large things, is now called a classical theory, and predicts that the universe started off with infinite density at the big bang singularity) both try to assign a mass to the singularity. According to the generally accepted history of the universe, known as the hot big bang model, at some finite time in the past − between ten and twenty thousand million years ago − all matter would have been on top of itself, in what is called the singularity, and the density ρ would have been INFINITE. If the density tends to infinity, then the volume V, which is M/ρ, approaches zero; and if V approaches zero, then the mass M, which is density times volume, approaches zero. Hence the singularity cannot have mass in zero volume, by the very definitions of mass and volume. However, a good mathematical theory can prove almost anything with that amount of wiggle room, and its findings are then determined by nothing except its desire. For all theoreticians and tens of thousands of university graduates know, the universe started off with infinite density at the hot big bang singularity, at infinitely hot temperatures. And at temperatures far beyond those reached in thousands of H-bomb explosions, the strong and weak nuclear forces, gravity and the electromagnetic force would all have been unified into a single force. What was before the Big Bang? Was the Big Bang created? If the Big Bang was not created, how was it accomplished, and what can we learn about the agent and events of creation? Is it the product of chance, or was it designed? What is it that blocks the pre-Big Bang view from us? Is the Big Bang singularity an impenetrable wall beyond which we cannot, in physics, go? To answer one question, another question arises. Erickcek's model suggests the possibility of the existence of space and time before the big bang, while the world-famed Big Bang theory abandons the existence of space and time before the big bang. Both theories are internally consistent and based upon sophisticated experimental observations and theoretical studies. Truth must be pursued with honest scientific inquiry, even where it touches the words of Genesis. And this is possible only if the modern scientific community simply opens its eyes to the truth.

Do black holes really exist? If they exist, why haven't we observed one yet? Can black holes be observed directly, and if so, how? If there are no black holes, what are these things we detect ripping gas off the surfaces of other stars?

Most people think of a black hole as a voracious whirlpool in space, sucking down everything around it. But that's not really true! A black hole is a place where gravity has become so strong that even light cannot escape its influence.

How might a black hole be formed?

The slightly denser regions of the nearly uniformly distributed atoms (mostly hydrogen) − those lacking sufficient energy to escape the gravitational attraction of the nearby atoms − would combine together and thus grow even denser, forming giant clouds of gas. At some point these clouds become gravitationally unstable, undergo fragmentation and break up into smaller clouds that collapse under their own gravity. As a cloud collapses, the atoms within it collide with one another more and more frequently and at greater and greater speeds, so the gas heats up − i.e., its temperature increases − until eventually it becomes hot enough to start nuclear fusion reactions. And a consequence of this is that stars like our sun are born and radiate their energy as heat and light. But stars of radius

r = 2GM/c^2^

or, equivalently, Mc^2^ = 2GM^2^/r

Since GM^2^/r = −5U/3 (where U is the gravitational binding energy of the star):

Mc^2^ = −3.33U

i.e., stars whose rest mass energy equals 3.33 times their negative gravitational binding energy collapse further to produce dark stars. These dark stars are sufficiently massive and compact, and possess a gravitational field so strong, that they prevent even light from escaping their influence: any light emitted from the surface of the star is dragged back by the star's gravitational attraction before it can get very far. Such stars become black voids in space and are what the American scientist John Wheeler in 1969 dubbed "black holes." Classically, the gravitational field of these black holes is so strong that they prevent any information, including light, from escaping their influence − i.e., any information swallowed by a black hole is forever hidden from the outside universe, and all one could say of the gravitational monster is what the poet Dante said of the entrance to Hell: "All hope abandon, ye who enter here." Anything or anyone that falls into the black hole will soon reach the region of infinite density and the end of time. However, quantum fields would scatter off a black hole. Because energy cannot be created out of nothing, pairs of short-lived virtual particles (one with positive energy and the other with negative energy) appear close to the event horizon of a black hole. The gravitational field of a black hole is so strong that it pulls in the particle with negative energy even before it can annihilate its partner; the forsaken partner with positive energy escapes to infinity, where it appears as a real particle (and, to an observer at a distance, it will appear to have been emitted from the black hole). Because E = mc^2^ (i.e., energy is equivalent to mass), the fall of a negative-energy particle into the black hole reduces its mass, and its horizon shrinks in size. As the black hole loses mass, its temperature (which depends only on its mass) rises and its rate of emission of particles increases, so it loses mass more and more quickly. We do not know whether the emission process continues until the black hole dissipates completely away, or whether it stops after a finite amount of time, leaving a black hole remnant. More precisely, the temperature of the black hole is given by the following formula:

T = ħc^3^/8πGMk~B~

In this formula the symbol c stands for the speed of light, ħ for the reduced Planck constant, G for the universal gravitational constant, and k~B~ for Boltzmann's constant. Finally, M represents the mass of the black hole. This formula can also be rewritten as:

T / Planck temperature = Planck mass / (8πM)

If T equals the Planck temperature, then M equals the Planck mass divided by 8π, which means that even if the temperature of the black hole approaches the Planck temperature, the black hole cannot attain a mass equal to the Planck mass: the factor 1/8π prevents it from doing so. We do not know what the factor 1/8π really means, or why it prevents the black hole from attaining the Planck mass, because the usual approach of Dr. Science − constructing a set of rules and equations − answers the question of how, not of what or why. And if M equals the mass of the electron, then T becomes greater than the Planck temperature. If T becomes greater than the Planck temperature, then current physical theory breaks down, because we lack a theory of quantum gravity (i.e., temperatures above the Planck temperature cannot be described, for the simple reason that our quantum mechanics breaks down above roughly 10^32^ kelvin). It is only theoretically possible that black holes with mass equal to the mass of the electron could be created in high-energy collisions. No such black holes have ever been observed − indeed, they would be extremely difficult to spot − for they would be large emitters of radiation, shrinking and dissipating even before they could be observed. Though the emission of particles from primordial black holes is currently the most commonly accepted theory within the scientific community, there is some dispute associated with it. There are issues, seemingly incompatible with quantum mechanics, arising from the fact that it finally results in information being lost, which makes physicists uncomfortable. However, most physicists admit that black holes must radiate like hot bodies if our ideas about general relativity and quantum mechanics are correct, even though they have not yet managed to find a primordial black hole emitting particles after more than two decades of searching. Despite its strong theoretical foundation, the existence of this phenomenon is still in question. Alternately, those who do not believe that black holes themselves exist are similarly unwilling to admit that they emit particles.
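As a sketch of what these formulas imply, the following Python snippet evaluates the Schwarzschild radius and the Hawking temperature for a black hole of one solar mass; the solar mass and the other constants are standard values assumed for illustration.

import math

hbar = 1.0546e-34     # reduced Planck constant, J s
c    = 2.998e8        # speed of light, m/s
G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
kB   = 1.381e-23      # Boltzmann constant, J/K
M    = 1.989e30       # one solar mass, kg (assumed)

r_s = 2 * G * M / c**2                          # Schwarzschild radius r = 2GM/c^2
T   = hbar * c**3 / (8 * math.pi * G * M * kB)  # Hawking temperature

m_planck = (hbar * c / G) ** 0.5                # Planck mass, kg
T_planck = m_planck * c**2 / kB                 # Planck temperature, K

print(r_s)                           # ~3 km for a sun-mass hole
print(T)                             # ~6e-8 kelvin, colder than the microwave background
print(T / T_planck)                  # equals the next line...
print(m_planck / (8 * math.pi * M))  # ...as the rewritten formula above says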

In an energy-releasing nuclear reaction the mass of the reactants is greater than the mass of the products. The mass difference is converted into energy, according to the equation which is as famous as the man who wrote it.

For the nuclear reaction: p + Li^7^ → α + α + 17.2 MeV

Mass of reactants:

p = 1.0072764 amu

Li^7^ = 7.01600455 amu

Total mass of reactants = 7.01600455 amu + 1.0072764 amu = 8.02328095 amu

Mass of products:

α = 4.0015061 amu

Total mass of products = α + α = 2α = 8.0030122 amu

From the above data it is clear that

the total mass of the reactants is greater than the total mass of the products. The mass difference (8.02328095 amu − 8.0030122 amu = 0.02026875 amu) is converted into energy of 18.87 MeV, according to the equation E = mc^2^. However, the observed energy is 17.2 MeV.

Expected energy = 18.87 MeV (i.e., 0.02026875 amu × c^2^)

Experimentally observed energy = 17.2 MeV

Expected energy ≠ observed energy

Energy difference = (18.87 − 17.2) MeV = 1.67 MeV

Where has the energy of 1.67 MeV gone? The question is clear and deceptively simple, but any equally simple answer would be blind to the complexity of reality. Questions are guaranteed in science; answers aren't.
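Here is a small Python sketch reproducing the arithmetic above; the conversion factor of 931.494 MeV per amu (the energy equivalent of one atomic mass unit) is assumed, and the 17.2 MeV figure is the observed energy quoted in the text.

m_p   = 1.0072764     # proton mass, amu
m_Li7 = 7.01600455    # lithium-7 mass, amu
m_a   = 4.0015061     # alpha particle mass, amu
AMU_TO_MEV = 931.494  # energy equivalent of one amu, MeV

reactants = m_p + m_Li7          # 8.02328095 amu
products  = 2 * m_a              # 8.0030122 amu
defect    = reactants - products # 0.02026875 amu

expected = defect * AMU_TO_MEV   # ~18.9 MeV from E = mc^2
observed = 17.2                  # MeV, as quoted in the text

print(defect)               # mass difference in amu
print(expected)             # expected energy release
print(expected - observed)  # the gap of roughly 1.7 MeV asked about above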

The four-dimensional fabric of space-time is simply the lowest energy state of the universe. It is neither empty nor uninteresting, and its energy is not necessarily zero (a point stressed by Richard "Dick" Feynman, a colorful character who worked at the California Institute of Technology, played the bongo drums at a strip joint down the road, and received the Nobel Prize in Physics in 1965 for his work on quantum electrodynamics). Because E = mc^2^, one can think of virtual particle–antiparticle pairs of mass m as continually being created out of the energy E of the four-dimensional fabric of space-time, consistent with the uncertainty principle of quantum mechanics; they appear together at some time, move apart, then come together and annihilate each other, giving their energy back to space-time without violating the law of energy conservation. Spontaneous births and deaths of virtual particles − so-called quantum fluctuations − occurring everywhere, all the time, follow from the conclusion that mass and energy are interconvertible: two different forms of the same thing. However, the spontaneous births and deaths of these so-called virtual particles raise a remarkable problem. If an infinite number of virtual pairs of mass m can be spontaneously created out of the energy E of the fabric of space-time, does space-time bear an infinite amount of energy, and therefore, by Einstein's famous equation E = mc^2^, an infinite amount of mass? If so, according to general relativity, the infinite amount of mass would have curved up the universe to an infinitely small size − which obviously has not happened. The word "virtual" literally means that these particles cannot be observed directly, but their indirect effects can be measured to a remarkable degree of accuracy; their properties and consequences are well-established and well-understood consequences of quantum mechanics. However, they can be materialized into real particles in several ways. All that one requires is an energy equal to the energy needed to tear the pair apart plus the energy needed to boost the separated virtual particle and antiparticle into real particles (i.e., to bring them from the virtual state to the materialized state).

When Einstein was 26 years old, he calculated precisely how energy must change if the relativity principle was correct, and he discovered the relation E = mc^2^ (which led to the Manhattan Project and ultimately to the bombs that exploded over Hiroshima and Nagasaki in 1945). This is now probably the only equation in physics that even people with no background in the subject have at least heard of, and whose prodigious influence on the world we live in they are aware of. And since c is constant (the maximum distance light can travel in one second being 3 × 10^8^ meters), this equation tells us that mass and energy are interconvertible, two different forms of the same thing, and in fact equivalent. Suppose a mass m is converted into energy E: the resulting energy carries mass m and moves at the speed of light c. Hence energy E is given by E = mc^2^. As we know, c squared (the speed of light multiplied by itself) is an astronomically large number: 9 × 10^16^ meters squared per second squared. So if we convert even a small amount of mass, we get a tremendous amount of energy. For example, if we convert 1 kg of mass, we get 9 × 10^16^ joules of energy − more than a million times the energy released in a comparable chemical explosion. Perhaps, since c is not just a constant − the maximum distance light can travel in one second − but a fundamental feature of the way space and time are married to form space-time, one can think that it is in the presence of unified space and time that mass and energy become equivalent and interchangeable. But WHY? The question lingers, unanswered, to this day.
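A tiny Python sketch of the arithmetic in this paragraph; the TNT equivalent of 4.2 × 10^9^ joules per tonne is an assumed round figure used only for comparison.

c = 3.0e8                 # speed of light, m/s (rounded)
m = 1.0                   # mass converted, kg

E = m * c**2
print(E)                  # 9e16 joules
print(E / 4.2e9)          # ~2e7: roughly twenty million tonnes of TNT equivalent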

However, the equation E = mc^2^ has some remarkable consequences. Because E = mc^2^, the energy which a body possesses due to its motion adds to its rest mass. This effect is only really significant for bodies moving at speeds close to the speed of light. For example, at 10 percent of the speed of light a body's mass M is only 0.5 percent more than its rest mass m, while at 90 percent of the speed of light it would be more than twice its rest mass. And as a body approaches the speed of light, its mass rises ever more quickly; it would approach infinite mass, and since an infinite mass cannot be accelerated any faster by any force, the issue of infinite mass remains an intractable problem. For this reason all bodies are forever confined by relativity to move at speeds slower than the speed of light. Only photons, which have no intrinsic mass, can move at the speed of light. There is little agreement on this point. Now, being more advanced, we do not simply accept conclusions such as "photons have no intrinsic mass"; we constantly test them, trying to prove or disprove them. So far, relativity has withstood every test. And try as we might, we can measure no mass for the photon. We can only put upper limits on what mass it can have, limits determined by the sensitivity of the experiment we are using to try to weigh the photon. The latest number tells us that a photon, if it has any mass at all, must have a mass less than 4 × 10^−48^ grams. For comparison, the electron has a mass of 9 × 10^−28^ grams. Moreover, if the mass of the photon were not taken to be zero, then quantum mechanics would be in trouble. And it is also an uphill task to conduct an experiment which proves the photon mass to be exactly zero. Tachyons, a putative class of hypothetical particles with imaginary mass, are conjectured to travel faster than the speed of light. But the existence of tachyons is still in question, and if they exist, how they could be detected is still an open question. However, on one thing most physicists agree. In expanding space, recession velocity keeps increasing with distance. Beyond a certain distance, known as the Hubble distance, it exceeds the speed of light in vacuum. But this is not a violation of relativity, because recession velocity is caused not by motion through space but by the expansion of space.
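The mass increase quoted above follows from the relativistic factor γ = 1/√(1 − v^2^/c^2^); here is a short Python check, assuming nothing beyond that standard formula.

import math

def gamma(beta):
    """Relativistic factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

print(gamma(0.1))    # ~1.005: about 0.5 percent heavier at 10% of light speed
print(gamma(0.9))    # ~2.29: more than twice the rest mass at 90% of light speed
print(gamma(0.999))  # grows without bound as the speed of light is approached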

The first step toward quantum theory came in 1900, when the German scientist Max Planck, in Berlin, discovered that the radiation from a body glowing red-hot was explainable only if light could be emitted or absorbed in indivisible discrete pieces, called quanta, each of which behaved very much like a point particle of energy E = hυ. In one of his groundbreaking papers, written in 1905 when he was at the patent office, Einstein showed that Planck's quantum hypothesis could explain what is called the photoelectric effect, the way certain metals give off electrons when light falls on them. He attributed a particle nature to the photon (which created a crisis for classical physics around the turn of the 20th century and provided proof of the quantization of light), considered a photon as a particle of mass m = hυ/c^2^, and said that the photoelectric effect is the result of an elastic collision between a photon of the incident radiation and a free electron inside the photo-metal. During the collision the electron absorbs the energy of the photon completely. Part of the absorbed energy hυ of the photon is used by the electron in doing work against the surface forces of the metal; this part of the energy (hυ~1~) represents the work function W of the photo-metal. The other part (hυ~2~) of the absorbed energy manifests as the kinetic energy (KE) of the emitted electron, i.e.,

hυ~2~ = KE

But hυ~2~ = p~2~c (where p~2~ is the momentum carried by this part of the energy and c is the speed of light in vacuum) and KE = pv/2, where p is the momentum and v is the velocity of the ejected electron. Therefore p~2~c = pv/2. If we assume that p~2~ = p, i.e., that the momentum p~2~ completely manifests as the momentum p of the ejected electron, then

v = 2c

Nothing can travel faster than the speed of light in vacuum; this is the central principle of Albert Einstein's special theory of relativity. If the electron, with rest mass 9.1 × 10^−31^ kg, travelled with velocity v = 2c, the fundamental rules of physics would have to be rewritten. However, v = 2c is meaningless, since a non-relativistic electron can only travel with velocity v << c. Hence p~2~ ≠ p. This means that only a part (p~2A~) of the momentum p~2~ manifests as the momentum p of the ejected electron:

p~2~ = p~2A~ + p~2B~

p~2~ = p + ?

E = hυ − because h is constant, the energy and frequency of a photon are equivalent, different forms of the same thing. And since h − one of the most fundamental numbers in physics, ranking alongside the speed of light c − is incredibly small (about 6.6 × 10^−34^ joule seconds: a decimal point followed by 33 zeros before the first significant figure), each quantum of ordinary light carries very little energy, and it takes a vast number of quanta to radiate even ten thousand megawatts. And some say the only thing that quantum mechanics (the great intellectual achievement of the first half of the last century) has going for it, in fact, is that it is unquestionably correct. Since Planck's constant is almost infinitesimally small, quantum mechanics is a theory of little things. Suppose this number had instead been enormous − say h = 6.625 × 10^34^ Js − then the wavelength of a photon would have been very large. Since the area of a photon is proportional to the square of its wavelength, the photon would have been large enough to count as macroscopic, and quantum mechanical effects would have been noticeable for macroscopic objects. For example, the de Broglie wavelength of a 100 kg man walking at 1 m/s would have been λ = h/mv = (6.625 × 10^34^ Js)/(100 kg × 1 m/s) = 6.625 × 10^32^ m − far too large to go unnoticed. The work on atomic science in the first thirty-five years of the last century took our understanding down to lengths of a millionth of a millimeter. Then we discovered that protons and neutrons are made of even smaller particles called quarks (which were named by the Caltech physicist Murray Gell-Mann, who won the Nobel Prize in 1969 for his work on them). We might indeed expect to find several new layers of structure more basic than the quarks and leptons that we now regard as elementary particles. Are there elementary particles that have not yet been observed, and, if so, which ones are they and what are their properties? What lies beyond the quarks and the leptons? If we find answers to these questions, the entire picture of particle physics may turn out to be quite different.
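A short Python sketch of the de Broglie comparison above, contrasting the real Planck constant with the hypothetical enormous one the text imagines.

h_real = 6.626e-34    # Planck constant, J s
h_huge = 6.625e34     # the hypothetical value used in the text
m = 100.0             # mass of the walking man, kg
v = 1.0               # walking speed, m/s

lam_real = h_real / (m * v)   # ~6.6e-36 m: hopelessly unobservable
lam_huge = h_huge / (m * v)   # ~6.6e32 m: vastly larger than the man himself

print(lam_real)
print(lam_huge)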

From each gene's point of view, the "background" genes are those with which it shares bodies in its journey down the generations − though that is a bit of an exaggeration. Most forms of life, including vertebrates, reptiles, craniates, suckling pigs, chimps and dogs and crocodiles and bats and cockroaches and humans and worms and dandelions, carry the amazing complexity of their information within the same kind of replicator − molecules called DNA − in each cell of their body; a live reading of that code at a rate of one letter per second would take thirty-one years, even if reading continued day and night. Just as protein molecules are chains of amino acids, so DNA molecules are chains of nucleotides. Linking the two chains of the DNA are pairs of bases (purines + pyrimidines). There are four types of base: adenine "A", cytosine "C", guanine "G", and thymine "T". An adenine (purine) on one chain is always matched with a thymine (pyrimidine) on the other chain, and a guanine (purine) with a cytosine (pyrimidine). Thus DNA exhibits all the properties of genetic material, such as replication, mutation and recombination; hence it is called the molecule of life. We need DNA to create enzymes in the cell, but we need enzymes to unzip the DNA. Which came first, proteins or protein synthesis? If proteins are needed to make proteins, how did the whole thing get started? We need precision experiments to know for sure.
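The base-pairing rule can be stated in a few lines of Python; the sequence fragment below is made up purely for illustration.

PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the complementary strand under the pairing rule above."""
    return "".join(PAIRS[base] for base in strand)

print(complement("ATGCCGTA"))   # TACGGCAT: every A faces a T, every G faces a C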

A theory is a good theory if it satisfies one requirement: it must make definite predictions about the results of future observations. Basically, all scientific theories are statements that predict, explain, and perhaps describe the basic features of reality. Yet however much support a theory has received, discrepancies frequently lead to doubt and discomfort. For example, a simple linear density model gives an estimate of the sun's age of around 10 million years. But geologists have evidence that the formation of the rocks, and of the fossils in them, must have taken hundreds or thousands of millions of years − far longer than the age allowed by the linear density model. By that reckoning the earth would have existed even before the birth of the sun, which makes no sense at all. The linear density model therefore fails to account for the age of the sun. Any physical theory is always provisional, in the sense that it is only a hypothesis: it can be disproved by finding even a single observation that disagrees with its predictions. Towards the end of the nineteenth century, physicists thought they were close to a complete understanding of the universe. They believed that the entire universe was filled with a hypothetical medium called the ether. As a material medium is required for the propagation of mechanical waves, it was believed that light waves propagate through the ether as pressure waves propagate through air. Soon, however, inconsistencies with the idea of the ether began to appear, and a series of experiments failed to support it. The most careful and accurate of these experiments was carried out by Albert Michelson and Edward Morley at the Case School of Applied Science in Cleveland, Ohio, in 1887 − and it proved a severe blow to the existence of the ether.

There have been several attempts − quantum mechanics, the "big bang," probability theory, general relativity − to answer the questions that have so long occupied the minds of philosophers and scientists. However, we must admit ignorance on some issues; for example, we do not have a complete theory of the universe, and we are not sure exactly how the universe happened. Still, the generally accepted history of the universe, known as the hot big bang model, has completely changed the discussion of its origin. In such a model one finds that the universe was hotter and denser than anything we can imagine and was very rapidly expanding and cooling. As the universe expanded, its temperature decreased; as it cooled, its curvature energy was converted into matter, rather as formless water vapor freezes into snowflakes whose unique patterns arise from a combination of symmetry and randomness. Approximately 10^−37^ seconds into the expansion, a phase transition caused a cosmic inflation, during which the universe grew exponentially by a factor of e^3Ht^ (where H was a constant called the Hubble parameter and t was the time) − just as prices grew by a factor of ten million in a period of 18 months in Germany after the First World War − doubling in size every tiny fraction of a second, just as prices double every year in certain countries. After inflation stopped, the expansion of the universe was no longer exponential, since H was no longer constant. At that time the entire universe consisted of highly energetic quarks as well as leptons. There were a number of different varieties of quarks: there were six "flavors," which we now call up, down, strange, charmed, bottom, and top. And among the leptons the electron was a stable object, while the muon (with a mass 207 times that of the electron) and the tauon (with a mass about 3,490 times that of the electron) could decay into other particles. And associated with each charged lepton there was its own kind of neutrino, giving three distinct kinds:

the electron neutrino

the muon neutrino

the tauon neutrino

Temperatures were so high that these quarks and leptons were moving around so fast that they escaped any attraction toward each other due to the nuclear or electromagnetic forces. However, they possessed so much energy that whenever they collided, particle–antiparticle pairs of all kinds were being continuously created and destroyed. At some point an unknown reaction led to a very small excess of quarks and leptons over antiquarks and antileptons − of the order of one part in 30 million − and this resulted in the predominance of matter over antimatter in the universe. The universe continued to decrease in density and fall in temperature, so the typical energy of each particle was decreasing with time. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form. After about 10^−11^ seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle physics experiments. At about 10^−6^ seconds, a continuous exchange of gluons between the quarks resulted in a force that pulled the quarks together to form baryons (such as protons and neutrons) as well as other particles. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The proton was composed of two up quarks and one down quark, and the neutron of two down quarks and one up quark. Other particles contained other quarks (strange, charmed, bottom, and top), but these all had a much greater mass and decayed very rapidly into protons and neutrons. The charge on the up quark was +2/3 e and the charge on the down quark was −1/3 e; the other quarks likewise possessed charges of +2/3 e or −1/3 e. The charges of the quarks added up to e in the combination that composed the proton, but cancelled out in the combination that composed the neutron, i.e.,

Proton charge = (2/3 e) + (2/3 e) + (−1/3 e) = e

Neutron charge = (2/3 e) + (−1/3 e) + (−1/3 e) = 0

And the force that confined the constituents of the proton or the neutron within its radius was of the order of its rest mass energy divided by its radius; for the proton, of radius ≈ 1.112 × 10^−15^ meter, this gives F ≈ 1.35 × 10^5^ newtons. And this force was so strong that it has proved very difficult, if not impossible, to obtain an isolated quark: as we try to pull a quark out of the proton or neutron, it gets harder and harder. Stranger still, the harder we drag a quark out of a proton, the bigger this force gets − rather like the force in a spring as it is stretched − causing the quark to snap back immediately to its original position. This property of confinement prevents one from observing an isolated quark, although experiments with large particle accelerators indicate that at high energies the strong force becomes much weaker, so that quarks and gluons begin to behave almost like free particles. Each quark possessed a baryon number of 1/3: the total baryon number of the proton or the neutron was the sum of the baryon numbers of the quarks from which it was composed. The electrons and neutrinos contained no quarks; they were themselves truly fundamental particles. And since there were no charged particles lighter than the electron, and no baryons lighter than the proton, the electrons and protons were prevented from decaying into lighter particles − such as photons and the far less massive neutrinos (with very little mass, no electric charge, no radius − and, adding insult to injury, no strong force acting on them). A free neutron, being heavier than the proton, was not prevented from decaying into a proton (plus an electron and an antineutrino). The temperature was now no longer high enough to create new proton–antiproton pairs, so a mass annihilation immediately followed, leaving just one in 10^10^ of the original protons and neutrons, and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically, and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos).
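The two small calculations in this paragraph − the quark charges adding up, and the rough confining force taken as rest-mass energy divided by radius − can be checked with a few lines of Python (the constants are standard values assumed for illustration).

from fractions import Fraction

up, down = Fraction(2, 3), Fraction(-1, 3)   # quark charges in units of e
print(up + up + down)    # 1  -> the proton carries charge +e
print(up + down + down)  # 0  -> the neutron is neutral

m_p = 1.6726e-27   # proton mass, kg
c   = 2.998e8      # speed of light, m/s
r   = 1.112e-15    # proton radius used in the text, m

F = m_p * c**2 / r # rest-mass energy divided by radius
print(F)           # ~1.4e5 newtons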

A few minutes into the expansion, when the temperature was about a billion (one thousand million; 10^9^) kelvin and the density was about that of air, protons and neutrons no longer had sufficient energy to escape the attraction of the strong nuclear force, and they started to combine to produce the universe's deuterium and helium nuclei, in a process called Big Bang nucleosynthesis. Most of the protons remained uncombined, as hydrogen nuclei. Within only a few hours of the big bang, nucleosynthesis stopped. And after that, for the next million years or so, the universe just continued expanding, without anything much happening. Eventually, once the temperature had dropped to a few thousand degrees, a continuous exchange of virtual photons between the nuclei and the electrons produced − what else? − a force (proportional to a quantity called their charge and inversely proportional to the square of the distance between them). And that force pulled the electrons towards the nuclei to form neutral atoms. Once these atoms had formed, they no longer scattered the surviving radiation, which was then redshifted by the expansion of the universe towards the microwave region of the electromagnetic spectrum. And there was the cosmic microwave background radiation.

The irregularities in the universe meant that some regions of the nearly uniformly distributed atoms had slightly higher density than others. The gravitational attraction of the extra density slowed the expansion of such a region and eventually caused it to collapse to form galaxies and stars. And the nuclear reactions in the stars transformed hydrogen to helium to carbon, with the release of an enormous amount of energy via Einstein's equation E = mc^2^. This was the energy that lit up the stars. The process continued, converting carbon to oxygen to silicon to iron, and the nuclear reactions ceased at iron. The star then underwent drastic changes in its innermost core, changes which required a huge amount of energy, supplied by severe gravitational contraction. As a result the central region of the star collapsed to form a neutron star, and the outer region got blown off in a tremendous explosion called a supernova, which outshone an entire galaxy of 100 billion stars, spraying the manufactured elements into space. These elements provided some of the raw material for the next generation of rotating clouds of gas, which went on to form the sun, while a small amount of the heavier elements collected together to form the asteroids, comets, and the bodies that now orbit the sun as planets like the Earth.

The earth was initially very hot and without an atmosphere. In the course of time the planet produced volcanoes, and the volcanoes emitted water vapor, carbon dioxide and other gases. And there was an atmosphere. This early atmosphere contained no oxygen, but a lot of other gases, some of them poisonous, such as hydrogen sulfide (the gas that gives rotten eggs their smell). And sunlight dissociated water vapor, and there was oxygen. And carbon dioxide in excess heated the earth, and a balance was needed. So carbon dioxide dissolved to form carbonic acid, carbonic acid acting on rocks produced limestone, and subducted limestone fed volcanoes that released more carbon dioxide. High temperatures meant more evaporation and more dissolved carbon dioxide; and as the carbon dioxide turned into limestone, the temperature began to fall. A consequence of this was that most of the water vapor condensed and formed the oceans. Lower temperatures meant less evaporation, and carbon dioxide began to build up in the atmosphere again. And the cycle went on for billions of years. After a few billion years the volcanoes quieted, and the molten earth cooled, forming a hardened outer crust. The earth's atmosphere then consisted of nitrogen, oxygen and carbon dioxide, plus other miscellaneous gases (hydrogen sulfide, methane, water vapor, and ammonia). And then continuous electric discharges through the atmosphere, in lightning storms, caused some of the gases to become arranged into more complex organic molecules such as simple amino acids (which, when linked together, formed proteins) and carbohydrates (which were very simple sugars). And the water vapor in the atmosphere probably caused millions of seconds of torrential rains, during which the organic molecules reached the earth. It took about two and a half billion years for an ooze of organic molecules to react and build the earliest cells, as a result of chance combinations of atoms into large structures called macromolecules, and then to advance to a wide variety of one-celled organisms, and another billion years or so to evolve through highly sophisticated forms of life to primitive mammals. But then evolution seems to have speeded up: it took only about a hundred million years to develop from the early mammals to Homo sapiens. This picture of a universe that started off very hot and evolved as it expanded is in agreement with all the observational evidence we have today. Nevertheless, it leaves an important question unanswered: whether the laws of physics had any choice in the creation of the world. Yes, it seems they would have had many choices, had they wanted to set the value of the speed of light much smaller than its actual value, or the values of the electron mass, the proton mass, and the constants determining the magnitudes of the electromagnetic, strong and weak interactions much larger than their actual values. However, in order to produce so much diversity − from grains of sand to massive stars − and, of course, to have sun-like stars in the universe which can sustain life able to embrace and celebrate the profound uncertainty that propels rather than hinders human knowledge, it seems that the laws had only limited choices.

 

III

FINE TUNING

(Arguments For)

E = mc^2^

If c had been 3 × 10^−8^ meters per second, then

For 1 kg of mass, E = 9 × 10^−16^ joules − i.e., 1 kg of mass would have yielded only 9 × 10^−16^ joules of energy. Hence enormous numbers of hydrogen atoms in the sun would have had to burn up to release the 4 × 10^26^ joules of energy it radiates each second. The sun would therefore have burned out and collapsed to a black hole even before an ooze of organic molecules could react and build the earliest cells, advance to a wide variety of one-celled organisms, and evolve through highly sophisticated forms of life to primitive mammals.
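A short Python sketch of this hypothetical, using the solar energy output of 4 × 10^26^ joules per second quoted above.

c_tiny = 3.0e-8              # the hypothetical speed of light, m/s
E_per_kg = 1.0 * c_tiny**2   # 9e-16 joules from each kilogram of mass

L_sun = 4.0e26               # solar output quoted in the text, J/s
print(E_per_kg)              # 9e-16 J
print(L_sun / E_per_kg)      # ~4e41 kg of mass that would have to be converted every second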

F~G~ = GMm/r^2^

If the value of G had been far greater than its actual value, then

Each star in the universe would have been attracted toward every other star by a force far greater than its present value; the stars would have drawn very near one another, the attractive forces between them would have grown stronger and dominated the repulsive effects, and the stars would have fallen together at some point into a sphere of roughly infinite density.

Vacuum energy density = Λc^2^ /8πG

If Λ had been 0, then

The entire vacuum would have been empty, and an empty vacuum, being unstable, would have ceased to exist.

U = −3GM^2^ /5r

If the value of G had been far greater than its actual value, then

The gravitational binding energy of a star would have been far greater than its present value; the matter inside the star would have been much more compressed and far hotter than it is. And had the distance between the constituents of the star decreased beyond the optimum distance (the separation below which the gravitational force is no longer attractive but turns repulsive), all the stars would have exploded, spraying their manufactured elements into space. No sun would have existed to support life on the earth.

No uncertainty principle

Two quarks could have occupied precisely the same point with the same properties, and then they would not have stayed in the same position for long. Quarks would not have formed separate, well-defined protons and neutrons; nor would these, together with electrons, have formed separate, well-defined atoms. And the world would have collapsed before it ever reached its present size.

dE/dB = c

If E and B had been invariant, then the speed of light c, which is dE/dB, would have been undefined, and all of nuclear physics would have had to be recalibrated. Nuclear weapons, nuclear medicine and radioactive dating would all have been affected, because every nuclear reaction rests on Einstein's relation between matter and energy, E = mc^2^.

What would have happened if Boltzmann's constant were a variable?

The universal gas constant (which is Boltzmann's constant times the Avogadro number) would also have been a variable, and the kinetic theory of gases would have looked very different.

c = 1/√(ε~0~μ~0~) (where ε~0~ is the absolute permittivity of free space and μ~0~ is the absolute permeability of free space)

If either of the constants (ε~0~ or μ~0~) were zero, then c would have been UNDEFINED. And if either of them were a variable, then c would not have remained a constant.
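A quick Python check of c = 1/√(ε~0~μ~0~), using the standard values of the two constants (assumed here).

import math

eps0 = 8.854e-12          # permittivity of free space, F/m
mu0  = 4 * math.pi * 1e-7 # permeability of free space, H/m

c = 1.0 / math.sqrt(eps0 * mu0)
print(c)                  # ~3.0e8 m/s, the speed of light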

 

(Arguments Against)

What would have happened if the proton mass had been far less than its actual value?

As we know, inside the sun there are N protons, which can be estimated from N = M/m, where M is the mass of the sun and m is the rest mass of the proton. If m had been smaller than 1.672 × 10^−27^ kg, then N would have been larger than 1.196 × 10^57^; hence the stellar lifetime of the sun would have been somewhat longer than its actual value.
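A minimal Python sketch of this estimate, taking the mass of the sun as an assumed input.

M_sun = 1.989e30    # mass of the sun, kg (assumed value)
m_p   = 1.672e-27   # proton rest mass, kg

N = M_sun / m_p
print(N)            # ~1.19e57 protons, matching the figure quoted above
# A smaller proton mass would raise N: more fuel burned at the same rate
# means, all else being equal, a longer stellar lifetime.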

An awful waste of space

The universe is a pretty big place; it seems like an awful waste of space.

Nearest star: 4.22 light years.

Nearest galaxy: 2.44 million light years.

Galaxies within our horizon are now 40 billion light years away.

Universe beyond our horizon: perhaps 10 to the power of 10^100^ times bigger.

The Goldilocks Planet is not all that well suited for human life.

About two-thirds of its surface is salt water, unfit for drinking.

Humans are restricted to its surface.

The atmosphere does not block all of the harmful ultraviolet radiation that causes skin cancer and other genetic damage.

Natural calamities like floods, earthquakes, famines and droughts, and diseases like cancer and AIDS, kill millions of people yearly.

An awful waste of photons

Only about two photons of every billion emitted by the sun reach the earth's surface. And the lack of oxygen, together with cosmic radiation, prevents humans from spending years in outer space.

 

IV

SELFISH DESIGN

Why does the electron move around the nucleus?

If it did not move around the nucleus, it could not generate centrifugal force; and if it could not generate centrifugal force, it would be pulled into the nucleus. The electron revolves around the nucleus because it wants to save itself from being pulled in by the electrostatic attraction of the nucleus.

Similarly,

in order to save itself from being pulled into the sun by the sun's gravitational attraction, the earth moves around the sun.

in order to save itself from being pulled towards the earth by the earth's gravitational attraction, the moon moves around the earth.

Why does the earth spin?

If it did not spin, it could not generate a magnetic field; and if it could not generate a magnetic field, it could not deflect the charged particles of the solar wind and protect itself from them. The earth spins because it wants to save itself from that incoming bombardment.

Why does the neutron combine with the proton to form a nucleus?

If it did not combine with the proton, it would remain unbound; and if it remained unbound, it would decay. The neutron combines with the proton because it wants to save itself from decaying into a proton (plus an electron and an antineutrino).

Why are cells linked to each other?

If they were not, they would not be able to survive long.

Why is the electron elementary?

The electron is elementary because it wants to save itself from decaying into lighter particles − such as photons and less massive neutrinos.

Why does the earth hold on to an atmosphere?

If it did not hold an atmosphere, it could not protect itself from the space junk that would do damage to it. The earth holds on to its atmosphere because it wants to save itself from incoming space junk.

Why does the camel bear a hump?

If it did not, it could not store fat; and if it could not store fat, it could not last for months without food. The camel bears a hump because it wants to survive in desert conditions.

Why does empty space produce virtual particles?

Empty space produces virtual particles because it wants to save itself from its own instability: left empty and unstable, it would cease to exist.

Why does the universe expand?

If it did not, gravity would collapse it into a hot fireball, a singularity. The universe expands because it wants to save itself from the big crunch.

Why do objects scatter light?

Objects scatter light because they want to save themselves from invisibility.

Why do green plants bear chlorophyll pigments?

If they did not, they could not carry out photosynthesis. Green plants bear chlorophyll pigments because they want to manufacture their own food and survive.

Why does a flying bat emit ultrasonic waves?

If it did not, it could not catch its prey.

Why does a star emit radiation?

If it did not, it could not balance the inward gravitational pull. The star emits radiation because it wants to save itself from gravitational collapse.

Why does a black hole absorb mass?

If it did not, it would eventually evaporate away, ever more rapidly, through the process of Hawking radiation. The black hole absorbs mass because it wants to survive longer.

Why do green plants bear stomata?

If they did not, they could not respire through their leaves or exchange the gases needed for cellular processes such as photosynthesis. Green plants bear stomata because they want to carry out those cellular processes in order to survive.

Why do cacti bear painful spines?

If they did not, they could not protect themselves from the attacks of javelinas, tortoises and pack rats. The cactus bears painful spines because it wants to save itself from the attacks of animals and people.

Why do deer have long legs and narrow hooves?

If they did not, they could not be swift runners and good jumpers. Deer have long legs and narrow hooves because they want to save themselves from the attacks of humans, wolves, mountain lions, bears, jaguars, and coyotes.

Why do polar bears possess a thick layer of fur?

Polar bears possess a thick layer of fur because they want to save themselves from the cold, snowy, inhospitable climate.

If we examine anything else in the same way, we will find that the basic instinct of every design is survival. Every design is selfish to the core about surviving.

 

V

THE HALL OF SHAME: HOW BAD SCIENCE CAN CAUSE REAL HARM IN REAL LIFE

 

We humans − who began as mineral, then emerged into plant life and into the animal state, and then as aggressive mortal beings fought a survival struggle in caveman days to get more food, territory or a partner with whom to reproduce − are now glued to the TV set, marveling at the adventures of science and its dazzling array of futuristic technology, from teleportation to telekinesis: rocket ships, fax machines, supercomputers, a worldwide communications network, gas-powered automobiles and high-speed elevated trains. Science has opened up an entirely new world for us, and our lives have become easier and more comfortable. With the help of science we have developed an estimated 8,000 chemotherapeutic exogenous non-nutritive chemical substances which, when taken in solid form by mouth, enter the digestive tract, where they are transformed into solution and passed on to the liver, chemically altered, and finally released into the bloodstream. Through the blood they reach their site of action and bind reversibly to receptors on the surface of the target cells to produce their pharmacological effect. After producing that effect they slowly detach from the receptors and are returned to the liver, where they are transformed into more water-soluble compounds called metabolites and released from the body through urine, sweat, saliva, and excretory products. However, the long-term use of chemotherapeutic drugs for diseases like cancer and diabetes leads to side effects. And the side effects − including nausea, loss of hair, loss of strength, and permanent damage to the heart, lungs, liver, kidneys or reproductive system − are so severe that some patients would rather die of the disease than subject themselves to this torture.

Smallpox was a leading cause of death in the 18th century, and the inexorable spread of the disease reliably produced death tolls of some hundreds of thousands of people; at its worst the toll surpassed 5,000 people a day. Yet Edward Jenner, an English physician, noticed something special occurring in his small village: people who had been exposed to cowpox did not get smallpox when they were exposed to that disease. Concluding that cowpox could save people from smallpox, Jenner purposely infected a young boy who lived in his village first with cowpox, then with smallpox. Fortunately, Jenner's hypothesis worked. He had successfully demonstrated the world's first vaccine and set in motion the eradication of the disease. And vaccines, which once saved humanity from smallpox (a leading cause of death in 18th-century England), have now become associated with outbreaks of diseases like pertussis (whooping cough), which have begun showing up in the United States in the past forty years.

TOP 5 DRUGS WITH REPORTED SIDE EFFECTS

(of these, Vioxx was withdrawn from the market in September 2004)

Drug: Byetta

Used for: Type 2 diabetes

Side effect: Increase of blood glucose level

Drug: Humira

Used for: Rheumatoid arthritis

Side effect: Injection site pain

Drug: Chantix

Used for: Smoking cessation

Side effect: Nausea

Drug: Tysabri

Used for: Multiple sclerosis

Side effect: Fatigue

Drug: Vioxx

Used for: Arthritis

Side effect: Heart attack

In the 1930s, Paul Hermann Muller, a research chemist at the firm of Geigy in Basel, introduced with the help of science the first modern insecticide, DDT. It won him the 1948 Nobel Prize in Physiology or Medicine, credited with saving thousands of human lives in World War II by killing typhus-carrying lice and malaria-carrying mosquitoes and dramatically reducing malaria and yellow fever around the world. But by the late 1960s DDT, once a world saver, was no longer in public favor: it was blamed as moderately hazardous and carcinogenic, and most applications of DDT were banned in the U.S. and many other countries. However, DDT is still legally manufactured in the U.S., but only sold to foreign countries.

At a time when Napoleon was disturbing almost the whole of Europe with his aggressive policies and designs and much of the world was at war, science gave birth to the many inventions of the textile industry, to the steam engine, and to new means of transportation and communication. Though the Industrial Revolution began in England, its inventions spread all over the world within a reasonably short period. Rapid industrialization was a consequence of these new inventions, and the demand for expansion of large industrial cities led to the large-scale exploitation of agricultural land. Socio-economic growth was peaking, industries were booming, and agricultural lands were shrinking as the world enjoyed the fruits of rapid industrialization. As a result, the world's population was growing at an exponential rate while the world's food supply could not keep pace with the population's increase. This resulted in widespread famine in many parts of the world, and starvation was rampant. In that time line, science relieved the situation by producing more ammonia through the Haber-Bosch process (more ammonia, more fertilizers; more fertilizers, more food production). But at the same time, the science which solved the world's hunger problems also led to the production of megatons of TNT (trinitrotoluene) and other explosives, which were dropped on cities and led to the deaths of vast numbers of people.

Rapid industrialization, which once raised the economic and living standard of the people, has now become a major global issue. The full impact of an industrial fuel economy has led to global warming (i.e., the increase of Earth's average surface temperature due to too much carbon dioxide emitted from industrial centers, which acts as a blanket, trapping heat and warming the planet). As a result, Greenland's ice shelves have started to shrink, disrupting the world's weather by altering the flow of ocean and air currents around the planet. And violent swings in the climate have started to appear in the form of floods, droughts, snow storms and hurricanes.

Industries are the main sources of sulfur dioxide emissions, and automobiles of nitrogen oxides. The oxides of nitrogen and sulfur combine with moisture in the atmosphere to form acids, and these acids reach the Earth as rain, snow, or fog, where they react with minerals in the soil, release toxins, and affect a variety of plants and animals. These acids also damage buildings, historic monuments, and statues, especially those made of rocks such as limestone and marble that contain large amounts of calcium carbonate. For example, acid rain has reacted with the marble (calcium carbonate) of the Taj Mahal, causing immense damage to this wonderful structure.

Science once introduced refrigerators to prolong the storage of food, but refrigerators became active sources of chlorofluorocarbons (CFCs), which are broken apart by UV light in the upper atmosphere, releasing chlorine. This chlorine in turn destroys a significant amount of the ozone in the high atmosphere, admitting an intense dose of harmful ultraviolet radiation. The increased ultraviolet flux produces health effects such as skin cancer, cataracts, and immune suppression; it can also produce permanent changes in nucleotide sequences, leading to changes in the molecules cells produce, which can impair photosynthesis and destroy green plants. And the massive extinction of green plants could lead to famine and the death of many living species, including man.

Fertilizers, which once provided plants with the essential nitrates to synthesize chlorophyll and increase crop growth to feed the growing population and satisfy the demand for food, are now blamed for causing eutrophication (hypertrophication): fertilizers left unused in the soil are carried away by rain water into lakes and rivers, and then to coastal estuaries and bays. The overload of nutrients induces explosive growth of algal blooms, which prevent light from getting into the water and thereby prevent aquatic plants from photosynthesizing, a process which supplies oxygen to animals in the water that need it, like fish and crabs. So, in addition to the lack of oxygen from photosynthesis, when algal blooms die they decompose and are acted upon by microorganisms. This decomposition process consumes oxygen, which reduces the concentration of dissolved oxygen. The depleted oxygen levels in turn lead to fish kills and a range of other effects promoting the loss of species biodiversity. And the large-scale clearing of forests for industry and housing has not only led to the loss of biodiversity but has also allowed diseases like AIDS to move from forests to cities.

By the twentieth century, the entire world was thoroughly wedded to fossil fuels in the form of oil, natural gas, and coal to satisfy the demand for energy. As a result, fossil fuels were becoming increasingly scarce and were slowly heading toward exhaustion. In that period, science introduced the nuclear fission reaction as an alternative source for the world's energy supply and thereby prevented the world economy from coming to a grinding halt. But at the same time science used nuclear fission to produce thousands of nuclear weapons. The bombs dropped on cities in World War II amounted, in total, to some two million tons, two megatons, of TNT; at Hiroshima and Nagasaki the blast flattened heavily reinforced buildings many kilometers away, while the firestorm, the gamma rays and the thermal neutrons effectively fried the people. A schoolgirl who survived the nuclear attack on Hiroshima, near the end of the Second World War, wrote this first-hand account:

“Through a darkness like the bottom of hell, I could hear the voices of the other students calling for their mothers. And at the base of the bridge, inside a big cistern that had been dug out there, was a mother weeping, holding above her head a naked baby that was burned red all over its body. And another mother was crying and sobbing as she gave her burned breast to her baby. In the cistern the students stood with only their heads above the water, and their two hands, which they clasped as they imploringly cried and screamed, calling for their parents. But every single person who passed was wounded, all of them, and there was no one, there was no one to turn to for help. And the singed hair on the heads of the people was frizzled and whitish and covered with dust. They did not appear to be human, not creatures of this world.”

Ninety-one percent of adults and 60 percent of teens own this device, which has become one of the most indispensable accessories of professional and social life. Science once introduced it for wireless communication, but mobile phones are now pointed to as a possible cause of everything from infertility to cancer to other health problems. In a study conducted at the University of London, researchers sampled 390 cell phones to measure levels of pathogenic bacteria. The results showed that 92 percent of the cell phones sampled were heavily colonized by various types of disease-causing bacteria with high resistance to commonly used antibiotics (around 25,000 bacteria per square inch), and the authors concluded that mobile phones are fully capable of transmitting disease. Similarly, the fluoridation of water at optimal levels has been shown since the early twentieth century to be highly beneficial to the development of tooth enamel and the prevention of dental cavities, and studies showed that children who drink water fluoridated at optimal levels can experience 20 to 40 per cent less tooth decay. But now the fluoridation of water has been blamed for causing lower IQ, memory loss, cancer, kidney stones and kidney failure.

Science once introduced irradiation to prevent food poisoning by destroying molds, bacteria and yeast and to control microbial infestation. But now it has been blamed for causing the loss of nutrients (for example, vitamin E levels can be reduced by 25% after irradiation and vitamin C by 5-10%) and for damaging food by breaking up molecules and creating free radicals. These free radicals combine with chemicals already present in the food (like preservatives) to produce deadly toxins. This has caused some food manufacturers to limit or avoid the process, and bills have even been introduced to ban irradiated foods in public cafeterias or to require irradiated food to carry sensational warning labels. The rapid advancement of science, combined with human aggression and the aim for global supremacy, has led even smaller nations to weaponize anthrax spores and other agents for maximum death and destruction. Thus the entire planet is gripped with fear that one day a terrorist group may pay to gain access to weaponized H5N1 flu and other pathogens. The rapid development of nuclear technology has led to nuclear waste piling up at every nuclear site, and as a result every nuclear nation is suffering from a massive case of nuclear constipation. Enormous automation, the growing capacity of artificial intelligence and its ability to interact like humans have caused humans to be replaced by machines in many tasks; artificial intelligence is now taking off on its own, re-designing itself at an ever increasing rate, and this may turn out to be the biggest existential threat to human survival (i.e., one day artificial intelligence may plan a war against humanity). And highly toxic gases, poisons and defoliants, which nearly every technological state stands ready to use to disable or destroy people or their domestic animals, to damage their crops, and to destroy their supplies, threaten every citizen, not just of one nation, but of the world.

 

VI

SCIENCE IS NOT ABOUT CERTAINTY: A MATHEMATICAL PHILOSOPHY OF SCIENCE

 

“Our quest for knowledge would have been much simpler if all the mathematical indeterminates like 0/0, 1/0, etc. would have been well-defined.”

VI A

If a force F acts on a particle of mass m0 at rest and produces an acceleration a in it, then F = m0a. The particle remains at rest (i.e., a = 0) when no external force acts on it (i.e., F = 0). Under this condition the rest mass of the particle becomes UNDEFINED.

m0= F/a = 0/0

According to Albert Einstein’s law of variation of mass with velocity:

m = m0 / (1− v2/c2) ½

or m²c² − m²v² = m0²c²

where: m0 is the rest mass and m is the relativistic mass of the particle.

Differentiating the above equation, we get:

mv dv + v2dm = c2dm

 

or dm (c2 − v2) = mv dv

In relativistic mechanics, a small change in the kinetic energy of a particle is dKE = c²dm = dp × v. Therefore:

dp (c2 − v2) = mc2 dv

or (dp/dt) = mc2 / (c2 − v2) (dv/dt)

Since: (dp/dt) = F (force) and (dv/dt) = a (acceleration), therefore:

F = mac2 / (c2 − v2)

(Note: For non-relativistic case (v<<c), the above equation reduces to F = m0a)

But

m = m0 / (1− v2/c2) ½

or c²/(c² − v²) = m²/m0². Therefore:

F = m³a / m0²

or m = m0^(2/3) (F/a)^(1/3)

Suppose no force acts on the particle (i.e., F = 0), then no acceleration is produced in the particle (i.e., a = 0). Now under this condition:

m = m0^(2/3) × (0/0)^(1/3), i.e., m becomes UNDEFINED. There can be no bigger limitation than this. (Yet physically m = m0 under the condition F = 0.)
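As a quick numerical illustration of the relation F = m³a/m0² derived above, here is a minimal Python sketch; the rest mass and acceleration chosen below are arbitrary assumptions for illustration, and the function name is mine.

import math

C = 2.998e8  # speed of light in vacuum, m/s

def relativistic_force(m0, v, a):
    # F = m^3 * a / m0^2, with m = m0 / sqrt(1 - v^2/c^2) as in the text
    m = m0 / math.sqrt(1.0 - (v / C) ** 2)
    return m ** 3 * a / m0 ** 2

m0 = 9.109e-31   # assumed sample particle: an electron, kg
a = 1.0e15       # assumed sample acceleration, m/s^2

for v in (0.0, 0.5 * C, 0.9 * C):
    F = relativistic_force(m0, v, a)
    print(f"v = {v / C:.1f}c : F = {F:.3e} N (compare m0*a = {m0 * a:.3e} N)")

At v = 0 the computed force reduces to m0a, while the expression m = m0^(2/3)(F/a)^(1/3) cannot be evaluated numerically when F and a are both zero, which is the 0/0 indeterminacy noted above.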

How big a force does an electron placed in an electric field feel? Well, the electric field is E newtons per coulomb and the electron has a charge of e = −1.602 × 10⁻¹⁹ coulombs, so you get the following:

F = e × E

That is, the electron feels a force of eE newtons. Because F = e × E, if E = 0 then the electron feels no force (F = 0). Under this condition the electron charge becomes UNDEFINED.

e = F/E = 0/0.

In physics, we find out that momentum is mass multiplied by velocity. Special relativity has something to say about momentum. In particular, special relativity gets its (1− v2/c2) ½ factor into the momentum mix like this: p = m0v / (1− v2/c2) ½. For non-relativistic case: v <<c. Therefore, we have

p = m0v

Suppose the particle is brought to rest, then (v = 0, p = 0). Under this condition the rest mass of the particle becomes UNDEFINED.

m0 = p/v = 0/0

For non-relativistic case (v<< c) the expression for kinetic energy is:

KE = m0v2/2, where m0 is the rest mass of a non-relativistic particle moving with a velocity v << c. Suppose the particle is brought to rest, then (v = 0, KE = 0). Under this condition the rest mass of the particle becomes UNDEFINED.

m0 = 2KE/v2 = (2 × 0) /0 = 0/0

The stopping potential VS required to stop the photoelectron of charge e with kinetic energy KE emitted from a metal surface is calculated using the equation: KE = e × VS

If KE = 0, then the VS required to stop the photoelectron = 0. Under this condition: e = KE / VS = 0/0, i.e., the charge on the electron becomes UNDEFINED.

The change in kinetic energy ∆KE is related to the change in temperature ∆T by the equation:

∆KE = 3/2 × kB ∆T

Suppose ∆T → 0, then

∆KE = 0

Under this condition the Boltzmann’s constant ‘kB’ becomes UNDEFINED.

kB = (2 × 0) / (3 × 0) = 0/0

The quantity of electric charge flowing through the filament of an incandescent bulb is given by:

q = current × time

or q = I × t

If N is the number of electrons passing through the filament in the same time then

q = Ne

or I × t = Ne

or e = {I / (N/t)}

where: e is the electron charge = – 1.602 × 10 –19 Coulombs and (N / t) = rate of flow of electrons. Suppose no electrons flow through the filament of an incandescent bulb, then

I = 0 and (N/t) = 0

Under this condition the electron charge becomes UNDEFINED.

e= 0/0

The change in energy ∆E is related to the change in mass ∆m by Einstein's famous equation:

∆E = ∆mc2

Suppose ∆m = 0, then

∆E = 0

Under this condition the speed of light squared i.e., c2 becomes UNDEFINED.

c2 = 0/0

The change in energy ∆E is related to the change in frequency ∆υ by Planck's energy-frequency relationship:

∆E = h∆υ

Suppose ∆υ = 0, then

∆E = 0

Under this condition Planck's constant becomes UNDEFINED.

h=0/0

When a charged electron accelerates, it radiates away energy in the form of electromagnetic waves. For velocities that are small relative to the speed of light, the total power radiated is given by the Larmor formula:

P = (e²/6πε0c³) a², where e is the charge on the electron, a is the acceleration of the electron, ε0 is the absolute permittivity of free space, and c is the speed of light in vacuum. If a = 0, then P = 0. Under this condition (e²/6πε0c³) becomes UNDEFINED.

(e²/6πε0c³) = 0/0
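A minimal Python sketch of the Larmor formula as written above; the sample acceleration is an arbitrary assumption chosen only for illustration, and the constants are standard values.

import math

E_CHARGE = 1.602e-19      # electron charge magnitude, C
EPS0 = 8.854e-12          # permittivity of free space, F/m
C = 2.998e8               # speed of light in vacuum, m/s

def larmor_power(a):
    # Radiated power of an accelerating electron: P = e^2 a^2 / (6*pi*eps0*c^3)
    return (E_CHARGE ** 2 * a ** 2) / (6.0 * math.pi * EPS0 * C ** 3)

a = 1.0e20                # assumed sample acceleration, m/s^2
print(f"P = {larmor_power(a):.3e} W")     # and P = 0 when a = 0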

Considering the reversible reaction: A +B ↔ AB the change in free energy is given by the equation

ΔG = ΔG0 + RT ln Q

where R is the gas constant (8.314 J / K / mol), T is the temperature in Kelvin scale, ln represents a logarithm to the base e, ΔG0 is the Gibbs free energy change when all the reactants and products are in their standard state and Q is the reaction quotient or reaction function at any given time (Q = [AB] / [A] [B]). We may resort to thermodynamics and write for ΔG0: ΔG0 = − RT ln Keq where Keq is the equilibrium constant for the reaction. If Keq is greater than 1, ln Keq is positive, ΔG0 is negative; so the forward reaction is favored. If Keq is less than 1, ln Keq is negative, ΔG0 is positive; so the backward reaction is favored. It can be shown that

ΔG = − RT ln Keq + RT ln Q

The dependence of the reaction rate on the concentrations of reacting substances is given by the Law of Mass Action. This law states that the rate of a chemical reaction is directly proportional to the product of the molar concentrations of the reactants at any constant temperature at any given time. Applying the law of mass action to the forward reaction:

v1 = k1 [A] [B] where k1 is the rate constant of the forward reaction.

Applying the law of mass action to the backward reaction:

v2 = k2 [AB] where k2 is the rate constant of the backward reaction.

Further, the ratio of v1 / v2 yields:

v1 / v2 = (k1/ k2) Q.

But the equilibrium constant is the ratio of the rate constant of the forward reaction to that of the backward reaction (Keq = k1/k2). And consequently:

v1 / v2 = Keq / Q.

On taking natural logarithms of above equation we get:

ln (v1 / v2) = ln Keq – ln Q.

On multiplying by –RT on both sides, we obtain:

–RT ln (v1 / v2) = – RT ln Keq + RT ln Q

Comparing Equations

ΔG = − RT ln Keq + RT ln Q and

– RT ln (v1 / v2) = – RT ln Keq + RT ln Q, the Gibbs free energy change is seen to be:

ΔG = −RT ln (v1 / v2)

or ΔG = RT ln (v2 / v1).

At equilibrium: v1 = v2

ΔG = 0

Under this condition RT becomes UNDEFINED.

RT = 0 / 0
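As a small worked illustration of the relation ΔG = − RT ln Keq + RT ln Q obtained above, here is a minimal Python sketch; the values of T, Keq and Q are arbitrary assumptions chosen only for illustration.

import math

R = 8.314            # gas constant, J/(K*mol)

def delta_g(keq, q, temperature):
    # dG = -R*T*ln(Keq) + R*T*ln(Q) = R*T*ln(Q/Keq)
    return R * temperature * math.log(q / keq)

T = 298.0            # assumed temperature, K
Keq = 10.0           # assumed equilibrium constant
for Q in (0.1, 1.0, 10.0):
    dG = delta_g(Keq, Q, T)
    direction = "forward favored" if dG < 0 else ("at equilibrium" if dG == 0 else "backward favored")
    print(f"Q = {Q:5.1f}  dG = {dG / 1000:+7.2f} kJ/mol  ({direction})")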

The Unruh temperature, derived by William Unruh in 1976, is the effective temperature experienced by a uniformly accelerating observer in a vacuum field. It is given by: TUnruh = (ħa/2πckB), where a is the acceleration of the observer, kB is the Boltzmann constant, ħ is the reduced Planck constant, and c is the speed of light in vacuum. Suppose the acceleration of the observer is zero (a = 0), then

TUnruh = 0

Under this condition (ħ/2πckB) becomes UNDEFINED.

(ħ/2πckB) = 0/0.
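A minimal Python sketch evaluating the Unruh temperature for a sample acceleration; choosing a equal to Earth's surface gravity is an assumption made only for illustration.

import math

HBAR = 1.055e-34     # reduced Planck constant, J*s
KB = 1.381e-23       # Boltzmann constant, J/K
C = 2.998e8          # speed of light in vacuum, m/s

def unruh_temperature(a):
    # T = hbar * a / (2*pi*c*kB) for a uniformly accelerating observer
    return HBAR * a / (2.0 * math.pi * C * KB)

g = 9.81             # assumed acceleration, m/s^2
print(f"T_Unruh at a = g : {unruh_temperature(g):.2e} K")   # of order 1e-20 K, and 0 when a = 0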

The Compton wavelength of the electron can be calculated using the equation:

λCompton = Δλ / (1 − cosθ)

where θ is the scattering angle and Δλ is the change in wavelength of the incident photon.

It has been experimentally observed that for θ = 0° there is no change in the wavelength of the incident photon (i.e., Δλ = 0). Under this condition the Compton wavelength of the electron becomes UNDEFINED.

λCompton = 0/0.

The change in entropy of the photon gas ∆S is related to the change in number of photons ∆N by the equation: ∆S = 3.6 kB ∆N. Suppose there is no change in number of photons (i.e., ∆N = 0), then

∆S = 0

Under this condition the Boltzmann’s constant ‘kB’ becomes UNDEFINED.

kB = 0 / (3.6 × 0) = 0/0

E= mgh

The energy required to lift an object of mass m up to a height of h meter is mgh i.e., E = mgh (where g stands for acceleration due to gravity). If h = 0, then the energy required to lift an object of mass m will be zero (i.e., E = 0). Under this condition the weight of the object ‘mg’ becomes UNDEFINED.

mg = 0/0

Work = Force × displacement × cosφ

W = F × S × cosφ, where W = work, F = force, S = displacement and φ is angle between force and displacement.

For an electron moving in a circular orbit,

F = mv2/r and S = rθ

W = mv2 × θ × cosφ

For one complete revolution

θ = 2π

W = 2π mv2cosφ

For an electron moving in a circular orbit, force and displacement are perpendicular to each other (i.e., φ = 90°). Now under the condition (φ = 90°):

W = 0

m = W / 2πv2cosφ = 0 / (2πv2 × 0)

m= 0/0 i.e., mass becomes UNDEFINED

For relativistic particle: particle velocity × phase velocity = speed of light squared

v × vP = c2 , where: v = particle velocity, vP = phase velocity and c = speed of light in vacuum.

mv × vP = mc2

Since λ = h/mv. Therefore:

hvP / λ = mc2

or hυ =mc2

A small change in the frequency of the wave associated with the particle (∆υ) is followed by a small change in the mass of the particle (∆m) i.e.,

hdυ = dmc2

If dυ = 0, then

dm = 0

h /c2 = dm/dυ = 0/0 i.e., h /c2 becomes UNDEFINED.

The change in number of moles is related to the change in number of molecules by the Avogadro constant L:

dn = dN/L

where: dn = small change in number of moles and dN = small change in number of molecules.

If dn = 0, then

dN = 0

Under this condition the Avogadro constant becomes UNDEFINED.

L = 0/0

According to Faraday’s law, the amount of a substance deposited on an electrode in an electrolytic cell is directly proportional to the quantity of electricity that passes through the cell. Faraday’s law can be summarized by: n = q / zF, where n is the number of moles of the substance deposited on an electrode in an electrolytic cell, q is the quantity of electricity that passes through the cell, F = 96485 C/ mol is the Faraday constant and z is the valency number of ions of the substance. Suppose no electricity passes through the cell (q = 0), the amount of the substance deposited on an electrode in an electrolytic cell is 0 (i.e., n= 0). Under this condition

q = 0, n = 0

F = q / (z × n) = 0 / (z × 0) = 0/0, i.e., Faraday's constant becomes UNDEFINED.

If a quantity of heat Q is added to a system of mass m, then the added heat will go to raise the temperature of the system by ΔT = Q/mC where C is a constant called the specific heat capacity. ΔT = Q/mC which on rearranging: m = Q / (C × ΔT). Suppose no heat is added to the system (Q = 0), then

ΔT = 0

m = 0/ (C × 0) = 0/0 i.e., the mass of a system becomes UNDEFINED.

 

VI B

Nuclear density

Mass of the neutron, mN = 1.6750 × 10 −27 kg

Mass of the proton, mP = 1.6726 × 10 −27 kg

mN / mP = 1.00143

Nuclear density = mass of the nucleus / its volume

ρN = M/V

But

M = (ZmP + NmN)

V = (4/3)πR0³A

(where Z = number of protons in the nucleus, N = number of neutrons in the nucleus, R0 = 1.2 × 10⁻¹⁵ m, A = Z + N)

Therefore:

ρN = 3mP (Z + 1.00143N) / 4πR0³A

Which on rearranging:

A = (3mP / 4πR0³ρN) Z + (3.00429mP / 4πR0³ρN) N

Since A = (Z + N):

(Z + N) = (3mP / 4πR0³ρN) Z + (3.00429mP / 4πR0³ρN) N

Any equation is valid only if LHS = RHS. Hence the above equation is valid only if Z + N = Z + N.

Z + N = Z + N is achieved only if ρN attains two values at once, i.e.,

ρN = 3mP / 4πR0³ and ρN = 3.00429mP / 4πR0³ at the same time. But how can ρN attain two values at the same time? It is highly impossible.
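For a sense of scale, here is a minimal Python sketch evaluating the two coefficients 3mP/4πR0³ and 3.00429mP/4πR0³ that appear in the rearranged equation above; the constants are the values quoted in this section.

import math

M_PROTON = 1.6726e-27     # proton mass, kg
R0 = 1.2e-15              # nuclear radius constant, m

rho_z = 3.0 * M_PROTON / (4.0 * math.pi * R0 ** 3)           # coefficient multiplying Z
rho_n = 3.00429 * M_PROTON / (4.0 * math.pi * R0 ** 3)       # coefficient multiplying N

print(f"coefficient of the Z term: {rho_z:.3e} kg/m^3")
print(f"coefficient of the N term: {rho_n:.3e} kg/m^3")
# Both come out near 2.3e17 kg/m^3; they differ by the ~0.14% neutron/proton mass difference.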

Protein ligand binding

The dissociation of a protein-ligand complex (PL) can be described by a simple equilibrium reaction, PL ↔ P + L; the corresponding equilibrium relationship is defined by K [PL] = [P] [L] (K = dissociation constant). In this equation [P] = [P]T − [PL] and [L] = [L]T − [PL], where [P]T and [L]T are the initial total concentrations of the protein and ligand, respectively.

 

Using the equilibrium relationship K [PL] = [L] [P] and substituting,

 

[P]T − [P] for [PL], [L]T − [PL] for [L] and [P]T − [PL] for [P] gives:

K {[P]T − [P]} = {[L]T − [PL]} {[P]T − [PL]}

K [P]T − K [P] = [L]T [P]T − [PL] [L]T − [PL] [P]T + [PL]², which on rearranging:

K [P]T − [L]T [P]T + [PL] [P]T = − [PL] [L]T + [PL]² + K [P]

[P]T {K − [L]T + [PL]} = [PL] {− [L]T + [PL]} + K [P]

Further, if we substitute [L]T = [PL] + [L], then we get

[P]T {K − [PL] − [L] + [PL]} = [PL] {− [PL] − [L] + [PL]} + K [P]

[P]T {K − [L]} = − [PL] [L] + K [P], which is the same as:

[P]T {K − [L]} = K [P] − [PL] [L]

K − [L] = K {[P]/[P]T} − {[PL]/[P]T} [L]

Labeling [P]/[P]T as FFP (fraction of free protein) and [PL]/[P]T as FBP (fraction of bound protein), the above expression turns into

K – [L] = K FFP – FBP [L]

 

Any equation is valid only if LHS = RHS. Hence

If FFP = FBP=1, then the LHS = RHS, and the above Equation is true.

If FFP = FBP ≠ 1, then LHS ≠ RHS, and the above equation is invalid.

Let us now check the validity of the condition

“FFP = FBP =1”.

As per the protein conservation law,

[P] T = [PL] + [P]

From this it follows that

1= FBP + FFP

If we assume FBP = FFP =1, we get:

1 = 2

The condition FFP = FBP = 1 is invalid, since 1 ≠ 2. In fact, the only way K − [L] = K − [L] can hold is if both FFP = 1 and FBP = 1. Since FFP = FBP ≠ 1, the equation K − [L] = K FFP − FBP [L] therefore does not hold well.

 

Conclusion: Using the equilibrium relationship K [PL] = [L] [P], substituting [P]T − [P] for [PL], [L]T − [PL] for [L], and [P]T − [PL] for [P], and simplifying, we get the wrong result:

K – [L] = K FFP – FBP [L]
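For comparison, here is a minimal Python sketch of how the equilibrium concentrations are commonly computed directly from the dissociation constant and the two conservation relations, by solving the quadratic in [PL]; the numerical values of K, [P]T and [L]T below are arbitrary assumptions chosen only for illustration.

import math

def complex_concentration(k_d, p_total, l_total):
    # Solve K*[PL] = ([P]T - [PL]) * ([L]T - [PL]) for [PL],
    # taking the physically meaningful root of the quadratic.
    b = p_total + l_total + k_d
    disc = b * b - 4.0 * p_total * l_total
    return (b - math.sqrt(disc)) / 2.0

K = 1.0e-6           # assumed dissociation constant
P_T = 5.0e-6         # assumed total protein concentration
L_T = 2.0e-6         # assumed total ligand concentration

PL = complex_concentration(K, P_T, L_T)
P_free = P_T - PL
L_free = L_T - PL
print(f"[PL] = {PL:.3e}, [P] = {P_free:.3e}, [L] = {L_free:.3e}")
print(f"FBP + FFP = {PL / P_T + P_free / P_T:.3f}")   # protein conservation: always 1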

Hawking radiation

The rate of loss of energy of a black hole in the form of Hawking radiation is given by the equation:

− dMc²/dt = ħc⁶/15360πG²M²

Since the black hole temperature T = ħc³/8πGMkB. Therefore:

dT/dt = (kB³Gπ²/30ħc⁵) T⁴

or dT/dt = bT⁴

where b = (kB³Gπ²/30ħc⁵) = 1.629 × 10⁻⁶⁵ K⁻³ s⁻¹

On rearranging:

T⁻⁴ dT = b × dt, which on integration gives:

− 1/(3T³) = bt + constant

T = T1 (initial temperature of the black hole) when t = 0

− 1/(3T1³) = b(0) + constant

− 1/(3T1³) = constant

Solving for the constant we get:

− 1/(3T³) = bt − 1/(3T1³)

T = T2 when t = half of the evaporation time, i.e., tev/2 (where tev = evaporation time of the black hole).

− 1/(3T2³) = btev/2 − 1/(3T1³)

or 1/(3T2³) = 1/(3T1³) − btev/2

For a black hole of initial mass = one solar mass (i.e., M = 2 × 10³⁰ kg):

tev = 6.7396 × 10⁷⁴ s

T1 = 6.156 × 10⁻⁸ K

1/(3T2³) = 1/(3 × (6.156 × 10⁻⁸)³) − (1.629 × 10⁻⁶⁵ × 3.369 × 10⁷⁴)

1/(3T2³) = 1.4288 × 10²¹ − 5.4894 × 10⁹

or T2 = 6.156 × 10⁻⁸ K

From the above calculation it is clear that T1 = T2, i.e., the temperature of the black hole when t = 0 is equal to the temperature of the black hole when t = tev/2. This means T remains constant throughout the evaporation process.

If T remains constant throughout the evaporation process, then from the equation T = ħc³/8πGMkB,

M must remain constant throughout the evaporation process. But M cannot remain constant, because the black hole loses mass throughout its evaporation.
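As a numerical cross-check of the figures quoted above, here is a minimal Python sketch; the temperature expression is the one used in this section, and the evaporation-time expression follows from integrating the power law quoted at the start of this subsection. The constants are standard values and the function names are mine.

import math

HBAR = 1.055e-34     # reduced Planck constant, J*s
C = 2.998e8          # speed of light in vacuum, m/s
G = 6.674e-11        # gravitational constant, m^3/(kg*s^2)
KB = 1.381e-23       # Boltzmann constant, J/K

def hawking_temperature(mass):
    # T = hbar*c^3 / (8*pi*G*M*kB)
    return HBAR * C ** 3 / (8.0 * math.pi * G * mass * KB)

def evaporation_time(mass):
    # t_ev = 5120*pi*G^2*M^3 / (hbar*c^4), from integrating -dMc^2/dt = hbar*c^6/(15360*pi*G^2*M^2)
    return 5120.0 * math.pi * G ** 2 * mass ** 3 / (HBAR * C ** 4)

M_SUN = 2.0e30       # kg
print(f"T for a solar-mass black hole   : {hawking_temperature(M_SUN):.3e} K")
print(f"t_ev for a solar-mass black hole: {evaporation_time(M_SUN):.3e} s")

This reproduces, to within rounding, the values T1 = 6.156 × 10⁻⁸ K and tev = 6.74 × 10⁷⁴ s quoted above.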

(a2 – b2 ) = (a+ b) (a−b)

(a2 – b2 ) = (a+ b) (a−b)

On rearranging:

(a2 – b2 ) / (a − b) = (a+ b)

If a= b=1, then

0/0 = 2 (illogical and meaningless result).

tanθ = sinθ / cosθ

tanθ = sinθ / cosθ which on rearranging:

 

cosθ = sinθ / tanθ

If θ = 0o, then

1= 0/0 (illogical and meaningless result).

Absorbance = − log (Transmittance)

Absorbance = − ln (Transmittance) / 2.303

If Transmittance = 1 (i.e., none of the light passed through the solution is absorbed), then Absorbance = 0. Now under this condition:

Absorbance / ln (Transmittance) = − 1/2.303 takes the form

0 / ln 1 = − 1/2.303

0/0 = − 0.434 (illogical and meaningless result).

The Bohr model for an electron transition in hydrogen between quantized energy levels with different quantum numbers n yields a photon by emission with quantum energy:

A downward transition involves emission of a photon of energy:

E photon = hυ = E2 − E1

But E1 = − (2π²me e⁴ / n1²h²) and E2 = − (2π²me e⁴ / n2²h²)

Therefore:

hυ = (2π²me e⁴ / h²) [1/n1² − 1/n2²]

Suppose hυ = 0, then

0 = (2π²me e⁴ / h²) [1/n1² − 1/n2²]

From this it follows that

n1 = n2

Now under the condition (hυ = 0, n1 = n2):

(2π²me e⁴ / h²) = hυ / [1/n1² − 1/n2²] = 0/0, i.e., (2π²me e⁴ / h²) becomes UNDEFINED.

 

Compton Effect:

An X-ray photon of energy E = hυi and momentum pi = h/λi interacts with an electron at rest, whose momentum is 0 and whose energy equals its rest energy, m0c². The symbols h, υ, and λ are the standard symbols for Planck's constant, the photon's frequency and its wavelength, and m0 is the rest mass of the electron. In the interaction, the X-ray photon is scattered at an angle θ with respect to its incoming path, with momentum ps = h/λs and energy E = hυs. The electron recoils at some angle with respect to the photon's incoming path, with momentum p = mev and energy E = mec², where me is the relativistic mass of the electron after the interaction. The phenomenon of Compton scattering may be analyzed as an elastic collision of a photon with a free electron using relativistic mechanics. Since the energy of the photons (661.6 keV) is much greater than the binding energy of the electrons (the most tightly bound electrons have a binding energy less than 1 keV), the electrons which scatter the photons may be considered free. Because energy and momentum must be conserved in an elastic collision, the velocity of recoil of the scattering electron can be calculated using the

Law of Conservation of Energy.

Law of Conservation of Momentum.

 

Calculating the velocity of recoil of the scattering electron using the Law of Conservation of Energy

(For θ = 90°, hυi = 28.072 × 10⁻³⁶ J, hυs = 27.226 × 10⁻³⁶ J)

From the law of conservation of energy, the energy of the incident X-ray photon, hυi, plus the rest energy of the electron, m0c², before scattering is equal to the energy of the scattered X-ray photon, hυs, plus the total energy of the electron, mec², after scattering, i.e.,

hυi + m0c² = hυs + mec²

which on rearranging:

(hυi − hυs) = mec² − m0c²

But according to the law of variation of mass with velocity

mec² = m0c² / (1 − v²/c²)½

Therefore:

(hυi − hυs) = m0c² {1 / (1 − v²/c²)½ − 1}

Since:

hυi = 28.072 × 10⁻³⁶ J

hυs = 27.226 × 10⁻³⁶ J

m0c² = 81.9 × 10⁻¹⁵ J

 

Therefore:

(28.072 − 27.226) × 10⁻³⁶ = 81.9 × 10⁻¹⁵ × {1/(1 − v²/c²)½ − 1}

(0.846 × 10⁻³⁶ / 81.9 × 10⁻¹⁵) + 1 = 1/(1 − v²/c²)½

1.0329 × 10⁻²³ + 1 = 1/(1 − v²/c²)½

Since 1.0329 × 10⁻²³ << 1, [1.0329 × 10⁻²³ + 1] ≈ 1. Therefore:

1 = 1/(1 − v²/c²)½

From this it follows that

v = 0 (illogical and meaningless result).
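Evaluating the same energy balance numerically with the values quoted above gives the following minimal Python sketch (the variable names are mine).

import math

C = 2.998e8                      # speed of light in vacuum, m/s

# values quoted in the text for theta = 90 degrees
h_nu_i = 28.072e-36              # incident photon energy, J
h_nu_s = 27.226e-36              # scattered photon energy, J
rest_energy = 81.9e-15           # electron rest energy m0*c^2, J

# energy conservation: gamma = 1 + (h_nu_i - h_nu_s) / (m0*c^2)
gamma = 1.0 + (h_nu_i - h_nu_s) / rest_energy
v = C * math.sqrt(max(0.0, 1.0 - 1.0 / gamma ** 2))
print(f"gamma - 1 = {(h_nu_i - h_nu_s) / rest_energy:.3e}")
print(f"recoil speed v = {v:.3e} m/s")
# The quoted energy difference is ~1e-23 of the rest energy, far below double-precision
# resolution, so gamma evaluates to exactly 1.0 and the printed recoil speed is 0.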

 

Newton’s third law of motion

Newton’s third law of motion as stated in Philosophiae Naturalis Principia Mathematica:

“To every action there is always an equal and opposite reaction.”

Consider a boy standing in front of a wooden wall, holding a rubber ball and a cloth ball of the same mass in his hands. Let the wall be at a distance of 5 m from the boy.

Let the boy throw the rubber ball at the wall with some force F.

Action: the boy throws the rubber ball at the wall from a distance of 5 m.

Reaction: the ball strikes the wall and comes back to the boy, travelling 5 m. Here action and reaction are equal and opposite.

Now let the same boy throw the cloth ball at the wall with the same force F.

Action: the boy throws the cloth ball at the wall from a distance of 5 m.

Reaction: the ball strikes the wall and comes back only 2.5 m toward the boy. Now action and reaction are not equal and opposite. In this case Newton's third law of motion is completely violated.

 

mv2/r = GMm/r2

As photons travel near the event horizon of a black hole, they can still escape being pulled in by the black hole's gravity if they travel within a narrow range of directions known as the exit cone. A photon on the boundary of this cone will not completely escape the gravity of the black hole; instead it orbits the black hole. For a photon of mass m orbiting the black hole, the necessary centripetal force mv²/r is provided by the force of gravitation between the black hole and the photon, GMm/r². Therefore:

mv2/r = GMm/r2

where: m = mass of the photon orbiting the black hole of mass M in a circular orbit of radius r and G is the gravitational constant.

Since photon always travels with a speed equal to c. Therefore:

v=c

mc2/r = GMm/r2

or r = GM/c2

Since RG = 2GM/c2 (where RG = radius of the black hole). Therefore:

r = RG /2

WHICH MEANS:

r < RG, i.e., the photon orbit exists inside the black hole.

But the photon orbit of radius r always exists in the space surrounding an extremely compact object such as a black hole; hence r should be > RG. Therefore, it is clear that the condition mv²/r = GMm/r² does not always hold well.

 

VI C

 

Is the density of the Black Hole:

0.0938c⁶/πG³M² or 0.00585c⁶/πG³M²?

The density of the black hole is given by the expression: ρ = 3M/(4πRG³), where M is the mass and RG is the radius of the black hole.

Since RG = 2GM/c². Therefore:

ρ = 3c⁶/(32πG³M²)

or ρ = 0.0938c⁶/πG³M²

According to Stefan – Boltzmann-Schwarzschild – Hawking black hole radiation power law:

P = ε × σ × T⁴ × (4πRG²)

or P = 1 × (π²kB⁴/60ħ³c²) × (ħc³/8πGMkB)⁴ × (16πG²M²/c⁴)

or P = ħc⁶/15360πG²M²

Mario Rabinowitz discovered the simplest possible representation for the Hawking radiation power in terms of the black hole density ρ:

P = Gρħ/90

or P = ħc⁶/15360πG²M² = Gρħ/90

or ρ = 90c⁶/15360πG³M²

or ρ = 0.00585c⁶/πG³M²

Conclusion:

Two results for the density of the black hole:

ρ = 0.0938c⁶/πG³M²

ρ = 0.00585c⁶/πG³M²
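A minimal Python sketch evaluating both density expressions for a black hole of one solar mass (the choice of mass is an assumption made only for illustration):

import math

C = 2.998e8          # speed of light in vacuum, m/s
G = 6.674e-11        # gravitational constant, m^3/(kg*s^2)
M_SUN = 2.0e30       # kg

def rho_from_volume(mass):
    # rho = 3M / (4*pi*R_G^3) with R_G = 2GM/c^2
    r_g = 2.0 * G * mass / C ** 2
    return 3.0 * mass / (4.0 * math.pi * r_g ** 3)

def rho_from_rabinowitz(mass):
    # rho = 90*c^6 / (15360*pi*G^3*M^2), from equating the two power expressions above
    return 90.0 * C ** 6 / (15360.0 * math.pi * G ** 3 * mass ** 2)

print(f"rho (volume formula)     : {rho_from_volume(M_SUN):.3e} kg/m^3")
print(f"rho (Rabinowitz relation): {rho_from_rabinowitz(M_SUN):.3e} kg/m^3")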

 

Is the Life time of our power house the sun

 

2.63 × 10 18 or 3.98 × 1020 seconds?

 

We can summarize the nuclear reaction occurring inside the sun, irrespective of the pp or CNO cycle, as follows: 4 protons → 1 helium nucleus + 2 positrons + E, where E is the energy released in the form of radiation. Approximately it is 25 MeV ≈ 40 × 10⁻¹³ J.

 

Let’s calculate age of the sun according to nuclear considerations.

Inside the sun, we have NProtons (say), which can be calculated as follows

 

NProtons = M / mP = 2 × 10³⁰ / 1.672 × 10⁻²⁷ = 1.196 × 10⁵⁷, where M = mass of the sun and mP = mass of the proton. Hence, the number of fusion reactions inside the sun is

NReactions = 1.196 × 10⁵⁷ / 4 = 2.99 × 10⁵⁶

So, the star has the capacity of releasing

2.99 × 10⁵⁶ × 40 × 10⁻¹³ = 1.19 × 10⁴⁵ J

Given the rate of loss of energy of the sun in the form of radiation, i.e., the power radiated by the sun, P = 4.52 × 10²⁶ J/s, the sun has the capacity to shine for

t = 1.19 × 10⁴⁵ / 4.52 × 10²⁶ = 2.63 × 10¹⁸ s.

 

Let us consider,

NProtons = M / mP

 

M = NProtons × mP

Differentiating this with respect to time, we get

 

(dM/dt) = mP × (dNProtons /dt)

 

This can also be written as:

 

− (dMc2/dt) = mPc2 × − (dNProtons /dt)

 

Since − (dMc²/dt) = P = 4.52 × 10²⁶ J/s and mPc² = 15.04 × 10⁻¹¹ J. Therefore:

− (dNProtons/dt) = (4.52 × 10²⁶ / 15.04 × 10⁻¹¹)

or − (dNProtons/dt) = 3.005 × 10³⁶ protons per second

i.e., 3.005 × 10³⁶ protons are utilized per second to release energy in the form of radiation.

3.005 × 10³⁶ protons → one second

1.196 × 10⁵⁷ protons → t seconds

t = 1.196 × 10⁵⁷ / 3.005 × 10³⁶ = 3.98 × 10²⁰ s.

1.196 × 10⁵⁷ protons are utilized in 3.98 × 10²⁰ seconds to release energy in the form of radiation. Therefore, the sun has the capacity to shine for 3.98 × 10²⁰ seconds.

Conclusion:

Two results for the LIFE TIME of the sun:

t = 2.63 × 10¹⁸ s

t = 3.98 × 10²⁰ s
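A minimal Python sketch reproducing the two lifetime estimates from the same input numbers used above (the variable names are mine):

M_SUN = 2.0e30          # kg
M_PROTON = 1.672e-27    # kg
C = 2.998e8             # m/s
E_PER_FUSION = 40.0e-13 # J released per 4 protons fused (about 25 MeV, as above)
P_SUN = 4.52e26         # J/s, the radiated power used in the text

n_protons = M_SUN / M_PROTON

# Estimate 1: total fusion energy divided by the radiated power
e_total = (n_protons / 4.0) * E_PER_FUSION
t1 = e_total / P_SUN

# Estimate 2: proton supply divided by the consumption rate implied by -dM/dt = P/c^2
rate = P_SUN / (M_PROTON * C ** 2)
t2 = n_protons / rate

print(f"t1 = {t1:.2e} s   (energy-budget estimate)")
print(f"t2 = {t2:.2e} s   (mass-loss-rate estimate)")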

 

Equations of motion:

The three kinematic equations that describe an object’s motion are:

d = ut + ½ at2

v2 = u2 + 2ad

v = u + at

There are a variety of symbols used in the above equations. Each symbol has its own specific meaning. The symbol d stands for the displacement of the object. The symbol t stands for the time for which the object moved. The symbol a stands for the acceleration of the object. And the symbol v stands for the final velocity of the object, u stands for the initial velocity of the object.

Assuming the initial velocity of the object is zero (u = 0):

d = ½ at 2

v2 =2ad

v = at

Since velocity is equal to displacement divided by time (i.e., v =d / t):

a = 2d /t2

a =d / 2t2

a = d / t2

Conclusion: 3 different results for a.

 

Albert Einstein’s law of variation of mass with velocity:

In physics, we define the kinetic energy of an object to be equal to the work done by an external impulse to increase velocity of the object from zero to some value v. That is,

KE = J × v

Impulse applied to an object produces an equivalent change in its linear momentum. The impulse J may be expressed in a simpler form:

J = ∆p = p2 − p1

where p2 = final momentum of the object = mv and p1 = initial momentum of the object = 0 (assuming that the object was initially at rest).

Impulse = mv

KE = mv2

In relativistic mechanics, we define the total energy of a particle to be equal to the sum of its rest mass energy and kinetic energy. That is, Total energy = rest energy + kinetic energy

mc2 = m0c2 + KE

Solving KE = mv2 we get:

m = m0/ (1− v2/c2)

But according to Albert Einstein’s law of variation of mass with velocity,

m = m0 / (1− v2/c2) ½

 

Rate equation:

Considering a bimolecular reaction, reactant A + reactant B ↔ Activated complex → Products, we can derive an expression for rate constant:

kr = k2 K*, where k2 is the rate constant for product formation and K* is the equilibrium constant for the formation of activated complex.

Taking natural logarithm of the above equation we get:

lnkr = lnk2 + lnK*

Differentiating the above equation we get:

dlnkr = dlnk2 + dlnK*

which is the same as:

dlnkr /dT = dlnk2/dT + dlnK*/dT

Since:

dlnkr /dT = Ea / RT2

dln K*/dT = ∆H*/ RT2

(where: Ea = energy of activation and ∆H* = standard enthalpy of activation).

Therefore:

Ea/ RT2 = dlnk2/dT + ∆H*/ RT2

It is experimentally observed that for reactions in solution,

Ea = ∆H*

Hence,

dlnk2/dT = 0

 

Since k2 = (k kBT/h) where k is the transmission coefficient (i.e., the fraction of activated complex crossing forward to yield the products), kB and h are the Boltzmann’s constant and Planck’s constant respectively, T is the temperature in kelvin.

Therefore:

dlnk /dT + dlnT/dT = 0

or dlnk = − dlnT

Integrating dlnk from k1 to k2, and dlnT from T1 to T2:

ln (k1 / k2) = ln (T2 / T1)

Taking the antilogarithm on both sides we get:

(k1 / k2) = (T2 / T1)

Which means: k1 is proportional to 1/T1 and k2 is proportional to 1/T2.

In general, k is proportional to 1/T, which means: the higher the temperature, the lower the value of the transmission coefficient. The lower the transmission coefficient, the smaller the fraction of activated complexes crossing forward to yield the products. And the smaller that fraction, the slower the rate of reaction.

Conclusion: with the increase in temperature, the rate of reaction decreases.

Experimental observation: The rate of reaction always increases with temperature. But in the case of enzyme catalyzed reactions, the rate increases with temperature up to certain level (corresponding to optimum temperature) after which the rate decreases with the increase in temperature.

 

Failure to meet universal equality proves that the rest masses of neutrons and protons are Variant.

The rest masses of proton and neutron are regarded as fundamental physical constants in existing physics and it is believed that they are invariant.

Rest mass of proton plus neutron = 1.007825 + 1.008665 = 2.01649 u.

But inside the deuteron nucleus, it is experimentally confirmed that

the rest mass of the proton plus the neutron = 2.01410 u, i.e., the rest mass of the proton plus the neutron inside the nucleus has decreased from 2.01649 u to 2.01410 u. The rest masses of neutrons and protons are fundamental constants only if they remain the same universally (inside and outside the nucleus). Failure to meet universal equality proves that the rest masses of neutrons and protons are variant.

VII

HAWKING FATAL FLAW MAY LEAD TO GRAND DESIGN

 

The image we often see of photons as a tiny bit of light circling a black hole in well-defined circular orbit of radius r = 3GM/c2 (where G = Newton’s universal constant of gravitation, c = speed of light in vacuum and M = mass of the black hole) is actually quite interesting.

The angular velocity of the photon orbiting the black hole is given by:

ω = c/r.

For circular motion the angular velocity is the same as the angular frequency. Thus

ω = c/r = 2πc/λ

or λ =2πr

The De Broglie wavelength λ associated with the photon of mass m orbiting the black hole is given by:

λ = h/mc. Therefore r = ħ/mc, where ħ is the reduced Planck constant. The photon must satisfy the condition r = ħ/mc, much like an electron moving in a circular orbit, since this condition forces the photon to orbit the hole in a circular orbit.

r = 3GM/c2 = ħ/mc

or 3GM/c2 = ħ/mc

or 3mM = (Planck mass)²

Because of this condition the photons orbiting the small black hole carry more mass than those orbiting the big black hole. For a black hole of one Planck mass (M = Planck mass),

m = 1/3 × Planck mass

Since the Hawking radiation is a Black Body radiation, the maximum energy an emitted Hawking radiation photon can possess is given by the equation:

Lmax = 2.821 kBT (where kB = Boltzmann constant and T = black hole temperature = ħc³/8πGMkB).

Lmax = 2.821 kBT

or Lmax = 2.821 (ħc3 / 8πGM)

which on rearranging:

GM / c2 = 2.821 (ħc / 8πLmax)

Since 3GM/c2 = ħ/mc. Therefore:

ħ/ 3mc= 2.821 (ħc / 8πLmax)

or mc2 = 2.968Lmax

which means: mc2 > Lmax

If a photon with energy mc² orbiting the black hole cannot slip out of its influence, then how can a Hawking radiation photon with maximum energy Lmax < mc² be emitted from the event horizon of the Schwarzschild black hole?

FG = force of gravitation experienced by the Hawking radiation photon at the surface of the black hole and FP = force which moves the Hawking radiation photon.

FG = GMm/RG² and FP = mc²/λ (where G = Newton's universal constant of gravitation, c = speed of light in vacuum, M = mass of the black hole, m and λ = mass and wavelength of the Hawking radiation photon, and RG = 2GM/c² = the radius of the black hole).

FG / FP = c2 λ/4GM

In MOST PHYSICS literature the energy of an emitted Hawking radiation photon is given by the equation: L = kBT (where kB = Boltzmann constant and T = black hole temperature).

L = kBT = (ħc3 / 8πGM)

By Planck’s energy-frequency relationship:

L = hc/λ

Hence:

hc/λ = (ħc3 / 8πGM) which on rearranging:

λ= 16π2GM/c2

Solving for λ in the equation (FG / FP = c2 λ/4GM) we get:

FG / FP = 16π²/4 = 4π² ≈ 39.48

FG ≈ 39.48 FP

Which means: FG > FP

If the photon wants to detach from the surface of the black hole, a voracious whirlpool in space, it should obey the condition FG = FP or FP > FG. Therefore, it is hard to claim the emission of a Hawking radiation photon from the Schwarzschild black hole. Moreover, Hawking radiation has not been observed after decades of searching; despite its strong theoretical foundation, the existence of this radiation is still in question.

If the Schwarzschild black hole does not emit any radiation, then it will continue to grow by absorbing surrounding matter and radiation, and its mass energy Mc² goes on increasing with time. Because Mc² = − 3.33U, the gravitational binding energy becomes more negative as the mass energy of the black hole increases, shrinking the black hole in size. Suppose the nature of the gravitational force so developed is similar to an inter-molecular force: attractive as long as the distance between the constituents of the black hole is greater than or equal to some optimum distance x Å, and strongly repulsive when that distance becomes less than x Å. As the gravitational binding energy of the black hole becomes more negative, the distance between its constituents decreases. As long as that distance is optimum, there is no considerable repulsion between the constituents. But when the distance between the constituents falls below x Å, the singularity of the black hole may explode with unimaginable force, propelling the compressed matter into space. This matter may then condense into the stars, planets, and satellites that make up solar systems like our own.

This is perhaps not very scientific, since no observational evidence is available, but it is still a nice mind exercise. However, if it is confirmed by observation, it will be the successful conclusion of a search going back more than 3,000 years. We will have found the grand design, one which leaves God pretty much on the bench for a long, long time.
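As a numerical sketch of the relations used in this section, the following minimal Python script evaluates mc², Lmax and the ratio FG/FP; the black hole mass chosen below is an arbitrary assumption for illustration.

import math

HBAR = 1.055e-34     # reduced Planck constant, J*s
C = 2.998e8          # speed of light in vacuum, m/s
G = 6.674e-11        # gravitational constant, m^3/(kg*s^2)

def orbiting_photon_energy(M):
    # m*c^2 with m = hbar*c/(3*G*M), from r = 3GM/c^2 = hbar/(m*c)
    return HBAR * C ** 3 / (3.0 * G * M)

def l_max(M):
    # maximum Hawking photon energy used in the text: 2.821*kB*T = 2.821*hbar*c^3/(8*pi*G*M)
    return 2.821 * HBAR * C ** 3 / (8.0 * math.pi * G * M)

M = 1.0e12           # assumed (small) black hole mass, kg
print(f"m*c^2 = {orbiting_photon_energy(M):.3e} J")
print(f"L_max = {l_max(M):.3e} J")
print(f"m*c^2 / L_max = {orbiting_photon_energy(M) / l_max(M):.3f}")   # about 2.97
print(f"F_G / F_P     = {4.0 * math.pi ** 2:.2f}")                     # 4*pi^2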

 

VIII

GRAVITATIONAL WAVES

Why do two massive bodies orbiting each other emit gravitational waves?

 

Suppose that the two masses are M and m, and they are separated by a distance r. The power given off by this system in the form of emitted gravitational waves is:

− dE/dt = 32G⁴(M × m)²(M + m) / 5c⁵r⁵,

where −dE is the small decrease in the energy of the system in the time dt. Gravitational waves rob the system of energy. As the energy of the system decreases, the distance between the masses decreases, and they orbit more rapidly. The rate of decrease of the distance r between the masses with time is given by:

− dr/dt = 64G³(M × m)(M + m) / 5c⁵r³,

where −dr is the small decrease in the distance between the orbiting masses in the time dt.

Dividing − dE/dt by − dr/dt, we get:

2 × (−dE/dt) = (GMm / r2) × (− dr/dt).

Since GMm / r2 = FG (the force of gravitation between the orbiting masses). Therefore:

2 (−dE/dt) = FG × (− dr/dt).

Suppose no gravitational waves is emitted by the system, then

(−dE/dt) = 0 and (−dr/dt) = 0

FG = 2 × {(−dE/dt) / (−dr/dt)} = 2 × (0/0)

FG = 0/0

i.e., the force of gravitation between the orbiting masses becomes UNDEFINED. The two masses orbiting each other should lose their energy in the form of gravitational waves in order to maintain a well-defined force of gravitation between them.

The life time of the orbit is given by the equation:

t life = 5c5r4 /256 G3 (M × m) (M + m).

Now comparing the above equation with the equation − dr/dt = 64G3 (M × m) (M + m) / 5 c5 r3 we get: − dr/dt = r /4t life

Representing the rate of orbital decay (− dr/dt) by the symbol R1 we get:

R1= r /4t life

However, the distance between the orbiting masses not only decreases due to the emission of gravitational radiation but also increases at the same time due to the Hubble expansion of space. The rate of increase of the distance between the orbiting masses due to the expansion of space is given by the equation: R2 = dr/dt = H × r, where H is the Hubble parameter.

On dividing R1 by R2 we get:

R1 / R2 = 1 / 4Ht life

Since H = 1/ tage (where tage = age of the universe). Therefore:

R1 / R2 = tage / 4t life

For a system like the Sun and Earth, r is about 1.5 × 10¹¹ m and M and m are about 2 × 10³⁰ kg and 6 × 10²⁴ kg respectively. In this case, t life is about 3.44 × 10³⁰ s.

R1 / R2 = tage / (4 × 3.44 × 10³⁰ s)

Since tage ≈ 4.347 × 10¹⁷ s. Therefore:

R1 / R2 = 3.159 × 10⁻¹⁴

Which means: R2 > R1 i.e., the rate of increase of distance between the orbiting masses due to the Hubble expansion of space is far greater than the rate of decrease of distance between the orbiting masses due to the emission of gravitational radiation.

If tage = 4t life, then

R1 = R2

For a system like the Sun and Earth,

t life = 3.44 × 10³⁰ s

tage = 4t life = 1.376 × 10³¹ s

i.e., when the age of the universe approaches 1.376 × 10³¹ s, the rate of decrease of the distance between the Earth and the Sun due to the emission of gravitational radiation is exactly equal to the rate of increase of that distance due to the Hubble expansion of space (i.e., the distance between the Earth and the Sun neither contracts nor expands). However, even before tage approaches 1.376 × 10³¹ s, the Earth will be swallowed by the Sun in the red giant stage of its life, in a few billion years' time.
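A minimal Python sketch evaluating the orbital lifetime and the ratio R1/R2 for the Sun and Earth values quoted above (constants and function names are mine):

M_SUN = 2.0e30       # kg
M_EARTH = 6.0e24     # kg
R_ORBIT = 1.5e11     # m
C = 2.998e8          # m/s
G = 6.674e-11        # m^3/(kg*s^2)
T_AGE = 4.347e17     # s, age of the universe used in the text

def orbit_lifetime(M, m, r):
    # t_life = 5*c^5*r^4 / (256*G^3*M*m*(M+m)), as quoted above
    return 5.0 * C ** 5 * r ** 4 / (256.0 * G ** 3 * M * m * (M + m))

t_life = orbit_lifetime(M_SUN, M_EARTH, R_ORBIT)
r1 = R_ORBIT / (4.0 * t_life)        # shrinkage rate from gravitational radiation, R1 = r/(4*t_life)
r2 = (1.0 / T_AGE) * R_ORBIT         # growth rate using H ~ 1/t_age, as in the text

print(f"t_life  = {t_life:.2e} s")
print(f"R1 / R2 = {r1 / r2:.2e}")    # of order 1e-14, as quoted above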

 

IX

 

If a PART mc2 of the photon energy is absorbed by the electron at rest, then the absorbed energy mc2 manifests as the Kinetic energy KE of the electron and the momentum mc of the absorbed photon manifests as the momentum p of the electron. Therefore, the equation

KE = ∆p × v

where ∆p = p2 – p1, p2 = final momentum of the electron = p and p1 = initial momentum of the electron = 0 (since the electron was initially at rest).

becomes:

mc2 = mc × v

From this it follows that

v = c

The idea which states that nothing with mass can travel at the speed of light is a cornerstone of Albert Einstein’s special theory of relativity, which itself forms the fundamental precept of modern physics. If the electron recoils with a velocity v=c, then the basic laws of physics have to be rewritten.

 

REFERENCES

Physics I For Dummies Paperback- June 17, 2011 by Steven Holzner.

Physics II For Dummies Paperback- July 13, 2010 by Steven Holzner.

Basic Physics by Nair.

Beyond Newton and Archimedes by Ajay Sharma.

Einstein, Newton and Archimedes GENERALIZED (detailed interviews) by Ajay Sharma.

http://en.wikipedia.org/wiki/Gravitational_wave.

Teaching the photon gas in introductory physics by HS Leffa.

Hand Book of Space Astronomy and Astrophysics by Martin V. Zombeck.

Astrophysical concepts by Martin Harwit.

Ma H. The Nature of Time and Space. Nat Sci 2003; 1(1):1-11.

What is the Strength of Gravity? Victor Stenger (Excerpted from The Fallacy of Fine Tuning, 2011).

Stephen W. Hawking, A Brief History of Time: From the Big Bang to Black Holes (New York: Bantam, 1988).

Defending The Fallacy of Fine-Tuning by Victor J. Stenger.

Victor J. Stenger, The Comprehensible Cosmos: Where Do the Laws of Physics Come From? (Amherst, NY: Prometheus Books, 2006).

Sharma, A Physics Essays Volume 26, 2013.

Cockcroft, J.D., and Walton, E.T.S., Nature 129, 649 (30 April 1932).

http://www.nobelprize.org/nobel_prizes/physics/laureates/1951/cockcroft-lecture.pdf.

Newton, Isaac Mathematical Principles of Natural Philosophy, London, 1727, translated by Andrew Motte from the Latin.

A.L.Erickcek, M Kamionkowski and Sean Carroll, Phys. Rev D 78 123520 2008.

Sharma, A. Concepts of Physics (2006).

Fadner, W. L. Am. J. Phys. Vol. 56 No. 2, February 1988.

Einstein, A. Annalen der Physik (1904 & 1907).

Arthur Beiser, Concepts of Modern Physics, 4th edition (McGraw-Hill International Edition, New York, 1987).

MISCONCEPTIONS ABOUT THE BIG BANG by Charles H. Lineweaver and Tamara M. Davis.

BEYOND EINSTEIN: from the Big Bang to Black Holes (prepared by The Structure and Evolution of the Universe Roadmap Team).

Alternatives to the Big Bang Theory Explained (Infographic) By Karl Tate.

The Origin of the Universe by S.W. Hawking.

The Beginning of Time by S.W. Hawking.

A Universe from Nothing by Lawrence M. Krauss.

Evolution: A Theory in Crisis by Michael Denton.

The Origin and Creation of the Universe: A Reply to Adolf Grunbaum by WILLIAM LANE CRAIG.

Weisskopf, Victor [1989]: ‘The Origin of the Universe’ New York Review of Books.

The Grand Design by Stephen Hawking and Leonard Mlodinow.

M. Planck, The Theory of Radiation, Dover (1959) (translated from 1906).

Black Holes and Baby Universes and Other Essays by S.W. Hawking.

David Griffiths, Introduction to elementary particles, Wiley, 1987. ISBN 0471-60386-4.

Feynman, Leighton, and Sands, The Feynman Lectures on Physics, Addison-Wesley, Massachusetts, 1964. ISBN 0-201-02117-X.

D.A. Edwards, M.J. Syphers, An introduction to the physics of high energy accelerators, Wiley, 1993. ISBN 0-471-55163-5.

The Universe: the ultimate free lunch by Victor J Stenger (1989).

A Case Against the Fine-Tuning of the Cosmos by Victor J. Stenger.

A Quantum Theory of the Scattering of X-rays by Light Elements by Arthur. H Compton (1923).

Derive the mass to velocity relation by William J. Harrison (the general science journal).

BLACK HOLE MATH by National Aeronautics and Space Administration (NASA).

The Gravitational Radius of a Black Hole by Ph.M. Kanarev.

The Gravitational Red-Shift by R.F.Evans and J.Dunning-Davies.

Matter, Energy, Space and Time: Particle Physics in the 21st Century by Jonathan Bagger (2003).

Quarks, Leptons and the Big Bang by Jonathan Allday.

String Theory FOR Dummies by Andrew Zimmerman Jones with Daniel Robbins.

Einstein, String Theory, and the Future by Jonathan Feng.

Cosmos by Carl Sagan.

The Theory of Everything by S.W. Hawking.

A Briefer History of Time by Stephen Hawking and Leonard Mlodinow.


The Grandfather Paradox: What Happens If You Travel Back In Time To Kill Your Grandpa? Written by Motherboard.

The human health effects of DDT … by MP Longnecker (1997).

Side Effects of Drugs Annual: A worldwide yearly survey of new data and … edited by Jeffrey K. Aronson.

Unstoppable Global Warming: Every 1,500 Years by Siegfried Fred Singer, ‎Dennis T. Avery (2007).

Acid Rain by Louise Petheram (2002).

Eutrophication: Causes, Consequences, Correctives; Proceedings of a Symposium edited by National Academy of Sciences (U.S.).

What’s wrong with food irradiation?

(https://www.organicconsumers.org/old_articles/Irrad/irradfact.php).

Ammonia: principles and industrial practice by Max Appl (1999).

An Edible History of Humanity by Tom Standage (2012).

Relativity: The Special and General Theory by Albert Einstein (1916).

Neutrinos: Ghosts of the Universe by Don Lincoln.

The Feynman Lectures on Physics (Volume I, II and III) by Richard Feynman.

The Evolution of the Universe edited by David L. Alles.

The Universe: Size, Shape, and Fate by Tom Murphy (2006).

Paul J. Steinhardt & Neil Turok (2007). Endless Universe: Beyond the Big Bang. New York: Doubleday.

Carroll, Sean (2010). From Eternity to Here. New York: Dutton.

Astronomy for beginners by Jeff Becan.

PARALLEL WORLDS: A JOURNEY THROUGH CREATION, HIGHER DIMENSIONS, AND THE FUTURE OF THE COSMOS by Michio Kaku.

Steven Weinberg, The First Three Minutes, 2nd ed., Basic Books, 1988.

Hugh Ross, Creation and the Cosmos, NavPress, 1998.

A Short History of Nearly Everything by Bill Bryson (2003).

The Universe in a Nutshell by Stephen W. Hawking (2001).

On the Radius of the Neutron, Proton, Electron and the Atomic Nucleus by Sha YinYue.

What Energy Drives the Universe? − Andrei Linde.

Endless Universe by Paul J. Steinhardt and Neil Turok.

Greene, Brian. Elegant Universe. New York: Vintage, 2000.

Davies, Paul. The Last Three Minutes. New York: Basic Books, 1994.

Lederman, Leon M., and David N. Schramm. From Quarks to the Cosmos. New York: W. H. Freeman, 1989.

Singh, Simon. Big Bang. New York: HarperCollins, 2004.

Greene, Brian. Fabric of the Cosmos. New York: Vintage, 2005.

FUNDAMENTAL UNSOLVED PROBLEMS IN PHYSICS AND ASTROPHYSICS by Paul S. Wesson.

Griffiths, D. 1987. Introduction to Elementary Particles. Harper and Row, New York.


PHYSICS OF THE IMPOSSIBLE by Michio Kaku.

EINSTEIN'S COSMOS by Michio Kaku.

A Tour of the Universe by Jack Singal.

The Gravitational Universe by Prof. Dr. Karsten Danzmann.

Horgan, John. The End of Science. Reading, Mass.: Addison-Wesley, 1996.

The Origin of the Universe and the Arrow of Time by Sean Carroll.

Weinberg, Steven. Dreams of a Final Theory: The Search for Fundamental Laws of Nature. New York: Pantheon Books, 1992.

Adams, Douglas. The Hitchhiker's Guide to the Galaxy. New York: Pocket Books, 1979.

Tyson, Neil deGrasse. The Sky Is Not the Limit. New York: Doubleday, 2000.

Chemistry For Dummies Paperback- May 31, 2011 by John T. Moore

Protein-Ligand Binding by MK Gilson.

 

