30 January 2015
Newton represents another extreme with his idea of an instantaneous “action at a distance”. According to his theory, gravitation can act instantly even across infinite distances. Newton “left to… [his] readers” the decision whether the agent that caused gravity was “material or immaterial” [Newton: letters to Bentley]. To his contemporaries this “action at a distance” seemed too mysterious and too similar to magical forces, but thanks to Laplace’s, Lagrange’s and others’ interpretation of physics as a purely mathematical description, the problem seemed to vanish [Roland Omnés: Quantum Philosophy, 1999, p. 35].
Einstein’s world differs from his forerunners’. Newton believed in absolute space and absolute time: they functioned as the basis of a reference system, and motion happened relative to it.
Opposite to this interpretation, Einstein held that the speed of light is the maximum speed anything can reach in our Universe, and that it is the same in every inertial frame of reference. Thus neither space nor time but this constant, the speed of light, was absolute – as a barrier. This was a solution to the result of the Michelson–Morley experiment, which had originally aimed to verify the existence of the ether as the medium of light waves but failed to do so (notice that even Maxwell’s equations were based on his confidence in the existence of the ether [Omnés, p. 44]).
Einstein’s model is surely more counterintuitive than Aristotle’s or Newton’s (and it is not an accident that it appeared last). On the other hand, it is surprisingly simple: no body with mass can reach the speed of light, and a particle with zero rest mass moves at the speed of light. That’s all. Notice that Einstein’s solution is instrumentalist in a certain sense, since it doesn’t explain why the speed of light is so special – it simply accepts its absoluteness. An instrumentalist approach can result in a working model (see the Ptolemaic worldview), but it is not connected to physical reality, since its subject is not the real world but only a description of the real world. This means that it can be proven false at any time (as happened in Ptolemy’s case).
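The barrier itself can be made concrete in a few lines. The sketch below (the function names are my own, used only for illustration) shows how the Lorentz factor – and with it the energy needed for any further acceleration – blows up as a massive body approaches c:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v: float) -> float:
    """gamma = 1 / sqrt(1 - v^2/c^2); diverges as v approaches c."""
    if v >= C:
        raise ValueError("a body with mass cannot reach the speed of light")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def relativistic_energy(rest_mass_kg: float, v: float) -> float:
    """Total energy E = gamma * m * c^2 of a massive body."""
    return lorentz_factor(v) * rest_mass_kg * C ** 2

# The closer v gets to c, the faster gamma grows without bound:
for fraction in (0.5, 0.9, 0.99, 0.999):
    print(fraction, lorentz_factor(fraction * C))
```

Nothing in this arithmetic explains why c is special; the divergence is simply built into the formula – which is exactly the instrumentalist point above.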
There are some possible scenarios at this point.
We can believe that Einstein’s model is flawless. Obviously, this is the simplest solution. Although it is true that all the previous models proved to be incorrect, it is not a necessity that this theory will also be proven false. Similarly, we cannot be sure that because it seems immaculate, it will be unchallengeable in the future. In fact, there are critics of Einstein’s theory today: according to Joseph Lévy, “the speed of light was erroneously found to be constant” [From Galileo to Lorentz… and Beyond, 2002, p. 2]. In the age of Galileo, Aristotelian science was replaced by the “New Physics” within a few decades, but the institutional system of research was in embryonic form at that time. So the opinion of those who have some doubts about the flexibility of our textbook–academy–big science system is understandable. After all, a theory’s acceptance is only an indicator of its respect, not necessarily of its truth.
On the other hand, the simplicity of this “mass means slower than light, zero mass means speed of light” equation urges caution. The ancient Greeks believed, inter alia, in the harmony, beauty and mathematical nature of “natural laws.” I don’t believe that the adaptation of our hunter-gatherer ancestors’ mental organs to their environment prepared us to understand physics (after all, we have to use higher mathematics and other tricks), so it is reasonable to introduce an anti-Ockham rule. According to it, nature is not necessarily organized by simple and transparent laws, and it is always suspicious if an explanation is too elegant, too regular, etc. A circle is perfect – but the orbit of a planet is not a circle. The history of physics is the history of the appearance of more and more complicated theories.
26 January 2015
Since the 15th century, says Dyson, only about half a dozen concept-driven revolutions have happened – Copernicus, Newton, Darwin, Maxwell, Freud and Einstein – but about two dozen tool-driven revolutions, from Galileo’s telescope to the Watson–Crick DNA model based on X-ray diffraction [Imagined Worlds, p. 50].
It is not an accident that the idea of the tool-driven revolution appeared in the last century. It was the age of “big science”, when the industrial nations’ megaprojects, from the Hubble Space Telescope through the Large Hadron Collider to HUGO, came to dominate the experimental sciences, and it became natural to plan research programs to fit the billion-dollar equipment, and not vice versa.
In parallel with Dyson, historian of science Peter Galison published a book about the “visual culture of microphysics”, arguing that scientific activity is traditionally divided into theory, instrument building and experiment [Image and Logic, 1997, p. 799], and that theory was regarded as the most important part.
Historically, an experiment could serve three purposes. In the Middle Ages its function was to demonstrate known truths (similarly to the universities’ public debates, whose aim was to demonstrate debating skills: one student had to defend and another to attack an Aristotelian statement, but proving that Aristotle was “wrong” showed only the winner’s persuasive argumentation, not the ancient philosopher’s error [John Henry: A Short History of Scientific Thought, 2012, p. 49]).
Today an experiment can be used either to “verify” a theory or as the starting point of a new one, and the latter can lead to a tool-driven revolution. In short, a theory-driven revolution is a new answer to old questions; a tool-driven revolution is a new answer to new questions. A philosopher might say that a Kuhnian revolution is a posteriori, while a tool-driven one is a priori.
Obviously there is no way to divide theory sharply from practice. The classical mechanics of the 19th century relied heavily on models in which forces, effects, etc. were visualized as imagined combinations of pulleys, levers and the like. And Galison pointed out that Einstein’s position in the patent office was not isolation but an ideal post for reviewing technical and mechanical innovations – including the problem of the synchronization of electric clocks [Jon Agar: Science in the Twentieth Century and Beyond, 2012, p. 29].
Just as experiments have different uses, there are several different ways to use mathematics in the natural sciences.
- Newton developed a new kind of mathematics to solve certain physical problems. Opposite to this,
- Einstein chose an existing mathematical apparatus to answer a known physical question.
- A third possibility is to apply mathematical concepts to the natural sciences directly. In mathematics not only potential but actual infinity exists, so a physicist who simply uses mathematics as a tool will probably accept the existence of actual infinity in physics without any reflection (see black holes and other exotic entities), although its existence in reality is not proven.
These mathematics-like thinking tools have become more and more important, since there is no opportunity for a real experiment in connection with, for example, the multiverse.
“Philosophy has not kept up with modern developments in science,” thus it “is dead”, writes Hawking [The Grand Design, 2011, p. 13]. But Roger Scruton underlines that philosophy’s aim is not to find real solutions to real problems (e.g. the path of an asteroid) but to answer “abstract” and “ultimate” questions [Modern Philosophy, 1994, p. 3].
My prediction is that traditional, experiment-dominated science will split into tool-driven big science and a field dominated by thinking tools. So we have two choices. If we want to believe that science is competent even where there is no chance for an experiment, then we have to redefine the meaning of science. Or we have to recognize that philosophy is annexing some parts of the modern natural sciences.
22 January 2015
It is not a surprise to find ourselves in a Universe which is at least partially interpretable by laws – perhaps a primitive form of life could evolve without a kind of predictability, but an intelligence able to interpret the World surely could not. Compressibility is the feature which allows us to use shorthand to calculate reality from certain pieces of information (e.g. a planetary orbit). Opposite to this, a string is random if we cannot present a representation of it which is shorter than the original sequence [Barrow: New Theories of Everything, 2007, p. 11].
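A toy illustration of this distinction: true (Kolmogorov) compressibility is uncomputable, but an ordinary compressor is a rough practical stand-in for the “shorthand”. A law-like, regular string shrinks dramatically, while a random one does not (a sketch using Python’s standard zlib; the function name is mine):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower means more compressible."""
    return len(zlib.compress(data, 9)) / len(data)

# A highly regular string (analogous to a law-governed orbit) compresses well...
regular = b"0123456789" * 1000

# ...while (pseudo)random bytes resist any shorthand description.
random_like = os.urandom(10_000)

print(compression_ratio(regular))      # far below 1
print(compression_ratio(random_like))  # close to (or even above) 1
```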
Having studied a special musical instrument, the lyre, the Pythagoreans concluded that reality is ruled by numbers, and because of their intellectual heritage we regard it as self-evident that the description of nature is compressible thanks to numbers. It is not questionable that this mathematics-based method has proved really efficient at describing some aspects of reality.
On the other hand, there are incompressible things: it is trivial, for example, that there is no shorter version of a poem which is equal to it, and it is a particular feature of the Universe that it can be divided into a compressible and an incompressible part.
Obviously, not “every known fact” mentioned by Deutsch is mathematically inaccessible, but it is another feature of our World that the proportion between the rules and the data we should memorize is unequal.
There are so few rules that some scientists hope to reduce all of them to a single super formula. As was mentioned earlier, the existence of intelligent life prescribes the existence of a huge amount of data, since the evolution of thinking required billions of years, and a long period of time means many events.
But on the one hand, why was it such a slow process? And on the other hand, why do there not exist many, many more laws, even comparable in number to “every known fact”? Is this a requirement of a biofil Universe or only an accident?
And a concluding remark: we can draw up a categorization based on the relationship between “rules” (roughly: compressibility) and “data”. Obviously the fifth subcategory is the most interesting, where the efficiency of the rules is determined either by a nonlinear function or changes randomly.
1. perfect (theoretically): compressible macro-physical laws
2. zero: a poem, human society’s evolution, etc.
3. less and less efficient over time: meteorology
4. more and more efficient: calculation by iterations
5. the efficiency changes over time according to a different pattern than in the third or fourth subcategory: ?
18 January 2015
But as Stephen Jay Gould pointed out, “any complex historical outcome – intelligent life on Earth, for example – represents a summation of improbabilities and becomes thereby absurdly unlikely” [ibid., p. 187]. So our special situation is not necessarily a consequence of fine-tuning.
Similarly, Feynman joked about his luck: he had accidentally observed the license plate ARW 357, although the probability of that was practically zero.
And there are other problems with arguments about fine-tuning.
First of all, life is sensitive to some physical parameters. Opposite to this, computing is insensitive to changes in almost all of them. But this doesn’t mean that our biology is extremely fine-tuned to (or extremely matches) our Universe’s conditions. Probably neither a higher nor a lower speed of light would prevent the rise of life. We can play with the idea of a universe where a minor modification of any parameter would result in a dead world. That is not the case in our Universe, so we do not live in such a “super fine-tuned” world.
Second of all, “to be privileged” means that the given situation is not average, but applying this to the biofil universe concept involves an unspoken presupposition. If we assume that there are some other universes, then uniqueness has a meaning. In other words: if we hypothesize that only a finite number of other universes exist, then the “privileged position” is interpretable. But if we suppose that infinitely many other worlds exist (no matter how we define them), then we have to believe that infinitely many of those universes are identical to ours. What is more, in this case the numbers of lifeless, biofil and computable universes are equal – after all, each kind has infinitely many identical copies. Thus the meaning of “privileged to some extent” becomes uninterpretable.
15 January 2015
According to Avi Loeb, viewing conditions were optimal when the Universe was a mere half billion years old, and circumstances were excellent for studying cosmic perturbations. On the other hand, Lawrence M. Krauss and Glenn D. Starkman pointed out that in an ever-expanding universe “presently observable distant sources will disappear on a time-scale comparable to the period of stellar burning,” and we will forget the existence of our Universe within a hundred billion years. So our position in spacetime is between the two ends of the scale, but it is far from optimal. We can recall the words of King Alfonso X, who said that he had “some useful hints for” a Creator producing a World.
To give a trivial example: it is impossible to decide directly whether a light source is a distant and bright object or a weak source close to us [G. F. R. Ellis: Cosmology and Verifiability. In: Modern Cosmology and Philosophy, 1998, p. 121]. A physics where this kind of problem is eliminated is imaginable, theoretically at least – e.g. because every source’s brightness is equal; or we could find other, more sophisticated solutions (Cepheids seem to be an exception, with the correlation between their light intensity and period).
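The degeneracy Ellis mentions follows directly from the inverse-square law: a measured flux fixes only the ratio L/d², never L and d separately. A toy sketch (the numbers are purely illustrative, not real objects):

```python
import math

def observed_flux(luminosity_watts: float, distance_m: float) -> float:
    """Inverse-square law: flux = L / (4 * pi * d^2)."""
    return luminosity_watts / (4.0 * math.pi * distance_m ** 2)

# Two hypothetical sources, chosen so that L / d^2 is identical:
near_dim   = observed_flux(luminosity_watts=1.0e26, distance_m=1.0e17)
far_bright = observed_flux(luminosity_watts=1.0e30, distance_m=1.0e19)

# A telescope measuring flux alone cannot tell them apart:
print(near_dim, far_bright)
```

This is why an independent distance indicator – such as the Cepheid period–luminosity relation – is needed to break the degeneracy.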
All in all, measuring distances in our Universe is laborious and uncertain work, and in the absence of real knowledge we are willing to accept an “unproven cosmological assumption” about the homogeneity of space as an extension of the Copernican–Darwinian revolutions. Then this unproven assumption is used to interpret astronomical information (including distances) [ibid., p. 123].
This problem is caused by the nature of our World, and it is not a desirable situation for observers. But the worst scenario is a universe where one cannot observe anything – I strongly suspect that such a world would be lifeless.
So there are two possible explanations. Perhaps we simply live in a Universe which is neither the best nor the worst from this point of view. But it is imaginable as well that a really observer-friendly universe is impossible, and so this is the best one within the framework of our physics.
13 January 2015
The aim of biological eschatology is to describe the possible futures of life in a long-lasting Universe. It focuses on effects which appear only on a very large time scale, and it describes only a phase space of possibilities, not the real future.
11 January 2015
But this bat metaphor is seriously flawed. The “other minds” problem refers to the question in philosophy of how we can justify that other humans have minds similar to our own. From an epistemological approach, the problem is that there is a fundamental difference between accessing our own experience and others’. I have firsthand experience of my visual, auditory, etc. impressions, but I cannot access another human’s firsthand experiences. I don’t have direct knowledge of them.
It is unquestionable that there are differences between the nature of an experience I have had directly and one you have told me about. But the fundamental question is not whether we – you and I – are the same person (obviously not), but the similarities and differences between your perception and mine; and although we cannot ascertain whether another human has the same experience, there are at least two solutions.
1. Historian David Hackett Fischer mentions “the fallacy of the metaphysical question”, which is “an attempt to resolve a nonempirical problem by empirical means” [Historians’ Fallacies, 1970, p. 12]. In other words: the problem of other minds (or bat perception or alien thoughts, if you prefer) is simply unanswerable from a philosophical point of view. It is impossible to know “what it is like to be a bat” – or another person, or an alien.
2. On the other hand, if one focuses not on the fact that different beings perform the observations but on the causes which give reason to believe in similarities between their perceptions, the overall picture changes. The operation of our senses is adapted to our environment, so it is a sound argument that human sensory organs operate similarly. Thus the effects they produce are more or less similar too, and we have no reason to presume that your experiences and mine are radically different. Because of our common evolutionary origin, I can imagine what you perceive.
A bat is a more distant relative of ours than another human. We can imagine with less certainty “what it is like to be a bat” than “what it is like to be a human”, but this doesn’t mean that we cannot imagine it at all.
Hypothetical aliens are formed by the same evolutionary processes as us. Perhaps the gravitation, atmosphere and other parameters of their habitats are different, but the logic of evolutionary adaptation excludes certain solutions and supports others, so the phase space of possibilities is restricted. Perhaps there are many forms of both life and intelligence in the Universe, but that doesn’t mean there are no rules. Similarly, the distance between us and an alien is surely bigger than between us and a chimpanzee. But difference does not necessarily mean that something is totally unimaginable.
09 January 2015
This short paper proposes an alternative theory to the Anthropic Principle. According to our interpretation, the Universe is not "fine-tuned" for life but "roughly tuned" for computation, and its biofil character is only a phenomenon. This standpoint allows us to extend Seth Lloyd's concept of the ultimate physical limits of computing to examine the computing capabilities of any imaginable universe. In addition, I draw up a universe classification based on it.
Read more: http://arxiv.org/abs/1501.01754
08 January 2015
Complexity science and biological adaptation
According to complexity science, a hurricane is only complex, but an anthill is both a complex and an adaptive system, since it can change itself and its operation in response to changes in the environment. Biological evolution is a subcategory of complex adaptive systems, since it is a system whose essential feature is the flexibility to adapt itself to a wide range of options. Life has adapted successfully to the deep sea and the high atmosphere on our planet, and it seems a plausible hypothesis that it would be able to survive even on the surface of Mars. Notice that if this reasoning is true, then it is possible that the cause of the other planets’ lifelessness in the Solar System is not their hostile environment. Of course, they are lethal for Earthly life, since our life is adapted to our environment (inter alia, to a dangerous oxidant named oxygen). The problem is that life didn’t begin to evolve on those heavenly bodies, so it didn’t learn to adapt to those circumstances – perhaps because of unfavorable initial conditions. All in all, it is useful to make a distinction between an environment hostile to life and an environment where life cannot begin to evolve.
Adaptation to our Universe’s physics
A complex adaptive system can possibly adapt to quite different circumstances. On the other hand, since this adaptation is the result of adaptive processes, the outcome will match the conditions more or less strictly, and because of this it is not a surprise that life seems to be “fine-tuned” to certain traits of our Universe. Applying complexity science’s approach to the problem of fine-tuning, we can find a different solution than the Anthropic Principle.
This logic raises further questions.
- First of all, it is a question whether different forms of life exist in our Universe which are either more or less fine-tuned to our Universe’s conditions. It seems probable to me – but who knows.
- According to this interpretation, the rise of life seems possible in other universes characterized by other parameters (although it is not a necessity).
06 January 2015
- by evolution
- by chance
- by creation
- by laws
At this point it is worth introducing the notion of “compressibility”. In computing, a random string is not compressible, since there is no way to know the digit at the nth place without viewing it. But if you have a simple algorithm to generate every element of a sequence, then this sequence is compressible. Similarly, if you have to take a lot of steps before an intelligence emerges, then it is a not too compressible (or at least not well compressible) process. Evolution seems very slow and sluggish: after the formation of our planet, we had to wait billions of years for the first thinking creature.
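The contrast can be sketched in a few lines: a rule-generated sequence is fully “compressed” into its generating algorithm, so any element is available without producing the ones before it (a minimal illustration with a deliberately trivial rule of my own choosing):

```python
def nth_element(n: int) -> int:
    # The entire infinite sequence 0,1,2,...,9,0,1,... is compressed into
    # this one-line rule: any element is known without generating the rest.
    return n % 10

# For a truly random string there is no such rule: the shortest description
# is the string itself, so you must store it and look at position n.
print([nth_element(n) for n in range(12)])
```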
Creating intelligence by pure chance is even more time-consuming: you may have to wait even an infinitely long time for the manifestation of a Boltzmann brain.
The other two kinds of intelligence creation are different in the sense that chance doesn’t play a role in them – unless we suppose that a World Creator’s decision (“to create or not to create”) depends on chance, probability or anything else. But if a Creator is omnipotent and there is no power or law to influence him or her, then the act of creation can even be immediate. The “compressibility” of this process is very high: a single decision leads to the formation of a whole World. Obviously, that Creator could decide to spin the process out for a very long time, but even in this case the result is the consequence of a single decision.
Interestingly, a law-driven creation is in some respects similar to an omnipotent entity’s activity. In the case of creation by evolution it can be questioned whether a thinking brain could be produced in a few steps (instead of billions of changes). But a law which prescribes the whole process step by step seems at least imaginable.
So both creating intelligence by a Creator and by law result in a very compressible process – opposite to creation by either evolution or chance. We can wonder whether nature prefers laborious solutions or simply likes to hold back information. After all, I agree with King Alfonso X of Castile, who said: "had I been present at the Creation, I would have given some useful hints for the better ordering of the universe".
05 January 2015
A strong form of the Anthropic Principle (SAP) states that the coincidences which made life possible are too improbable to be an accident. So, the argument continues, it is not an accident: our Universe is fine-tuned for our life. From here it is only a step to believing that if it is fine-tuned for us, then it is the result of an intelligent entity’s creative action.
But there are problems with it.
One can interpret the multiverse hypothesis as a solution to SAP, since if we suppose that there are infinitely many universes, then we can trace our Universe's fine-tuned features back to a selection effect. According to this approach, we are simply lucky to find ourselves in a biofil universe, but there are many others without life.
But Robin Collins argues from a theistic point of view [The Multiverse Hypothesis: a Theistic Perspective. In: Universe or Multiverse?, Cambridge University Press, 2007] that the supposed existence of multiverses does not rule out the existence of God. If we are willing to believe in his ability to create a whole universe, then we can believe that he was able to create a multiverse (or anything else) as well.
Thus whatever explanation based on natural processes we dream up, it is easy to put a question at the end of the chain of reasoning and ask: “but who created it?” This is similar to the “God of the gaps” argument. What is more, it is impossible to find a situation where it wouldn’t be applicable. In other words: the statement that “it was created by a creator” is not bound to any physical phenomenon, observation, etc.
But since modern science is about reality, science and God do not have any relevance to each other.
04 January 2015
First of all, it is a statement about the role of philosophy in science. It is a defendable claim that examining the usefulness of philosophy in connection with science belongs to the philosophy of science. So it is slightly self-contradictory to use philosophical arguments against the usefulness of philosophy (in our case, against the philosophy of science).
Feynman’s concept echoes the so-called Baconian tradition, which proposed “theory-free” experiments and observations [John Henry: A Short History of Scientific Thought, 2012, p. 85]. But it is well known today that there is no theory-independent observation; there is no way to interpret an experiment without a framework of interpretation, etc. Simply speaking, there is no theory-free theory – and every scientist uses theories in one form or another.
What is more, Feynman’s bon mot states that the usefulness of ornithology for birds equals the usefulness of the philosophy of science for scientists. That is, if a bird can sing without interpreting its activity, then a physicist can work without any reflection on the nature of science.
It is an analogy, but we always have to make a distinction between its two functions. Sometimes we use analogies as illustrations, as Plato did in his cave allegory, and sometimes analogies are integral parts of arguments. Obviously, it is critical whether an analogy belongs to the first or the second category. According to Hume, “all our reasonings concerning matters of fact are founded on a species of Analogy”, and even the idea of natural law is based on some essential analogies between past and future [Julian Baggini – Peter S. Fosl: The Philosopher’s Toolkit, 2010, pp. 53–54]. Darwin used the analogy between natural selection and the pigeon breeders’ work. And don’t forget Galileo’s analogy: “the book [of Nature] is written in mathematical language”. The mathematical equations used to describe reality are based on analogies between abstract mathematical formulations and the nature of physical objects, and so on. It would be funny to rewrite the whole history of science as the history of the use and abuse of analogies from Plato to Feynman. And I am curious whether it would be possible to find another solution to depict everything.
03 January 2015
This reasoning is based on some unspoken assumptions.
Although Abraham Loeb argues [The Habitable Epoch of the Early Universe, 2013] that there was an opportunity for the appearance of life as early as 10–17 million years after the Big Bang, it seems evident that because of the physical laws, a relatively huge and old universe is needed for the appearance of life. Even in Loeb’s example, the criteria for life didn’t appear immediately after the beginning. So, at least in a world governed by our laws of physics, we have to wait some time before the first organism can be created. Obviously, it is an interesting question whether a universe with different physical laws could produce life within the first seconds. And it is an even more interesting question how long the formation of intelligence would take in another universe. Ad absurdum, one can imagine a world where the basic elements of life are cooked up in a twinkle, but evolution is another story: since it is based on natural selection, many steps are needed to reach a certain complexity. The appearance of intelligence at a very, very early period of a universe would be a powerful argument for the existence of a Creator.
Turning back to the original questions, it seems a well-founded argument that there are some criteria for the emergence of life and intelligence. It isn’t possible for them to come into being without certain physical conditions.
It is imaginable that the emergence of a biofil universe requires the existence of many other universes. After all, life requires a relatively long past with a relatively huge spatial extension, and, ad analogiam, it cannot be excluded that a universe fine-tuned for life requires the existence of other universes.
Obviously, this does not seem a too serious concept. But Carr’s concept is based exactly on this, since he traces the existence of our biofil environment back to the existence of other universes. According to him, our Universe was chosen by chance, but I can imagine that there is a connection between our Universe and the others, and that this connection is based not on a kind of chance but on some kind of physical laws.
What is more, by arguing that only a few habitable universes (or only one) exist, Carr suggests that life is not a general phenomenon but an exception. But at least theoretically it is imaginable that life is necessarily associated with the presence of matter – if not in every universe, then in several of them.
Or, ad absurdum, we can play with the idea of an “Ultimate Anthropic Principle” which prohibits the birth of any lifeless universe. Who knows – we have only one example of a biofil world, and it isn’t enough for generalization.
02 January 2015
This proposal is divided into two parts: extending the scope of human rights, and extending the scope of the beings that are included. The need for these amendments is becoming an issue due to the latest developments in science and technology.
Read more: http://mono.eik.bme.hu/~galantai/articles/Proposal_for_the_Declaration_of_Intelligent_Beings_Rights.html
01 January 2015
Kardashev's typology is based on the belief that we can categorize supercivilizations by their energy consumption. But on the one hand, we can imagine an advanced civilization which conquers either its own solar system or its galaxy without reaching the second or third level of the Kardashev Scale. On the other hand, it will be impossible to build a civilization which can harness the energy of an entire galaxy unless we discover new physical laws. So it is reasonable to create a new typology based on the possible spatial extent of an advanced civilization.
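For comparison, the energy typology criticized here can be made quantitative with Carl Sagan's interpolated version of the Kardashev Scale; a minimal sketch (the benchmark powers are rough orders of magnitude, and the function name is mine):

```python
import math

# Sagan's interpolation of the Kardashev Scale: K = (log10(P) - 6) / 10,
# where P is the harnessed power in watts. Rough benchmarks:
# Type I ~ 1e16 W (a planet), Type II ~ 1e26 W (a star),
# Type III ~ 1e36 W (a galaxy).

def kardashev_type(power_watts: float) -> float:
    return (math.log10(power_watts) - 6.0) / 10.0

# Present-day humanity (~2e13 W) sits below Type I on this scale:
print(round(kardashev_type(2e13), 2))
```

Note that this number says nothing about spatial extent, which is exactly the gap the proposed alternative typology addresses.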
Read more: http://mono.eik.bme.hu/~galantai/articles/After_Kardashev_Farewell_to_Super_Civilizatons.html