The concept of the Great Filter was introduced by Robin Hanson. According to it, the Fermi paradox can be explained by the existence of a barrier that usually prevents dead matter from becoming living and/or intelligent: although we were able to overcome it, the other candidates for life or intelligence weren't lucky or smart enough to get past it. In short: there are factors that work against living and intelligent beings.
Of course, it is only a hypothesis, and there are other possible explanations for the "Great Silence" (our missing "companion intelligences" in the Universe): a low probability of life; our inability to detect their signs; or the possibility that we are the first intelligent race, in which case it is no surprise that we observe no other intelligences in our light cone.
But we can adapt this Great Filter hypothesis to the future. Fred C. Adams ("Long-term astrophysical processes", in: Global Catastrophic Risks, 2008) sketches our Universe's very distant future up to 10^100 years, and regarding the survival of life there are some fundamental turning points along the way.
The first one concerns the survival of earthly life. Our planet will be uninhabitable for intelligent beings within 0.9–1.5 billion years, and the biosphere will be "essentially sterilized in about 5.5 billion years" by the Sun. Notice that our environment in the Solar System is not especially life-friendly: a smaller star with 10% of the Sun's mass would shine for a trillion years (pp. 44–46). Given our current knowledge about the pace of humanity's technological development, it seems possible to circumvent these problems, e.g. by migrating to another star.
The second barrier is more problematic. In about 10^40 years, because of proton decay, matter in its traditional form will disappear, and with it "life as we know it". (p. 47)
This seems to be a serious "Great Filter" for the future of life, and it would be a real challenge to find a solution that could guarantee the survival of life in such a radically different physical environment. Of course, 10^40 years seems an unimaginably long time, but notice that it is not the end of the history of our Universe. The remnants of the earlier era, the last black holes, will have evaporated within about 10^100 years, and only after that begins the epoch when "predictions of the physical universe begin to lose focus". (p. 49)
In other words: the living period (ended by baryon decay) is like a phenomenon that disappears within the first trillionth of a trillionth of a trillionth… of a second after the Big Bang. 10^40 years is simply negligible compared to the remaining period of time.
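(A quick sanity check of that claim, my own arithmetic rather than Adams's: the era available to baryonic life is at most about

$$\frac{10^{40}\ \text{years}}{10^{100}\ \text{years}} = 10^{-60}$$

of the span leading up to the evaporation of the last black holes, which is indeed negligible.)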
So it is far from evident that the "aim" of our whole Universe is that momentary phenomenon called life. Yet the final conclusion of the Strong Anthropic Principle is that the appearance of life is a necessity. But why life, and why not, for example, black holes?
So we have two choices. We can either reject the Strong Anthropic Principle as a ridiculous and flawed theory (and I tend to do so), or we can ask how future life could possibly survive these two Great Filters, the baryon barrier and the black hole barrier, and that is a really exciting question.
10 July 2015
To Destroy Literally Everything
According to Evgeny Morozov, there are two kinds of walls: stone walls and virtual firewalls (The Net Delusion, 2011, p. 45). The former is hard to build but relatively easy to destroy; a house of cards is an extreme example: difficult to create and easy to ruin. A firewall, by contrast, is relatively simple to build (to write), but impossible, or at least almost impossible, to destroy if you don't have physical access to the computer that runs it. This is because both the stone wall's builder and its invader act on the same level: the level where the wall itself exists. A firewall attacker, on the other hand, has to work on the level of the actual system, while the programmer constructs the firewall from a "higher level" or, if you prefer, from an external point. Notice that "harder to ruin than to build" systems exist in physical reality as well: the pre-WWII French Maginot Line, a concrete fortification system, is such a construction. But the two-level nature of firewalls is more important from our point of view, so it will be used as a metaphor.

Applying this wall approach to our Universe, it is obvious that a house-of-cards world is unsuitable for either life or intelligence, since it is too sensitive to any disturbance: we can imagine a multiverse with continuously appearing and promptly disappearing cosmoi. Such a multiverse differs from ours. Edward Teller feared during WWII that an exploding hydrogen bomb, because of its extremely high temperatures, would destroy the whole Earth; but even today we have no weapons able to ruin our planet, let alone our Universe (notwithstanding the alarm bells over the supposed dangers of CERN's hypothetical miniature artificial black holes). The physical system of our world seems to be both intelligence-proof and technology-proof (and, at least to a certain level, even foolproof).

A universe vulnerable even to relatively low-level technology is nevertheless imaginable. It was a popular belief in the Cold War era that the answer to the Fermi paradox was that every alien civilization perished by its own nuclear weapons. This scenario is adaptable, in slightly modified form, to the cosmic level, supposing that the stability of physics differs from universe to universe. Traditionally the end of a universe is interpreted as a "matter of fact" question, but from our point of view it can be interpreted as a version of the Anthropic Principle in which the level of technical development is correlated with the strength of the physical laws in determining the whole universe's fate.

Of course, it is not known whether our world's relatively long existence is the result of a merely stone-wall-style stability that a near-future technology could ruin without much difficulty, or whether we live in a firewall-style world where we can act only below the level of physics. In the latter case we can, at most, destroy the physically existing Universe: writing about global catastrophic risks, Nick Bostrom and Milan Cirkovic discuss "only" the possibility of a disaster destroying "the potential of our future light cone of universe to produce intelligent… beings" (2008, pp. 2–3).
It would be even more fatal to create a super bomb, e.g. an artificial black hole, which could destroy everything while keeping the laws of physics intact. But even this is not the end of the possibilities, since it is imaginable that we could somehow destroy not only the physically existing Universe but the physical laws themselves.
02 July 2015
Existing, non-existing and other universes
"According to modal realism, possible worlds really exist," writes Jennifer Fisher in her book On the Philosophy of Logic (p. 91). This approach is based on modal logic, which interprets true and false statements in relation to possible worlds: necessity, for example, means that a statement "is true in all possible worlds" (ibid., p. 75), and our actual world is nothing more than one world chosen from the set of other, existing ones. I don't accept modal realism's logic (after all, possibility isn't equal to existence), but it is interesting from our point of view that the logic of possible worlds is similar to the logic of the classic multiverse hypothesis, which assumes the existence of infinitely many worlds, and this similarity leads us to a strange type of imaginable universe.
Max Tegmark interprets the multiverse as the manifestation of every mathematically possible world. It is a form of mathematical Platonism, and its main thesis is that, on the one hand, anything that is mathematically possible exists in reality, and on the other hand, everything is governed by the rules of mathematics. "Mathematical" means in this case that every combination of different sets of physical laws and constants, or even of different equations, exists. In other words: according to Tegmark, all worlds can be described by mathematics, and every imaginable combination is manifested in a really existing world. The core of Tegmark's concept is that mathematics is, in a certain sense, equal to physics, since it describes the world ruled by physical laws.
But it is not certain that even our universe can be described perfectly by mathematics; perhaps it is only our belief that suggests every natural phenomenon is controlled by deterministic, probabilistic or evolutionary laws. Among other things, it is possible that a Grand Unified Theory (GUT) doesn't exist, since there is no mathematics to describe every connection. Or perhaps it is only a matter of our inability to give a coherent description of reality, since our tools (including our minds, mathematics and logic) aren't appropriate for it.
Or it is imaginable that there are universes that cannot be described by mathematics at all: after all, mathematics is based on the presumption that certain rules are conserved. Thus it is not necessarily well-founded to state that every universe is mathematical in nature. So we can imagine whole universes (albeit not biophilic ones) without mathematically interpretable natural laws. In other words: although they exist, they cannot be described by any mathematical form of physical law.
Traditionally we distinguish existing and non-existing worlds, and the main sin of modal realism is that it intermixes these two categories. Now we can introduce a third kind of universe, which differs from both the "existing" and the "non-existing" ones, and since per definitionem it is impossible to give a scientific description of their features, they don't belong to the realm of physics.
24 June 2015
Mathematics as cellular automaton
According to a popular belief, mathematics is nothing more than a big tautology, since it is a deductive system in which any new result is reachable through a process of steps specified by certain rules. The two sides of an equation mean the same thing, for example 2+2=4 (and it is held that the relation between a set of axioms and the result of a proof is of the same kind).
Of course, the logic of mathematics makes it possible to reach a result (e.g. a mathematical proof), but even if you accept the axioms as a given and unchangeable base, possibility doesn't mean necessity. Even a few elements can result in a "hyperastronomically" huge number of combinations (to borrow Quine's term). Thus searching for an answer (e.g. examining whether a theorem is true) can be interpreted as a (random) walk in the phase space of mathematics. Perhaps we are convinced that a certain mountain peak exists in this virtual landscape but do not know the path to it; or we don't even know whether the hill exists at all. Obviously, this image is more or less misleading, since the routes don't exist originally, we have to build them, and in some cases this activity constructs the target itself. Furthermore, not only is the phase space of a given mathematics enormously huge, but the phase space of mathematics based on different sets of axioms and considerations is similarly large.
It is an interesting question to what extent different mathematics (i.e. mathematics based on different sets of axioms and rules) overlap each other. And since the "phase space" describes mathematics as an n-dimensional landscape where every point is defined by certain parameters, we can try to define those points using another method. Ad analogiam: remember Descartes's idea of mathematizing geometry.
Arithmetic is compressible: if you know the rules of addition, then you can calculate the result of 2+2 directly, omitting the steps 1+1=2, 2+1=3, 3+1=4. Simply speaking, there are no necessary intermediate steps.
In contrast, in the case of a mathematical theorem you cannot omit any part of the proof and jump from the starting point directly to the end; the process is history-dependent, and the first step is essential to reach the second one, and so on. This incompressible method strongly resembles the way a cellular automaton operates.
A cellular automaton (CA) is based on simple rules that determine the state of a given cell (a given point of the landscape, or cellular space, if you prefer) taking into consideration the intermediate states of certain neighboring cells. It is an incompressible process: you cannot tell the result without executing the process itself.
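To make the incompressibility point concrete, here is a minimal sketch (my illustration, not part of the original argument) of a one-dimensional elementary cellular automaton, Wolfram's Rule 110, in Python: the only way to learn the row after n generations is actually to run all n update steps.

```python
# Minimal 1D elementary cellular automaton (Wolfram's Rule 110).
# Illustrative sketch: the only way to learn the state after n steps
# is to execute all n steps one by one.

RULE = 110
# rule_table[(left, center, right)] -> new value of the center cell
rule_table = {
    (l, c, r): (RULE >> (l * 4 + c * 2 + r)) & 1
    for l in (0, 1) for c in (0, 1) for r in (0, 1)
}

def step(cells):
    """Apply one synchronous update; the row wraps around at the edges."""
    n = len(cells)
    return [rule_table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single "on" cell and run 20 generations.
row = [0] * 31
row[15] = 1
for generation in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```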
It seems plausible that we can build a CA to generate, point by point, the path to any given proof: after all, we can adapt the rules guiding the work of a cellular automaton to the objective to be achieved. What is more, we could probably construct a "universal mathematical CA" (UCA) to execute the whole of mathematics. Or we could build other UCAs and examine how they work. Perhaps they would result in totally different mathematics.
17 June 2015
Big Data as mathematics
Big data means that we process not just a small sample of data but all of it. And, just as important: we should, at least partly, give up the search for cause-and-effect relations (Viktor Mayer-Schönberger and Kenneth Cukier: Big Data, pp. 14–15 (Hungarian edition)), since a really big amount of data makes it simply impossible to detect causality. To give an example, Google, examining the connection between the spread of flu and changes in search terms, tested 450 million (!) algorithms to find the most effective version for predicting the epidemic (ibid., p. 10).
The big data approach can be applied to mathematics in at least two ways.
1. First, traditional mathematics is a small data "science": it manages only a small amount of data and tries to find more or less direct connections between certain features using a kind of deductive logic (which replaces causality in mathematics). E.g. we know Euler's theorem in geometry, d^2=R(R – 2r), which describes the distance (d) between the circumcentre (R = circumradius) and the incentre (r = inradius) of a triangle. Obviously, it is a proven theorem, so we understand the reason for the correlation between these data. But why not adapt the big data approach and analyze all the available geometrical data to find new, although unproven, connections? (A numerical sketch of this idea follows after this list.) Similarly, we could examine the distribution of prime numbers taking into consideration not only their places on the number line but all the accessible data about numbers, from HCNs (highly composite numbers) to triangular numbers to any other feature, to discover connections even if we aren't able to prove them.
2. There is another way to apply the big data approach, on a new level. Reverse mathematics is a program to examine which sets of axioms are necessary to build the foundations of mathematics, i.e. how we should choose a small number of starting points to get a certain result. It is, in accordance with its name, the reverse of the traditional mathematical way of thought, which moves from a small set of axioms to theorems and which is a small data approach. But we could apply the big data "philosophy" more or less imitating Google's solution, examining different combinations of an enormously huge number of possible axioms to create different data landscapes. Perhaps it would lead to a new kind of metamathematics.
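Here is a toy version of the "data-driven geometry" idea from point 1 (a sketch of mine, not a method proposed in the post): generate random triangles, compute d², R and r numerically, and notice that d² and R(R − 2r) always agree, as if Euler's relation were an empirical pattern discovered in the data rather than a proven theorem.

```python
# Toy "data-driven geometry": check Euler's relation d^2 = R(R - 2r)
# on randomly generated triangles, treating it as an empirical pattern.
import math
import random

def triangle_data(A, B, C):
    ax, ay = A; bx, by = B; cx, cy = C
    a = math.dist(B, C); b = math.dist(C, A); c = math.dist(A, B)
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    r = area / s                                        # inradius
    R = a * b * c / (4 * area)                          # circumradius
    # incenter: vertices weighted by the lengths of the opposite sides
    ix = (a * ax + b * bx + c * cx) / (a + b + c)
    iy = (a * ay + b * by + c * cy) / (a + b + c)
    # circumcenter: standard coordinate formula
    denom = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / denom
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / denom
    d_squared = (ux - ix) ** 2 + (uy - iy) ** 2
    return d_squared, R * (R - 2 * r)

for _ in range(5):
    pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(3)]
    lhs, rhs = triangle_data(*pts)
    print(f"d^2 = {lhs:10.4f}   R(R-2r) = {rhs:10.4f}")
```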
10 June 2015
Nonteleological World Machines?
There are two kinds of machines. A teleological one is a system whose "parts are so arranged that under proper conditions they work together to serve a certain purpose." At first sight it seems self-evident that every machine can be interpreted as a teleological system, since all of them serve certain purposes. What is more, a complex machine's parts can be regarded as teleological systems as well: a car is a teleological system, and, similarly, its engine, carburetor, etc. have a purpose. (William L. Rowe: Philosophy of Religion, p. 57)
Teleological systems have two interpretations in theology: we can argue either that the Universe itself is teleological or that only some of its parts are. (ibid., p. 59) Examples of biological teleological systems (e.g. eyes) are used to verify the existence of a Creator. But even if we accept the teleological nature of the eye (or the human body, or planetary systems, etc.), we still would not know whether He created the whole Universe as a machine to achieve a purpose, with its parts serving His will, or whether only some of our Universe's parts were created to fulfil a task. In other words: even verifying the created nature of the eye wouldn't verify that we live in a created Universe.
Applying this distinction between the created (and teleological) parts and the created system as a whole, we get the following variants:
1. Both the World and the human race (and every part of the World) are created. Both the whole system and its parts have purposes (ad analogiam, a car). This can be regarded as the traditional theological point of view.
2. Or, on the contrary, we can state that neither the Universe nor its parts are teleological. This is the traditional atheist approach.
3. The World is created by a mighty entity, but the development of the World's parts is regulated by laws independent of the Creator: for example, He constructed the Universe, including the laws of evolution, and then evolution resulted in us. This can be interpreted as a machine where the whole system (supposedly) has a purpose, but its parts do not (and cannot, since the Creator is unable to affect the results of the evolutionary process; obviously, there are theologians who support this approach). It is another question whether the Creator would be able to find a solution for creating intelligence other than the use of evolution. A parallel can be drawn between this model and the internet. According to Hubert L. Dreyfus, Ford's automobile was a tool to support human mobility, and although it had some unintended effects (e.g. the liberalization of sex), it was a teleological machine. But because of its protean nature, the internet doesn't have a "purpose": it is a framework of opportunities. (On the Internet, pp. 1–2)
4. Some (or all) parts of the Universe are teleological systems, but the Universe itself isn't. This model presupposes a Creator who is not outside his Universe but a part of it. It seems to be the most exciting variant, since it introduces an inferior (Demiurge-like) Creator. To give an example, the Polish SF writer and thinker Stanislaw Lem played with the idea, in his book Fantastyka i futurologia (1970), of a Cosmos whose present state, as observed by us, was influenced by the cosmoengineering activities of an intelligent race that lived billions of years ago. According to this story, their aim was to influence the density of intelligence in the Universe; thus such a World satisfies the criteria of a system with teleological parts but without an overall teleological system.
And what is even more exciting: originally it was held that a machine (whether a mechanical construction or a world) is a teleological system containing teleological parts (a car-like machine), but now we have another metaphor: an internet-like system where teleological considerations aren't valid at the level of its parts. The ultimate question follows: what other kinds of machines and worlds are imaginable?
02 June 2015
Introducing demiurgology
According to the traditional Western viewpoint, the God of religion is "all-powerful, all-knowing and the Creator of the Universe." (William L. Rowe: Philosophy of Religion. An Introduction, p. 6) But world creation can also be discussed as a subject of the natural sciences (or at least as a field independent of religion). So it seems acceptable to introduce "demiurgology" to distinguish the non-religious approach from the religious one. Notice that a demiurge originally means an artisan-like entity who isn't a god but who participates in the fashioning and maintaining of the Universe.
Obviously, demiurgology does not ask exactly the same questions as religion. To give an example, Christian theology distinguishes three kinds of arguments for the existence of God: the cosmological, the design and the ontological arguments (Rowe, ibid., p. 19). The first concludes the existence of a Creator from the existence of the universe; the design argument is based on the presupposition that the order of the world was created by Him. Both are applicable to the field of demiurgology, but the case of the ontological argument is different, since it states that we can conclude the existence of a creator by deductive logic alone, whereas induction and feedback from reality through experiments are integral parts of the natural sciences.
To give another example, the design argument presupposes the validity of the cosmological argument. After all, theology holds that the world was created by God, who is responsible for its order and structure, too; in other words, the existence of a created world is a precondition of the existence of order and structure. In demiurgology it is imaginable that our world was constructed by a superior cosmologist, but it doesn't follow that he/she was able to create the laws and structure of the constructed world. To use a metaphor showing that creation and design can sometimes be independent: a sculptor makes the sculpture, but not the marble.
Or, to mention yet another example: according to Kant, one who isn't perfect is unable to recognize whether our world is perfect (John Hedley Brooke: Science and Religion, p. 281), but in demiurgology a perfect world is only a subset of the possible worlds that someone could create.
Similarly, from our point of view neither a world creator's all-powerfulness nor his/her omniscience is a necessity: he/she may even be incompetent. And what is more important: while the ontological argument isn't interpretable in demiurgology, demiurgology perhaps offers opportunities to study problems that are uninterpretable from a theological approach.
27 May 2015
Search for miracle in a Matrix Universe
Let's play with the popular Matrix scenario and suppose that we live in a coded, artificial universe. In this case one of the most important questions is: why are we unable to observe any sound evidence of this fact?
Unless our mighty coder is either pathetically incompetent or unwilling to unmask him- or herself, it is logical to expect some signs pointing out that this is a constructed world.
These signs should be clear and apparent; in other words, they should be easily distinguishable from every other component of our world. A "miracle" seems appropriate for this purpose. Obviously, it should be really convincing: according to Hume, "no testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavors to establish".
It was impossible to detect a "miracle" in the pre-scientific eras of human history, since any phenomenon could be interpreted as a miracle (and attributed, e.g., to a god) and there was no science to serve as a reference point. When everything is a miracle, a miracle has no distinctive function at all.
So it is logical, from the coder's point of view, to hide a "miracle" of a kind that can be detected as a deep irregularity only after the era of science has begun. Optimally, it would be a very strange phenomenon that remains unexplained from the earliest experiments to date.
But according to the history of science, it is difficult (or even impossible) to find such a phenomenon. Neither planetary motion nor lightning is an unanswered question today, and although we face some unsolved problems (e.g. the origin of life), we have better and better attempts at decoding them. Accordingly, every scientific era has had different central problems, from the theory of electromagnetism to the non-existence of the ether. So: where is that mysterious miracle?
Perhaps it is logically impossible to construct a world that contains both intelligent beings and such unanswerable problems: we humans simply work out reasonable solutions for any irregularity (even if they aren't necessarily true).
Another possibility is that from our coder-creator's point of view there is no substantial difference between the science of the early 17th century and that of the 21st: he/she wasn't able (or willing) to specify exactly the level of scientific knowledge needed to discover his/her signs. That would be good news for us, since it suggests that the future development of science will be enormous, and that in comparison with the level reachable in the future there is no essential difference between Galileo's scientific knowledge and string theory.
So it is possible that we simply haven't found that message-anomaly yet.
Or: we have already found it, but we haven't yet recognized it as the quintessential "miracle". My favorite candidate is dark energy, and I am really curious whether it will still be an unsolved problem a million years from now.
20 May 2015
AI, super intelligence, and super consciousness
According to the story, BINA 48 was able to produce human mental capacities, and she accidentally learned that the company that owned her had decided to switch her off permanently and use her parts to build a new supercomputer. So BINA 48 sent e-mails to attorneys asking them to protect her rights to life and consciousness, and the story ended with the jury announcing their inability to decide whether she (it) was really intelligent or only emulated intelligent behavior.
This case raises other questions beyond the main problem, namely whether a computer can be really intelligent. First of all, it isn't clear that an artificial intelligence or super intelligence is necessarily a GPS (general purpose system): humans use their cognitive apparatus as a general problem-solving mechanism for everything from walking to dating to thinking. But BINA 48 doesn't need a general problem-solving "brain"; after all, she will never have a date :-). Operating a GPS is an evolutionary solution, since a living being wouldn't be able to survive without it, but an artificial intelligence can be a "segment intelligence" (more or less a synonym of "weak AI"). After all, she doesn't have to fight predators to survive: it is enough for her to think and perform her tasks. Perhaps it is possible to build a GPS AI, but perhaps it is not essential if our aim is only to construct systems that serve us.
Similarly, we feel that there is a strong connection between intelligence and consciousness. But it is questionable whether machine consciousness is an essential ingredient of machine intelligence. These are two radically different things: AI is about problem solving, and consciousness is about knowing that you are solving a problem. The efficiency of chess programs as segment intelligences shows that consciousness is not a prerequisite for complicated problem solving. Obviously, these two features are inseparable in humans, but do not confuse evolutionary/historical paths with necessity.
Last but not least, evolutionary development follows trial-and-error methods and because of this can reach only local maxima. The traditional idea of an artificial super intelligence is about the improvement of "simple" human intelligence. We are animals with consciousness, of course, but we are far from full consciousness: regarding a usual day, for example, how many minutes (or seconds) are you aware of the fact that you are conscious? According to Susan Blackmore, "In some ways the brain does not seem to be designed the right way to produce the kind of consciousness we have." (Consciousness: A Very Short Introduction, p. 17) And it is not a surprise: Nick Bostrom mentions that evolution wasted a lot of selection power on aims other than developing intelligence (Superintelligence, p. 57), and the same is surely true in the case of consciousness.
So why not try to build a super conscious machine instead of a super intelligent one?
13 May 2015
Non-emergence?
According to the Nobel Prize-winning physicist Robert Laughlin, Newton's "fundamental laws" aren't fundamental at all, since they arise as "a consequence of the aggregation of quantum matter into macroscopic fluids and solids—a collective organizational phenomenon." [Laughlin: A Different Universe, p. 31] So perhaps we can build up a physics based on the idea of emergence, and it is at least questionable whether either the fundamental laws or the fundamental constants remain important after such a reinterpretation of the field.
It is undoubtedly an interesting idea, and I have no objection to reinterpreting nature as such. But I do have some problems with this emergent approach.
First of all, why do we presuppose that the laws of the quantum level are more fundamental than those of Newtonian physics? Because they concern a smaller scale?
Second, the use of the term "emergence" sometimes resembles the "God of the gaps": certain theologians interpret gaps in current scientific knowledge as proof of the existence of the Lord. The classic example of emergence is the anthill, and I don't think the emergent interpretation is false in this case. But notice that the emergent approach offers an answer to the strangeness of a certain phenomenon (i.e. organized ant behavior) and, at the same time, contains a tacit assumption.
The "normal" scientific way is to observe a natural process and then, using induction, infer the natural law that produced it. The emergent approach is different: we hypothesize that the phenomenon is caused not by some describable laws (descriptions of the rules) but by the phenomenon itself. But, ad absurdum, it is possible that although there is a natural law determining the organization of ant society, being unable to point it out we would declare that the process is emergent in nature. If Newton's laws were unknown, we could state with conviction that the motions of the planets in the Solar System are emergent: we observe the process, and then we conclude that the process itself causes the phenomenon. Ad analogiam: how can we be sure that there isn't a law describing the connection between the micro- and macroscopic levels in Laughlin's example?
Of course, there are emergent processes: Wolfram, for instance, presents examples in A New Kind of Science where a process is incompressible. But there is a fundamental difference between mathematics, where, optimally, we can prove whether something is impossible, and physics, where a similar demonstration is more problematic.
And there are other questions as well. The dichotomy of "laws" and "objects" comes from Aristotle: he believed that on the one hand there is a category of "natural things, which displayed change and complexity", and on the other there are "static and absolute truths", which are mathematical rules. [Barrow: World within World, p. 39] Obviously, this is a kind of Pythagorean belief in the fundamentally mathematical nature of the Universe.
But it is an important question whether both the laws and the objects really exist, as the so-called realist philosophers of science believe. The instrumentalists, by contrast, state that physical laws are only instruments for describing the observed processes and do not exist in reality.
It is undecidable which camp is right, since there is no experiment that could show whether a natural law really exists or is only a mathematical description. In short: it is pointless to debate it.
Similarly, it is undecidable whether the individual ants' behavior emerges into a coordinated activity that produces the anthill, or whether the rules behind the operation of the colony drive the individuals.
05 May 2015
Maxwell’s demon and the physical nature of space
Maxwell's demon is a thought experiment examining the second law of thermodynamics. But it is not only about a special part of physics; it is also about some unspoken features of our Universe.
The demon is a small, intelligent entity who can detect the speed of individual molecules and, by opening or closing a door that divides a box into two parts, is able to sort the fast molecules into one part of the box and the slow ones into the other. The result is a decrease of entropy in a closed system, which contradicts the second law.
Or not, since, according to an argument based on information theory, the main problem is that if Maxwell's demon's memory isn't infinitely large, then sooner or later information has to be erased from it. This process emits heat into the box, because erasing information necessarily produces heat [Charles Seife: Decoding the Universe, p. 85]. So the second law remains valid, since entropy grows in the closed system.
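The quantitative form of this claim, not spelled out in the passage cited above but standard in the literature, is Landauer's principle: erasing one bit of information dissipates at least

$$E_{\min} = k_B T \ln 2$$

of heat, where $k_B$ is Boltzmann's constant and $T$ is the temperature of the environment.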
This answer is partly based on the presupposition that only finitely large memories are possible, and it includes another presupposition, too, about the nature of space. After all, if you were able to divide space into infinitely small pieces, then it would be possible (at least theoretically) to store an infinitely large amount of information in a finite storage device. For example, if you have a two-square-centimetre surface of data storage, you could use the first square centimetre to store the first piece of data, half a square centimetre to store the second piece, and so on ad infinitum.
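Formally (a minimal sketch of the partitioning just described, assuming space is infinitely divisible): if the n-th bit is allotted $1/2^{n}$ cm², infinitely many bits still fit on a finite surface, since

$$\sum_{n=0}^{\infty} \frac{1}{2^{n}}\ \text{cm}^2 = 2\ \text{cm}^2 .$$

The same geometric-series trick, applied to time instead of space, underlies the hypercomputing scenario mentioned below.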
An infinitely large memory is perhaps not as unreal as it seems at first sight, since there are plausible theories of hypercomputing based on the relativistic spacetime of black hole physics. In that case, manipulating an infinitely huge amount of data in a finite period of time is possible thanks to nature itself: since it is presupposed that time is divisible into infinitely many pieces, we have enough time to perform infinitely many operations within a finite period.
Ad analogiam: we can hypothesize that the nature of space is similar, that it is infinitely divisible, so it is possible to build a spatially finite storage device holding an infinite amount of information. Then we would never have to erase a single bit of information, and the entropy would not rise in the box.
Obviously, nobody knows whether space is continuous, but this example shows how the laws of thermodynamics are embedded in the "environment" of the other physical laws. And we cannot exclude the existence of Maxwell's demon if the space we live in is not discrete but continuous.
29 April 2015
Where are all the time travelers?
Perhaps nowhere, since they simply don't exist, as time travel is impossible; yet Einstein's twin paradox (time dilation, if you prefer) is nonetheless real. Notice that time dilation is not a proof of time travel, and that time travel and time dilation are different questions: the first is about whether time is similar to space in a certain sense, the second about what happens at very high speeds.
Obviously, time is a problematic field of modern physics. As far as we know, "the second law is the only fundamental law of physics that distinguishes between past and future" [Melanie Mitchell: Complexity: A Guided Tour, p. 43]. It has a kind of physical background, of course, since the rise of entropy wouldn't be possible without an early phase of high order (the Big Bang), but the concept of entropy itself is purely mathematical. In the case of the law of gravity, by contrast, the gravitational force is inversely proportional to the square of the distance, and this proportion cannot be deduced from mathematics alone. But to understand the second law it is enough to know that there are more ways to make disorder than order. That is to say, in this case there is no additional physical law determining the result over and above the logic of mathematics (unlike the gravitational law, where there is a second "layer" over the mathematical description determining that the connection between distance and force is not, say, linear. By the way: if Newton's law is two-layered (physics over math), it is an interesting question whether laws with three, four, etc. layers exist).
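As a toy illustration of "more ways to make disorder than order" (my sketch, with "order" arbitrarily defined as all heads in a run of coin flips), the counting argument can be made explicit in a few lines of Python:

```python
from math import comb

N = 100  # number of coins in the toy system

# Number of microstates ("ways") for two macrostates:
ordered = comb(N, 0)           # all heads: exactly 1 arrangement
disordered = comb(N, N // 2)   # 50 heads, 50 tails: ~1.01e29 arrangements

print(f"ordered macrostate:    {ordered} microstate(s)")
print(f"disordered macrostate: {disordered} microstates")
print(f"ratio: {disordered / ordered:.2e}")
```

For 100 coins there is exactly one maximally ordered arrangement but about 10^29 arrangements of the balanced, "disordered" macrostate; nothing beyond this combinatorial asymmetry is needed for the argument.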
The second law creates the "arrow of time". So, since there is no physical effect to modify its mathematics, time can be regarded as a result of the mathematics that underlies our description of time.
The British historian Arnold J. Toynbee said that history was regarded as nothing but "one damned thing after another." According to the logic of our argument, it is defensible that although time, via a kind of mathematical abstraction, can be represented as a dimension, in reality it is nothing more than "one damned thing after another". So travel in time is simply uninterpretable: it has no meaning at all. But this doesn't exclude time dilation, where, according to observers in a different frame, events follow each other more slowly if your speed is close to the speed of light.
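For reference, the standard special-relativistic formula behind that last remark: a clock moving at speed $v$ is measured by an outside observer to run slow by the Lorentz factor,

$$\Delta t' = \frac{\Delta t}{\sqrt{1 - v^{2}/c^{2}}},$$

where $\Delta t$ is the time elapsed on the moving clock, $\Delta t'$ the time measured in the observer's frame, and $c$ the speed of light.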
Of course, it can be argued that it is itself a natural law that there is no additional natural law above the level of mathematics in the case of entropy, and one can ask why.
22 April 2015
Complexity and alien intelligence
From a complexity approach there are different kinds of “intelligence”, and they have two components: individual building blocks (ants, neurons, humans etc.) and their societies (ant hill, brain, human society etc.). These components can be either simple or complex.
In short: in the search for alien intelligence, we have to take into consideration not only the complexity of an individual agent, but its societal environment as well. Incidentally, this raises the question whether a traditional, “lonely” computer could produce a kind of intelligence similar to ours, since every intelligent form known to us is embedded in a societal network, but such a computer is not.
Regarding the possible combinations, a complex system with simple building blocks and a similarly simple society is a self-contradiction, so we can ignore it.
The second possibility is represented by ants. They aren’t intelligent individually, but their societies can show complex and adaptive behaviors.
The third category is the human-like complexity: we are complex individuals who live in a complex society.
Theoretically a fourth solution seems possible: a huge, complex and adaptive being. A similar entity appears in Stanislaw Lem’s novel Solaris (where the whole planet seems to be intelligent) without any societal environment.
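The four combinations above can be collected in a small sketch (the labels are my own shorthand for the categories just described):

```python
# The 2x2 taxonomy sketched above: complexity of the individual building block
# versus complexity of the society it is embedded in. Labels are illustrative.
taxonomy = {
    ("simple agent",  "simple society"):   "no complex system possible (self-contradictory case)",
    ("simple agent",  "complex society"):  "ant colony: collective adaptivity without individual intelligence",
    ("complex agent", "complex society"):  "human-like intelligence",
    ("complex agent", "no/simple society"): "Solaris-like solitary mega-brain (hypothetical)",
}

for (agent, society), example in taxonomy.items():
    print(f"{agent:13} + {society:17} -> {example}")
```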
This taxonomy raises some questions.
First of all, it is a cliché how intelligent an ant colony is, although a single ant isn’t. Colonies can build anthills with regulated temperature, defend their offspring, and so on. It is a central problem of complexity science that there is no exact measure of complexity (Melanie Mitchell: Complexity: A Guided Tour, p. 95), but it seems unquestionable that there is no ant-like intelligence with consciousness on our planet. Ants can monitor their environment, but from this point of view they are on the level of cockroaches. They have never reached the second level, self-reflexivity or reflective self-awareness: to know that you know (Clayton: Emergence and Mind, p. 110).
Since there are more insect species and individual insects than mammals, they have had more opportunities to evolve into an intelligent system than mammals have. So it is a plausible hypothesis that mammal-level intelligence (intelligent agents, complex society) is unreachable by ant colonies (simple individuals, complex society). And not only their intelligence but their ability to modify their environment is limited as well: they can build anthills, but they cannot build spaceships. In other words, an ant-like system cannot reach the level of consciousness, and consciousness is the only way to develop really effective technologies. Contrary to some SF writers’ ant-like, intelligent technological societies, an alien intelligence would presumably be similar to us in the sense that its members would be individually complex entities. After all, technology is reachable only if you are both a complex entity and are supported by a complex society; it is no surprise that computing is not a Paleolithic invention.
Following this train of thought, it seems probable that there are no Solaris-like mega-brains, since they would not have complex societies. But who knows; perhaps their mega-complexity can substitute for a complex society’s influence.
Or not.
14 April 2015
Undercomputing
Hypercomputing is about the hypothetical extension of the capabilities of traditional Turing machines: how to go beyond Turing computability? How to calculate infinitely many steps within a finite amount of time (these are supertasks, see e.g. the Thomson lamp), and how to solve a mathematical problem that is unsolvable by Turing machines? Obviously, these are mainly thought experiments, but the main point is to exceed certain limits of Turing computing.
But we can turn in another direction and examine the limits of physically existing computers. A Turing machine is an abstract construction with originally two aims: to model how a “human computer” solves a problem (in the year Turing’s paper on Turing machines was published (1936), computational processes were performed by humans with paper and pen), and to create a mathematical construction to describe it; this was adopted for electromechanical/digital computers around the end of WWII. In other words, it was based on an abstract conception of mathematics.
According to this approach, all Turing machines capable of solving a problem are equal, since they give the same answer using the same logic, and it is indifferent whether one of them is a super-fast computer while the other is a slow mechanical model made of Tinkertoy.
But, unlike physical reality, mathematics does not contain time as a parameter, and this explains why one of the central problems of Turing machines is the “halting problem”. The question is whether the machine halts on a given input at all; it is irrelevant whether we have to wait a second or a trillion years.
Penrose makes a distinction between hardware and software (The Emperor’s New Mind, p. 24). The former is “the actual machinery involved in a computer” (circuits, processors, etc.), while the software “refers to the various programs which can be run on the machine”. So Turing machines are equal regarding their software; after all, they are based on the same logic. But they aren’t equal in the real world, since it is not the same to wait a second for a result or longer than the Earth will exist. From a practical point of view it makes no difference whether we don’t know the answer because a Turing machine is unable to solve the problem or because it takes too long.
But making a distinction between the abstract mathematics of computing and real, physically existing machines makes it possible to introduce a neologism: undercomputing. Similarly in a certain way to the concept of the Landauer number, it is about the limits determined by physical reality: while hypercomputing aims to exceed the traditional Turing machine, undercomputing takes into consideration the physical limits of every physically existing computer.
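A minimal sketch of what undercomputing means in practice (my own toy example, not a formal definition): a resource-bounded runner that gives up after a fixed number of steps, so that “never halts” and “halts, but far too late” become indistinguishable for the observer.

```python
# Toy resource-bounded evaluation: run a step function with a hard step budget.
# If the budget is exhausted, we cannot tell whether the computation would never
# halt or merely needs more steps than our physical resources allow.
from typing import Callable, Optional

def run_with_budget(step: Callable[[int], Optional[int]], state: int, budget: int):
    """Apply `step` repeatedly; `step` returns None when the computation halts."""
    for used in range(budget):
        nxt = step(state)
        if nxt is None:
            return ("halted", state, used)
        state = nxt
    return ("budget exhausted", state, budget)

# Example: the Collatz iteration, which is conjectured (but not proved) to halt.
def collatz_step(n: int) -> Optional[int]:
    if n == 1:
        return None
    return n // 2 if n % 2 == 0 else 3 * n + 1

print(run_with_budget(collatz_step, 27, budget=50))    # gives up too early
print(run_with_budget(collatz_step, 27, budget=1000))  # halts within the budget
```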
Bennett and Landauer [The fundamental physical limits of computation. Scientific American 253(1), 1985] studied some fundamental physical limits of information processing, pointing out, for example, that pi doesn’t have an exact value in reality, since it cannot be calculated to the last decimal. So we can interpret computers as machines whose performing capacities are influenced by their hardware, which is determined by the physical laws.
07 April 2015
Living in an extremely big Universe
One example of an observer-dependent taxonomy of scales is the familiar spatial hierarchy of the Universe:
- The first level is the planets;
- the second is the solar systems;
- the third one is the level of the galaxies and
- the last one is the Universe itself.
Another example of the observer-dependent nature of scales/taxonomies is the megatrajectory theory. It is based on evolutionary milestones:
- the emergence of life is the first so-called megatrajectory;
- the second is the prokaryote diversification;
- then eukaryotic cells appear;
- it is followed by the rise of the multicellular life forms;
- the “invasion of the land” is the 5th and
- the rise of intelligence is the 6th megatrajectory, which gave an opportunity for the invasion of every possible environment.
Post-biological intelligence can be interpreted as the 7th megatrajectory (Cirkovic, Dragicevic and Beric-Bjedov 2005).
This taxonomy focuses at least partly on parochial details of earthly evolution, since its aim is to describe the history of life on Earth: the fifth megatrajectory, for instance, would never occur on a water-covered planet.
We can reinterpret this megatrajectory concept by focusing on those factors that are presumably universal. Freeman Dyson distinguishes three classes of phenomena that can occur in our universe: “normal physical processes”, “biological processes”, and radio (or other forms of) “communication between life forms existing in different parts of the universe” (Disturbing the Universe, 1979). These factors can be interpreted as “gigatrajectories”, because, unlike megatrajectories, they can be typical in any universe populated with intelligent observers:
- The first gigatrajectory is characterized by the domination of lifeless matter;
- the appearance of life was the second gigatrajectory and
- finally intelligence rose.
What is more, if you accept one or another form of the panspermia hypothesis, then it seems possible that life which appeared on the surface of a planet can spread across our Galaxy. But it seems impossible even for microbial life to sail to the closest galaxy, since it would take billions of years. So the spread of life is localized to ridiculously small parts of the Universe. In other words: to believe that other galaxies are inhabited by intelligent beings, we should believe that both life and intelligence arose independently of us somewhere in the Universe.
Similarly, technology seems to provide an opportunity (theoretically, at least) to conquer not only our planet, but both our solar system and our galaxy; after all, even the Fermi paradox is based on the presumption of this possibility (if we are able to do it, then they should be able to visit any star, including the Sun). But notice that even SETI is mainly about the search for alien intelligence within the Milky Way. Although we have a theoretical chance to observe a super-civilization’s energy emission from another galaxy, it is practically impossible to visit even M31, since it would take millions of years even at 0.9 c. And although time dilation would slow aging on board the spaceship, its construction would have to survive an unimaginably long period of time (notice that Homo sapiens itself did not exist a million years ago). This long travel seems to be not theoretically but technically impossible. So unless we invent a revolutionary new form of travel (e.g. a warp drive, which today is only an unfounded dream without a chance of being built), we will never leave the Milky Way, and this means that the Universe is extremely huge compared to our possibilities.
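A back-of-the-envelope sketch of the numbers behind this claim (the roughly 2.5-million-light-year distance to M31 is my assumption; the rest is standard special relativity): even at 0.9 c the trip takes millions of years in the Earth frame, and still more than a million years of proper time on board.

```python
# Rough travel-time estimate to M31 (Andromeda) at relativistic speed.
# Assumed distance ~2.5 million light years; illustrative only.
from math import sqrt

distance_ly = 2.5e6   # assumed distance to M31 in light years
beta = 0.9            # speed as a fraction of c

earth_frame_years = distance_ly / beta        # duration measured at home
gamma = 1.0 / sqrt(1.0 - beta ** 2)           # Lorentz factor (~2.29 at 0.9 c)
ship_frame_years = earth_frame_years / gamma  # proper time on board

print(f"Earth-frame duration: {earth_frame_years:,.0f} years")
print(f"On-board duration   : {ship_frame_years:,.0f} years")
```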
In short: the spatial structure of the Universe limits the possibilities for the spread of life and intelligence. This raises the question whether a universe that could be conquered by its inhabitants would be possible.
31 March 2015
The origin of the multiverse
According to Aristotle, our world had a potentially infinite past and its spatial extension was finite. But a potentially infinite past is unacceptable today, since it necessarily ends in our present; so it is completed, and therefore it cannot be potential (while a potentially infinite future is still a possibility). Obviously, Aristotle’s finite space conception is similarly contradictory for us, as it is based on the belief in a finite, spherical shell of stars that surrounds the Universe.
Newton believed in actually infinite space and finite time; namely, according to him the World was created some thousands of years ago (of course, if we do not accept the existence of a final point in history, then it has a potentially infinite future).
The Big Bang Theory uses a finite spacetime model.
Last but not least, according to a common interpretation, the multiverse hypothesis presumes that both space and time are actually infinite and the Big Bang is only a bubble in an infinite sea of other bubbles. This resembles the Steady State Theory, since it declares a spatiotemporal infinity without a beginning (and it is ironic that this obsolete model, forgotten in traditional cosmology, survives in this form). As far as I know, nobody has examined the question of the possible origin of the multiverse as a scientific problem. Of course, if you believe that it has no origin (since it is spatiotemporally infinite), then it is a meaningless question.
But we can also play with the idea of a spatiotemporally finite multiverse that includes only finitely many universes. The fact that we are not able to observe other worlds does not mean that there are infinitely many of them.
Or there is a possible multiverse model where time is finite (or potentially infinite), but its space is actually infinite. In other words, it is a multiverse that was born a finite time ago but has been spatially infinite since the beginning, and it resembles Newton’s model in a certain way. The appearance of infinity is always problematic in physics, so, at least from this point of view, this model with an instant spatial infinity is not more contradictory than the traditional multiverse interpretation with an infinite past.
25 March 2015
The Landauer number: in search of the biggest possible number
“One of the most fundamental observations… about the natural numbers… is that there is no last or greatest,” writes Graham Oppy in the introductory chapter of his book Philosophical Perspectives on Infinity. It is a widely accepted concept, but it raises some serious questions.
We usually distinguish two kinds of infinity: the potential and the actual. The potential one means that if you choose n, then I can choose n+1. Obviously, neither a biggest number nor an infinitely big number exists if we accept only potential infinity.
In the case of actual infinity there is likewise no biggest number, and infinity actually exists with all of its strange qualities (see the infinite hotel or Thomson’s lamp, for example). Since actual infinity is unreachable by counting (after all, even an unthinkably big number is finite), we have to presume its existence if we believe in the existence of actual infinity. In other words, the presumption of actual infinity’s existence is the basis of the presumption of the existence of actual infinity (which is a tautology). But one can answer that actual infinity is a kind of mathematical abstraction, similar to the complex numbers.
In cosmology it is controversial whether actual infinity exists or we can accept only potential infinity. This question is not surprising, since the realms of pure mathematics and real physics are not necessarily the same. So we have to take into consideration the possible differences between them.
Rolf Landauer pointed out that computation, unlike a mathematics which can be imagined in a Platonist manner, has physical limits. For example, it is impossible to compute every point of the number line (or even every point of a line segment) unless we have unlimited (actually infinite) computing capacity.
In our Universe we have a limited computing capacity (unless our Universe is eternal and we suppose that, despite the law of entropy, we will have enough energy for calculations in the far, far future). According to Paul Davies (Cosmic Jackpot), the real numbers that serve as a basis of the natural laws in traditional physics simply don’t exist.
What is more, it means that there is a biggest possible number in our Universe. To illustrate it, consider the following example: if you have only a minute to write down the biggest possible number, then the limited amount of time limits your possibilities. It is unquestionable that even a mere 60 seconds is enough for the construction of an enormously huge number, but it is equally unquestionable that, if our Universe is not eternal, then we have only limited time to compute the biggest possible result.
It would be interesting to find the most efficient form/algorithm/solution to calculate the biggest number that can be constructed in one minute or in 100 billion billion years, but it is more important from our point of view that this aspect of our physical reality is characterized neither by actual nor by potential infinity, but by a very big yet finite number that can be called the Landauer number.
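A toy sketch of how much the construction rule matters once the budget is fixed (my own illustration; the step budget stands in for the physically limited computing capacity discussed above): with the same number of primitive operations, repeated squaring leaves doubling and counting far behind, yet every result remains finite.

```python
# Under a fixed budget of primitive operations, different construction rules
# reach wildly different (but always finite) magnitudes.
def grow(rule, start=2, budget=10):
    n = start
    for _ in range(budget):
        n = rule(n)
    return n

rules = {
    "add one": lambda n: n + 1,
    "double ": lambda n: 2 * n,
    "square ": lambda n: n * n,
}

for name, rule in rules.items():
    result = grow(rule)
    print(f"{name} -> a number with {len(str(result))} digits")
```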
Representatives of ultrafinitism in mathematics state that neither actually infinite sets of natural numbers nor very big numbers (2^10000, for example) exist, since they are inaccessible to human minds. This argumentation seems flawed, as it presupposes that the existence of a mathematical object requires that it be imaginable by us. Introducing the Landauer number doesn’t cause similar problems.
So we can imagine three kinds of universes: they are determined by a Landauer number, by potential infinity, or by actual infinity. But notice that this is only about the nature of the time of a given universe; either the space or the mass density can be potentially or actually infinite in parallel with the existence of a Landauer number.
17 March 2015
A new kind of infinite machines
Since David Hilbert’s thought experiment, it has been popular to demonstrate the strangeness of infinities by describing a hotel with infinitely many rooms where ever newer tourists and tourist groups arrive (even in countably infinite numbers). The trick is that although all the rooms are full, the management can always find free ones after rearranging the reservations. E.g. if only one tourist wants to check in, then the person occupying room 1 can be moved to room 2, the occupier of room 2 to room 3, and so on (the occupier of room n moves to room n+1). If a countably infinite number of new guests arrives, then the person from room 1 moves to room 2, the person from room 2 to room 4 (from room n to room 2n). After all, there are as many odd as even numbers, and the new visitors can occupy the odd-numbered rooms which are now free. This method works even if countably infinitely many buses arrive with countably infinitely many passengers on each.
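The two classic reassignment rules can be written down explicitly; this small sketch (my own illustration) just prints the first few room moves to make the bookkeeping visible.

```python
# Hilbert-hotel bookkeeping for the first few rooms: the classic reassignment
# rules n -> n+1 (one new guest) and n -> 2n (countably many new guests).
def one_new_guest(room: int) -> int:
    return room + 1   # old guest in room n moves to room n+1; room 1 frees up

def infinitely_many_new_guests(room: int) -> int:
    return 2 * room   # old guests fill the even rooms; the odd rooms free up

rooms = range(1, 8)
print("one new guest      :", [(n, one_new_guest(n)) for n in rooms])
print("infinitely many new:", [(n, infinitely_many_new_guests(n)) for n in rooms])
```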
The infinite hotel is misleading in a certain way, since it suggests that these algorithms are the simplest solutions for pairing rooms and visitors. But there is a simpler method: when a new group arrives, even with countably infinitely many visitors, we can ask every occupier to leave their room. The result is infinitely many free rooms and infinitely many persons (including the newly arrived ones) without rooms. Then we ask everybody to go into a still-free room, and that’s all: we have paired infinitely many persons with infinitely many rooms.
Keeping in mind the lesson of the infinite hotel, we can introduce a new kind of infinite machine with a new typology.
From our point of view, there are two fundamental parameters determining these machines: the number of steps in the process of reaching infinity, and the time needed.
It’s obvious that there are impossible machines. You cannot build a machine that solves a problem in zero time, even if it is infinitely fast; and that version is similarly impossible which takes only a finite number of steps over an infinitely long period – not because it halts at a certain point in the process (i.e. because it is prescribed to stop after a certain number of steps or upon reaching a certain number), but because – like a reversed Thomson lamp – its algorithm prescribes it.
So the simplest infinite machine is a Turing machine with an infinite tape: it can take infinitely many steps over an infinitely long period (and every step can be paired with the moment of the step).
Opposite to it, a Thomson lamp takes infinitely many steps within a finite period of time. The solution is that 1 + 1/2 + 1/4 + … = 2, so if we can press the Thomson lamp’s button twice as fast at the (n+1)-th step as at the n-th step, then we can finish the process within 2 units of time (i.e. within two seconds, if it took 1 second to press the button for the first time).
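The convergence claim behind the Thomson lamp is simply the geometric series; a few partial sums (my own quick check) show the total elapsed time approaching 2 units.

```python
# Partial sums of 1 + 1/2 + 1/4 + ... : the total time of the Thomson lamp's
# infinitely many button presses stays below, and converges to, 2 time units.
total = 0.0
for k in range(20):
    total += 0.5 ** k
    if k in (0, 1, 2, 5, 10, 19):
        print(f"after {k + 1:2d} presses: elapsed time = {total:.6f}")
```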
But a third type of infinite machine is possible. Obviously, the last time we press the Thomson lamp’s button we have to do it infinitely fast, so that pressing takes an infinitely short time. On the one hand, this means that we handle (at least mathematically) an infinitely small amount. On the other hand: why should we vary the pressing time at all to reach our aim? It is theoretically possible to press the button infinitely fast even the first time, and even an infinitely small amount of time is enough to do it infinitely many times. So this infinite machine finishes its process not in infinite time (like a Turing machine) and not in a finite time (like a Thomson lamp), but in an infinitely short time.
machine type              | number of steps | time
impossible (zero time)    | infinite        | zero
impossible (finite steps) | finite          | infinite
Turing machine            | infinite        | infinite
Thomson lamp              | infinite        | finite
third type                | infinite        | infinitely small
08 March 2015
The nature of the natural laws
Aristotle developed a rather complicated idea about causes (ending with a kind of “Formal Cause”, which “causes” the form). His “Final Cause” became the modern laws of nature, and his “Efficient Cause” is close to the modern notion of cause (Barrow: The World within the World, p. 53). So, following Aristotle’s logic, we make a distinction between the laws and the phenomena they affect. But this distinction between causes and their subjects is not necessarily a necessary one. The fundamental question is whether the laws exist in a certain sense or are only practical descriptions of reality. According to Lee Smolin, supposing a kind of cosmic evolution and the existence of baby universes that are born with slightly modified natural laws compared to those of their parent universe, such “evolving laws seems to be a breakdown of the distinction between the state of a system and the law that evolves it.”
This hypothesis makes it possible to imagine several different scenarios for the relationship between the laws and the subjects affected by them.
1. Obviously, we can accept the traditional laws vs systems differentiation.
2. But it is more exciting to suppose that the formation of a baby universe means the formation of (slightly) different laws. This can happen in three different ways.
- The first one means that law formation is restricted to the moment of creation (whatever that means). After the Big Bang we have a certain amount of matter, energy, etc., and it won’t change. Similarly, the laws are “finished” as well and they won’t change. On the other hand, Smolin supposes that different universes can be determined by different laws.
- But why restrict the changes of natural laws to a very short period of time? A second solution proposes a longer, even continuous evolutionary process in which the forming laws and the physical environment are in continuous interaction over the course of the universe’s history. According to this model, not only are the initial laws of a new universe different from its parent’s laws, but the changes can be influenced by events in the history of the universe, and slightly different initial conditions would result in very different laws later.
- To make the story even stranger, there is the problem of the saltation hypothesis. In evolutionary theory it is not accepted to suppose that biological evolution produces its effects via large and sudden changes, but cosmology is not about the biology of earthly ecosystems. The laws of our Universe seem to be fixed today, but it is imaginable that this idle period, in which our physical laws are static, is only a transitional state, and a sudden saltation will happen in the future and the nature of the laws will radically change.
02 March 2015
The tree of cosmological natural selection
What is more, the Newtonian interpretation is unable to explain why the observed laws exist instead of others. Similarly, both our Universe’s high homogeneity and the fact that this Universe is so far from thermal equilibrium are in question. Boltzmann answered the second problem by arguing that it is a result of the fact that our world was born in an incredibly low-entropy state (the “past hypothesis”). But he simply replaced the original problem with a new one, since it is not known why the entropy was so low originally.
Generalizing this problem, the Newtonian paradigm is “a theory [which] has [an] infinite [number] of solutions,” but we observe only one Universe.
To solve this problem, one can introduce the multiverse hypothesis without changing the Newtonian paradigm. According to this logic, our Universe is part of a bigger ensemble and we live in a biophilic cosmic environment because all of the possible universe-variations are realized, and we are simply lucky enough to find ourselves in a world with life-friendly conditions. But there are no testable predictions to falsify the multiverse proposal.
Smolin’s answer is “cosmological natural selection”. It states that “the laws of nature have evolved over time”, and he “decided to copy the formal structure of population biology by which populations of genes or phenotypes evolve on so called fitness landscapes.” Reproduction happens via black holes, and the result is the birth of new baby universes which inherit the physical laws of their parent universes in a slightly changed form (thanks to small, random changes). This process selects the more successful (i.e. more black-hole-productive) universes, and we presumably live in a successful universe whose laws offer favorable conditions for black hole creation. And this is not a surprise, since life seems improbable without long-lived stars and carbon – which are also necessary for the formation of black holes.
But this coincidence seems strange. Why are the preconditions of black holes and of life the same?
- Perhaps it is only an observational bias, and the physics of stars and the appearance of life are different phenomena without a real connection, with other hidden but crucial factors that we have not yet recognized. Obviously, this explanation cannot be excluded, although it seems improbable.
- Perhaps the presence of long-lived stars, black holes and carbon are necessary preconditions of life, and these and only these preconditions lead to life.
- Or – and this seems to me the most probable – life can appear not only in carbon-based universes, but our laws, which lead to the creation of heavier elements, allow its appearance. In other words, there are other biophilic universes without carbon, and it is another question whether there are other universes with different physics and different physical processes to reproduce themselves – perhaps even via more efficient solutions than black holes. Namely, adapting Smolin’s evolutionary approach, evolutionary processes are able to find only local maxima, and the history of the descending baby universes can be portrayed as an evolutionary tree. And it is not sure that our branch is the most successful one (a toy sketch of this local-maximum behaviour follows below).
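A toy sketch of the last point (my own illustration, not Smolin’s actual model): a mutate-and-select search on a two-peaked “fitness landscape” usually settles on whichever peak is nearest, which need not be the highest one.

```python
# Toy mutate-and-select search on a two-peaked fitness landscape: lineages that
# start near the lower peak typically get stuck there, illustrating that
# evolutionary processes find local, not necessarily global, maxima.
import random
from math import exp

def fitness(x: float) -> float:
    # two peaks: a lower one near x = -1, a higher one near x = +2
    return exp(-(x + 1) ** 2) + 2.0 * exp(-(x - 2) ** 2)

def evolve(x: float, generations: int = 2000, step: float = 0.05) -> float:
    for _ in range(generations):
        candidate = x + random.gauss(0, step)  # small random change in the "laws"
        if fitness(candidate) >= fitness(x):   # keep the more successful variant
            x = candidate
    return x

random.seed(1)
for start in (-1.5, 0.0, 2.5):
    end = evolve(start)
    print(f"start {start:+.1f} -> settles near {end:+.2f} with fitness {fitness(end):.2f}")
```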