SUMMARY PAPER

 


3.2. Hierarchical levels in the average evolution

We have seen that the "egoistic" behaviour of the interactors in evolution, and their continuous attempt to damp variations, give rise to con-currence (competition) for the same evolutionary path. The consequence is that interactors will behave more "coherently", or "in formation".
When we look carefully at several phenomena in nature, we observe that, once certain stress-thresholds of the interactors are exceeded, a completely new macroscopical behaviour emerges. Since this behaviour cannot be predicted from knowledge of the behaviour of the individual interactors, we must consider it a new "hierarchical" level in evolution.
So, the origin of a new hierarchical level is the result of collective behaviour of interactors in certain stress-conditions.

Ultimately, the universe consists of a hierarchical network of interactors, which interact continuously with each other. Each new level in evolution is built with building blocks of lower levels in evolution.

Atoms are built from elementary particles, molecules are formed from atoms, crystals and biochemical systems are formed out of atoms and molecules, planets are condensates of crystallised or amorphous molecules, and galaxies consist of billions of stars. Our Milky Way galaxy, along with several hundred nearby galaxies, all seem to move towards a point in space called the Great Attractor, whose exact nature is not known.
In a similar way, living systems demonstrate a clear hierarchy in structure. Living cells are built from cell organelles, while living organisms are built from living cells. Groups of organisms are formed from individual organisms, and so on…

The hierarchical nature of our universe is omnipresent.
As with every classification made by man, there is something artificial about classifying the interactors in the universe according to their hierarchical relationships. In reality, the universe is not a purely hierarchical structure, but an extremely complex network, in which interactions happen simultaneously between many levels of the superstructure (this is why I use the term "hierarchical network").

Still, classification according to hierarchical structures is useful to increase our understanding of nature.

 

Transitions in evolution lead to emergent hierarchical levels

At each moment, in each system, many interactions happen concurrently. But each level is characterised by "dominant interactions", which influence the behaviour of the system more than any others. These dominant interactions are steered by the dominant information in the system.

At each level in the universe, there is an abundance of information, partly due to interactions with other levels. Only a few types of information exchange are relevant within the context of a certain level: this is the "dominant information".

Each transition to a new hierarchical level is preceded by a change of the dominant information and interactions. The basis for these transitions is "the most fundamental of all symmetry breaking events", namely the one whereby new self-promoting information originates, together with a new attractor. The more complex the information, the more complex the attractor can be, and thus the more complex the resulting organisation and hierarchical structure.

The variation-damping model of evolution

As evolution continues, more attractors will be created, with increasing complexity. In parallel, the complexity of the interactors will also increase.
This increasing complexity is characterised by:
- an increasing specialisation of the interactors
- an increasing number of hierarchical levels
- an increasingly adaptive behaviour: from optimal adaptation of the interactor to the environment, towards optimal adaptation of the environment by the interactor (e.g. human beings)
- an increasing self-promoting tendency, and thus also an increasing damping of spontaneous variations, and the creation of artificial variations that support self-promotion.


Maximum complexity does not occur at maximum coherence of the stream of matter, energy and information, but in an intermediate region between maximum and minimum coherence: at the edge of order and chaos. In this intermediate region, there is a balance between the creation of new variations ("innovation") and the selection of variations ("confirmation"). The confirmation leads to the stages of quasi-equilibrium, i.e. the status of temporary stability of the evolutionary structures.


 

3.3. Flux-maximisation and quasi-equilibrium: the link between microscopical and macroscopical behaviour



"On the origin of information"

In the previous sections, we have described the impact of information on the formation of attractors and interactors.
In this section, we will go one step further by putting forward a hypothesis that can explain the origin of information.

After describing the phenomena, we will now investigate their cause more deeply. In line with the reasoning of Charles Peirce, who argued that it is not sufficient to state that "nature has a tendency to acquire habits", we also have to search for an explanation of the cause of these evolutionary habits.

When the stress in a group of interactors is increased, they will initially react with an increase of the microscopical spreading (dissipation).

Above a critical stress-threshold, a new form of organisation is created, which can possibly lead to the formation of a new hierarchical level. The basis for this collective behaviour is what we call "the principle of flux-maximisation":

"When matter, energy or information (M/E/I) is added to a system, the system will first react by microscopical dissipation (spreading) of this M/E/I; afterwards, to the extent tolerated by the context, this microscopical dissipation will be converted into macroscopical dissipation, resulting in flux-maximisation."


In this context, the word flux denotes a coherent stream of M/E/I within a certain evolutionary context. A coherent pattern in a stream is thus a flux-pattern.

So the principle of flux-maximisation indicates that under stress-conditions (variations exceeding a stress-threshold), the coherence of the evolution-stream increases in discrete quantum leaps, whenever the context allows it.


Flux-maximisation (schematic), or how egoistic behaviour at the micro-level leads to con-currence and coherence at the macroscopical level.

The principle of flux-maximisation is the basis for some important phenomena which we can observe in "the average evolution".

It explains the relationship between the microscopical damping of a stressing variation in evolution (a classical example is an increase in energy-input) and the macroscopical damping. The principle of flux-maximisation states that under stress-conditions (e.g. increased M/E/I input), a non-ordered (incoherent/chaotic) spreading pattern is converted into an ordered (coherent) spreading pattern (keep in mind that in both cases we have a spreading (dissipation) phenomenon; only the way of spreading differs). In other words: a non-informed (or less informed) stream is converted into a more informed stream. So flux-maximisation is also the basis, the origin, of information and the resulting change in complexity.
Flux-maximisation can be considered as the physical cause and explanation for the origin of information.

 


When we look at different patterns of flux, certain patterns will have a higher probability of occurrence than others, and they therefore act as attractors in the "pool of flux patterns". Because these situations have a higher probability of occurrence, they are relatively more stable. We therefore call this the status of quasi-equilibrium.

Flux-maximisation and quasi-equilibrium

This status of quasi-equilibrium is a temporary equilibrium that can only exist as long as the feeding input of M/E/I remains active.
- When this feeding M/E/I is completely eliminated, the flux (the coherent stream) will disappear, and the system falls back to a situation of more stable equilibrium.
- When this feeding M/E/I is partly eliminated, the system can move to a quasi-equilibrium of a lower level, which is dominated by another attractor.
- When the feeding of M/E/I is intensified, a new attractor of a higher level can emerge, resulting in a quasi-equilibrium of a higher order.
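As a toy illustration of the three cases above (a sketch, not part of the original argument; the thresholds and level names are invented), the sustained input of M/E/I can be modelled as selecting the level of quasi-equilibrium that can be maintained:

```python
def quasi_equilibrium_level(input_flux, thresholds=(1.0, 5.0, 20.0)):
    """Return the hierarchical level of quasi-equilibrium sustained by a
    given feeding input of M/E/I.  The thresholds are illustrative, not
    derived from the text: below the first threshold the coherent stream
    collapses to stable equilibrium (level 0); each higher threshold
    sustains a higher-order attractor."""
    level = 0
    for i, threshold in enumerate(thresholds, start=1):
        if input_flux >= threshold:
            level = i
    return level

# No feeding input: the coherent stream disappears (stable equilibrium).
print(quasi_equilibrium_level(0.0))   # 0
# Moderate input sustains a lower-level attractor.
print(quasi_equilibrium_level(3.0))   # 1
# Intensified input lets a higher-order attractor emerge.
print(quasi_equilibrium_level(25.0))  # 3
```

The point of the sketch is only that the same system, fed at different intensities, settles into qualitatively different quasi-equilibria.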


Each variation in the input of the stream of M/E/I towards the system is thus counteracted by a reacting flux of the system itself. In this way, the system (unconsciously) acts in a self-conserving way, which in turn leads to the status of "quasi-equilibrium".

For M/E streams, a similar principle has been described as the "Unified Principle of Thermodynamics", defined by Kay and Schneider as follows:
"The thermodynamic principle which governs the behaviour of systems is that, as they are moved away from equilibrium, they will utilize all avenues available to counter the applied gradients. As the applied gradients increase, so does the system's ability to oppose further movement from equilibrium."

This principle is called "the principle of gradient-dissipation".


This description of Kay and Schneider ignores the origin and impact of information in this behaviour, and can therefore be considered a narrower principle than the principle of flux-maximisation. Furthermore, I am convinced that the principle of flux-maximisation is also relevant for changing coherence in streams of information (so the principle is not only relevant for changing coherence in streams of M/E). For example, complex systems (such as living systems) react in analogous ways when they suffer information-overload.

I have called this type of collective behaviour flux-maximisation, because the ordered aspect of the phenomenon (maximisation of coherent dissipation) is its most visible aspect, and it is also the cause of the order that we observe around us. However, I must emphasise that under certain conditions a decrease of the input of M/E/I can also occur. So flux-maximisation is realised in balance with the input of M/E/I: it is not an absolute extremum principle.

Apart from explaining the origin of information and attractors, the principle of flux-maximisation and quasi-equilibrium has some other implications.

When the individual interactors decrease their spreading of M/E/I in the course of evolution, while the total population is characterised by an increase of spreading (flux-maximisation), this implies that the population is growing. To keep the population growing, the input of M/E/I must be sufficient to "feed" the population. This means that the sustainability of the population depends on the context.

In ideal conditions, the input of M/E/I is unlimited, and the capacity for growth of the population is also unlimited. In reality, however, this is almost never the case: the condition is met only in the initial (exponential) growth phase of the life cycle of a population. In a mature population, input and output will be more balanced, resulting in a situation of quasi-equilibrium or slow growth/decline.
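This life cycle can be sketched with the classical logistic growth model (an illustrative choice with invented parameters): growth is near-exponential while input is effectively unlimited, and flattens into quasi-equilibrium once input becomes the limiting factor.

```python
def logistic_growth(p0, r, capacity, steps):
    """Discrete logistic growth: near-exponential while resources are
    abundant, flattening into quasi-equilibrium as the input of M/E/I
    becomes limiting.  All parameters are illustrative."""
    pops = [p0]
    for _ in range(steps):
        p = pops[-1]
        pops.append(p + r * p * (1 - p / capacity))
    return pops

pops = logistic_growth(p0=1.0, r=0.5, capacity=1000.0, steps=60)
# Early phase: growth is close to exponential (factor ~1.5 per step).
# Mature phase: the population settles near the carrying capacity.
print(round(pops[-1]))  # 1000
```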

The theory of natural selection (Darwin) starts from the assumption that resources are scarce (an assumption based on Thomas Malthus's "Essay on the Principle of Population"). As a result, more "efficient" interactors have a greater chance of survival, and the population is thus slowly enriched with more efficient individuals. Ultimately, the whole population behaves more efficiently with regard to the use of resources, and the population can grow even with the same overall resources available.
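A minimal sketch of this enrichment process, assuming a fixed pool of resources divided in proportion to each type's efficiency (the type names and numbers are invented for illustration):

```python
def select_generation(population, efficiency, resources=1000.0):
    """One generation of selection under fixed resources: each type's
    share of the next generation is proportional to how efficiently it
    converts its share of the scarce resource."""
    total = sum(population[t] * efficiency[t] for t in population)
    return {t: resources * population[t] * efficiency[t] / total
            for t in population}

# A rare efficient type in a population of less efficient ones.
population = {"frugal": 10.0, "wasteful": 990.0}
efficiency = {"frugal": 1.2, "wasteful": 1.0}

for _ in range(50):
    population = select_generation(population, efficiency)

# With the same overall resources, the population is slowly enriched
# with the more efficient type.
print(round(population["frugal"]))
```

Note that the total population stays constant here; in Darwin's argument the efficiency gain is what allows growth at the same resource level.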

Certain interactors will be in such a stable situation of quasi-equilibrium that they can and will be used as building blocks/modules in the flux of a higher evolutionary level. The existence of these building blocks enables the flux to be increased further, whereby an autocatalytic effect is created ("flux-maximisation promotes hierarchy, and hierarchy promotes flux-maximisation"). Further away from stable equilibrium, more levels and paths are available to increase coherence.

We can compare the behaviour of flux-maximisation and quasi-equilibrium metaphorically with the creation of a river or stream in a newly formed hilly landscape.
When the amount of rain increases, we will see the formation of little macroscopical gullies, which enable the output of water (the flux of water is increased through the formation of the small gullies). Some of the gullies will grow further into brooklets, while others will form big rivers. Finally, a quasi-equilibrium will be reached between the amount of rainfall and the population of channels that drain the water from the landscape. The nature and extent of the macroscopical streams that are created will be in balance with the stress exerted on the landscape by the rainfall. Furthermore, the status of equilibrium will be broken when the pattern of rainfall changes drastically. If, for instance, the rainfall were to stop, the flux of water in the population of channels would also stop, and we would end up in a situation of stable instead of dynamic equilibrium.
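The river metaphor can be mimicked with a small flow-accumulation sketch (an invented toy model, not from the original text): uniform "rainfall" is routed downhill over a random landscape, and the accumulated flux spontaneously concentrates into a few "rivers".

```python
import random

random.seed(42)
SIZE = 20
# A random "newly formed hilly landscape": height falls off toward row 0.
height = [[row + random.random() for _ in range(SIZE)] for row in range(SIZE)]

def downhill(r, c):
    """Steepest-descent neighbour of (r, c), or None at a local minimum."""
    best = None
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < SIZE and 0 <= nc < SIZE and height[nr][nc] < height[r][c]:
            if best is None or height[nr][nc] < height[best[0]][best[1]]:
                best = (nr, nc)
    return best

# Uniform "rainfall": one unit of water on every cell, routed downhill.
flux = [[0] * SIZE for _ in range(SIZE)]
for r in range(SIZE):
    for c in range(SIZE):
        cell = (r, c)
        while cell is not None:
            flux[cell[0]][cell[1]] += 1
            cell = downhill(*cell)

# Most cells carry little water; a few high-flux cells are the "rivers".
print(max(max(row) for row in flux))
```

The distribution of `flux` is the point: the coherent macroscopical channels emerge purely from many independent units of water each taking its locally steepest path.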

Alpha-, Beta- and Gamma-attractors are three kinds of "riverbeds" that can be created for interactors in the stream of evolution, as a result of flux-maximisation.

When we look back at the first figure in this paper, and ask ourselves again what causes the gap between the actual spreading and the maximal (potential) spreading, we are at least one step closer to answering this question.

The average evolution, and the gap between maximal and actual spreading.

As the maximal (potential) spreading increases during evolution, the principle of flux-maximisation causes the creation of information and ordered structures. These structures in turn increase the total spreading, and thus contribute to a further increase of the total flux. So flux-maximisation is an autocatalytic process.


 

4. FUNDAMENTAL UNCERTAINTY IN THE AVERAGE EVOLUTION

Information and energy are two important characteristics of the evolution stream. In the previous sections, the importance of information has been emphasised. In the classical natural sciences, and in physics in particular, information was given only a modest role. In our current "age of information", however, it is generally accepted that information is important for many processes in evolution.

This historical lack of attention for information can be explained as follows: energy can be considered a quantitative aspect of the stream of evolution, while information is a qualitative aspect. Since the 17th century, science has flourished mainly thanks to its quantitative approach to nature. In many cases, simple dynamic processes (e.g. mechanical processes) can be described quantitatively in terms of a balance of input and output of energy, and numerous scientific achievements and the resulting artefacts have been based on these quantitative energy-relations. These early quantitative successes have decreased the need and esteem for qualitative relationships, such as those governing information-dominated processes.

There have been attempts in the past to approach information in a quantitative way, by searching for a defined relationship between the quantitative amount of information and the amount of energy. The end result of this search would have been a relation of the kind:
1 bit Information = ? Joule Energy

Nobody has succeeded in finding this fixed relationship so far (in the way Einstein found a fixed relationship between matter and energy). This explains why information is almost never a standard parameter in quantitative physical relationships. It is my opinion that this relationship will never be found, for the simple reason that there is no universal relation between matter/energy and information. Some arguments for this reasoning are:
- the same information can be carried on different types of material carriers. For instance, a compact disc and a book can carry the same information. The energy needed to put this information on these carriers, or to read it from them, will certainly be different. So, in this case, there is no fixed relationship between the information and the energy needed to process it.
- The Swiss linguist Ferdinand de Saussure (1857-1913), a founding thinker of both linguistics and semiotics, always emphasised the independence of the meaning of a signal from its carrier. Every "sign" (the smallest unit with a meaning) consists of a carrier (which he called the "signifier", the M/E aspect) and the meaning (which he called the "signified", the I-aspect). As the relationship between the meaning and the carrier becomes more and more independent (as in the case of Gamma-information), the importance of interpretation also grows.
- The same information can be coded in different ways, even on the same carrier (for example a book). The word for "table" in English differs from the word in French or Dutch, although these languages use the same symbols. The word for "table" in Chinese or Japanese will differ even more, because these languages use another type of symbolic code. As a consequence, one symbolic code can be more efficient than another, and thus the energy needed to use different symbolic codes will differ, even for the representation of an object as simple as a table.
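The point about symbolic codes can be made concrete with a small sketch: the same concept costs a different amount of physical carrier (here: bytes) depending on the code used. The choice of languages and of UTF-8 as the code is purely illustrative.

```python
# The "same information" (the concept of a table) under different symbolic
# codes, all stored on the same kind of carrier (bytes in memory).
words = {
    "English": "table",
    "French": "table",
    "Dutch": "tafel",
    "Chinese": "桌子",
}

for language, word in words.items():
    # UTF-8 assigns 1 byte per Latin letter but 3 bytes per CJK character,
    # so the carrier cost depends on the code, not on the meaning.
    print(language, len(word.encode("utf-8")), "bytes")
```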

As explained before, information only has meaning within a certain context (cf. our definition of information), namely when both the physical carriers and the codes are related to a known code convention. Within certain, very limited contexts, it is probably possible to find quantitative relationships between bits and Joules, but in most cases it will not be possible or relevant. The important consequence is that information cannot be used to generate quantitative and universal laws of nature based on streams of both information and energy/matter.
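One example of such a context-limited relationship, offered here as an aside (it is not discussed in the original text), is Landauer's principle, which gives the minimum energy needed to erase one bit of information: kT·ln 2. Tellingly, even this bound depends on the context (the temperature), so it does not supply a universal bits-to-Joules constant.

```python
import math

BOLTZMANN = 1.380649e-23  # J/K (exact by SI definition)

def landauer_bound(temperature_kelvin):
    """Minimum energy (in Joule) to erase one bit of information at a
    given temperature, according to Landauer's principle: kT * ln 2."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

# The "energy per bit" differs with the context (here: the temperature).
print(landauer_bound(300.0))  # room temperature: ~2.87e-21 J
print(landauer_bound(3.0))    # deep space: one hundred times less
```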


The fact that it is probably impossible to establish a universal quantitative relationship between information and matter/energy does not mean that information has no important role in our universe.
From the point of view of "the average evolution", information deserves an honourable place next to energy and matter in descriptions of our universe. This statement will probably be less controversial in our information era than it would have been 300 years ago.

However, we should also not overestimate the role of information in evolution. Information and energy co-evolve like chicken and egg (and asking which came first may be beside the point). We need energy to create information, and during the creation of each bit of information, energy will be lost or spread, and will no longer be useful to create other evolution patterns. If we had an unlimited reservoir of energy/matter, we would probably be able to counteract all forms of spreading with information, and we could control the direction of evolution to the largest possible extent. We would be able to steer evolution, and even to reverse it in some limited cases. (But, my dear reader, this is a hypothetical assumption, and the universe is plagued by a permanent and universal lack of information.) As the evolutionary spreading increases, so does the amount of information, but not to the same extent. Since information decreases uncertainty about selections in evolution, and since all necessary information will never be available, we can make the statement:
The universe is characterised by a fundamental uncertainty.

This fundamental uncertainty certainly also applies to human life. We will be able to describe laws of nature which lead to important applications and the creation of valuable artefacts. However, we will always lack the information needed to create detailed laws that apply to the whole universe and are not limited to certain contexts. For this reason, we will have to be satisfied with more humble aspirations, and we will have to limit the validity of our own laws in two ways. This leads to two types of laws that we can create:

- contextual laws of nature: these laws are detailed, but only valid within certain contexts (such as biology, physics or chemistry). For instance, Newtonian mechanics is not valid when the speed of the objects approaches the speed of light.
- average laws: these laws are less detailed, but use average characteristics of populations of objects or phenomena in time or space. The advantage is that they are valid in a broader context than the "contextual" laws. A typical example is the statistical mechanics developed by Boltzmann, whose laws relate the average microscopical movement of particles to macroscopical characteristics (e.g. the pressure of a gas). Another example is the quantum-mechanical wave equation of Schrödinger.
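As an illustration of such an "average law" (a numerical sketch with invented parameters, not taken from the text), the macroscopical pressure of an ideal gas can be recovered from the average of sampled microscopical velocities:

```python
import math
import random

random.seed(0)

# Boltzmann's programme in miniature: derive a macroscopical quantity
# (pressure) from the *average* microscopical motion of particles.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K
m = 6.6e-26          # particle mass (~ one N2 molecule), kg
N = 100_000          # number of particles sampled
V = 1.0e-3           # volume, m^3

# One velocity component of the Maxwell-Boltzmann distribution is
# normally distributed: v_x ~ Normal(0, sqrt(kT/m)).
sigma = math.sqrt(k_B * T / m)
mean_vx2 = sum(random.gauss(0.0, sigma) ** 2 for _ in range(N)) / N

# Kinetic theory: P = N m <v_x^2> / V ; ideal gas law: P = N k_B T / V.
pressure_micro = N * m * mean_vx2 / V
pressure_macro = N * k_B * T / V
print(pressure_micro / pressure_macro)  # close to 1.0
```

The two pressures agree only on average, and the residual sampling noise is a small-scale analogue of the fundamental uncertainty the section describes.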
The conclusion is in each case that a complete description of the universe is not possible, not only because the knowledge of human beings about the universe is insufficient, but also because uncertainty is an essential element of the universe itself.

When we want to describe nature, we will have to select the alternative that best fits our purposes. It will be clear to the reader by now that the concept of "the average evolution" uses the second alternative. I have proposed here a heuristic model for a qualitative relationship between the average microscopical and macroscopical behaviour. I stress the word qualitative, because I wanted to emphasise that information can be considered as important to evolution as energy. The concept of "the average evolution" has been developed in the first place to enhance our understanding of evolution, across the classical scientific barriers.


 

5. FINAL CONSIDERATIONS

The concept of the average evolution is not completely new. It is based on three schools of thought, developed mainly in the 20th century:
1) The principle of fundamental uncertainty, which is essential in statistical mechanics and quantum mechanics;
2) The extended laws of thermodynamics;
3) The understanding that information plays an essential role in all dynamical processes.
I have tried to integrate these three elements within the context of a contemporary and coherent evolutionary theory.
This paper is a short introduction to this concept. The full text, with references, is available in Dutch at the website http://www.oocities.org/evolutionweb.
The scheme below summarises the most important aspects of this concept. At the left side of the scheme, chance dominates the evolution, while at the right side, information dominates the evolutionary habits.

 

Figure: "Information in the average evolution"
