This book concerns the development of a theory of complex phenomena, using such concepts as fractals, chaos, and fractional derivatives; but, most important, the idea of an allometric control process is developed. In summary, the theory attempts to explain why the distribution in the intensity of wars is the same as the relative frequency of the number of words used in languages and the number of species evolved over time from one or a few remote ancestors. The theory also describes the similarity in the variability of the number of births to teens in Texas to the number of sexual partners in homosexual liaisons. The data in both of the aforementioned categories are shown to have long-term memory, and it is this memory that also gives rise to inverse power laws in such physiological phenomena as the interbeat interval distribution of the human heart, the interstride interval distribution in the human gait, and memory in DNA sequences.
https://doi.org/10.1142/9789812815361_fmatter
https://doi.org/10.1142/9789812815361_0001
Why do things get more complicated with the passage of time?
While it may not be a mathematical theorem, it certainly seems clear that as cultures, technologies, biological species and indeed most large-scale systems, those with many interacting components, evolve over time, either they become more complex or they die out. This is quite different from physical systems that tend towards a featureless kind of existence with increasing time. The asymptotic approach of physical systems to equilibrium is captured by the second law of thermodynamics, which states that the entropy of an isolated system either remains fixed or increases over time. An entropy increase is interpreted to mean a decrease in order or a loss of information. All physical states in equilibrium are statistically equivalent, and this is the end point of all physical activity. The systems that become more complex rather than dying out apparently contradict the second law, since their entropy tends to decrease rather than increase over time and their order increases rather than decreases. All biological systems do this in their early stages of growth, and although they may appear to violate the second law of thermodynamics, they in fact do not. The apparent conflict with physical law arises because no system is completely isolated. Instead every system draws energy from the surrounding environment and gives energy up to it. Patterns emerge when a balance is achieved between energy input and energy output. Prigogine [1980] referred to such patterns as dissipative structures, and a great deal of research has gone into understanding the physical mechanisms that give rise to such stable patterns…
https://doi.org/10.1142/9789812815361_0002
For our purposes we shall regard all phenomena as made up of systems. A system consists of a set of elements together with a defining set of relations among those elements. We can think of a piece of matter as a system, with atoms being the individual elements and the electrostatic forces holding the atoms together in a lattice defining the relations among the atoms. This is bulk matter and the interactions among the atoms determine the properties of the material. Such systems are studied in condensed matter physics. A society may also be considered as a system, with people being the fundamental elements. How people interact with one another, form collectives and act in cooperative ways is studied in psychology and sociology. Herein we do not focus on the detailed mechanisms of the phenomenon being studied, oftentimes because we do not know what they are; instead we step back and attempt to extract the common features from all complex phenomena. This is what the system perspective assists us in doing…
https://doi.org/10.1142/9789812815361_0003
It is only fairly recently that the influence of nonlinear dynamical systems theory, under the name chaos theory, has been systematically applied to the social sciences, see for example, Arnopoulos [1993] as well as Elliott and Kiel [1996]. One of the limitations of these applications has been the failure to provide unambiguous evidence for the existence of chaos rather than noise in sociological data sets. In the background of these dynamical and statistical ideas lurk fractals, the brainchild of Benoit Mandelbrot [1977] and one of the themes that ties together the measures of complexity developed in these lectures. It seems remarkable that when I sat in a lecture hall as a graduate student at the University of Rochester in the late 1960s and listened to Benoit Mandelbrot explain why the night sky was not uniformly illuminated and how profit is distributed in the stock market, that it would be another decade before the word fractal was introduced into the scientists' lexicon…
https://doi.org/10.1142/9789812815361_0004
Many people who have grown up in western culture are obsessed with finding ways to gain knowledge of the future. They appeal to a wide range of practitioners of prediction, from fortune tellers to astrologers to astronomers. In some cases the knowledge they seek is easily obtained and dependable, say, what time the sun will rise on March 1, 2001. Almanacs, calendars and weather reports often contain this information. Because the factors leading to the time of sunrise are somewhat easy to identify (the time of year and the speed of rotation of the earth), and there is continuity between these factors and the predicted event, we say that the time of sunrise is fully determined by them and that their use as predictors is quite reliable. These factors and their relationship to the time of sunrise remain good predictors across time. As a famous scientist once said, "It is difficult to make predictions, especially about the future."…
https://doi.org/10.1142/9789812815361_0005
Stochastic processes, in the way they are used today in the physical sciences, were introduced into the social sciences at the turn of the century in a doctoral dissertation by Bachelier [1900]. His thesis advisor was the mathematician and astronomer H. Poincaré. In his thesis Bachelier addressed the effect of speculation on the profitability of stocks in the French Stock Market, and in so doing invented the mathematical description of the process of diffusion that is now used throughout science. Yes, Bachelier had the misfortune to write a seminal paper in an area of research that was also of interest to Albert Einstein, and consequently his work went unrecognized until well after his death. Einstein gave the first correct description of physical diffusion and in his second paper on the subject referred to the possibility that this phenomenon might be the same as that observed by Brown, but characteristically he noted that he did not have sufficient information on the latter process to know for sure. Diffusion, as it was first conceived, is the physical realization of a stochastic process. It is the consequence of a heavy particle in a fluid of lighter particles being buffeted by the much larger number of lighter particles and therefore moving in an erratic way through space because of the random imbalance in the forces being applied to its surface. This motion was seen by the Scottish botanist Robert Brown, who in the late 1820's observed through his microscope pollen motes suspended in water undergoing the most peculiar motion depicted in Figure 5.1. The motes appeared to have an internal force that caused them to lurch first this way and then that, with no apparent goal in mind. This erratic movement is today called Brownian motion, after Brown, even though he did not know the cause of the motion. Brown was not the first to observe this effect, however; a full half century before Brown observed his pollen motes, Jan IngenHauz, a physician in the Court of Maria Teresa, put powdered charcoal on alcohol and observed the same erratic motion. Of course today we do not refer to either Bachelier or to IngenHauz, since five years after Bachelier's thesis a clerk in a patent office published a paper in a physics journal deriving the same mathematical description…
https://doi.org/10.1142/9789812815361_0006
Daston [1995] points out that in 1662 the intellectual leaders in the development of what was to become probability theory believed that decisions should be based on the expectation of outcomes. This judgment was made primarily in a legal context in which equity was the primary consideration. Daston gives an example to demonstrate the notion of expectation in the context of a game of chance. Ten players of the game each contribute one unit to the ante; each can either lose one unit or gain nine, but the game is so constructed that it is "nine times more probable" that any given player will lose one coin rather than gain the other nine. Daston reasons that each player hopes for nine coins, has one coin to lose, nine degrees of probability of losing that one coin, and only one degree to win the nine coins: this makes for perfect equality. From this we see the notion that the measure of the probability for gain or loss is inversely proportional to the ratio of the amount of gain or loss itself. The greater the expected gain the smaller the likelihood of winning. Note that this procedure is one way to proportion risk to gain in a way that provides for maximum fairness and does not require the concept of a probability…
https://doi.org/10.1142/9789812815361_0007
The average or mean value of a data set is traditionally thought to be the best characterization of a measured quantity. Of course the average itself can vary from one set of measurements to another, but if the statistics of the process are well behaved, then as more and more data are included in the averaging process, the less the value of the mean varies from set to set. This is the perspective that has evolved over the past century with regard to how we understand random processes. The measured mean ought to stay within well-defined limits that determine the degree of variability of the data set. This variability in the mean tells us something quite different about the process than does the mean itself…
https://doi.org/10.1142/9789812815361_0008
We have mentioned a number of times that one indicator of fractal random processes is an inverse power law, and that such laws come in two flavors, spectra and probability densities. The inverse power-law spectrum, or equivalently the correlation function, indicates the existence of long-term memory, far beyond that of the familiar exponential. So far in fact that the time series may not even have a characteristic time scale. In the probability density the inverse power law indicates the contribution of terms that are quite remote from the mean, when the mean exists. In these distributions the second moment, or variance, diverges. What we want to consider now are some of the basic properties of a function having a power-law structure, and to do this we use elementary dimensional analysis…
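As a minimal sketch of the property that the dimensional argument rests on, consider the scale invariance of a power law; the symbols A, α and λ below are illustrative, not the lecture's notation:

```latex
% Scale invariance of a power law: rescaling the argument only rescales
% the amplitude, so no characteristic scale is singled out.
f(x) = A\,x^{-\alpha}
\quad\Longrightarrow\quad
f(\lambda x) = \lambda^{-\alpha} A\,x^{-\alpha} = \lambda^{-\alpha} f(x)
\quad\text{for all } \lambda > 0 .
```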
https://doi.org/10.1142/9789812815361_0009
Let us begin our discussion of the applications of the measures described in the last lecture by considering different ways of obtaining discrete random time series. The simplest way to measure a time series is to periodically sample a dynamical process using a predetermined time interval Δt, such as indicated in Figure 9.1. Here an irregular function is sampled (measured) every ten units of time, giving rise to 20 data points. In this way a continuous dynamical process X(t) is sampled every Δt days, minutes or seconds depending on the units of time selected, yielding the discrete data set Xj = X(tj) where tj = jΔt and j = 1, 2, …, N. This type of sampling is like watching an activity using a strobe light to illuminate the process and allow us to take a measurement; otherwise the system is dark. The timing of the light flash is denoted by the lines in Figure 9.1. If the strobe is fast compared to changes within the process, then the process unfolds smoothly like a motion picture that is being flashed 32 or 64 times a second. If the strobe is slow, however, the process will look jerky and unnatural, like the silent films of seventy or eighty years ago. The choice of sampling time can therefore be quite important and is problem specific. We choose Δt to be any unit of time that is compatible with the data set of interest…
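A minimal sketch of the strobed-sampling idea in code; the continuous signal X(t), the horizon of 200 time units and the interval Δt = 10 are illustrative stand-ins, not data from the lectures:

```python
import numpy as np

def sample_series(X, t_max, dt):
    """Strobe a continuous process X(t) every dt time units,
    returning the sampling instants t_j = j*dt and the values X_j = X(t_j)."""
    t_j = np.arange(dt, t_max + dt, dt)
    return t_j, np.array([X(t) for t in t_j])

# Illustrative irregular signal standing in for a measured process.
X = lambda t: np.sin(0.3 * t) + 0.5 * np.sin(1.7 * t + 1.0)

# Sample every 10 time units over 200 units -> 20 data points,
# as in the example accompanying Figure 9.1.
t_j, X_j = sample_series(X, t_max=200.0, dt=10.0)
print(len(X_j))   # 20
```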
https://doi.org/10.1142/9789812815361_0010
The best physical model is the simplest one that can "explain" all the available experimental data, with the fewest number of assumptions. Alternative models are those that make predictions and can assist in formulating new experiments that discriminate between different hypotheses. One such model is that of random walks. In its simplest form a random walk provides a physical picture of diffusion, about which much is said below. Its real strength, however, lies in the ease with which it can be generalized to describe more complex phenomena such as fractional Brownian motion and Lévy processes. Therefore, we not only spend time on developing the formal concepts used in random walk models, but we also discuss some of the limitations of numerical simulations…
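A minimal numerical sketch of the simple random walk as a picture of diffusion; the step size, number of steps and ensemble size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk(n_steps, p=0.5):
    """Simple random walk: at each step move +1 with probability p,
    -1 with probability 1 - p; return the cumulative displacement."""
    steps = rng.choice([1, -1], size=n_steps, p=[p, 1 - p])
    return np.cumsum(steps)

# Many independent walkers illustrate diffusion: for p = 1/2 the
# mean-square displacement after n steps grows linearly with n.
endpoints = np.array([random_walk(1000)[-1] for _ in range(5000)])
print(endpoints.var())   # close to 1000
```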
https://doi.org/10.1142/9789812815361_0011
In previous lectures we incorporated complementary characteristics into the definition of complexity. Complexity has some aspects of regularity and some aspects of randomness, but the statistics are not simple since they may be generated internally by means of nonlinear dynamical interactions or externally by means of interactions with the environment, or by a combination of the two. We also discussed a number of simple models that have been used in the natural sciences to understand this dual aspect of regularity and randomness. It is now time to shift our discussion from the traditional models of statistics involving linear, additive, uncorrelated random processes, to those involving scaling and consequently to the effects of memory. Scaling behavior indicates that there is no space (time) scale to dominate the fluctuations in the phenomena of interest. To properly understand this lack of scale we examine the phenomenon of inverse power-law behavior in either the statistical distribution, the correlation function describing the stochastic process or both. We find a close relation between complexity in natural and social phenomena and inverse power law behavior…
https://doi.org/10.1142/9789812815361_0012
Scaling properties are manifest through probability distribution functions, correlation functions, or both. The probability density has an inverse power-law form when the relative frequencies for the occurrences of events are tied to one another across multiple scales, which is to say that we have a contingency process. Compare, for example, the difference between the distribution of heights, which is a normal distribution, describing as it does a linear additive process, and Pareto's Law for the distribution of wealth, which describes a nonlinear multiplicative process, as we show below, cf. Figure 12.1. In this latter figure we depict the distribution of income in the United States in 1918 on log-log graph paper. A straight line with a negative slope indicates an inverse power law, with the index α being given by the slope of the line. If X is the income level of an individual then the distribution of income is given in algebraic form by
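The excerpt breaks off before the displayed formula. As a hedged reference point, a commonly quoted textbook form of Pareto's inverse power law, consistent with a straight line of slope −α on log-log paper (though not necessarily the exact expression displayed in the book), is:

```latex
% Standard Pareto form: the probability that income exceeds x falls off
% as an inverse power law with index \alpha (the slope of the log-log line).
P(X \ge x) \;=\; \left(\frac{x}{x_{\min}}\right)^{-\alpha},
\qquad x \ge x_{\min} > 0 .
```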
https://doi.org/10.1142/9789812815361_0013
In our discussion of diffusive phenomena and the processing of data we maintained the underlying assumption that processes change smoothly and that a given data point is most strongly influenced by adjacent data points. This was nowhere more clearly evident than in the simple random walk model we used to derive the Gaussian distribution. In that model the probability of the displacement increasing or decreasing at each point in time was p and q, respectively. The probability density of having a certain observed displacement after n time intervals is a binomial distribution, which for p = q = 1/2 in the long-time limit goes over to a Gaussian distribution. The fundamental assumptions used in the argument are that the steps are local in state space, and that the transition probabilities up and down are independent of the location in state space and only connect adjacent states. These assumptions guarantee that the mean-square displacement remains finite for finite times, a result which has pleased all modelers since de Moivre [1732] first derived the distribution which now bears the name of Gauss…
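The binomial-to-Gaussian limit invoked above can be stated compactly; this is the classical de Moivre-Laplace result for p = q = 1/2, written for a displacement x after n unit steps (a standard statement, not necessarily the lecture's notation):

```latex
% de Moivre-Laplace: for p = q = 1/2 the binomial probability of the
% displacement x after n steps tends to a Gaussian of variance n.
\binom{n}{\tfrac{n+x}{2}} \left(\tfrac{1}{2}\right)^{n}
\;\xrightarrow[\;n \to \infty\;]{}\;
\sqrt{\frac{2}{\pi n}}\; e^{-x^{2}/2n}.
```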
https://doi.org/10.1142/9789812815361_0014
In these lectures we have shown, using diverse phenomena from the social and natural sciences, that complex systems manifest inverse power-law probability distribution functions. Further, as a consequence of this inverse power-law behavior we would expect these phenomena to exhibit clustering behavior in the space and time domains separately or in both together as was explicitly shown using fractal random walks. It is clear from such walks that the inverse power laws, which are apparently ubiquitous in the time series from the natural and social sciences, are scaling. We now establish a close connection between such clustering and the fractal dimension of those processes. In this lecture we demonstrate that a Lévy stable statistical process has the scaling that manifests the clustering properties previously investigated and therefore is a candidate for describing fractal stochastic processes…
https://doi.org/10.1142/9789812815361_0015
When the probability density has an inverse power-law form there is significant probability that extreme values of the random variable are non-negligible. In fact it may be the case that the extreme values of the process dominate and therefore "explain" most of the complex behavior in the phenomenon of interest. In a similar way the correlation function has an inverse power-law form when there are long-time correlations in the time series data. The scaling ties events together in an orderly statistical sequence in time (space) giving rise to an inverse power-law spectrum, just as it ties the relative frequencies of the occurrence of events together in the distribution function. We shall focus on the correlation function in the present lecture, and in particular on the properties of systems having algebraic correlation functions and consequently inverse power-law spectra…
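A compact statement of how an algebraic correlation function implies an inverse power-law spectrum, via the usual Wiener-Khinchin relation; the exponent relation below assumes 0 < β < 1 and is a standard result, not the lecture's notation:

```latex
% Wiener-Khinchin pair: a correlation function decaying as an inverse
% power law implies an inverse power-law spectrum (for 0 < \beta < 1).
C(\tau) \sim \tau^{-\beta}
\quad\Longleftrightarrow\quad
S(f) = 2\int_{0}^{\infty} C(\tau)\cos(2\pi f \tau)\, d\tau
\;\sim\; f^{-(1-\beta)} .
```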
https://doi.org/10.1142/9789812815361_0016
The random walk models utilize a kind of averaging we have not talked about very much. This is the average over a statistical distribution of the random shocks. You may recall that we assumed the statistics of the random shocks are normal, with zero mean and constant variance when we discussed the law of errors. The question arises as to what to do when these statistical assumptions are not justified; when we do not know the probability distribution for the random forcing function. In this situation another approach suggests itself, that being to construct an ensemble of realizations of the process of interest regardless of its statistics and to explicitly calculate the average over these realizations. Let us consider the time series for teen births and see how we might construct various ensembles from these data…
https://doi.org/10.1142/9789812815361_0017
Let us now return to our examples of complex phenomena and examine some properties of the time series. Once a person has been tentatively diagnosed as severely ill, say with a life-threatening arrhythmia, point measurements are no longer an adequate format for gaining information. Information on the primary indicators of the state of the cardiovascular system must be continuous in order that a transition into a life-threatening situation can be anticipated and hopefully averted. This continual monitoring provides information in the form of a time series, say for example the electrocardiogram (ECG) trace in a critical care unit which records the continuous operation of the heart and its support systems on a real time basis…
https://doi.org/10.1142/9789812815361_0018
The Weierstrass function was introduced in our discussion of random walks as the structure function for a lattice, that is, the Fourier transform of the spatial transition probability. The structure function provided the spatial interconnectedness that led to long-range correlations in the random walk. We can also use this function, as well as suitable generalizations of this function, to model phenomena having long-time correlations. In this pursuit we investigate the spectral properties of such a scaling function as an exemplar of non-analytic functions, that is, functions that are continuous everywhere but not differentiable anywhere. Such functions have been shown to be fractal. In his investigation of turbulence, Richardson observed in 1926 that the velocity field of the atmospheric wind is so erratic that it probably cannot be described by an analytic function. In his paper, "Does the Wind Possess a Velocity?", he suggested a Weierstrass function as a candidate to represent the velocity field. We have since come to realize that Richardson's intuition was superior to nearly a century of analysis regarding the nature of turbulence…
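A minimal numerical sketch of the classical Weierstrass function via its partial sums; the parameters a = 0.5 and b = 3 are illustrative choices, not the generalized form used in the lectures:

```python
import numpy as np

def weierstrass(t, a=0.5, b=3.0, n_terms=50):
    """Partial sum of the classical Weierstrass function
    W(t) = sum_n a^n cos(b^n * pi * t); continuous everywhere and,
    for a*b >= 1, nowhere differentiable in the infinite-sum limit."""
    n = np.arange(n_terms)
    return np.sum(a ** n[:, None] * np.cos(b ** n[:, None] * np.pi * t), axis=0)

t = np.linspace(0, 2, 4001)
W = weierstrass(t)   # a fractal-looking trace at every magnification
print(W.shape)
```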
https://doi.org/10.1142/9789812815361_0019
Suppose we suspect that a given data set comes from a particular parent distribution; how can we test for the truth of this suspicion? For the moment we shall not be concerned with how we formulate such suspicions, but say we suspect the data ought to be normally distributed if we had enough of it. How do we prove this, or rather how do we determine that the data is closer to a normal than it is to, say, a Poisson distribution? More practically one might wish to determine if a new drug used in the treatment of a disease significantly improves the recovery rate or decreases the severity of the disease when compared with a drug already on the market. If the first few cases tried show an apparent advantage, how can we be sure that this is not just a random fluctuation and is truly of some significance? Both these kinds of questions require the formation of "significance tests" or "goodness of fit" criteria in order to determine if the deviations of the data set from the assumed parent distribution are statistically significant…
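A minimal sketch of one standard goodness-of-fit procedure, the Kolmogorov-Smirnov test as implemented in scipy, applied to a synthetic sample; the data and the normal parent distribution are illustrative, and estimating the parameters from the sample itself is only an approximation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=500)   # synthetic sample

# Kolmogorov-Smirnov test against a normal parent distribution whose
# parameters are estimated from the sample (an approximation).
stat, p_value = stats.kstest(data, 'norm', args=(data.mean(), data.std()))

# A small p-value means the deviations from the assumed parent
# distribution are unlikely to be mere fluctuations; a large p-value
# gives no evidence against the hypothesized distribution.
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```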
https://doi.org/10.1142/9789812815361_0020
One characteristic shared by biological and sociological time series is that they fluctuate erratically in time. These fluctuations, however, do not imply that what occurs at different points in time is unrelated. It is well known that correlations in such time series decrease with increasing time lags, which is to say, the farther apart in time two events occur, the less influence they have on one another. We often need to characterize correlations, that is, the degree and extent of similarity of a measured property with itself, as a process varies in time. The use of the normal distribution to describe the statistics of a process assumes the fluctuations to be mutually independent, which is to say that the random shocks to the system are uncorrelated with one another. However, in real systems fluctuations have more structure, since the system itself can induce correlations among statistically independent random shocks. Examples of real systems having such structural correlations include variations in the interbeat intervals of mammalian heart beats, the firing of neurons and similarly for interbreath intervals. But this is not only true for biomedical time series, it is also true for sociological data sets as well, as we show for teen births…
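A minimal sketch of the sample autocorrelation function used to characterize such correlations; the white-noise series is an illustrative input, standing in for a measured time series:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a time series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

# Uncorrelated shocks: correlations beyond lag 0 hover near zero.
rng = np.random.default_rng(2)
white = rng.normal(size=2000)
print(autocorrelation(white, 5).round(2))   # roughly [1, 0, 0, 0, 0, 0]
```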
https://doi.org/10.1142/9789812815361_0021
In our previous discussions we implicitly introduced the idea of statistical inference, which is to say that we have attempted to infer the properties of a population from samples of that population. When we say that z per cent of the population is a minority, or is male, or has an income below $20,000 per year, we are describing only particular aspects of that population. Such descriptive statistics should be made as accurate as possible, even though statistical inference is not limited to mere description. If males earn more money than females in the same position, or if minorities have teen-births more frequently than their fraction of the general population would indicate they should, and the differences are statistically significant, this strongly suggests that societal factors play some part in the causation of these situations. Just as in the physical sciences we wish to understand the whole by examining the parts and determining how the parts are interrelated. In time series this often takes the form of determining how a given aspect of the phenomenon changes over time. The function of the modeler is to identify the possible mechanisms that can account for these changes and to suggest experiments and observations that may crucially test the proposed explanation…
https://doi.org/10.1142/9789812815361_0022
The most comprehensive technique for the study of time series analysis in the social sciences is that of Box and Jenkins [1976], the general class of autoregressive integrated moving average (ARIMA) models. Various pieces of ARIMA were developed by others over the first half of this century, but Box and Jenkins were the first to pull all the pieces together to form a single comprehensive strategy for the understanding and analysis of random time series. In keeping with our general philosophy we shall be selective rather than exhaustive in our presentation of ARIMA modeling, since this is only one tool in our toolbox of statistical methods. More importantly we wish to concentrate on an extension of ARIMA that involves fractional differences and the relation of such differences to the inverse power laws we discussed in the last few lectures. Therefore in this lecture we briefly review the underlying principles of ARIMA modeling and subsequently discuss how to generalize these ideas to include long-term memory…
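A minimal sketch of fitting a Box-Jenkins ARIMA model with the statsmodels library; the simulated series and the (1, 1, 1) order are illustrative choices, not the models fitted in the lectures:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate an illustrative series: a drifting level plus AR(1) noise.
rng = np.random.default_rng(3)
noise = np.zeros(300)
for t in range(1, 300):
    noise[t] = 0.6 * noise[t - 1] + rng.normal()
series = 0.05 * np.arange(300) + noise

# Fit ARIMA(p=1, d=1, q=1): one autoregressive term, one ordinary
# difference to remove the trend, and one moving-average term.
result = ARIMA(series, order=(1, 1, 1)).fit()
print(result.summary())
forecast = result.forecast(steps=10)   # ten steps ahead
```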
https://doi.org/10.1142/9789812815361_0023
We briefly introduced the notion of differencing in the ARIMA modeling in order to generate time series from which trends and drifts have been eliminated. Here we generalize the concept of differencing to fractional values in order to generate processes with inverse power-law memory. In the same spirit as the random walk model, this approach to modeling long-time memory provides us with a conceptually straightforward mathematical representation of rather complex processes. In fact one possible view of the present approach is as a direct extension of the random walk model to non-differentiable phenomena in the continuum limit. Since the continuum limit of simple differences does not exist in the phenomena being considered, the "trends" or long-term memory in the data cannot be removed…
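A minimal sketch of fractional differencing via the binomial expansion of (1 − B)^d, with B the backshift operator; the weights follow the standard recursion, and the value d = −0.3 (fractional integration of white noise, which injects long-time memory) is an illustrative choice:

```python
import numpy as np

def frac_diff(x, d, n_weights=100):
    """Apply the fractional difference (1 - B)^d to a series x, using the
    binomial weights w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k, truncated
    at n_weights terms."""
    w = np.zeros(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    y = np.empty(len(x))
    for t in range(len(x)):
        kmax = min(t + 1, n_weights)
        # Truncated convolution: y_t = sum_k w_k * x_{t-k}
        y[t] = np.dot(w[:kmax], x[t::-1][:kmax])
    return y

rng = np.random.default_rng(4)
white = rng.normal(size=1000)
y = frac_diff(white, d=-0.3)   # negative d = fractional integration,
                               # producing inverse power-law memory
print(y[:5])
```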
https://doi.org/10.1142/9789812815361_0024
We have discussed a number of formal mechanisms by which a dynamical process can possess both regularity and randomness. In the familiar arena of stochastic processes this is referred to as randomness with memory; short-term memory that decays exponentially with time, or long-term memory that decays as an inverse power law in time. We have seen that the ideas from simple random walks can be generalized to include fractional differences, which in turn give rise to long-time memory. This long-range memory has been observed in the teen-birth data, as well as in the data of over 347 species through Taylor's Law (Taylor and Woiwod [1980]). This density-dependent effect appears to be a fairly ubiquitous property of sentient populations. We now wish to examine time series for dynamic, but non-sentient, phenomena and attempt to use these formal mechanisms to better understand those phenomena…
https://doi.org/10.1142/9789812815361_0025
Walking is one of those things that we do every day without giving it much thought. That is, until forceful contact of a table is made with a toe in the dark, or a knee brings back memories of an adolescent afternoon in which a one-man touchdown was discouraged, or the weight of years makes each step a painful experience. But putting these and other such considerations aside, most people walk rather confidently with a smooth pattern of strides and without apparent variation in their gait. This pattern is remarkable when we consider the fact that the motion of walking is created by the destruction of balance; in his treatise on painting, Leonardo da Vinci (1452-1519) points out that nothing can move by itself unless it leaves its state of balance, and, with regard to running, that a "…thing moves most rapidly which is furthest from its balance." So in one respect walking can be viewed as a sequence of fallings, and it has been the subject of scientific study for well over a century, see for example, Weber and Weber [1836]…
https://doi.org/10.1142/9789812815361_0026
All living organisms carry their hereditary information in deoxyribonucleic acid (DNA) molecules (aside from some viruses). DNA resembles a rope ladder with two complementary chains twisted around each other in a right-handed helix. The elements making up the linear chains (polynucleotide sequences) are four nucleotides: adenine (A), guanine (G), thymine (T) and cytosine (C). The two chains are tied together by hydrogen bonds between pairs of nucleotides: A pairs with T by a weak bond (two hydrogen bonds) and G pairs with C by a strong bond (three hydrogen bonds)…
https://doi.org/10.1142/9789812815361_0027
It is quite clear that the dominant characteristics of teen-birth time series data as depicted in Figure 6.1 are random fluctuations superposed on a periodic variation in time. In spite of this periodic regularity predicting teen births is immensely difficult, in part because of the problems associated with the erratic fluctuations in the teen-birth population as well as those in the overall population over time. There is a traditional assumption that causal factors at work in the pattern of a changing phenomenon exert their influence over time in a uniform, or stationary, way. The non-stationary nature of teen-birth data has been a persistent problem for current methods of modeling. Texas teen-birth data are no exception; they exemplify a process that is markedly non-stationary, see, for example, Hamilton et al. [1994a, b]. The method of differencing the data, done to remove trend, as we discussed in the context of the ARIMA modeling strategy, does not make the resulting sequence of teen births stationary…
https://doi.org/10.1142/9789812815361_0028
Our approach to understanding complex phenomena and the uncertainty that goes along with such phenomena has been through scaling. This is quite different from the traditional linear perspective of Sir Isaac Newton and his lineage. We discuss the challenges to that earlier world view made using the nonlinear perspective of Poincaré and his intellectual progeny. In this regard we introduce some of the general ideas from nonlinear dynamical systems theory and discuss the relationship between dynamics and chaos, with its subsequent influence on the certainty of our knowledge about the world…
https://doi.org/10.1142/9789812815361_0029
The modern theory of dynamical systems starts with the seminal works of Hamilton and Lagrange, whose theories describe the continuous changes in mechanical systems over time, including the orbits of celestial bodies. It was the inability of these theories to properly calculate these orbits for long times, the problem of small divisors and the stability of the solar system, that towards the end of the last century prompted Poincaré to develop the geometrical interpretation of differential equations. Lyapunov extended these investigations to provide quantitative measures of stability and therefore to precisely determine just how well or how poorly the solutions to Newton's equations describe the orbital motions of the planets and their satellites. The final resolution of these difficulties was made by Kolmogorov, Arnold and Moser (KAM) over a period of about ten years beginning in the middle of the 1950s. The contributions of these latter scientists are associated with the qualitative theory of differential equations and mappings through their geometrical interpretation and the determination of the structural stability of dynamical systems that are deterministic and finite dimensional. Imagine a car poised on the edge of a cliff. A movement in one direction and the car is on level ground, whereas a small displacement in the other direction and the car falls and crashes on the rocks below. That is structural stability. In our discussion of the stability of motion we shall learn that what we thought was known about determinism, causality and predictability is not as straightforward as we once believed…
https://doi.org/10.1142/9789812815361_0030
The difference between the values of the dynamical variable separated at two consecutive time points (29.7) is not the only way in which maps and continuous dynamical equations can be related. A second way is by strobing the output of the continuous system. Suppose that one does not record the continuous output of a system continuously, but rather uses some prescription to record the output; for example, every five seconds the amplitude of the time series is recorded, or each time the derivative of the time series vanishes the amplitude is recorded, or any of a number of other choices. This prescription provides a discrete set of interrelated data points. The points are related because they are each generated by the same dynamical system even though we have apparently thrown away a large amount of information, that is, the values of the process between measurements. This is what was done with the heart rate, walking and genetic sequence data. Therefore, under some circumstances one can construct a map to generate the same set of data as the continuous generator does, and the map is then a discrete equivalent of the continuous dynamics. The equivalence of the two generators is a consequence of the fact that both the discrete and continuous dynamical descriptions give rise to the same data and are therefore indistinguishable…
https://doi.org/10.1142/9789812815361_0031
The discrete dynamics involving maps we have so far discussed have been concerned with the existence of chaos and not particularly with the properties of chaos, except in so far as we could characterize the irregularities in the solutions to the maps as random. Such things as the possible memory content of a chaotic time series have not been addressed. There is a long-standing belief in the natural sciences that simple systems should have simple descriptions, and complex systems should have complex descriptions. Therefore, the idea that the statistical properties of a time series are interdependent over a long time interval ought to have a somewhat intricate mechanism. However, the rich dynamical behavior of maps may enable one to simply describe quite complex systems, even those with memory, with nothing more than a nonlinear recursion relation. Thus, when confronted with the overwhelming complexity of a typical social or medical phenomenon, one need not necessarily throw up one's hands in despair. The complexity may not be an illusion, but it may well be caused by a knowable deterministic mechanism. When examined for its generator the complexity of the process may collapse into a relatively simple structure. Attention, therefore, should be directed towards the generator of the ongoing process, towards the cause of the irregular behavior, not towards the symptoms…
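A minimal sketch of the point that a one-line nonlinear recursion can generate an erratic time series; the logistic map with parameter r = 4 is an illustrative example, not necessarily the map analyzed in the lectures:

```python
def logistic_map(x0=0.1, r=4.0, n=1000):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n).
    For r = 4 the orbit is chaotic: fully deterministic, yet it looks
    as irregular as many measured signals."""
    x = x0
    series = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        series.append(x)
    return series

orbit = logistic_map()
print(orbit[:5])   # erratic values in (0, 1) from a very simple generator
```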
https://doi.org/10.1142/9789812815361_0032
One of the questions of immediate interest is the length of the memory that develops in the intermittent chaotic processes discussed in Lecture 31. How long does the dynamical process stay in a laminar region before becoming turbulent and making the transition to one of the other laminar regions? Here we follow in part the discussion given by Bai-lin [1989], based as it is on Hirsch et al. [1982], and consider the average time spent in a laminar region, that is, the average time between turbulent bursts. First of all we note that the cubic map g(x) is an explicit function of the control parameter μ; since we are interested in the behavior of the map in the vicinity of the points of tangency x = x*, where μ = μc, we expand g(x, μ) in a double Taylor expansion:
https://doi.org/10.1142/9789812815361_0033
Here we examine how deterministic mechanisms, for example, the planets orbiting the sun under the influence of gravity, the spreading of diseases such as measles through contacts between children, or the growth in the number of bureaucracies over time, can give rise to erratic time series that satisfy all the conditions of randomness, while at the same time remaining deterministic. We argue that there exists a deep relation between the ideas of fractal geometry and chaos. Chaotic time series have a broad-band spectrum, often with an inverse power-law form, and the power-law index is related to the fractal dimension. However, noise can also have an inverse power-law spectrum in which case it is referred to as colored noise, to distinguish it from white noise. White noise was given its name by the mathematician Norbert Wiener, who was studying the mathematical properties of white light. The name was a natural choice for a time series with a broad-band incoherent spectrum. In both the cases of chaos and colored noise the time series is a random fractal in time. For this reason it is often unclear whether or not a given random fractal time series is generated by a low-dimensional deterministic nonlinear dynamical process, chaos, or by colored noise, a random time series arising from the interaction of the system of interest with the infinite-dimensional environment. We shall examine this question in some detail subsequently, since it is quite important that we distinguish between chaos and noise in a given experimental time series, because how we subsequently analyze the data and interpret the underlying process is determined by this judgment. A colored noise signal has in the past motivated the investigator to look for a static fractal structure that is modulating the noisy signal in such a way as to give rise to the fractal dimension. If, however, the signal is chaotic, then the fractal dimension is related to the underlying nonlinear dynamical process and we have some hope of constructing a deterministic dynamical description of that underlying process. A third possibility arises if we recall our discussion of the fractional difference equation driven by white noise. In the above classification scheme this would also be categorized as colored noise…
https://doi.org/10.1142/9789812815361_0034
Let us now consider how to use a time series to reconstruct the attractor in the phase space on which the evolution of the system unfolds using the attractor reconstruction technique (ART). Of course we do not know beforehand if there is an attractor, so this is part of the procedure, to determine if such an attractor does in fact exist. ART was first applied by Packard et al. [1980] to a time series generated by a set of rate equations proposed by Rössler [1976] to describe a typical set of oscillating chemical reactions:
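The displayed rate equations are cut off in this excerpt. As a hedged sketch of the procedure, the code below integrates the standard Rössler [1976] system, keeps only the scalar series x(t) as an experimenter with a single probe would, and reconstructs a trajectory from time-delayed copies of that series; the parameter values a = b = 0.2, c = 5.7 and the delay τ are conventional or illustrative choices, not necessarily those used in the lecture:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, state, a=0.2, b=0.2, c=5.7):
    """Standard Rossler system (conventional parameter values)."""
    x, y, z = state
    return [-y - z, x + a * y, b + z * (x - c)]

# Integrate and keep only the scalar time series x(t).
t_eval = np.arange(0, 500, 0.05)
sol = solve_ivp(rossler, (0, 500), [1.0, 1.0, 1.0], t_eval=t_eval)
x = sol.y[0]

# Attractor reconstruction: embed x(t) with a time delay tau in three
# dimensions as (x(t), x(t + tau), x(t + 2*tau)).
tau = 30   # delay in sampling steps (illustrative)
embedded = np.column_stack([x[:-2 * tau], x[tau:-tau], x[2 * tau:]])
print(embedded.shape)   # points on the reconstructed attractor
```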
https://doi.org/10.1142/9789812815361_0035
We have spent a great deal of time discussing fractal random processes, those with inverse power-law spectra having long-time memory. Two separate approaches have been used to model such phenomena. The first approach was by means of fractional differences in discrete stochastic equations, where we determined that fractional differences induce long-time memory into uncorrelated stochastic processes. The second method was by means of low-dimensional, nonlinear, deterministic dynamical equations having intermittent chaotic solutions. Both of these approaches explain erratic behavior in time series, but neither has been directly connected to the probability density in these lectures. In fact the discussion of fractional differences has been called the discrete analog of fractional Brownian motion (fBm), that is, a process with long-time memory and Gaussian statistics even though we have not shown that explicitly. We now want to examine the continuous case and construct the equations of motion for a filtered delta-correlated process with Gaussian statistics, which is in fact given by a fractional integral. We also investigate an explicit representation of this statistical behavior using a Weierstrass-like function…
https://doi.org/10.1142/9789812815361_0036
In an earlier lecture we demonstrated that one way to model the inverse power-law distributions, so common in the social and medical sciences, is by using fractional-differenced white noise. In the last lecture we suggested that the fractional integral is a linear filter that when applied to a Wiener process yields a Gaussian process with long-time, inverse power-law, memory. We are now in a position to merge these two discussions and explore the relationship between inverse power-law distributions and continuous time processes using the fractional calculus. Now let us turn our attention to the formal derivation of fractional derivatives and integrals using the continuum limit of finite difference equations…
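The finite-difference route to a fractional derivative mentioned above is commonly written in the Grünwald-Letnikov form; the following is a sketch of that standard definition, not necessarily the exact notation used in the lecture:

```latex
% Grunwald-Letnikov fractional derivative: the continuum limit of a
% generalized finite difference, with binomial coefficients of
% non-integer order \alpha.
D^{\alpha} f(t) \;=\; \lim_{h \to 0^{+}} \frac{1}{h^{\alpha}}
\sum_{k=0}^{\lfloor (t-a)/h \rfloor} (-1)^{k} \binom{\alpha}{k}\, f(t - kh).
```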
https://doi.org/10.1142/9789812815361_0037
Every person who has jumped from an airplane and reached free fall recognizes that they reach a terminal velocity after falling for some interval of time. The hydrodynamic drag of the air on the jumper, being quadratic in the speed of the jumper, eventually balances the attractive force of gravity, and the jumper thereafter descends at a constant speed. This balancing of the two forces on the jumper is described by the Riccati equation, see for example Davis [1962]. The following discussion of this equation is taken largely from Metzler, Glöckle, Nonnenmacher and West [1997] and is intended to motivate the generalization of such phenomenological equations to include a description of complex phenomena using the fractional calculus…
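A sketch of the quadratic-drag balance being described, in standard notation; the drag coefficient γ is an illustrative symbol, not necessarily the notation of Metzler et al.:

```latex
% Free fall with quadratic drag: a Riccati equation for the speed v(t).
m \frac{dv}{dt} \;=\; mg \;-\; \gamma\, v^{2},
\qquad
v_{\infty} \;=\; \sqrt{\frac{mg}{\gamma}}
\quad\text{(terminal speed, where drag balances gravity).}
```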
https://doi.org/10.1142/9789812815361_0038
Following the approach from the last lecture we propose to generalize yet another dynamical process, attempting to describe a physical phenomenon that traditionally is inadequately described by ordinary differential equations, to a more satisfactory description using fractional differential equations. Of course, the fractional calculus does not in itself constitute a physical theory, but requires one to interpret the fractional derivatives and integrals in terms of physical models. Therefore, we follow Nonnenmacher and Metzler [1995] and examine a simple relaxation process described by the rate equation
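The displayed rate equation is cut off in this excerpt. As a hedged reference point, the ordinary relaxation equation and the standard fractional generalization discussed in the literature (the well-known Mittag-Leffler result, not necessarily the lecture's exact notation) read:

```latex
% Ordinary relaxation and its exponential solution:
\frac{d\Phi}{dt} = -\lambda\,\Phi(t), \qquad \Phi(t) = \Phi_{0}\, e^{-\lambda t}.
% Replacing the first derivative by a fractional derivative of order
% 0 < \alpha \le 1 yields Mittag-Leffler relaxation, which decays as an
% inverse power law at long times:
\frac{d^{\alpha}\Phi}{dt^{\alpha}} = -\lambda^{\alpha}\Phi(t), \qquad
\Phi(t) = \Phi_{0}\, E_{\alpha}\!\left(-(\lambda t)^{\alpha}\right)
\;\sim\; t^{-\alpha} \ \text{as } t \to \infty .
```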
https://doi.org/10.1142/9789812815361_0039
Up to this point in our use of the fractional calculus we have focused our discussion on ordinary differential equations and their generalization to fractional form. However, we know that the proper description of the evolution of a dynamical random variable X(t) often requires a probability density that is a function of the magnitude of the variate x and the time t, separately. We need to know the probability that X(t) lies in the interval (x, x + dx) at time t given the initial value x0, that is, P(x,t∣x0)dx. Suppose that the dynamical equation for X(t) is of the fractional differential form
https://doi.org/10.1142/9789812815361_0040
We should point out here that starting from a fractional Langevin equation is not the only way to obtain a fractional diffusion equation. For the past few years Zaslavsky has been exploring various ways of obtaining such an equation starting from a non-integrable Hamiltonian. He has used the scaling properties of the chaotic solutions in phase space to construct what he calls the fractional Fokker-Planck-Kolmogorov equation (FFPK), see, for example, Zaslavsky [1994]. In his analysis there are two critical exponents α and β corresponding to the fractional derivatives in space and time, respectively. We derive a similar equation here using the generalized Taylor expansion…
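A schematic of the kind of equation being described, with fractional order β in time and α in space; this is a sketch of the generic form, not necessarily Zaslavsky's exact equation:

```latex
% Schematic fractional Fokker-Planck-Kolmogorov (FFPK) equation:
% fractional order \beta in time and \alpha in space, K a constant.
\frac{\partial^{\beta} P(x,t)}{\partial t^{\beta}}
\;=\;
K\, \frac{\partial^{\alpha} P(x,t)}{\partial |x|^{\alpha}} .
```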
https://doi.org/10.1142/9789812815361_0041
As with any classic tale, this one has a moral. But one that is perhaps not so simply revealed in a sentence or two. Therefore we shift from the literary device of the tale to that of the parable. Consider the case of the frog and the scorpion…
https://doi.org/10.1142/9789812815361_bmatter