TOWARDS UNDERSTANDING THE THEORETICAL CHALLENGES OF NUMERICAL MODELING OF DYNAMICAL SYSTEMS

Asking and answering the right questions about computability and computational complexity of dynamical systems is more important than many may realize. In this note, I discuss the principal challenges, their implications, and some instructive open problems.


Preface
It is a privilege to contribute to the volume in memory of Vaughan Jones. I was one of the invited speakers at the NZMRI Summer Meeting in January 2009. These "Summer schools" were a wonderful initiative of Jones's, leading to the creation of the NZMRI, which he co-founded and chaired. For me it was a remarkable trip. Although I delivered a series of lectures, I also felt a bit like a student, being a newcomer to the questions of algorithmic computability and complexity which were the focus of the meeting. The study of computational complexity in dynamics was making its first baby steps (it is only a little more mature now). It was fun to talk to a receptive audience about results which I was finding exciting, and it was instructive for me to see it as a part of the broader development of the field of algorithmic complexity in continuous (as opposed to discrete) mathematics. It helped me to focus on what I see as the main theoretical challenges of modeling dynamics on a computer, which I attempt to convey in this paper.

Motivation
In discrete mathematics, techniques of numerical modeling and the theoretical study of the computational complexity of the corresponding problems go hand-in-hand. Indeed, typically the same people develop practical algorithms and establish rigorous complexity bounds. The culture of modeling continuous phenomena is completely different. Thus, practitioners bravely use computers to model nuclear explosions, weather systems, or, indeed, the Earth's climate decades from now without ever asking whether such modeling is theoretically possible. I see two principal reasons for this: (I) the modern development of the mathematical study of dynamical systems has been led by computer experiments since the 1950's. Numerical studies have informed the general paradigm of the expected behaviour of typical dynamical systems. In a way, this has become a self-fulfilling prophecy: we look to model natural phenomena using mathematics which was developed to interpret the results of the modeling; (II) numerical experiments of this nature seem to "work", that is, they produce plausible results which agree with the established paradigm. In this note I aim to describe the modern paradigm of numerical modeling of dynamical systems, and its limitations from the point of view of computational complexity and computability, posing questions on what can and cannot be computed in theory. I also speculate on what happens when computer simulations of complex natural processes are performed, why the results appear to make sense, and what they actually describe.

Computers versus chaos.
In what follows, I will only consider discrete-time dynamical systems on a finite-dimensional space. The latter condition means that a state of the system can be described by finitely many parameters, conveniently represented as coordinates of a point in R^m. Discreteness means that the evolution of the system will be represented by a map F from a domain Ω ⊂ R^m to R^m: in one tick of the clock a point x ∈ Ω will transition to F(x). Assuming the successive images of x remain in Ω, we can continue applying iterates of F to obtain the orbit which represents the long-term evolution of the system starting at x. This is in line with actual computational practice. Even if studying a continuous-time dynamical system, which is the flow Φ(t; x_0) of a system of differential equations, in practice one would use a numerical method to estimate F(x) = Φ(∆t; x) for some suitably small ∆t. When computing the values of F(x), by necessity, the real point x has to be rounded off to a rational approximation x_r. Round-off errors in numerical experiments have inadvertently led to the empirical discovery of "chaos", the key to the modern theory of dynamical systems. The most famous and, probably, the most influential of these experiments was performed by the meteorologist E. Lorenz in 1961 [16]. To save time, Lorenz attempted to reproduce a simulation of a weather pattern, restarting it from the output corresponding to some intermediate point in time. The predictions of the model were completely different from the original calculation. The culprit: a round-off error introduced when outputting the intermediate results.
In retrospect, one of the most surprising aspects of this and other similar experimental discoveries is that they were surprising. The general question of (in)stability in dynamical systems was investigated by A.M. Lyapunov [18] in the late 1800's.
An example of what we would now describe as chaotic dynamics was discovered around the same time by H. Poincaré in his seminal study [28] of the Three-Body Problem of the relative motion of the Sun, the Earth, and the Moon.
Rounding off does not need to lead to a dramatic change in the results of the numerical experiments if the orbit of x is Lyapunov-stable. For instance, the contracting one-dimensional map F(x) = 0.5x has a single fixed point x_0 = 0 which attracts the orbits of every point x ∈ R. The round-off error is decreased by half with every iterate, thus the precision of the calculation actually improves (by a single dyadic bit) with each step. The scenario when an orbit converges to an attracting periodic point is commonly referred to as regular dynamics: it presents no obstacles to numerical modeling.
To see how things can go wrong, note first that if the map F is smooth on Ω and ||DF|| is bounded by some M > 1, then when we replace x with a rounded-off value x_r, the distance ||x − x_r|| may grow by a factor of up to M after one iterate. If this happens at every step of the iteration, then each step of the orbit of x may cost us up to log_2 M dyadic bits of precision, so following n steps requires on the order of n log_2 M extra bits of precision, which will quickly become prohibitive.
As an elementary example, consider the doubling map D(x) = 2x mod 1 defined on the circle T = R/Z. Since D′(x) = 2 at every point, iterating one step further requires adding one binary digit to the precision of the computation, so an orbit cannot be followed for long in practice; this phenomenon, known as sensitive dependence on initial conditions, is the main hallmark of "chaos". And the behavior of an orbit can be complicated indeed: some of the sequences {D^n(x)}_{n≥0} are dense in the circle T (for instance, if x is a number whose binary expansion consists of all possible finite binary sequences ordered lexicographically). This is another common feature of chaos, known as transitivity.
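This loss of one bit per iterate is easy to witness numerically. The following sketch (my own illustration, not from the original text; the identifiers are mine) iterates D on the point 1/3 twice, once with exact rational arithmetic and once starting from the nearest double-precision value, and counts the steps until the two orbits visibly separate.

```python
from fractions import Fraction

def doubling(x):
    # One step of the doubling map D(x) = 2x mod 1; works for Fractions and floats.
    return (2 * x) % 1

x_exact = Fraction(1, 3)   # exact orbit; binary expansion 0.010101...
x_float = 1.0 / 3.0        # the same point rounded to 53 binary digits

steps_until_divergence = 0
while abs(float(x_exact) - x_float) < 0.25:
    x_exact = doubling(x_exact)
    x_float = doubling(x_float)
    steps_until_divergence += 1

# The initial round-off is about 2**-54 and doubles with every iterate,
# so the two orbits separate after roughly fifty steps.
print(steps_until_divergence)
```

After about as many iterates as there are mantissa bits, the computed orbit carries no information whatsoever about the true one.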
Tracing individual orbits of D(x) on a computer is out of reach. However, it is easy to describe statistical properties of typical orbits. Recall that a number x ∈ T is normal in base b if for any string Θ = θ_1 θ_2 . . . θ_n of base b digits of length n ∈ N, denoting by N_l(Θ, x) the number of times Θ appears as a substring in the first l digits of the base b expansion of x, we have

lim_{l→∞} N_l(Θ, x)/l = b^{−n};

that is, all finite strings of a given length appear with equal asymptotic frequency in the base b expansion of x. This concept was introduced by Émile Borel, who showed that almost all x ∈ T have this property (this is a consequence of the Borel-Cantelli Lemma). Wall [35] proved that normality is equivalent to the sequence {b^n x} being uniformly distributed in T. Given an arc A ⊂ T and a point x ∈ T, we will generally not be able to identify which iterates D^n(x) lie in A; however, for a typical point x, the fraction of time the orbit spends in A is equal to the length of A.
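The statistical statement, by contrast, is cheap to check numerically. A sketch (my illustration): generate the binary digits of the number described above, whose expansion lists all finite binary strings in order of length, and verify that the orbit of the doubling map spends about a quarter of its time in the arc A = [0, 1/4).

```python
from itertools import product

def all_binary_strings_bits(max_len):
    # Binary digits of x: all finite 0/1 strings listed by length, and
    # lexicographically within each length -- a number normal in base 2.
    for length in range(1, max_len + 1):
        for word in product("01", repeat=length):
            yield from (int(b) for b in word)

bits = list(all_binary_strings_bits(12))

# D^n(x) lies in the arc A = [0, 1/4) exactly when digits n+1 and n+2 of x
# are both 0, so the time average of the orbit in A is the frequency of "00".
hits = sum(1 for n in range(len(bits) - 1)
           if bits[n] == 0 and bits[n + 1] == 0)
frequency = hits / (len(bits) - 1)
print(frequency)   # close to the length of A, which is 0.25
```

We cannot say where the orbit is at step n, but we can say how often it visits A.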

Attractors and physical measures.
To formalize the discussion of statistical averages, let us say that a probability measure µ on Ω is a physical measure of F if the set B(µ) ⊂ Ω (the basin of µ) of points x for which the time averages

(1/n) ∑_{k=0}^{n−1} δ_{F^k(x)} → µ weakly as n → ∞

has positive Lebesgue measure. A physical measure which is ergodic is known as a Sinai-Ruelle-Bowen (SRB) measure. The support of an SRB measure is an attractor for F. Regular dynamics gives one such example: if x_1 → x_2 → · · · → x_p → x_1 is an attracting periodic orbit of F, then the combination of delta-masses (1/p)(δ_{x_1} + · · · + δ_{x_p}) is an ergodic physical measure. The unique SRB measure of D(x) is the Lebesgue measure on the circle, which is another simple example. Attractors (and measures) may be much more complicated, however, such as the Lorenz "butterfly" attractor made famous by E. Lorenz's discovery, whose existence was rigorously proved by W. Tucker [32] (see Figure 1). The principal conjecture (see [27]) in the field can be formulated as follows:

Finiteness of attractors conjecture. For a typical dynamical system F, almost every orbit lies in the basin of an SRB measure, and there are finitely many such measures.
Typicality is understood with respect to Lebesgue measure in a finite-parameter family of maps, presumably given by an application of a physical law, and could be suitably interpreted in an infinite-dimensional space of all maps of a given regularity. This statement is clearly motivated by numerical experimentation and in its turn informs how we approach computer experiments. There is a lot of supporting evidence in its favor (more on that below), but there have also been voices of caution. Notably, D. Ruelle [31] has discussed examples of maps for which the time averages (1/n) ∑_{k=0}^{n−1} δ_{F^k(x)} do not converge for points x in some set of positive Lebesgue measure, and conjectured that in high enough dimension such dynamical systems may occupy open subsets of the parameter space. But at this point no convincing evidence of this has been offered, to my knowledge.
To round off the discussion, let me turn now to the principal example of discrete dynamical systems, which has attracted more attention than the rest of the field combined: quadratic polynomial maps.
3.3. The star of the show. Studying the dynamics of quadratic polynomials has kept everyone in the subject busy from the 1970's to this day. Since linear changes of coordinate turn one quadratic polynomial into another, two of the three coefficients of a degree 2 map can be normalized to our liking. Most commonly, the family f_λ(x) = 4λx(1 − x) is considered; with the additional restriction λ ∈ [0, 1], the function f_λ maps the closed interval [0, 1] to itself. In this family, a very strong version of the Finiteness conjecture holds. To formulate it, let us first introduce some terminology. As usual, the ω-limit set ω(x) of a point x ∈ [0, 1] is the set of limit points of its orbit:

ω(x) = ∩_{n≥0} closure{ f_λ^k(x) : k ≥ n }.

Let us say that f_λ is renormalizable with period p ≥ 2 if there exists a periodic cycle of intervals C = I_0 ∪ I_1 ∪ · · · ∪ I_{p−1} with pairwise disjoint interiors, such that f_λ(I_j) ⊂ I_{(j+1) mod p} and the critical point 0.5 lies in I_0. It is infinitely renormalizable if there exists an infinite nested sequence of such cycles of periodic sub-intervals of [0, 1]: C_0 ⊃ C_1 ⊃ C_2 ⊃ · · · with increasing periods. We say that the Feigenbaum-like Cantor set of an infinitely renormalizable polynomial is the intersection ∩_{k∈N} C_k; it actually is a Cantor set in this case. There are only three possibilities for the structure of the ω-limit set of a typical orbit of a real quadratic polynomial (see [19] where the classification was completed, and references therein):

Theorem 3.1. Let λ ∈ [0, 1]. Then there is a unique set A_λ (a measure-theoretic attractor in the sense of Milnor [24]) such that A_λ = ω(x) for Lebesgue almost all x ∈ [0, 1], and only one of the following three possibilities can occur: (1) A_λ is an attracting periodic cycle; (2) A_λ is a periodic cycle of intervals; (3) A_λ is the Feigenbaum-like Cantor set of an infinitely renormalizable f_λ. In all of the above cases, A_λ = ω(x) for a dense G_δ set of points in [0, 1] (the topological attractor of f_λ in the sense of Milnor [24]).
Furthermore, M. Lyubich [20] showed:

Theorem 3.2. For almost every λ ∈ [0, 1], the attractor A_λ is as in (1) or (2) of Theorem 3.1, is equal to ω(0.5), and is the support of the unique SRB measure µ_λ of f_λ, whose basin has full measure in [0, 1]. Moreover, in case (2), the measure µ_λ is absolutely continuous on each of the intervals forming A_λ.
Thus, for almost all parameters λ, the maps of the family f_λ possess a unique SRB measure µ_λ supported on an attractor A_λ whose structure is well understood, and which attracts typical (in every reasonable sense) orbits of f_λ. One potential cause of concern is the instability of the statistical properties of orbits with respect to λ. M. Misiurewicz showed in [25] that there exist a test function φ : [0, 1] → R, a set S ⊂ [0, 1] of positive measure, and ε > 0, such that for any λ ∈ S, every neighborhood U of λ contains parameters for which the limiting time averages of φ differ from those for λ by more than ε; he called this property strong structural instability, and it is likely common in other natural families of examples.
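The trichotomy of attractor types is easy to observe in a naive floating-point experiment (with all the caveats about such experiments that this note raises). A sketch, under the assumption that the family is normalized as f_λ(x) = 4λx(1 − x); the helper names are mine.

```python
def f(lam, x):
    # The quadratic family, in the assumed normalization f_lam(x) = 4*lam*x*(1-x),
    # which maps [0, 1] into itself for lam in [0, 1].
    return 4 * lam * x * (1 - x)

def attractor_sample(lam, transient=10_000, keep=64):
    # Iterate past a long transient, then record the orbit coarsely rounded,
    # so that an attracting cycle shows up as a small set of distinct values.
    x = 0.123
    for _ in range(transient):
        x = f(lam, x)
    orbit = []
    for _ in range(keep):
        x = f(lam, x)
        orbit.append(round(x, 6))
    return sorted(set(orbit))

print(len(attractor_sample(0.70)))   # attracting fixed point: one value
print(len(attractor_sample(0.80)))   # attracting cycle of period 2: two values
print(len(attractor_sample(0.975)))  # chaotic regime: many distinct values
```

Of course, such an experiment only suggests the attractor type; as discussed below, deciding it rigorously is a different matter entirely.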
Quadratic polynomials play a similarly central role in the study of the dynamics of functions of one complex variable. In this case, the normalization P_c(z) = z^2 + c, with the parameter c ∈ C, is more common. A polynomial P_c has a single repeller J_c, which attracts the inverse orbits z ↦ P_c^{−n}(z) of every point z ∈ C except at most one. It is called the Julia set of P_c. Again, for all points z ∈ C but at most one, the weighted averages of delta-masses on the preimages, 2^{−n} ∑_{P_c^n(w)=z} δ_w, weakly converge to a measure ω_c whose support is equal to J_c. It is known as the Brolin-Lyubich measure and coincides with the harmonic measure of the Julia set seen from the point at infinity. The subject of computability and computational complexity of Julia sets is rich and subtle; to keep the discussion focused I will mostly leave it out of this note. I refer the interested reader to my monograph with M. Braverman [8], and also the paper [2], in which the computability of ω_c is addressed.

Computability and Computational Complexity: Briefly
The formal model of computation widely accepted in computer science is a Turing machine (TM); it is provably equivalent to a program in any common programming language running on a computer with unlimited storage. I will use the terms TM, computer program, and algorithm interchangeably. Definitions of computability of continuous objects took a while to develop; the subject is not without controversy (see [4] for a review). The first step was taken by Turing in his foundational work [33], in which he defined a computable real number x as a number for which there exists a TM which takes a single input n ∈ N and outputs a rational number d_n such that |x − d_n| < 2^{−n}. The bound 2^{−n} can be replaced by any other quantity that converges to 0 with n in a constructive fashion; for instance, using 1/n in its stead will give an equivalent definition. This specific choice reflects the fact that modern computers think in binary. If d_n ∈ Z[1/2] is a dyadic rational, represented in base 2 as a finite string of dyadic bits, then we can think of it as an approximation of x to the n-th bit. Most numbers are not, in fact, computable, since in any programming language there can be only countably many programs; in his paper, Turing gave a "specific" example of a non-computable real. Starting with computable reals, one can define, for instance, computable complex numbers, computable points in R^m, and so on.
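In this sense a number like √2, while irrational, is perfectly computable: there is a simple machine which, on input n, produces a dyadic approximation to the n-th bit. A sketch (my illustration, not from the text), using only integer arithmetic:

```python
from fractions import Fraction
from math import isqrt

def sqrt2_approx(n):
    # On input n, output a dyadic rational d_n with |sqrt(2) - d_n| < 2**-n:
    # floor(2**n * sqrt(2)) / 2**n, computed via an exact integer square root,
    # since isqrt(2 * 4**n) = floor(sqrt(2**(2n+1))) = floor(2**n * sqrt(2)).
    return Fraction(isqrt(2 * 4 ** n), 2 ** n)

d = sqrt2_approx(50)
print(float(d))                                 # 1.4142135623...
print(abs(d * d - 2) < Fraction(1, 2 ** 48))    # True: d is within 2**-50 of sqrt(2)
```

The machine is uniform in n: one finite program produces every bit of the infinite object on demand.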
Turing's idea clearly corresponds to the actual computational practice, and gives the direction for further development of computability in analysis. It is interesting to note (and is not as well known) that further progress owes much to S. Banach and S. Mazur [1,21]. Most of the subtlety in this development lies in the difficulty of defining a computable function f (x) of a real variable x. Since only countably many real numbers x are computable, one has to somehow account for the inputs which are not. The modern approach again corresponds to the actual computing practice: a real function f (x) is computable if there is an algorithm to approximate the value of f (x) with an arbitrary desired precision, assuming that the value of x can be read by the algorithm with whatever precision is required.
To formalize this, let us say that a function φ : N → Q is an oracle for x if |x − φ(n)| < 2^{−n} for every n ∈ N. An oracle for z ∈ C or x ∈ R^m is defined in the same way. A TM may be allowed to query an oracle, that is, to read the values φ(n). A real function f is computable if there exists a TM which takes a single input n ∈ N and, given an oracle for x in the domain of definition of f, outputs d_n ∈ Q such that |f(x) − d_n| < 2^{−n}.
If all of this sounds a bit confusing, think, for example, of the problem of computing the value of the exponential function exp(x) with a given precision 2^{−n}. To fix the ideas, let us suppose that we use the Maclaurin series for the exponential. The Lagrange formula for the remainder after the terms up to degree m is R_m(x) = e^c x^{m+1}/(m+1)!, where c is a real number between 0 and x. The algorithm can first ask the user for a rough approximation of the value of x to get an integer upper bound B > |x| + 1 for it (in practice, a uniform such bound may be imposed by restricting the range of allowed inputs x). The value of |R_m(x)| can then be bounded from above crudely by E_m = 3^B B^{m+1}/(m+1)!. From this, the algorithm can calculate the value of m needed to get E_m < 2^{−(n+1)}, and the precision 2^{−s} with which x needs to be known to compute the sum of the Maclaurin series up to degree m with error of at most 2^{−(n+1)}. Now the user is again asked to input the value of x, this time with precision 2^{−s}, and the calculation is completed. This is cumbersome, of course, but the key point here is that it can actually be done for the exponential function, so there is no theoretical obstacle to approximating the value exp(x) if we set our mind to it.
In our discussion of computability of attractors in the quadratic family, we will talk about computable compact sets in the plane. In practice, by computing a set, we mean visualizing it on a computer screen. A visualization is a finite collection of pixels. If we fix a specific pixel size (commonly taken to be 2^{−n} for some n) then to accurately draw the set within one pixel size, we should fill in the pixels which are close to the set (for instance, within distance 2^{−n} from it), and leave blank the pixels which are far from it (for instance, at least 2^{−(n−1)}-far). Thus, for the set K to be computable, there has to exist an algorithm which for every square of size 2^{−n} with dyadic rational vertices correctly decides whether it should be filled in or not according to the above criteria.
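The two-round protocol for the exponential described above can be written out in full. The sketch below is my illustration, not part of the original text: the oracle is modeled as a callable, and the crude constants (e < 3, 3^B < 4^B) are simplifying assumptions; the arithmetic is exact throughout.

```python
from fractions import Fraction
from math import factorial

def exp_with_oracle(oracle, n):
    # Output a rational d with |exp(x) - d| < 2**-n, reading x only through
    # its oracle: oracle(s) must return a rational within 2**-s of x.
    B = int(abs(oracle(1))) + 3          # integer bound satisfying B > |x| + 1
    # Pick m so that the remainder bound E_m = 3**B * B**(m+1) / (m+1)!
    # drops below 2**-(n+1)  (using e**c <= e**B < 3**B for |c| < B).
    m = 0
    while Fraction(3 ** B * B ** (m + 1), factorial(m + 1)) >= Fraction(1, 2 ** (n + 1)):
        m += 1
    # The partial sum is (3**B)-Lipschitz on [-B, B], and 3**B < 4**B, so
    # reading x to precision 2**-s with s = n + 1 + 2B keeps the error of
    # evaluating the partial sum below 2**-(n+1) as well.
    s = n + 1 + 2 * B
    x = Fraction(oracle(s))
    return sum(x ** k / Fraction(factorial(k)) for k in range(m + 1))

# A toy oracle for x = 1 (an oracle may return any close enough rational):
d = exp_with_oracle(lambda s: Fraction(1), 10)
print(abs(float(d) - 2.718281828459045) < 2 ** -10)   # True
```

Note that the machine queries the oracle twice, exactly as in the description: once crudely, to size the problem, and once at the precision the error analysis dictates.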
(This definition easily generalizes to subsets of R^m for any m.)
Suppose that we are trying to compute the attractor A_λ of f_λ(x). Then we should assume that the algorithm has access to an oracle for λ. In other words, we will treat the dependence of the attractor on λ as a function Φ : λ ↦ A_λ, and will ask whether it is computable at a point λ_0, or, more generally, on some subset of [0, 1]. This is completely consistent with computing practice: if we would like to compute an object (an invariant set, an SRB measure, etc.) associated with a dynamical system F : Ω → R^m, then we expect the algorithm to "know" what the function F is, that is, to be able to approximate the values of F with any given precision (see footnote 5). To talk about computability of measures, we can follow the same strategy, using approximations by a dense subset of explicitly defined measures, such as rational combinations of delta-masses at rational points, for instance. However, as is typically the case with measures, it is more convenient to take the dual approach. We will say that a measure µ is computable if the following is true. There is a TM which, given a computable test function φ (as a subprogram, or a black box) and n ∈ N, outputs d_n ∈ Q such that |∫ φ dµ − d_n| < 2^{−n}.
I would like to emphasize that all of the above definitions are consistent with actual computing practice. While it is true that practitioners will often cut corners and not worry too much about the round-off error in a calculation, at least for a computable object (a real number, the value of a real function, an integral over a measure, etc.) the calculation can in principle be carried out with the desired precision. For a non-computable object, no algorithm can do the job: all of them will provably fail. Moreover, they must fail in such a way that we would not have a systematic way (an algorithm) of detecting it (which would otherwise enable us to pick a correct finite precision approximation by discarding the incorrect ones).
Footnote 4: Note that we let the algorithm treat the "in-between" pixels in an arbitrary fashion, to reflect the fact that a finite-precision computation may have round-off errors. We only require that these errors are small enough, so the "fuzz" around the approximation of the set is at most one pixel thick.
Footnote 5: Think of having a black box which, given an oracle for x, outputs the values of F(x) with any desired precision.
An extremely important practical consideration for a computable object is its computational complexity, since computing resources are finite in practice. In this note I will only address the time complexity of the computations. A complexity bound is expressed by a function T : N → N, whose argument is the desired precision, and whose output is a bound on the computing time. For instance, we say that the complexity of a set K ⊂ R^2 is bounded above by T(n) if there exists an algorithm to visualize K on a computer screen, and N ∈ N such that for every n ≥ N and each pixel of size 2^{−n}, the time that the algorithm takes to decide the pixel is bounded above by T(n). Conversely, the time complexity is bounded below by T(n) if for every program computing the set, there exists an infinite sequence of pixels P_{n_k} of size 2^{−n_k} such that the time it takes the program to decide P_{n_k} is greater than T(n_k).
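As a concrete instance of deciding pixels, here is a sketch (my example, not from the text) for a set we understand completely, the unit circle K = {|z| = 1}. Each pixel is decided from its center alone, with the "in-between" slack of the definition absorbing the half-diagonal uncertainty; each decision takes a bounded number of operations, so this K is trivially computable in polynomial time.

```python
from math import hypot

def decide_pixel(i, j, n):
    # Decide the pixel [i, i+1] x [j, j+1], scaled by 2**-n, for the unit
    # circle K = {|z| = 1}: fill pixels within 2**-n of K, leave blank the
    # ones that are 2**-(n-1)-far; in between, either answer is allowed.
    size = 2.0 ** -n
    cx, cy = (i + 0.5) * size, (j + 0.5) * size   # pixel center
    dist = abs(hypot(cx, cy) - 1.0)               # distance from center to K
    # Every point of the pixel is within half a diagonal (< 0.71 * size) of
    # the center, so this threshold fills every pixel that must be filled
    # and blanks every pixel that must be blanked.
    return dist <= 1.71 * size

n = 8
filled = sum(decide_pixel(i, j, n)
             for i in range(2 ** n + 2) for j in range(2 ** n + 2))
print(filled)   # the filled pixels trace the quarter-circle in the first quadrant
```

For a Julia set or a quadratic attractor, it is exactly this per-pixel decision that may become non-computable or prohibitively expensive, as discussed below.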
You will have noticed that the above definition of complexity depends on the programming language that is used; the computation may run faster in some of them than in others. However, changing the language changes the computing time for each step by at most a constant multiple, and we are generally only interested in the rate at which T(n) increases with n. Any computation whose time grows faster than polynomial in n becomes impossible in practice for large values of n. Problems with polynomial-time complexity are generally considered to be computably tractable; having a polynomial upper bound for time complexity is not the same, of course, as having an efficient algorithm, but the two usually go hand-in-hand. If the lower bound on T(n) is, for example, exponential in n, then there is no realistic hope of computing a high-precision approximation of a continuous object, no matter what we try.

Can the Statistical Behavior of a Dynamical System be Modeled on a Computer?

5.1. Monte Carlo modeling. Lorenz's observations on the difficulty of modeling the weather have not put an end to weather modeling. Instead, weather forecasts are made in the language of statistics (e.g. there is a 40% chance of rain tomorrow). The probabilistic approach to modeling is consistent with the paradigm which we have presented above, in which a typical dynamical system has finitely many stable limiting regimes into which almost all orbits eventually fall. The technique itself, known as the Monte Carlo method, dates to the very dawn of computer modeling of natural processes. It was pioneered by S. Ulam and J. von Neumann during their work in Los Alamos in 1946 [34,23,22] to model neutron diffusion. Informally speaking, given a numerical simulation of a dynamical system F : Ω → R^m, one can throw random darts to select a large number of initial values; run the simulation for the desired duration for each of them; then statistically average the outcomes. We then expect these averages to reflect the true average statistics of our system. More formally, let x_1, . . . , x_k be k randomly chosen points in Ω, for some k ≫ 1, and consider the probability measures

(5.1)   µ_{k,n} = (1/k) ∑_{i=1}^{k} (1/n) ∑_{j=0}^{n−1} δ_{F^j(x_i)},

where δ_x is the delta-mass at the point x ∈ R^m. The expectation is then that as k, n → ∞ the probabilities µ_{k,n} converge to a limiting statistical distribution that we can use to make meaningful long-term statistical predictions of our system. If the finiteness of attractors postulate holds for F, then the limit of the sums (5.1) is almost surely equal to the weighted average of the physical measures, where the weights correspond to the proportions of Ω occupied by the basins of the measures. In particular, if there is a unique attractor with an SRB measure in our dynamical system, as is the case for a typical quadratic map f_λ, then one random dart to pick an orbit in (5.1) will suffice to obtain this measure in the limit.
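As a toy version of this procedure (my illustration, not from the text), take the chaotic map F(x) = 4x(1 − x) on [0, 1], whose SRB measure is known explicitly: dµ = dx/(π√(x(1 − x))), with mean 1/2. Throwing random darts and averaging the test function φ(x) = x along the simulated orbits recovers that mean:

```python
import random

def F(x):
    # The chaotic quadratic map on [0, 1]; its SRB measure is the arcsine
    # distribution dx / (pi * sqrt(x * (1 - x))), whose mean is 1/2.
    return 4 * x * (1 - x)

random.seed(0)
k, n = 200, 2000
total = 0.0
for _ in range(k):          # k random darts for the initial conditions
    x = random.random()
    for _ in range(n):      # run each simulated orbit for n steps
        x = F(x)
        total += x

estimate = total / (k * n)  # Monte Carlo average of the test function phi(x) = x
print(estimate)             # close to 1/2, the mean of the SRB measure
```

Since this map has a unique SRB measure whose basin has full measure, even a single dart (k = 1) would give the same limit.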
Of course, there are examples of systems with infinitely many physical measures. For instance, Newhouse [26] showed that a polynomial map in dimension 2 can have infinitely many attracting basins, on each of which the dynamics converges to a different stable periodic regime. This in itself, however, is not necessarily an obstacle to the Monte Carlo method, since the weighted average over the attracting basins is still a meaningful statistic. Indeed, the empirical belief is that the Monte Carlo method still succeeds in these cases.

When limiting statistics cannot be computed.
It is hard to overestimate the importance of Monte Carlo simulations in computing practice: it is the go-to tool of numerical simulation. In light of this, it was particularly surprising for C. Rojas and me [30] to discover the following:

Theorem 5.1. There exist uncountably many parameters λ ∈ [0, 1] such that:
• the quadratic map f_λ has a unique SRB measure µ_λ whose basin has full measure in [0, 1];
• the measure µ_λ is not computable by a TM with an oracle for λ.
Monte Carlo method (as well as any other numerical algorithm) would thus definitely fail to compute the limiting statistics of the orbits -even though almost every orbit has the same limiting statistics.
The measures µ λ constructed in [30] are not absolutely continuous with respect to the Lebesgue measure, however, this does not seem to be a distinguishing feature. It is not yet known whether absolutely continuous non-computable examples exist in this family, but I suspect that this is the case.
The first instinct of a theoretical computer scientist would be to ask whether a parameter λ with a non-computable µ_λ can itself be computable. While we have not proved that it is so, it is very likely to be the case. In fact, the proof of Theorem 5.1 has little to do with the parameter λ itself, but rather is based on a very precise description of the instability of the dependence λ ↦ µ_λ, similar in spirit to the strong structural instability of [25]. Instability thus becomes a theoretical barrier to computing the limiting distribution.
A more important question to ask is whether such parameters λ can form a set of positive Lebesgue measure. The answer to this is likely "no": we expect µ λ to be computable for almost all λ. The set of parameters produced in [30] has zero Lebesgue measure. However, the situation may change when one considers higher dimensional dynamical systems -it is possible that there exist large sets of parameters for which limiting statistics cannot be computed.
Even if in the quadratic family f λ , the measure µ λ is almost always computable, would it almost always be computable in polynomial time? The existence of noncomputable examples may forecast trouble.
Modelers should at least be aware that even in the simplest families with the nicest limiting statistical properties, numerical methods such as Monte Carlo may fail for reasons of non-computability or high computational complexity. There is no practical way around this obstacle. The limiting distribution µ_λ for the maps in Theorem 5.1 is truly unknowable.

Can one Compute an Attractor as a Set?
To be more precise, can we compute the set of limit points of a typical orbit? This is another very natural question from the point of view of modeling. It is clearly complementary to the statistical modeling discussed above, and the evidence is mixed as we will see below.
In the family f_λ, C. Rojas and I [29] have shown:

Theorem 6.1. The attractor A_λ is always computable by a TM with an oracle for λ. However, for any function T : N → N, there exists a parameter λ ∈ [0, 1] such that the attractor A_λ is a Feigenbaum-like Cantor set whose computational complexity (given an oracle for λ) is bounded below by T.
By M. Lyubich's Theorem 3.2, Feigenbaum-like attractors only occur for a set of parameters of measure zero, and indeed, as shown in [29], in all other cases the attractor is computable in polynomial time, given an oracle for λ. I note that despite this encouraging fact, the mechanism of creating a high-complexity attractor in Theorem 6.1 is quite robust, and it is again likely that in higher-dimensional families of maps this phenomenon is much more prevalent.
As I have mentioned, M. Braverman and I have studied the question of computability of the Julia sets J_c of complex quadratic polynomials P_c(z) = z^2 + c and demonstrated [6,8,9] that there exist values of c ∈ C such that J_c is not computable with an oracle for c (and, in fact, such c's can be produced constructively [7], obviating the need for an oracle). Values of c for which J_c is not computable form an uncountable set, which is, however, quite small in the measure-theoretic sense. It not only has zero area but, subject to a natural conjecture, also has zero Hausdorff dimension. It is plausible that for a typical value of c, the set J_c is computable in polynomial time, and this is indeed true for a typical real value of c, as shown by A. Dudko and me in [10].
The situation with Julia sets in two or more complex dimensions is entirely unknown, and any results, either positive or negative, would be most interesting.

What does Computer Modeling tell us in Practice?
I am far from being the first person to ask this question, and there are numerous papers on the subject; for a sample of references spanning three decades see [15,14,17]. Of course, the only theoretical obstacle to rigorously approximating a computable object, such as a physical measure or an attractor of a dynamical system, with whatever precision is desired is the amount of computational resources required. In the case when there is an efficient (polynomial-time) algorithm, the computation can be carried out with an arbitrary degree of accuracy, but it has to be done carefully, to control the round-off error. There are widely available computing packages which implement arbitrary-precision arithmetic and automate the process. However, this approach is computationally costly, as I could see first-hand from my own very limited experience with this type of computation [11]. Arbitrary-precision computations are immensely valuable in situations when a single high-precision estimate needs to be obtained. They are the foundation of computer-assisted proofs in Analysis, since the pioneering work of Lanford [13].
Still, in most situations, numerical computations are carried out with a high, but finite, precision, usually based on the rounding conventions of the programming language used: something along the lines of double-precision "reals" in C. Clearly, we cannot hope to faithfully recreate the properties of the orbits of a dynamical system with this approach. Suppose we are given a dynamical system F acting on a bounded domain Ω ⊂ R^m. A finite-precision discretization of F replaces it with a map G whose domain and range consist of points whose coordinates are dyadic rationals of the form p · 2^{−n}, where both n and p are bounded integers. This means that G acts on a finite set, so all of its orbits are finite, and eventually fall into periodic cycles. This is an artefact of the discretization and, if the latter is not well-chosen, may result in an unfortunate scenario when all of the orbits seem to quickly collapse into very few cycles (see the insightful discussion in [14] and references therein). To provide a mathematical justification, iteration of a rounded-off map is usually interpreted as random iteration. That is, at the k-th step of the iteration, F is replaced with F + ε_k, where ε_k is randomly chosen in a ball of radius δ > 0 around the origin. This approach has been studied since the early days of the subject; in [14], Lanford credits D. Ruelle with suggesting it to explain the empirical behaviour of discretized chaotic maps.
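For the doubling map the collapse is total, and can be checked exactly: on the grid of dyadic rationals p/2^n, one application of D doubles p modulo 2^n, so after n steps every grid point lands on 0. A sketch (my illustration):

```python
def D_discretized(p, n):
    # The doubling map restricted to the grid {p / 2**n}: x -> 2x mod 1
    # becomes p -> 2p mod 2**n on the integers 0 .. 2**n - 1.
    return (2 * p) % (2 ** n)

n = 16
cycle_points = set()
for p in range(2 ** n):        # follow every grid point into its eventual cycle
    for _ in range(n):         # n steps suffice: 2**n * p = 0 (mod 2**n)
        p = D_discretized(p, n)
    cycle_points.add(p)

# A chaotic, transitive map has collapsed onto the single fixed point 0.
print(cycle_points)   # {0}
```

The discretized system retains none of the sensitive dependence, transitivity, or statistics of the true dynamics.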
Random iteration was studied from the point of view of algorithmic computability by the authors of [5], who found that, if ε_k is chosen uniformly and F is given by an oracle, then for almost all δ all ergodic measures of the random dynamical system are algorithmically computable. Thus, random noise destroys non-computability.
Rounding off, at least if interpreted in this way, masks the non-computable phenomena.
Another approach is to study the computability of the sets of pairs {(ν, O_ν) : ||ν|| < δ}, where ν is the parameter of the dynamical system F_ν, and the object O_ν is, for instance, an attractor. This seems to mask non-computability in a similar fashion; for instance, M. Braverman and I showed that the set {(c, J_c)} is computable [8].
Rather than interpreting such statements as positive news, I see them as yet another reason for caution -since they suggest that the results produced by finiteprecision iteration may be much nicer than the underlying dynamics.

A few Words in Conclusion
The field of computational complexity of continuous processes is only beginning to take shape now. Practitioners should be aware of the potential theoretical obstacles to numerical modeling of dynamical systems. There are two plausible explanations for the seeming effectiveness of numerical methods: the possibility that a typical dynamical system is computably tractable (and thus, all is good with the modeling world), or the less welcome scenario in which our approach to numerical modeling replaces the underlying dynamical systems with ones that are computably tractable and may thus blind us to the reality in some instances. Distinguishing between them motivates most of the open problems I have discussed in this note. Without a doubt, many fascinating discoveries await in the emerging new synthesis of analysis and theoretical computer science.