Introduction to Dynamical Systems
In this tutorial we are concerned with dynamical systems. A simple
definition of a dynamical system, adequate for our purposes, is as follows.
A dynamical system consists of a state space together with an operation that
maps the state at the current time to the state at some later time. Thus
the current state contains all the information needed to completely describe
the system at all future times. In other words, to
determine the state of the system at a future time, only the state at the
current time is needed; information about past states is not necessary.
The state at some initial time t0 can be considered an initial
condition. We consider two classes of dynamical systems: discrete-time
and continuous-time dynamical systems.
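
To make the definition concrete, here is a minimal sketch of such a system
in Python. The language and the particular map f(x) = 0.5x + 1 are
illustrative assumptions, not part of the tutorial: the state space is the
real numbers, and the operation maps the current state to the state one
time step later.

# A minimal sketch of a dynamical system: a state space (here the real
# numbers) together with an operation that maps the state at the current
# time to the state one time step later.  The specific map used,
# f(x) = 0.5*x + 1, is an arbitrary illustrative choice.

def f(x):
    """Evolution operation: maps the state at time k to the state at time k+1."""
    return 0.5 * x + 1.0

# Knowing only the current state is enough to determine all future states:
state = 4.0           # initial condition x0 at time t0 = 0
for k in range(5):
    print(f"time {k}: state {state}")
    state = f(state)  # no past states are needed, only the current one
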
In discrete-time dynamical systems, time can only take on discrete values,
usually the integers, while in continuous-time dynamical systems, time
can be any real number. Given an initial condition x0 at
t0, we can find the state at all future times t, denoted x(t).
The collection x(t) for all (future) times t is called the
trajectory. Sometimes x(t) is written as x(t,t0,x0) to
indicate its dependence on t0 and x0. If x(t,t0,x0) does
not depend on t0, then the system is called autonomous.
Otherwise, the system is called nonautonomous.
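
As an illustration of these notions, the sketch below (again in Python,
with two arbitrarily chosen maps as assumptions) computes the trajectory
x(t,t0,x0) of a discrete-time system by iterating its map from the initial
condition x0 at time t0. It contrasts an autonomous map with a
nonautonomous one: for the autonomous map the trajectory from a given x0
is the same for any starting time t0, while for the nonautonomous map it
is not.

# A sketch (with illustrative, assumed maps) contrasting an autonomous
# discrete-time system with a nonautonomous one.  The trajectory
# x(t, t0, x0) is computed by iterating the map from the initial
# condition x0 starting at time t0.

def trajectory(update, x0, t0, t_end):
    """Return the states x(t0), x(t0+1), ..., x(t_end) as a list."""
    xs = [x0]
    x = x0
    for t in range(t0, t_end):
        x = update(t, x)
        xs.append(x)
    return xs

autonomous    = lambda t, x: 0.5 * x + 1.0   # map does not depend on t
nonautonomous = lambda t, x: 0.5 * x + t     # map depends explicitly on t

# For the autonomous system the trajectory does not depend on t0
# (only on how many steps have elapsed since t0)...
print(trajectory(autonomous, 4.0, 0, 3))     # starting at t0 = 0
print(trajectory(autonomous, 4.0, 5, 8))     # starting at t0 = 5: same states

# ...whereas for the nonautonomous system the same x0 gives different
# trajectories for different starting times t0.
print(trajectory(nonautonomous, 4.0, 0, 3))
print(trajectory(nonautonomous, 4.0, 5, 8))
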
Continue to one dimensional maps.