
Week 4 - Time Series

  • Stochastic (Random) Process: \(\{\dots,Y_1,Y_2,\dots,Y_t,Y_{t+1},\dots\} = \{Y_t\}_{t=-\infty}^\infty\) is a sequence of random variables indexed by time.
  • Observed time series of length T: \(\{Y_1=y_1,Y_2=y_2,\dots,Y_T=y_T\} = \{y_t\}_{t=1}^T\)

Strict Stationarity

  • Intuition: \(\{Y_t\}\) is stationary if all aspects of its behavior are unchanged by shifts in time.
  • A stochastic process \(\{Y_t\}_{t=1}^\infty\) is strictly stationary if, for any given finite integer r and for any set of subscripts \(t_1,t_2,\dots,t_r\), the joint distribution of \((Y_t,Y_{t_1},Y_{t_2},\dots,Y_{t_r})\) depends only on \(t_1 - t, t_2-t, \dots, t_r -t\) but not on t itself.
  • For a strictly stationary process, \(Y_t\) has the same mean and variance for all t.
  • Any function/transformation of a strictly stationary process, \(\{g(Y_t)\}\), is also strictly stationary.

Covariance Stationarity

A process \(\{Y_t\}\) is covariance stationary if:

  • \(E[Y_t] = \mu\) for all t
  • \(Var(Y_t) = \sigma^2\) for all t
  • \(Cov(Y_t,Y_{t-j}) = \gamma_j\) depends on j and not on t. This is called the lag-j autocovariance.
  • Under covariance stationarity: \(Corr(Y_t,Y_{t-j}) = \rho_j = \frac{Cov(Y_t,Y_{t-j})}{\sqrt{Var(Y_t)\,Var(Y_{t-j})}} = \frac{\gamma_j}{\sigma^2}\)
  • The autocorrelation function (ACF) is the plot of \(\rho_j\) against j (see the sketch below for its sample counterpart).
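
As an illustration, here is a minimal Python sketch of how the sample autocovariances \(\hat{\gamma}_j\) and the sample ACF \(\hat{\rho}_j\) could be computed from an observed series; the helper name sample_acf and the use of NumPy are our assumptions, not part of the course material.

<code python>
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocovariances gamma_hat_j and autocorrelations rho_hat_j."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ybar = y.mean()
    # gamma_hat_j = (1/T) * sum_{t=j+1}^{T} (y_t - ybar)(y_{t-j} - ybar)
    gammas = np.array([np.sum((y[j:] - ybar) * (y[:T - j] - ybar)) / T
                       for j in range(max_lag + 1)])
    rhos = gammas / gammas[0]  # rho_hat_j = gamma_hat_j / gamma_hat_0
    return gammas, rhos

# Example: the sample ACF of Gaussian white noise is ~0 at every lag j >= 1.
rng = np.random.default_rng(0)
gammas, rhos = sample_acf(rng.normal(0.0, 1.0, size=1000), max_lag=5)
print(np.round(rhos, 3))
</code>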

Gaussian White Noise (GWN)

  • \(Y_t \sim iid~ N(0,\sigma^2)\) or \(Y_t \sim GWN(0,\sigma^2)\)
  • \(E[Y_t] = 0, ~ Var(Y_t)=\sigma^2\)
  • \(Y_t\) is independent of \(Y_s\) for \(t \neq s\) ⇒ \(cov(Y_t,Y_{t-s})=0\) for \(t \neq s\)
  • Note that “iid” means “independent and identically distributed”
  • Here, \(\{Y_t\}\) represents random draws from the same \(N(0,\sigma^2)\) distribution.

Independent White Noise (IWN)

  • \(Y_t \sim iid~ (0,\sigma^2)\) or \(Y_t \sim IWN(0,\sigma^2)\)
  • \(E[Y_t] = 0, ~ Var(Y_t)=\sigma^2\)
  • \(Y_t\) is independent of \(Y_s\) for \(t \neq s\)
  • Here, \(\{Y_t\}\) represents random draws from the same distribution. But we do not specify what the distribution is (only its mean and variance).

White Noise (WN)

  • \(Y_t \sim WN(0,\sigma^2)\)
  • \(E[Y_t] = 0, ~ Var(Y_t)=\sigma^2\)
  • \(cov(Y_t,Y_{t-s})=0\) for \(t \neq s\)
  • Here, \(\{Y_t\}\) represents an uncorrelated stochastic process with the given mean and variance. Uncorrelatedness does not imply independence: the process may still exhibit non-linear dependence, as the sketch below illustrates.
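
To see that white noise need not be independent, here is a minimal sketch of a construction of ours (not from the lecture): with \(Z_t \sim iid~ N(0,1)\), the process \(Y_t = Z_t Z_{t-1}\) is uncorrelated at every lag, yet the squares \(Y_t^2\) are correlated because adjacent terms share \(Z_{t-1}\).

<code python>
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=100_001)
y = z[1:] * z[:-1]  # Y_t = Z_t * Z_{t-1}: white noise, but not independent

def rho1(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

print(rho1(y))       # ~0: uncorrelated, consistent with WN
print(rho1(y ** 2))  # ~0.25: the squares are correlated (non-linear dependence)
</code>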

Deterministic Trend

  • \(Y_t = \beta_0 + \beta_1 t + \epsilon_t, ~~ \epsilon_t \sim WN(0,\sigma^2)\)
  • \(E[Y_t] = \beta_0 + \beta_1 t\), depends on t.
  • A simple detrending transformation yields a stationary process:

\[X_t = Y_t - \beta_0 - \beta_1 t = \epsilon_t\]
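
A minimal simulation sketch of the trend-stationary model and its detrending; the parameter values and the OLS trend fit via np.polyfit are our choices for illustration.

<code python>
import numpy as np

rng = np.random.default_rng(2)
T = 500
t = np.arange(1, T + 1)
beta0, beta1, sigma = 1.0, 0.05, 1.0  # assumed values for illustration

y = beta0 + beta1 * t + rng.normal(0.0, sigma, size=T)  # Y_t = b0 + b1*t + eps_t

b1_hat, b0_hat = np.polyfit(t, y, deg=1)  # OLS estimates of the trend
x = y - b0_hat - b1_hat * t               # detrended series, X_t ~ eps_t
print(b0_hat, b1_hat)                     # close to (1.0, 0.05)
print(x.mean(), x.std())                  # ~0 and ~sigma: a stationary residual
</code>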


Random Walk

  • \(Y_t = Y_{t-1} + \epsilon_t, ~~ \epsilon_t \sim WN(0,\sigma_\epsilon^2), ~~ Y_0\) is fixed.
  • Then: \(Y_t = Y_0 + \sum\limits_{j=1}^t \epsilon_j\) ⇒ \(Var(Y_t) = \sigma_\epsilon^2 \times t\) depends on t.
  • A simple differencing transformation yields a stationary process:

\[\Delta Y_t = Y_t - Y_{t-1} = \epsilon_t\]
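
A minimal sketch (assuming \(\sigma_\epsilon = 1\) and \(Y_0 = 0\)) showing that \(Var(Y_t)\) grows linearly in t across simulated paths, and that first differencing restores a stationary series.

<code python>
import numpy as np

rng = np.random.default_rng(3)
n_paths, T = 10_000, 200
eps = rng.normal(0.0, 1.0, size=(n_paths, T))
y = np.cumsum(eps, axis=1)  # Y_t = Y_0 + sum_{j<=t} eps_j, with Y_0 = 0

# Cross-sectional variance at a few dates: Var(Y_t) = sigma_eps^2 * t.
for t in (10, 50, 200):
    print(t, y[:, t - 1].var())  # ~10, ~50, ~200

dy = np.diff(y, axis=1)  # Delta Y_t = eps_t: stationary again
print(dy.var())          # ~1 = sigma_eps^2
</code>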


MA(1) Process

  • \(Y_t = \mu + \epsilon_t + \theta \epsilon_{t-1}, ~~ -\infty \lt \theta \lt \infty, ~~ \epsilon_t \sim iid~ N(0,\sigma_\epsilon^2)\)
  • We then have \(E[Y_t] = \mu + E[\epsilon_t] + \theta E[\epsilon_{t-1}] = \mu\)
  • In practice we use \(-1 \lt \theta \lt 1\) (the invertible case).
  • \(\begin{align} Var(Y_t) & = \sigma^2 = E[(Y_t - \mu)^2] \\ & = E[(\epsilon_t + \theta \epsilon_{t-1})^2] \\ & = E[\epsilon_t^2] + 2\theta E[\epsilon_t \epsilon_{t-1}] + \theta^2 E[\epsilon_{t-1}^2] \\ & = \sigma_\epsilon^2 + 0 + \theta^2 \sigma_\epsilon^2 = \sigma_\epsilon^2 (1+\theta^2)\end{align}\)
  • Similarly, \(Cov(Y_t,Y_{t-1}) = \gamma_1 = E[(\epsilon_t + \theta \epsilon_{t-1})(\epsilon_{t-1} + \theta \epsilon_{t-2})] = \theta E[\epsilon_{t-1}^2] = \theta \sigma_\epsilon^2\)
  • Note that the sign of \(\gamma_1\) depends on the sign of \(\theta\).
  • So we have \(\rho_1 = \frac{\gamma_1}{\sigma^2} = \frac {\theta \sigma_\epsilon^2}{\sigma_\epsilon^2 (1+\theta^2)} = \frac {\theta}{1+\theta^2}\)
  • Note that the extreme values of \(\rho_1\) here are \(\pm 0.5\), attained at \(\theta = \pm 1\).
  • Note that \(\gamma_2 = 0\) for this model, and in general \(\gamma_j = 0\) for \(j \gt 1\). So \(\gamma_j\) depends only on j, not on t.

⇒ MA(1) is covariance stationary.
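
A minimal simulation sketch checking the MA(1) moments derived above; the values \(\theta = 0.5\), \(\mu = 0\), \(\sigma_\epsilon = 1\) are assumptions for illustration, for which theory predicts \(Var(Y_t) = 1.25\) and \(\rho_1 = 0.4\).

<code python>
import numpy as np

rng = np.random.default_rng(4)
theta, mu, T = 0.5, 0.0, 200_000
eps = rng.normal(0.0, 1.0, size=T + 1)
y = mu + eps[1:] + theta * eps[:-1]  # Y_t = mu + eps_t + theta * eps_{t-1}

print(y.var())  # ~ sigma_eps^2 * (1 + theta^2) = 1.25

yc = y - y.mean()
rho1 = np.sum(yc[1:] * yc[:-1]) / np.sum(yc * yc)
rho2 = np.sum(yc[2:] * yc[:-2]) / np.sum(yc * yc)
print(rho1)  # ~ theta / (1 + theta^2) = 0.4
print(rho2)  # ~ 0: the MA(1) memory stops at lag 1
</code>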

Example

  • \(r_t \sim iid ~ N(\mu_r,\sigma_r^2)\)
  • We consider the series of two-month returns: \(r_t(2) = r_t + r_{t-1}\)
  • Sampled monthly, consecutive observations of this series overlap by one month:

\[r_t(2) = r_t + r_{t-1}\] \[r_{t-1}(2) = r_{t-1} + r_{t-2}\] \[r_{t-2}(2) = r_{t-2} + r_{t-3}\]

⇒ Then \(\{r_t(2)\}\) follows an MA(1) process: \(r_t(2) - 2\mu_r = (r_t - \mu_r) + (r_{t-1} - \mu_r)\) has the MA(1) form with \(\theta = 1\), so \(\rho_1 = \frac{1}{1+1^2} = 0.5\) and \(\rho_j = 0\) for \(j \gt 1\).
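
A short sketch confirming the MA(1) behaviour of the overlapping series; the monthly parameters \(\mu_r = 0.01\), \(\sigma_r = 0.05\) are assumed for illustration.

<code python>
import numpy as np

rng = np.random.default_rng(5)
mu_r, sigma_r, T = 0.01, 0.05, 100_000  # assumed monthly return parameters
r = rng.normal(mu_r, sigma_r, size=T)
r2 = r[1:] + r[:-1]  # overlapping two-month returns r_t(2) = r_t + r_{t-1}

c = r2 - r2.mean()
rho1 = np.sum(c[1:] * c[:-1]) / np.sum(c * c)
rho2 = np.sum(c[2:] * c[:-2]) / np.sum(c * c)
print(rho1)  # ~0.5, the MA(1) value theta/(1+theta^2) with theta = 1
print(rho2)  # ~0: no correlation beyond one lag
</code>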


AR(1) Process

  • \(Y_t - \mu = \phi (Y_{t-1} - \mu) + \epsilon_t, ~~ -1 \lt \phi \lt 1, ~~ \epsilon_t \sim iid ~N(0,\sigma_\epsilon^2)\)

⇒ AR(1) is covariance stationary provided \(-1 \lt \phi \lt 1\).

  • Notion of ergodicity: the time dependence in the data dies out progressively, e.g. \(Y_t\) and \(Y_{t-j}\) are essentially independent if j is big enough.
  • AR(1) is ergodic. Its properties are:

\[E[Y_t] = \mu\] \[Var(Y_t) = \sigma^2 = \frac {\sigma_\epsilon^2}{1- \phi^2}\] \[Cov(Y_t,Y_{t-1}) = \gamma_1 = \sigma^2 \phi\] \[Corr(Y_t,Y_{t-1}) = \rho_1 = \frac{\gamma_1}{\sigma^2} = \phi\] \[Cov(Y_t,Y_{t-j}) = \gamma_j = \sigma^2 \phi^j\] \[Corr(Y_t,Y_{t-j}) = \rho_j = \frac{\gamma_j}{\sigma^2} = \phi^j\]

  • Note that, since \(|\phi| \lt 1\), we have: \(\lim\limits_{j \to \infty} \rho_j = \lim\limits_{j \to \infty} \phi^j = 0\)
  • Concept of mean reversion: after spending some time on one side of its mean, the AR(1) process tends to revert back toward the mean; the speed of reversion depends on \(\phi\). (A simulation sketch checking these moments follows the list below.)
  • The AR(1) model is a good description for:
    • Interest rates
    • Growth rate of macroeconomic variables
      • Real GDP, industrial production
      • Money, velocity
      • Real wages, unemployment.
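
A minimal simulation sketch checking the AR(1) moments listed above; \(\phi = 0.8\), \(\mu = 0\), \(\sigma_\epsilon = 1\) are assumed for illustration, so that \(\sigma^2 = 1/0.36 \approx 2.78\) and \(\rho_j = 0.8^j\).

<code python>
import numpy as np

rng = np.random.default_rng(6)
phi, mu, sigma_eps, T = 0.8, 0.0, 1.0, 200_000
eps = rng.normal(0.0, sigma_eps, size=T)

y = np.empty(T)
y[0] = mu  # start at the mean; a burn-in is dropped below
for t in range(1, T):
    y[t] = mu + phi * (y[t - 1] - mu) + eps[t]
y = y[1_000:]  # discard burn-in so sample moments reflect stationarity

print(y.var())  # ~ sigma_eps^2 / (1 - phi^2) = 2.78

c = y - y.mean()
for j in (1, 2, 3):
    rho_j = np.sum(c[j:] * c[:-j]) / np.sum(c * c)
    print(j, rho_j, phi ** j)  # sample rho_j vs. theoretical phi^j
</code>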