To analyze time series data, it is useful to identify what kind of process we are looking at: white noise, moving average, autoregressive, or a combination of the last two, autoregressive moving average. If we simply try to visualize these processes, it is almost impossible to distinguish them from one another.
For example, each of the images above was generated by one of these four kinds of process.
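As an illustration, series of these four types can be simulated in a few lines of Python; the coefficients, sample size, and seed below are arbitrary choices, not necessarily the ones used to produce the plots above.

import numpy as np

np.random.seed(42)                 # illustrative seed
n = 200                            # illustrative sample size
eps = np.random.normal(size=n)     # Gaussian innovations

# White noise: y_t = eps_t
white_noise = eps

# Moving average MA(1): y_t = eps_t + theta * eps_{t-1}
theta = 0.4                        # illustrative coefficient
ma = eps.copy()
ma[1:] += theta * eps[:-1]

# Autoregressive AR(1): y_t = phi * y_{t-1} + eps_t
phi = 0.7                          # illustrative coefficient
ar = np.zeros(n)
for t in range(1, n):
    ar[t] = phi * ar[t - 1] + eps[t]

# ARMA(1, 1): y_t = phi * y_{t-1} + eps_t + theta * eps_{t-1}
arma = np.zeros(n)
for t in range(1, n):
    arma[t] = phi * arma[t - 1] + eps[t] + theta * eps[t - 1]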
Even though the formulas that generated these 4 processes are different, it is hard to categorize each process correctly by sight. To begin disentangling the inner structures of each series, we need to resort to autocorrelation functions, but to understand them, we first need to understand autocovariance functions.
The word autocovariance seems complicated, but it is just a sophisticated way of naming the covariance of a series with itself, lagged \(j\) periods. We can define the autocovariance function \(\gamma_{j}\) as:
\[ \gamma_{j} = Cov(y_{t}, y_{t-j}) = E[(y_{t} - \mu_{t})(y_{t-j} - \mu_{t-j})] \]
Here, \(y_{t}\) is the original series, \(y_{t-j}\) is the same series lagged \(j\) periods, and \(\mu_{t}\) and \(\mu_{t-j}\) are their corresponding means.
If we assume that the series is weakly stationary1, the mean no longer depends on \(t\) and the expression simplifies to:
\[ \gamma_{j} = E[(y_{t} - \mu)(y_{t-j} - \mu)] \]
To estimate \(\gamma_{j}\), we use the sample autocovariance function:
\[ \hat{\gamma_{j}} = \frac{1}{n}\sum_{t=j+1}^{n} (y_{t} - \bar{y})(y_{t-j} - \bar{y}) \]
Here, \(\bar{y}\) is the sample mean of the observations, and \(n\) is the number of observations.
Autocorrelation functions are just a normalization of autocovariance functions, which makes them dimensionless (they have no unit of measurement). They are nothing more than the time series version of the familiar Pearson correlation coefficient. Mathematically, we can define them as:
\[ \rho_{j} = \frac{\gamma_{j}}{\gamma_{0}} \]
The sample version of this is simply:
\[ \hat{\rho_{j}} = \frac{\hat{\gamma_{j}}}{\hat{\gamma_{0}}} \]
It is clear, but worth remembering, that \(\rho_{0} = 1\) and that \(|\rho_{j}| \leq 1\) for every lag \(j\).
I created two simple Python functions to do the calculations2.
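A minimal sketch of such functions, following the formulas above (the names autocovariance and autocorrelation are illustrative), could look like this:

import numpy as np

def autocovariance(series, lag):
    # Sample autocovariance at the given lag, following the formula above
    series = np.asarray(series, dtype=float)
    n = len(series)
    mean = series.mean()
    return np.sum((series[lag:] - mean) * (series[:n - lag] - mean)) / n

def autocorrelation(series, lag):
    # Sample autocorrelation: the autocovariance normalized by its lag-0 value
    return autocovariance(series, lag) / autocovariance(series, 0)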
Finally, we can visualize the autocorrelation function of each of the series shown in the beginning3:
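One way to draw such a plot for one of the series, reusing the autocorrelation function and the simulated ar series from the sketches above (the number of lags shown is an arbitrary choice):

import matplotlib.pyplot as plt

lags = range(21)                                          # illustrative number of lags
acf_values = [autocorrelation(ar, lag) for lag in lags]   # sample ACF of the AR(1) series

plt.stem(lags, acf_values)                                # one spike per lag, i.e. a correlogram
plt.xlabel("Lag")
plt.ylabel("Autocorrelation")
plt.show()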
Now it is clear that the four series are different. Here are some simple rules to identify the time series structure from the shape of the autocorrelation function: white noise shows no significant autocorrelation at any lag; a moving average process of order q shows significant autocorrelations up to lag q and then cuts off sharply; an autoregressive process shows autocorrelations that decay gradually, either exponentially or as a damped oscillation; and an autoregressive moving average process also tails off gradually rather than cutting off.
Autocorrelation functions are a good first approximation for analyzing time series data, but they are just that: “a first approximation.” There are other methods for finding the right structure for our data, for example, the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC).
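As an illustrative sketch, statsmodels can fit a few candidate ARMA specifications and report both criteria (lower is better); the candidate orders below and the simulated arma series from the earlier sketch are arbitrary choices:

from statsmodels.tsa.arima.model import ARIMA

# Compare candidate ARMA(p, q) models on the simulated series by AIC and BIC
for p, q in [(1, 0), (0, 1), (1, 1)]:
    result = ARIMA(arma, order=(p, 0, q)).fit()
    print(f"ARMA({p},{q}): AIC={result.aic:.1f}, BIC={result.bic:.1f}")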
You can find all the code here and play with an online Jupyter Notebook here.
We say that \(y_{t}\) is weakly stationary if its mean \(E[y_{t}] = \mu\) does not depend on \(t\), its variance is finite, and its autocovariance \(Cov(y_{t}, y_{t-j})\) depends only on the lag \(j\) and not on \(t\). ↩
Please keep in mind that this is not intended to be computationally optimal, only easy to understand. For a ready-made alternative, you can use statsmodels:
import statsmodels.api as sm
sm.tsa.acf(series)
↩
These plots are often called “correlograms.” ↩