In this post we will look into transformations from time series data to the domain of complex networks, which allow us to characterise the dynamics underlying a time series in terms of topological features of the associated complex network. We will see that the constructed graph inherits several properties of the series in its structure: periodic series convert into regular graphs, and random series into random graphs. Moreover, fractal series convert into scale-free networks, reinforcing the fact that power-law degree distributions are related to fractality, something discussed recently by Ayush Goel in his post ‘Fractality in Complex Networks’.
We will look into a simple and fast computational method, the visibility algorithm, to convert a time series into a graph. The main idea is to study to what extent the techniques and focus of graph theory are useful as a way to characterize time series. The key question is whether the associated graph inherits some structure of the time series, and consequently whether the process that generated the time series may be characterized using graph theory.
Visibility Algorithm for Time Series :
First, we consider periodic series. For illustrative purposes, in Fig. 1 we present a scheme of the visibility algorithm. In the upper zone we plot the first 20 values of a periodic series by using vertical bars (the data values are displayed above the plot). Considering this as a landscape, we link every bar (every point of the time series) with all those that can be seen from the top of the considered one (gray lines), obtaining the associated graph (shown in the lower part of the figure). In this graph, every node corresponds, in the same order, to series data, and two nodes are connected if visibility exists between the corresponding data, that is to say, if there is a straight line that connects the series data, provided that this “visibility line” does not intersect any intermediate data height.
More formally, we can establish the following visibility criterion: two arbitrary data values (ta, ya) and (tb, yb) will have visibility, and consequently will become two connected nodes of the associated graph, if every other data point (tc, yc) placed between them fulfills:

yc < yb + (ya − yb) · (tb − tc) / (tb − ta)
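The criterion can be coded directly. Below is a minimal pure-Python sketch of the visibility algorithm as described above (the function names `visible` and `visibility_graph` are our own; a production version would use a faster divide-and-conquer scheme rather than this O(n^2) double loop):

```python
def visible(t, y, a, b):
    """Visibility criterion between samples a and b (a < b): every
    intermediate sample (t[c], y[c]) must lie strictly below the
    straight line joining (t[a], y[a]) and (t[b], y[b])."""
    for c in range(a + 1, b):
        # Height of the sight line at t[c], by linear interpolation.
        line = y[b] + (y[a] - y[b]) * (t[b] - t[c]) / (t[b] - t[a])
        if y[c] >= line:
            return False
    return True

def visibility_graph(y):
    """Return the edge set of the visibility graph of series y,
    with nodes labelled by sample index."""
    t = list(range(len(y)))
    return {(a, b)
            for a in range(len(y))
            for b in range(a + 1, len(y))
            if visible(t, y, a, b)}
```

For example, in the series `[3, 1, 2]` the first and last points see each other over the dip in the middle, so all three possible edges are present; in the monotone series `[1, 2, 3]` the middle point touches the sight line between its neighbours, so (with the strict inequality) only the two nearest-neighbour edges appear.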
We can easily check that, by means of the present algorithm, the associated graph extracted from a time series is always:
1. Connected: each node sees at least its nearest neighbors (left and right).
2. Undirected: the way the algorithm is built up, there is no direction defined in the links.
3. Invariant under affine transformations of the series data: the visibility criterion is invariant under rescaling of both horizontal and vertical axes, and under horizontal and vertical translations (see Fig. 2).
The visibility graph of a time series remains invariant under several transformations of the time series: (a) original time series with visibility links; (b) translation of the data; (c) vertical rescaling; (d) horizontal rescaling; (e) addition of a linear trend to the data. As can be seen in the bottom diagram, in all these cases the visibility graph remains invariant.
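This invariance is easy to check numerically. The sketch below (with an arbitrary short series and arbitrary affine parameters of our own choosing) builds the edge set of the visibility graph and verifies that vertical rescaling, translation, and the addition of a linear trend all leave it unchanged:

```python
def vg_edges(y):
    """Edge set of the visibility graph of series y (sample indices as nodes):
    (a, b) is an edge iff every intermediate sample lies strictly below
    the straight line joining samples a and b."""
    n = len(y)
    return {(a, b) for a in range(n) for b in range(a + 1, n)
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b))}

series = [0.8, 0.3, 0.9, 0.4, 0.1, 0.7, 0.2]   # arbitrary example series
base = vg_edges(series)

# Vertical rescaling plus translation: y -> 2.5*y + 10 preserves every edge.
assert vg_edges([2.5 * v + 10 for v in series]) == base

# Adding a linear trend y -> y + 0.3*t also preserves the graph, since the
# sight line between two points is shifted by exactly the same trend.
assert vg_edges([v + 0.3 * i for i, v in enumerate(series)]) == base
```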
The example plotted in Fig. 1 is a periodic series with period 4. The associated visibility graph is regular, since it is constructed by periodic repetition of a pattern. The degree distribution of this graph is formed by a finite number of peaks related to the series period, much in the vein of the Fourier power spectrum of a time series. Generically speaking, all periodic time series are mapped into regular graphs, the discrete degree distribution being the fingerprint of the time series periods. In the case of periodic time series, their regularity therefore seems to be conserved, or inherited structurally, in the graph by means of the visibility map.
Random Graphs :
In contrast to periodic series, in a second step we tackle a series R(t) of 10^6 data values extracted from a uniform distribution in [0, 1]. Although one would at first expect a Poisson degree distribution in this case [as for uncorrelated random graphs (1)], a random time series does have some correlation, because it is an ordered set. In fact, let k(t) be the connectivity of the node associated with the data point t. If k(t) is large (related to the fact that the data value is large and consequently has large visibility), one would expect k(t+1) to be relatively small, because the time series is random and two consecutive large values are not likely to occur. It is indeed because of these “unlikely” large values (the hubs) that the tail of the degree distribution deviates from the Poisson distribution.
Two large values in the series data can be understood as two rare events in the random process. The time distribution of such events is exponential (2); therefore, we should expect the tail of the degree distribution in this case to be exponential rather than Poissonian, since the form of this tail is governed by the distribution of the hubs.
In the left side of Fig. 3 we depict the first 250 values of R(t). In the right side we plot the degree distribution P(k) of its visibility graph (in semilog scale). The tail of this distribution fits an exponential distribution quite well, as expected.
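A small-scale version of this experiment is easy to run. The sketch below (our own code, reusing the same quadratic degree routine as before) draws a much shorter uniform series than the 10^6 points of the text, with a fixed seed so the run is reproducible, and prints the degree counts; plotting `count` against `k` on a semilog scale should reveal the roughly straight (exponential) tail:

```python
import random
from collections import Counter

def vg_degrees(y):
    """Degree sequence of the visibility graph of series y."""
    n = len(y)
    deg = [0] * n
    for a in range(n):
        for b in range(a + 1, n):
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                deg[a] += 1
                deg[b] += 1
    return deg

random.seed(1)
# 1,000 points keeps the O(n^2) sketch fast; the text uses 10^6.
series = [random.random() for _ in range(1000)]
deg = vg_degrees(series)
dist = Counter(deg)

# Degree counts: on a semilog plot, an exponential tail of P(k)
# shows up as an approximately straight line.
for k in sorted(dist):
    print(k, dist[k])
```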
Therefore, order and disorder in the structure of a time series seem to be inherited in the topology of its visibility graph. This algorithm provides a new way of studying time series in the complex-networks domain and equips us with a wide range of statistical measures.
(1) Bollobás B (1998) Modern Graph Theory (Springer, New York).
(2) Feller W (1971) An Introduction to Probability Theory and Its Applications (Wiley, New York).