Capturing the interplay of dynamics and networks through parameterizations of Laplacian operators
- Academic Editor: Kamesh Munagala
- Subject Areas: Network Science and Online Social Networks
- Keywords: Network, Community structure, Spectral graph theory, Centrality, Dynamical process
- Copyright: © 2016 Yan et al.
- License: This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
- Cite this article: Yan et al. 2016. Capturing the interplay of dynamics and networks through parameterizations of Laplacian operators. PeerJ Computer Science 2:e57 https://doi.org/10.7717/peerj-cs.57
Abstract
We study the interplay between a dynamical process and the structure of the network on which it unfolds using the parameterized Laplacian framework. This framework allows for defining and characterizing an ensemble of dynamical processes on a network beyond what the traditional Laplacian is capable of modeling. This, in turn, allows us to study how the interaction between dynamics and network topology affects measures of cluster quality and vertex centrality, in order to effectively identify important vertices and communities in the network. Specifically, for each dynamical process in this framework, we define a centrality measure that captures a vertex’s participation in the dynamical process on a given network, and a function that measures the quality of every subset of vertices as a potential cluster (or community) with respect to this process. We show that the subset-quality function generalizes the traditional conductance measure for graph partitioning. We partially justify our choice of the quality function by showing that the classic Cheeger inequality, which relates the conductance of the best cluster in a network to a spectral quantity of its Laplacian matrix, extends to the parameterized Laplacian. The parameterized Laplacian framework brings a surprising variety of dynamical processes under the same umbrella and allows us to systematically compare the different perspectives they create on network structure.
Introduction
As flexible representations of complex systems, networks model entities and relations between them as vertices and edges. In a social network, for example, vertices are people, and the edges between them represent friendships. As another example, the World Wide Web is a collection of web pages with hyperlinks between them. An unprecedented amount of such relational data is now available. While discovery and fortune await, the challenge is to extract useful information from these large and complex datasets.
Centrality and community detection are two of the fundamental tasks of network analysis. The goal of centrality identification is to find important vertices that control the dynamical processes taking place on the network. PageRank (Page et al., 1999) is one such measure, developed by Google to rank web pages. Other centrality measures, such as degree centrality, the Katz score and eigenvector centrality (Katz, 1953; Bonacich, 1972; Bonacich & Lloyd, 2001; Ghosh & Lerman, 2012), are used in communication networks for studying how each vertex contributes to the routing of information. Identifying central vertices also plays an important role in methods to maximize influence (Kempe, Kleinberg & Tardos, 2003) or limit the spread of a disease on networks.
The objective of community detection is to discover subsets of well-interacting vertices in a given network. Discovering such communities allows us to follow the classic reductionist approach, separating the vertices into distinct classes, each of which can then be analyzed separately. For example, US-based political networks usually exhibit a bipolar structure, representing Democrat/Republican divisions (Adamic & Glance, 2005). Communities within online social networks like Facebook might correspond to real social groups which can be targeted with various advertisements. However, just like with the different notions of centrality, there is an assortment of community detection algorithms, each leading to a different community structure on the same network (see Fortunato, 2010; Porter, Onnela & Mucha, 2009 for reviews).
With so many choices for both centrality and community detection, practitioners often face a difficult decision of which measures to use. Instead of looking for the “best” such measure, we describe an umbrella framework that unifies some of the well known measures, connecting the ideas of centrality, communities and dynamical processes on networks. In this dynamics-oriented view, a vertex’s centrality describes its participation in the dynamical process taking place on the network (Borgatti, 2005; Lambiotte et al., 2011; Ghosh & Lerman, 2012). Likewise, communities are groups of vertices that interact more frequently with each other (according to the rules of the dynamical process) than with vertices from other communities (Lerman & Ghosh, 2012). This view is not new: when choosing conductance as a measure of community quality, one implicitly assumes that an unbiased random walk is taking place on the network (Kannan, Vempala & Vetta, 2004; Spielman & Teng, 2004; Chung, 1997; Delvenne, Yaliraki & Barahona, 2008). Under the continuous time random walk model, heat kernel pagerank (Chung, 2007) also leads to a measure of community structure. Other dynamical processes, such as the spread of information or the exchange of opinions, arise from interactions different from the unbiased random walk. For example, the maximum entropy random walk (Burda et al., 2009) is a stochastic process biased towards neighbors that are closer to the network’s strongly connected core. Represented by the replicator operator (Lerman & Ghosh, 2012; Smith et al., 2013), it also models an epidemic process at the epidemic threshold, whose stationary distribution is closely related to eigenvector centrality (Bonacich & Lloyd, 2001; Ghosh & Lerman, 2011). It is natural, then, that vertex centrality and community structure depend on the specifics of the dynamical process, even if the underlying network topology is the same.
Recently, Ghosh et al. (2014) introduced a parameterization of Laplacian operators to capture the interplay between a dynamical process and the underlying topology of the network on which it unfolds. By generalizing the traditional conductance, they proved a more general version of the Cheeger inequality and used it as a basis for an efficient spectral clustering algorithm (Spielman & Teng, 2004; Andersen, Chung & Lang, 2007; Andersen & Peres, 2009). In this paper, we generalize previous results by introducing a formal framework with additional parameters and better intuitions. We also introduce parameterized centrality and relate it to existing centrality measures through transformations. This paper makes the following contributions:
Parameterized Laplacian (‘Parameterized Laplacian Framework’): We introduce the parameterized Laplacian framework, which extends the traditional Laplacian for describing diffusion and random walks on networks. Recall that a random walk is a stochastic dynamical process that transitions from a vertex to a random neighbor of that vertex. It defines a Markov chain that can be specified by the normalized Laplacian of the network. Our framework captures a family of dynamical processes that extend the normalized Laplacian with additional parameters, allowing the modeling of arbitrary biases and delays. Members of this family are connected via simple parameterized transformations, which enables analysis of the impact of these parameters on the measures of centrality and communities.
Parameterized centrality (‘Parameterized Centrality’): Based on the connection between centrality measures and the stationary distribution of a random walk (Page et al., 1999; Ghosh & Lerman, 2012), we generalize the notion of centrality to all dynamical processes in the parameterized Laplacian family. Some well known centrality measures are identified as special cases under this unified framework, which allows us to systematically compare them using transformations. In particular, we show that seemingly different formulations of dynamics are in fact the same after a change of basis. Parameterized centrality also leads to the definition of parameterized volume for subsets of vertices.
Parameterized conductance (‘Parameterized Community Quality’): We also generalize the notion of conductance to all dynamical processes under the framework and call it parameterized conductance.^{1} This quantity measures the quality of every subset of vertices as a potential community with respect to the given process on the given network. Recall that conductance balances minimizing the cross-community interactions against the volume of each community. Parameterized conductance is defined in exactly the same fashion, but with the parameterized notions of interaction and volume. As with centrality, some existing community measures turn out to be special cases. For completeness, we restate the previously proven generalized versions of the Cheeger inequality and the resulting spectral algorithm (Ghosh et al., 2014). The parameterized Laplacian framework enables systematic comparison between different community measures, as they are now unified and connected by simple transformations.
Empirical evaluation on real-world networks (‘Experiments’): We apply our framework to study the structure of several real-world networks. They are from different domains that embody a variety of dynamical processes and interactions. We contrast the central vertices and communities identified by different dynamical processes and provide an intuitive explanation for their differences. Keep in mind that we do not claim any specific centrality or community structure measures to be the “best.” We think every outcome is potentially interesting among many possible perspectives.
In contrast to the earlier work on which this paper is based, the emphasis of this paper is on the theoretical framework that brings together important concepts in network science. While the parameterized Laplacian framework described in this paper cannot model every dynamical process of interest, it is still flexible enough to include a variety of dynamical processes which are seemingly unrelated. It allows us to systematically study and compare these processes under a unified framework. We hope this study will lead to better approaches for defining and understanding the general interaction between dynamics and topologies.
Background and Related Work
Before introducing our framework, we briefly review some closely related models. We will later show that these existing models are special cases under the parametrized Laplacian framework. The intuition about these well-known systems is helpful for understanding the motivation behind the framework.
We represent a network as a weighted, undirected graph $G=\left(V,E,\mathit{A}\right)$ with n vertices, where for i, j ∈ V, a_{ij} assigns a non-negative weight (affinity) to each edge (i, j) ∈ E. We follow the convention that a_{ij} = 0 if and only if (i, j)∉E; i.e., $\mathit{A}$ is the weighted symmetric adjacency matrix. We assume a_{ii} = 0 for all i ∈ V. In the discussion below, the (weighted) degree of vertex i ∈ V is defined as the total weight of edges incident on it, that is, d_{i} = ∑_{j}a_{ij}. A dynamical process describes a state variable θ_{i}(t) associated with each vertex i. This variable changes its value based on interactions with the vertex’s neighbors according to the rules of the dynamical process.^{2}
In this paper, since we view dynamics as operators on the vector composed of vertex state variables, we adopt the linear algebra convention, i.e., using column vertex state vectors $\mathbf{\theta}\left(t\right)$ and left-multiply them by matrix operators.^{3} Table 1 summarizes the terms and notation.
| Term | Description | Term | Description |
|---|---|---|---|
| $\mathit{A}$ | Weighted adjacency matrix | $a_{ij}$ | Entry $i,j$ of $\mathit{A}$ |
| $\mathit{W}$ | Interaction matrix | $w_{ij}$ | Entry $i,j$ of $\mathit{W}$ |
| $\mathbf{\theta}(t)$ | Vertex state vector (column) at time $t$ | $\theta_{i}(t)$ | Entry $i$ of $\mathbf{\theta}(t)$ |
| $\mathit{D}_{\mathit{A}}$ | Diagonal degree matrix of $\mathit{A}$ | $d_{i}$ | Degree of vertex $i$ in $\mathit{A}$ |
| $\mathit{D}_{\mathit{W}}$ | Diagonal degree matrix of $\mathit{W}$ | $d_{\mathit{W}i}$ | Degree of vertex $i$ in $\mathit{W}$ |
| $\mathit{T}$ | Diagonal delay matrix | $\tau_{i}$ | Delay factor of vertex $i$ |
| $\mathcal{L}$ | Parameterized Laplacian operator | $P_{ij}$ | Random walk transition probability from $j$ to $i$ |
| $\vec{v}_{\mathit{A}}$ | Dominant eigenvector of $\mathit{A}$ | $\vec{v}_{\mathit{A}i}$ | Entry $i$ of $\vec{v}_{\mathit{A}}$ |
| $\mathit{V}_{\mathit{A}}$ | Diagonal matrix with the entries of $\vec{v}_{\mathit{A}}$ | $\vec{v}_{i}$ | $i$th eigenvector of $\mathcal{L}$ |
| $c_{i}$ | Centrality of vertex $i$ | $S$ | Subset of $V$ defining a community |
Random walks
One of the most-widely studied dynamical processes on networks is the random walk. The simplest is the discrete time unbiased random walk (URW), where a walker at vertex i follows one of the edges with a probability proportional to the weight of the edge (Ross, 2014; Aldous & Fill, 2002). In this case, the state vector $\mathbf{\theta}$^{4} forms a distribution whose expected value follows the update equation: ${\theta}_{i}\left(t+1\right)=\sum _{j}{P}_{ij}{\theta}_{j}\left(t\right).$ Here P is a stochastic matrix whose entry P_{ij} is the transition probability for a walker to go from the vertex j to i, P_{ij} = a_{ij}∕d_{j}.
The update equation of an unbiased random walk leads to the difference equation $\Delta {\theta}_{i}={\theta}_{i}\left(t+1\right)-{\theta}_{i}\left(t\right)=\sum _{j}{P}_{ij}{\theta}_{j}\left(t\right)-{\theta}_{i}\left(t\right)=-\sum _{j}{L}_{ij}^{RW}{\theta}_{j}\left(t\right),$ where L^{RW} is the normalized random walk Laplacian matrix with ${L}^{RW}=\mathit{I}-\mathit{A}{\mathit{D}}_{\mathit{A}}^{-1}$.
To go from a discrete time synchronous random walk to continuous time dynamics, we introduce a waiting time function for the asynchronous jumps performed by the walk (Ross, 2014). Assuming a simple Poisson process where the waiting time between jumps at vertex i is exponentially distributed with PDF $f\left(t,{\tau}_{i}\right)=\frac{1}{{\tau}_{i}}{e}^{-\frac{t}{{\tau}_{i}}}$, we can rewrite the above difference equations as differential equations, $\frac{d{\theta}_{i}}{dt}=-\sum _{j}\frac{{L}_{ij}^{RW}}{{\tau}_{j}}{\theta}_{j}.$ The solution to these differential equations gives the state vector of the random walk at any time t: $\mathbf{\theta}\left(t\right)={e}^{-{L}^{RW}{\mathit{T}}^{-1}t}\cdot \mathbf{\theta}\left(0\right),$ where $\mathit{T}$ is the n × n diagonal matrix with the mean waiting times τ_{i} as entries. If the dynamical process converges, then regardless of its initial value $\mathbf{\theta}\left(0\right)$, the stationary distribution π_{i} has the following density: (1)${\pi}_{i}=\underset{t\to \infty}{lim}{\theta}_{i}\left(t\right)=\frac{{d}_{i}{\tau}_{i}}{\sum _{j}{d}_{j}{\tau}_{j}}.$ Intuitively, the stationary distribution is proportional to the product of vertex degree and mean waiting time.
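Equation (1) can be checked numerically: if π_i ∝ d_iτ_i, the generator $-{L}^{RW}{\mathit{T}}^{-1}$ annihilates π. The small weighted graph and delay factors below are hypothetical, chosen only for illustration; a minimal NumPy sketch:

```python
import numpy as np

# Hypothetical weighted, undirected, connected graph
A = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 1.],
              [2., 1., 0., 1.],
              [0., 1., 1., 0.]])
d = A.sum(axis=1)                      # weighted degrees d_i
tau = np.array([1.0, 2.0, 1.5, 3.0])   # hypothetical mean waiting times tau_i >= 1

# Random walk Laplacian L^RW = I - A D_A^{-1} (broadcasting divides column j by d_j)
L_rw = np.eye(4) - A / d

# Claimed stationary density pi_i = d_i tau_i / sum_j d_j tau_j (Eq. 1)
pi = d * tau / (d * tau).sum()

# Stationarity: d theta/dt = -L^RW T^{-1} theta must vanish at theta = pi
rate = -L_rw @ (pi / tau)              # T^{-1} pi has entries pi_i / tau_i
print(np.allclose(rate, 0.0))          # True: pi is a fixed point
```

The check works because $(\mathit{D}_{\mathit{A}}\mathit{T})^{-1}\pi$ is a constant vector, which lies in the null space of $\mathit{D}_{\mathit{A}}-\mathit{A}$.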
A natural extension of the process is to bias the random walk towards specific vertices, making it a biased random walk (BRW). According to Lambiotte et al. (2011), any biased random walk defined with the transition probability P_{ij} ∝ b_{i}a_{ij} (where b_{i} is the bias towards vertex i) can be reduced to a URW on a re-weighted “interaction network” with the adjacency matrix $\mathit{W}=\mathit{B}\mathit{A}\mathit{B},$ where $\mathit{B}$ is a diagonal matrix with ${\mathit{B}}_{ii}={b}_{i}$. The above symmetric re-weighting ensures that ${P}_{ij}=\frac{{b}_{i}{a}_{ij}{b}_{j}}{\sum _{i}{b}_{i}{a}_{ij}{b}_{j}}\propto {b}_{i}{a}_{ij},{P}_{ji}=\frac{{b}_{j}{a}_{ji}{b}_{i}}{\sum _{j}{b}_{j}{a}_{ji}{b}_{i}}\propto {b}_{j}{a}_{ji}.$
In one class of BRWs previously studied in network communications (Ling et al., 2013; Fronczak & Fronczak, 2009; Gómez-Gardeñes & Latora, 2008), the bias b_{i} has a power-law dependence on degree: ${P}_{ij}\propto {d}_{i}^{\beta}{a}_{ij}$. The exponent β controls the strength of the bias: the URW is recovered with β = 0; when β > 0, the walk is biased toward high degree vertices; and when β < 0, the random walk is more likely to jump to a lower degree neighbor.
Another type of BRW is the maximum-entropy random walk (Burda et al., 2009; Lambiotte et al., 2011), defined as ${\theta}_{i}\left(t+1\right)=\sum _{j}\frac{{\vec{v}}_{\mathit{A}i}{a}_{ij}}{{\lambda}_{max}{\vec{v}}_{\mathit{A}j}}{\theta}_{j}\left(t\right),$ where ${\vec{v}}_{\mathit{A}}$ is the eigenvector of $\mathit{A}$ associated with its largest eigenvalue λ_{max}: $\mathit{A}{\vec{v}}_{\mathit{A}}={\lambda}_{max}{\vec{v}}_{\mathit{A}}$. Again, an unbiased random walk on the interaction network $\mathit{W}={\mathit{V}}_{\mathit{A}}\mathit{A}{\mathit{V}}_{\mathit{A}}$ is equivalent to the biased random walk on the original network $\mathit{A}$ (the entries of the diagonal matrix ${\mathit{V}}_{\mathit{A}}$ are the components of the eigenvector ${\vec{v}}_{\mathit{A}}$). In particular, the stationary distributions of both can be written as ${\pi}_{i}=\frac{{\vec{v}}_{\mathit{A}i}^{2}}{\sum _{j}{\vec{v}}_{\mathit{A}j}^{2}}.$
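As a quick numerical sanity check on a hypothetical small graph: the maximum-entropy transition matrix is column-stochastic, and the distribution π_i ∝ v_{Ai}² is invariant under it. A sketch using NumPy:

```python
import numpy as np

# Hypothetical small connected graph
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])

# Dominant eigenpair of A; by Perron-Frobenius it is simple with a positive eigenvector
eigvals, eigvecs = np.linalg.eigh(A)   # eigh returns eigenvalues in ascending order
lam_max = eigvals[-1]
v = np.abs(eigvecs[:, -1])             # dominant eigenvector, fixed to be positive

# Maximum-entropy transition matrix P_ij = v_i a_ij / (lam_max v_j)
P = (v[:, None] * A) / (lam_max * v[None, :])
print(np.allclose(P.sum(axis=0), 1.0)) # True: columns sum to 1, P is stochastic

pi = v**2 / (v**2).sum()               # stationary distribution, proportional to v_i^2
print(np.allclose(P @ pi, pi))         # True: pi is invariant under P
```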
Consensus and opinion dynamics
Another closely related class of discrete time dynamical processes is the so-called “consensus process” (DeGroot, 1974; Lambiotte et al., 2011; Olfati-Saber, Fax & Murray, 2007; Krause, 2008). The consensus process models coordination across a network, where each vertex updates its “belief” based on the average “beliefs” of its neighbors. Unlike random walks, which conserve the total state value throughout the network (since the state vector is always a distribution), the consensus process follows the update equation ${\theta}_{i}\left(t+1\right)=\frac{1}{{d}_{i}}\sum _{j}{a}_{ij}{\theta}_{j}\left(t\right).$ This leads to the difference equation $\Delta {\theta}_{i}={\theta}_{i}\left(t+1\right)-{\theta}_{i}\left(t\right)=-\sum _{j}{L}_{ij}^{CON}{\theta}_{j}\left(t\right),$ where L^{CON} is the consensus Laplacian matrix with ${L}^{CON}=\mathit{I}-{\mathit{D}}_{\mathit{A}}^{-1}\mathit{A}$. For an undirected graph with a symmetric $\mathit{A}$, L^{CON} = [L^{RW}]^{T}.
Consensus can also be turned into asynchronous continuous time dynamics. Again, assuming a Poisson process where the update interval at each vertex i is exponentially distributed with PDF $f\left(t,{\tau}_{i}\right)=\frac{1}{{\tau}_{i}}{e}^{-\frac{t}{{\tau}_{i}}}$, we can rewrite the above difference equations as differential equations, $\frac{d{\theta}_{i}}{dt}=-\sum _{j}\frac{{L}_{ij}^{CON}}{{\tau}_{i}}{\theta}_{j}.$
The consensus process always converges to a uniform “belief” state with the value (2)${\pi}_{i}=\frac{1}{\sum _{j}{d}_{j}{\tau}_{j}}\sum _{i}{\theta}_{i}\left(0\right){d}_{i}{\tau}_{i}.$
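The convergence in Eq. (2) can be illustrated in the discrete-time case (all τ_i = 1), where the limit reduces to the degree-weighted average of the initial beliefs. The graph and initial state below are hypothetical; the graph is non-bipartite, so the synchronous updates converge:

```python
import numpy as np

# Hypothetical graph: a triangle {0,1,2} plus vertex 3 attached to 1 and 2
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])
d = A.sum(axis=1)

theta = np.array([1.0, 0.0, 0.0, 2.0])   # arbitrary initial beliefs
target = (theta * d).sum() / d.sum()     # Eq. (2) with all tau_i = 1

# Iterate theta_i <- (1/d_i) sum_j a_ij theta_j
for _ in range(200):
    theta = (A @ theta) / d
print(np.allclose(theta, target))        # True: consensus on the weighted average
```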
Just like the URW, unbiased consensus can also be generalized by introducing a weight when averaging over neighbors’ values. This opens the door to consensus dynamics such as opinion dynamics (Krause, 2008), and linearized approach to synchronization models (Lerman & Ghosh, 2012; Motter, Zhou & Kurths, 2005; Arenas, Díaz-Guilera & Pérez-Vicente, 2006).
Communities and conductance
In network clustering and community detection, previous work has focused on identifying subsets of vertices S⊆V that interact more frequently with vertices in the same community than with vertices in other subsets (Fortunato, 2010; Porter, Onnela & Mucha, 2009). A standard approach to clustering defines an objective function that measures the quality of a cluster. For a subset S⊆V, let $\bar{S}=V\setminus S$ denote the complement of S, which consists of the vertices that are not in S. Let $\mathrm{cut}\left(S,\bar{S}\right)={\sum}_{i\in S,j\in \bar{S}}{a}_{ij}$ denote the total interaction strength of all edges used by S to connect with the outside world. Let $\mathrm{vol}\left(S\right)={\sum}_{i\in S}{d}_{i}={\sum}_{i\in S,j\in V}{a}_{ij}$ denote the volume of S, the total weighted “importance” of its vertices.
One popular measure of the quality of a subset S as a potential good cluster (or a community) (Kannan, Vempala & Vetta, 2004; Spielman & Teng, 2004; Chung, 1997) is the ratio of these two quantities: (3)$\varphi \left(S\right)=\frac{\mathrm{cut}\left(S,\bar{S}\right)}{min\left(\mathrm{vol}\left(S\right),\mathrm{vol}\left(\bar{S}\right)\right)}.$ A subset that (approximately) minimizes this quantity, the conductance of S, is a desirable cluster, as it maximizes the fraction of affinities within the subset. If interactions among vertices are proportional to their affinity weights, then a set with small conductance is one whose members interact significantly more with each other than with outside members. The smallest achievable ratio over all possible subsets is also known as the isoperimetric number. As an important quantity governing mixing times of classic Markov chains, conductance admits provable bounds in terms of the second-smallest eigenvalue of the Laplacian (Cheeger, 1970; Jerrum & Sinclair, 1988; Lawler & Sokal, 1988). Other well-known quality functions are normalized cut (Shi & Malik, 2000) and ratio-cut, given respectively by $\frac{\mathrm{cut}\left(S,\bar{S}\right)}{\mathrm{vol}\left(S\right)}+\frac{\mathrm{cut}\left(S,\bar{S}\right)}{\mathrm{vol}\left(\bar{S}\right)}\quad \text{and}\quad \frac{\mathrm{cut}\left(S,\bar{S}\right)}{min\left(\left|S\right|,\left|\bar{S}\right|\right)}.$
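Conductance as defined in Eq. (3) is straightforward to compute directly. The sketch below builds a hypothetical graph of two triangles joined by a single bridge edge, for which the natural bipartition has low conductance:

```python
import numpy as np

def conductance(A, S):
    """Conductance phi(S) of a vertex subset S (Eq. 3), for a symmetric A."""
    n = A.shape[0]
    in_S = np.zeros(n, dtype=bool)
    in_S[list(S)] = True
    cut = A[np.ix_(in_S, ~in_S)].sum()   # total edge weight leaving S
    vol_S = A[in_S, :].sum()             # sum of degrees inside S
    vol_rest = A[~in_S, :].sum()
    return cut / min(vol_S, vol_rest)

# Hypothetical graph: two triangles {0,1,2} and {3,4,5} joined by the edge (2,3)
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

print(conductance(A, {0, 1, 2}))   # 1/7 ~ 0.143: a good (low-conductance) cluster
print(conductance(A, {0, 3}))      # 1.0: a poor cluster straddling both triangles
```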
Algorithmically, once a quality function is selected, one can then perform a graph partitioning algorithm or any community detection algorithm to find clusters that optimize the objective. The optimization, however, is usually a combinatorial problem. To address this problem on large networks, efficient approximate solutions have been developed, such as Spielman & Teng (2004), Andersen, Chung & Lang (2007), and Andersen & Peres (2009). Others took a machine learning approach, proposing efficient approximations by enforcing various smoothness and regularization conditions (Avrachenkov et al., 2011; Bertozzi & Flenner, 2012).
While most community detection algorithms do not explicitly model the dynamical process that defines the interactions between vertices, the connection between conductance and unbiased random walks is quite well studied (Kannan, Vempala & Vetta, 2004; Spielman & Teng, 2004; Chung, 1997). In particular, Chung’s work on heat kernel pagerank and the Cheeger inequality, where a dynamical system is built from the normalized Laplacian, provides a theoretical framework for provably good approximations to the isoperimetric number (Chung, 2007). Intuitively, the relationship between clustering and dynamics can be captured as follows: a community is a cluster of vertices that “traps” a random walk for a long period of time before it jumps to other communities (Lovász, 1996; Shi & Malik, 2000; Rosvall & Bergstrom, 2008; Spielman & Teng, 2004). Therefore, the presence of a good cluster based on conductance implies that it will take a random walk a long time to reach its stationary distribution. Similar interplays with community structures can also be generalized to richer dynamical processes, with different time scales, biases and locality settings (Lambiotte, Delvenne & Barahona, 2008; Lambiotte et al., 2011; Jeub et al., 2015).
Parameterized Laplacian Framework
Consider a linear dynamical process of the following form: (4)$\frac{d\mathbf{\theta}}{dt}=-\mathcal{L}\mathbf{\theta},$ where $\mathbf{\theta}$ is a column vector of size n containing the values of the dynamical variable for all vertices, and $\mathcal{L}$ is a positive semi-definite matrix, the spreading operator, which defines the dynamical process.
As discussed in the introduction, we focus on dynamical processes that generalize the traditional normalized Laplacian for diffusion and random walks. Recall that the symmetric normalized Laplacian matrix of a weighted graph $G=\left(V,E,\mathit{A}\right)$ is defined as ${\mathit{D}}_{\mathit{A}}^{-1/2}\left({\mathit{D}}_{\mathit{A}}-\mathit{A}\right){\mathit{D}}_{\mathit{A}}^{-1/2}$, where ${\mathit{D}}_{\mathit{A}}$ is the diagonal matrix defined by (d_{1}, …, d_{n}). We study the properties of a dynamical process that can be further parameterized as: (5)$\mathcal{L}\left(\rho ,\mathit{T},\mathit{W}\right)={\left(\mathit{T}{\mathit{D}}_{\mathit{W}}\right)}^{-1/2-\rho}\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right){\left({\mathit{D}}_{\mathit{W}}\mathit{T}\right)}^{-1/2+\rho}.$ We call this operator, with parameters $\langle \rho ,\mathit{T},\mathit{W}\rangle$, the parameterized Laplacian and denote it by $\mathcal{L}$ in the rest of the paper. Here $\mathit{T}$ is the n × n diagonal matrix of vertex delay factors; its ith element τ_{i} represents the average delay of vertex i. We assume that the operator is properly scaled: specifically, τ_{i} ≥ 1 for all i ∈ V. Another generalization from the traditional Laplacian is the use of the interaction matrix $\mathit{W}$ instead of the adjacency matrix $\mathit{A}$. In theory, $\mathit{W}$ can be any n × n symmetric positive matrix. Note that the degree matrix ${\mathit{D}}_{\mathit{W}}$ is now also defined in terms of the interaction matrix, that is, ${d}_{\mathit{W}i}={\sum}_{j}{w}_{ij}$. While the ρ parameter can technically be any real number, in this work we limit ourselves to three special cases: ρ = 1∕2, 0, − 1∕2. These cases correspond to three equivalent linear operators with “consensus,” “symmetric” and “random walk” interpretations, respectively.
We show that by transforming the parameterized Laplacian in different ways we can express a number of different dynamical processes. We focus on three simple transformations: (a) similarity transformations, which correspond to the parameter ρ in Eq. (5); (b) scaling transformations, governed by the parameter $\mathit{T}$; and (c) the reweighing transformation, governed by $\mathit{W}$.
Similarity transformations
Changing ρ in Eq. (5) leads to different representations of the same linear operator, unifying seemingly unrelated dynamics, such as “consensus” and “random walk.” To see this, we refer to the idea of matrix similarity.
In linear algebra, similarity is an equivalence relation on square matrices. Two n × n matrices X and Y are similar if (6)$X=QY{Q}^{-1},$ where the invertible n × n matrix Q is called the change of basis matrix. Similar matrices share many key properties, including their rank, determinant and eigenvalues, and their eigenvectors are related to each other by the change of basis.
Recall that under our framework, the symmetric version of the parameterized Laplacian matrix is ${\mathcal{L}}^{SYM}={\mathit{T}}^{-1/2}{\mathit{D}}_{\mathit{W}}^{-1/2}\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right){\mathit{D}}_{\mathit{W}}^{-1/2}{\mathit{T}}^{-1/2}.$ We can rewrite the operator describing random walk dynamics as: (7)${\mathcal{L}}^{RW}=\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right){\left({\mathit{D}}_{\mathit{W}}\mathit{T}\right)}^{-1}={\left({\mathit{D}}_{\mathit{W}}\mathit{T}\right)}^{1/2}{\mathcal{L}}^{SYM}{\left({\mathit{D}}_{\mathit{W}}\mathit{T}\right)}^{-1/2}.$ Thus, the continuous time random walk with delay factors $\mathit{T}$ is similar to the symmetric operator ${\mathcal{L}}^{SYM}$. Likewise, we can rewrite the continuous time consensus dynamics under our framework as (8)${\mathcal{L}}^{CON}={\left({\mathit{D}}_{\mathit{W}}\mathit{T}\right)}^{-1}\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right)={\left({\mathit{D}}_{\mathit{W}}\mathit{T}\right)}^{-1/2}{\mathcal{L}}^{SYM}{\left({\mathit{D}}_{\mathit{W}}\mathit{T}\right)}^{1/2}={\left[{\mathcal{L}}^{RW}\right]}^{T}.$ The fact that the “consensus,” “symmetric” and “random walk” operators are similar means that they model the same dynamics on a network, provided that we observe them in a consistent basis.
The random walk Laplacian matrix provides a physical intuition for our framework. An unbiased random walk on the interaction graph $\mathit{W}$ is equivalent to a biased random walk on the original adjacency matrix $\mathit{A}$ (Lambiotte et al., 2011). On the other hand, τ_{i} specifies the mean delay time of the random walk on vertex i before a transition. This interpretation reveals the orthogonal nature of the parameters: namely $\mathit{W}$ controls the distribution of walk trajectories while $\mathit{T}$ controls the delay time of vertex transitions along each trajectory.
While we use symmetric operators for mathematical convenience in definitions and proofs and abuse the notation $\mathcal{L}={\mathcal{L}}^{SYM}$, it is often more intuitive to think from the random walk or consensus perspective. In the following subsections, we will use the random walk formulation (ρ = − 1∕2) as examples, but all results apply to arbitrary ρ values under a simple change of basis. More discussion about the similarity transformation follows after we introduce a few properties of the parameterized Laplacian.
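The three special cases of Eq. (5) can be verified to be similar, and hence isospectral, numerically. The interaction matrix and delay factors below are hypothetical, and `param_laplacian` is an illustrative helper rather than code from the paper:

```python
import numpy as np

def param_laplacian(W, tau, rho):
    """Parameterized Laplacian L(rho, T, W) of Eq. (5), with T = diag(tau)."""
    dW = W.sum(axis=1)
    s = dW * tau                       # diagonal of D_W T (diagonal matrices commute)
    M = np.diag(dW) - W
    return np.diag(s**(-0.5 - rho)) @ M @ np.diag(s**(-0.5 + rho))

# Hypothetical interaction matrix and delay factors
W = np.array([[0., 2., 1.],
              [2., 0., 1.],
              [1., 1., 0.]])
tau = np.array([1.0, 2.0, 1.5])

# rho = 1/2, 0, -1/2 give the "consensus", "symmetric" and "random walk" forms;
# similar matrices must share the same spectrum
specs = [np.sort(np.linalg.eigvals(param_laplacian(W, tau, r)).real)
         for r in (0.5, 0.0, -0.5)]
print(np.allclose(specs[0], specs[1]) and np.allclose(specs[1], specs[2]))  # True
```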
Scaling transformations
Uniform scaling.
One of the simplest transformations is uniform scaling, which is given by the diagonal matrix $\mathit{T}$ with identical entries: (9)$X=YQ=\gamma Y,$ where the scalar matrix Q can be rewritten as $\gamma \mathit{I}$ for a scalar γ. Uniform scaling preserves almost all matrix properties; in particular, it leaves the eigenvectors of the operator unchanged and simply rescales all of its eigenvalues by γ.
Intuitively, uniform scaling can be understood as rescaling time by 1∕γ. In other words, a bigger global “time delay” slows down the random walk. Uniform scaling is a useful transformation that enables the parameterized Laplacian to include arbitrary time delay factors ${\mathit{T}}^{\prime}$. The trick is to rescale ${\mathit{T}}^{\prime}$ to meet the condition τ_{i} ≥ 1 by setting $\mathit{T}={\mathit{T}}^{\prime}/\underset{i}{min}{\tau}_{i}^{\prime}$, which does not affect any other matrix properties. We will later use it to define special operators under the framework.
Non-uniform scaling.
Non-uniform scaling enables us to use the $\mathit{T}$ parameter to control the time delay at each vertex. Non-uniform scaling is written as (10)$X=YQ,$ where the diagonal matrix Q can have different entries. Unlike uniform scaling, this scaling does not preserve the matrix’s spectral properties.
Under the parameterized Laplacian framework, non-uniform scaling can be understood as rescaling the mean waiting time at each vertex i by τ_{i}. Non-uniform scaling does not affect the trajectory of the random walk: the sequence of vertices, i_{0}, i_{1}, …, i_{t}, visited by the random walk during some time interval does not depend on $\mathit{T}$. What changes with $\mathit{T}$ is only the waiting time at each vertex, i.e., the time the walk spends on the vertex before a transition.
Reweighing transformations
The last parameterization we explore is one that transforms the adjacency matrix of a graph, $\mathit{A}$, into the interaction matrix $\mathit{W}$. Given an adjacency matrix $\mathit{A}$, the choice of $\mathit{W}$ is a rather flexible design option: we can manipulate the adjacency matrix arbitrarily to encode any perceived dynamics, as long as the result is still a symmetric positive matrix.
In this paper, we limit our attention to bias transformations of the original adjacency matrix $\mathit{A}$. We call them the reweighing transformations. Whereas the scaling transformation changes the delay time at each vertex, the reweighing transformation changes the trajectory of the dynamic process. Note that this transformation also changes the degree matrix ${\mathit{D}}_{\mathit{W}}$.
As described in ‘Background and Related Work’, a biased random walk with transition probability P_{ij} ∝ b_{i}a_{ij} is equivalent to an unbiased random walk on an “interaction graph,” represented by the reweighed adjacency matrix: (11)${w}_{ij}={b}_{i}{a}_{ij}{b}_{j},$ where we constrain b_{i} > 0. This transformation allows the parameterized Laplacian to model many different types of dynamic processes by transforming them into an unbiased walk on the reweighted interaction graph.
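The equivalence in Eq. (11) is easy to verify numerically. In the sketch below (illustrative graph and an arbitrary positive bias vector `b`; NumPy assumed), columns of the transition matrices hold the distribution over next vertices, matching the convention P_{ij} ∝ b_{i}a_{ij}:

```python
import numpy as np

# Illustrative 4-vertex graph.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])
b = np.array([0.5, 1.0, 2.0, 1.5])   # arbitrary positive biases b_i > 0

W = np.outer(b, b) * A               # reweighed interaction matrix w_ij = b_i a_ij b_j
assert np.allclose(W, W.T)           # reweighing preserves symmetry

# Column-stochastic transition matrices: entry (i, j) = Pr(move to i | at j).
P_brw = (b[:, None] * A) / (b[:, None] * A).sum(axis=0)  # biased walk on A
P_urw = W / W.sum(axis=0)                                # unbiased walk on W
assert np.allclose(P_brw, P_urw)     # BRW on A == URW on the interaction graph
```

The bias factors b_{j} on the current vertex cancel when normalizing each column, which is exactly why the two walks share a trajectory distribution.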
Special cases
The simple parameterization of the Laplacian in terms of $\mathit{T}$ and $\mathit{W}$ allows us to model a variety of dynamic processes,^{5} including those described by the Laplacian and normalized Laplacian, as well as a continuous family of new operators that are not as well studied. It also contains operators for modeling some types of epidemic processes. The consideration of this family of operators is also partially motivated by recent experimental work in understanding network centrality (Ghosh & Lerman, 2011; Lerman & Ghosh, 2012).
Normalized Laplacian.
If the interaction matrix is the original adjacency matrix of the graph $\mathit{W}=\mathit{A}$, and vertex delay factor is simply the identity matrix $\mathit{T}=\mathit{I}$, then we recover the symmetric normalized Laplacian: $\mathcal{L}=\mathit{I}-{\mathit{D}}_{\mathit{A}}^{-1\u22152}\mathit{A}{\mathit{D}}_{\mathit{A}}^{-1\u22152}.$ The “random walk” and “consensus” formulations of this dynamic process correspond to the unbiased random walk and consensus processes described in ‘Background and Related Work’: ${\mathcal{L}}^{RW}=\mathit{I}-\mathit{A}{\mathit{D}}_{\mathit{A}}^{-1}$ and ${\mathcal{L}}^{CON}=\mathit{I}-{\mathit{D}}_{\mathit{A}}^{-1}\mathit{A}$.
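The three formulations are similar matrices, so they share one (real) spectrum even though only the symmetric one is a symmetric matrix. A quick numerical sketch (illustrative 4-vertex graph; NumPy assumed):

```python
import numpy as np

# Illustrative graph.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])
D  = np.diag(A.sum(axis=1))
Dm = np.diag(A.sum(axis=1) ** -0.5)   # D^{-1/2}

L_sym = np.eye(4) - Dm @ A @ Dm           # symmetric normalized Laplacian
L_rw  = np.eye(4) - A @ np.linalg.inv(D)  # "random walk" formulation
L_con = np.eye(4) - np.linalg.inv(D) @ A  # "consensus" formulation

# Similar matrices: identical eigenvalues (real, since similar to L_sym).
ev = lambda M: np.sort(np.linalg.eigvals(M).real)
assert np.allclose(ev(L_rw), np.linalg.eigvalsh(L_sym))
assert np.allclose(ev(L_con), np.linalg.eigvalsh(L_sym))
```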
(Scaled) Graph Laplacian.
When $\mathit{W}=\mathit{A}$, $\mathit{T}={d}_{max}{\mathit{D}}_{\mathit{A}}^{-1}$, the parameterized Laplacian operator corresponds to the (scaled) graph Laplacian $\mathcal{L}=\frac{1}{{d}_{max}}\left({\mathit{D}}_{\mathit{A}}-\mathit{A}\right).$ This operator is often used to describe heat diffusion processes (Chung, 2007), where $\mathcal{L}$ is replacing the continuous Laplacian operator ∇^{2}.
Notice that by setting $\mathit{T}={d}_{max}{\mathit{D}}_{\mathit{A}}^{-1}$, the diagonal matrix $\mathit{T}{\mathit{D}}_{\mathit{W}}$ becomes effectively a scalar. As a result, different similarity transformations (other values of ρ in Eq. (5)) lead to identical linear operators, meaning the “random walk” and “consensus” formulations are exactly the same as the symmetric formulation.
Replicator.
Let $\stackrel{\u20d7}{{v}_{\mathit{A}}}$ be the eigenvector of $\mathit{A}$ associated with its largest eigenvalue λ_{max}: $\mathit{A}\stackrel{\u20d7}{{v}_{\mathit{A}}}={\lambda}_{max}\stackrel{\u20d7}{{v}_{\mathit{A}}}$. We can then construct a diagonal matrix ${\mathit{V}}_{\mathit{A}}$ whose elements are the components of the eigenvector $\stackrel{\u20d7}{{v}_{\mathit{A}}}$. Let us scale the adjacency matrix according to $\mathit{W}={\mathit{V}}_{\mathit{A}}\mathit{A}{\mathit{V}}_{\mathit{A}}$ and use it as the interaction matrix. Setting the vertex delay factor to the identity, the spreading operator is: $\mathcal{L}=\mathit{I}-{\mathit{D}}_{\mathit{W}}^{-1\u22152}\mathit{W}{\mathit{D}}_{\mathit{W}}^{-1\u22152}=\mathit{I}-\frac{1}{{\lambda}_{max}}\mathit{A},$ where the entries in ${\mathit{D}}_{\mathit{W}}$ simplify as ${{d}_{\mathit{W}}}_{i}={\sum}_{j}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}}{a}_{ij}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{j}}}=\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}}{\sum}_{j}{a}_{ij}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{j}}}={\lambda}_{max}{\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}}}^{2}$. This operator is known as the replicator matrix $\mathit{R}$, and it models epidemic diffusion at the epidemic threshold on a graph (Lerman & Ghosh, 2012). It is simply the normalized Laplacian of the interaction graph ${\mathit{V}}_{\mathit{A}}\mathit{A}{\mathit{V}}_{\mathit{A}}$ (Smith et al., 2013), given by reweighting the adjacency graph $\mathit{A}$ with the eigenvector centralities of the vertices.
Using the random walk formulation, a URW on ${\mathit{V}}_{\mathit{A}}\mathit{A}{\mathit{V}}_{\mathit{A}}$ is equivalent to a maximum entropy random walk on the original graph $\mathit{A}$ (Burda et al., 2009; Lambiotte et al., 2011). Its solution is (12)${\theta}_{i}\left(t+1\right)=\sum _{j}\frac{\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}}{a}_{ij}}{{\lambda}_{max}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{j}}}}{\theta}_{j}\left(t\right).$ This means that both dynamics have exactly the same state vector θ at each time step. In particular, the stationary distributions are both ${\pi}_{i}=\frac{\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}^{2}}}{{\sum}_{i}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}^{2}}}.$
The consensus formulation of the replicator gives a maximum entropy agreement dynamics: ${\mathcal{L}}^{CON}=\mathit{I}-\frac{1}{{\lambda}_{max}}{\mathit{V}}_{\mathit{A}}^{-1}\mathit{A}{\mathit{V}}_{\mathit{A}}.$
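The identities behind the replicator can be confirmed on a small example. The sketch below (illustrative connected graph; NumPy assumed) checks that the reweighed degrees satisfy ${{d}_{\mathit{W}}}_{i}={\lambda}_{max}{v}_{i}^{2}$, that the operator equals $\mathit{I}-\mathit{A}\u2215{\lambda}_{max}$, and that the stationary distribution is proportional to the squared eigenvector centralities:

```python
import numpy as np

# Illustrative connected graph.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])

lam, vecs = np.linalg.eigh(A)
lam_max = lam[-1]
v = np.abs(vecs[:, -1])              # Perron eigenvector (positive for connected A)

W = np.outer(v, v) * A               # interaction graph V_A A V_A
dW = W.sum(axis=1)
assert np.allclose(dW, lam_max * v**2)           # d_W,i = lambda_max * v_i^2

s = 1.0 / np.sqrt(dW)
R = np.eye(4) - W * np.outer(s, s)   # I - D_W^{-1/2} W D_W^{-1/2}
assert np.allclose(R, np.eye(4) - A / lam_max)   # replicator matrix

# Stationary distribution of the maximum entropy random walk: pi_i ∝ v_i^2.
pi = dW / dW.sum()
assert np.allclose(pi, v**2 / (v**2).sum())
```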
Unbiased Laplacian.
Reweighing each edge by the inverse of the square root of the endpoint degrees gives what is known as the normalized adjacency matrix $\mathit{W}={\mathit{D}}_{\mathit{A}}^{-1\u22152}\mathit{A}{\mathit{D}}_{\mathit{A}}^{-1\u22152}$ (Chung, 1997). Then, the degree of vertex i of the reweighted graph is ${{d}_{\mathit{W}}}_{i}={\sum}_{j\in V}\mathit{W}\left[i,j\right]$. With $\mathit{T}={{d}_{\mathit{W}}}_{max}{D}_{\mathit{W}}^{-1}$ we define the unbiased Laplacian matrix: $\mathcal{L}=\frac{1}{{{d}_{\mathit{W}}}_{max}}\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right).$
The unbiased Laplacian is an example of a degree-based biased random walk with ${P}_{ij}\propto {d}_{i}^{-1\u22152}{a}_{ij}$ (‘Background and Related Work’). A URW on the reweighed adjacency matrix $\mathit{W}$ is equivalent to a BRW on the original adjacency matrix with the following dynamics (13)${\theta}_{i}\left(t+1\right)=\sum _{j}\frac{{d}_{i}^{-1\u22152}{a}_{ij}}{\sum _{k}{d}_{k}^{-1\u22152}{a}_{kj}}{\theta}_{j}\left(t\right).$
The stationary distribution for this class of BRWs is, in general, ${\pi}_{i}=\frac{{\sum}_{j}{d}_{i}^{\beta}{a}_{ij}{d}_{j}^{\beta}}{{\sum}_{jk}{d}_{j}^{\beta}{a}_{jk}{d}_{k}^{\beta}},$ with β = −1∕2 for the unbiased Laplacian.
Since the unbiased Laplacian is simply the (scaled) graph Laplacian of the normalized adjacency matrix, its diagonal matrix $\mathit{T}{\mathit{D}}_{\mathit{W}}$ is also effectively a scalar. As a result, the “random walk” and “consensus” formulations are exactly the same as the symmetric formulation.
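A short numerical sketch (illustrative graph; NumPy assumed) confirms the construction: with $\mathit{W}={\mathit{D}}_{\mathit{A}}^{-1\u22152}\mathit{A}{\mathit{D}}_{\mathit{A}}^{-1\u22152}$ and $\mathit{T}={{d}_{\mathit{W}}}_{max}{\mathit{D}}_{\mathit{W}}^{-1}$, the product $\mathit{T}{\mathit{D}}_{\mathit{W}}$ is a scalar and the symmetric formulation collapses to $\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right)\u2215{{d}_{\mathit{W}}}_{max}$:

```python
import numpy as np

# Illustrative graph.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])
d = A.sum(axis=1)

W = A / np.sqrt(np.outer(d, d))      # normalized adjacency D^{-1/2} A D^{-1/2}
dW = W.sum(axis=1)
tau = dW.max() / dW                  # T = d_W,max * D_W^{-1}
assert np.allclose(tau * dW, dW.max())           # T D_W is effectively a scalar

s = 1.0 / np.sqrt(dW * tau)
L_sym = (np.diag(dW) - W) * np.outer(s, s)       # symmetric formulation
assert np.allclose(L_sym, (np.diag(dW) - W) / dW.max())  # unbiased Laplacian
```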
These four special cases are related to each other through various transformations introduced earlier in this section, which are captured by Fig. 1.
Parameterized Centrality
Centrality is used to capture how “central” or important a vertex is in a network. In dynamical systems, a centrality measure should have the following properties: (1) it should be a per-vertex measure, with all values positive scalars; (2) it should be strongly related to that vertex’s state variable; (3) it should be independent of the initial state vector. These conditions ensure that the centrality of a vertex is determined by the topology of the network as well as the interactions taking place on it. It also follows our intuition that the importance of a vertex should not depend on the specific initialization of the dynamical process. It is sometimes desirable to define a centrality measure as a function of time (Taylor et al., 2015). In this paper, however, we stick to the more conventional notion of time-invariant centralities.
The various centrality measures introduced in the past, including degree centrality, eigenvector centrality and PageRank, have led to very different conclusions about the relative importance of vertices (Katz, 1953; Bonacich, 1972; Page et al., 1999). Our parameterized Laplacian framework unifies some of these measures by showing that they are related to solutions of different dynamic processes on the network.
Stationary distribution of a random walk
A vertex has high centrality with respect to a random walk if it is visited frequently by it. This is specified by the distribution of the dynamic process at time t: (14)$\mathbf{\theta}\left(t\right)={e}^{-{\mathcal{L}}^{RW}t}\cdot \mathbf{\theta}\left(0\right)=\sum _{k=0}^{\infty}\frac{{\left(-t\right)}^{k}}{k!}{{\mathcal{L}}^{RW}}^{k}\mathbf{\theta}\left(0\right),$ where $\mathbf{\theta}\left(0\right)$ is the state vector describing the initial distribution of the random walk. The stationary distribution of the random walk: (15)$\underset{t\to \infty}{lim}\mathbf{\theta}\left(t\right)=\mathbf{\pi}\phantom{\rule{10.00002pt}{0ex}}\text{with}\phantom{\rule{10.00002pt}{0ex}}{\pi}_{i}=\frac{{{d}_{\mathit{W}}}_{i}{\tau}_{i}}{\sum _{j}{{d}_{\mathit{W}}}_{j}{\tau}_{j}},$ because $\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right){\left(\mathit{T}{\mathit{D}}_{\mathit{W}}\right)}^{-1}\Pi =\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right)\stackrel{\u20d7}{1}=\stackrel{\u20d7}{0},$ with $\mathbf{\pi}$ being the vector with π entries and Π being the diagonal matrix with the same elements. By convention, $\mathbf{\pi}$ is the standard centrality measure in conservative processes, including random walks (Ghosh & Lerman, 2012).
If we define centrality as the stationary distribution of a random walk, the importance of a vertex can be thought of as the total time a random walk spends at the vertex in the steady state. This is proportional to both the vertex degree and the delay factor, which we will later relate to the volume measure. If ${\mathcal{L}}^{RW}$ is a normalized Laplacian, this centrality measure is exactly the heat kernel PageRank (Chung, 2009), which is identical to degree centrality since $\mathit{W}=\mathit{A}$ and $\mathit{T}=\mathit{I}$.
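Eqs. (14) and (15) can be checked by evolving the heat kernel on a small example. The sketch below (illustrative graph and arbitrary delay factors; NumPy and SciPy assumed) uses the random walk generator ${\mathcal{L}}^{RW}=\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right){\left(\mathit{T}{\mathit{D}}_{\mathit{W}}\right)}^{-1}$ implied by the stationarity identity above:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative graph with non-uniform delay factors.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])
tau = np.array([1.0, 2.0, 1.5, 1.0])
d = A.sum(axis=1)

# Random walk formulation: L^RW = (D_W - W)(T D_W)^{-1}, here with W = A.
L_rw = (np.diag(d) - A) @ np.diag(1.0 / (tau * d))

theta0 = np.array([1.0, 0.0, 0.0, 0.0])   # walk starts at vertex 0
theta = expm(-L_rw * 50.0) @ theta0       # Eq. (14) at large t

pi = d * tau / (d * tau).sum()            # Eq. (15): pi_i ∝ d_W,i * tau_i
assert np.allclose(theta, pi, atol=1e-8)
assert np.isclose(theta.sum(), 1.0)       # conservative process
```

The stationary distribution is independent of `theta0`, as required of a centrality measure.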
Stationary distribution of consensus dynamics
In consensus processes, the state vector always converges to a uniform state, where each vertex has the same value of the dynamic variable. As a result, the stationary distribution is not an appropriate measure of vertex centrality, since it deems all vertices to be equally important. However, the final consensus value associated with each vertex is (16)${\pi}_{i}=\frac{1}{\sum _{j}{{d}_{\mathit{W}}}_{j}{\tau}_{j}}\sum _{i\in \mathit{V}}{\theta}_{i}\left(0\right){{d}_{\mathit{W}}}_{i}{\tau}_{i},$ where the weight of vertex i in this average is $\frac{{{d}_{\mathit{W}}}_{i}{\tau}_{i}}{{\sum}_{j}{{d}_{\mathit{W}}}_{j}{\tau}_{j}}.$
Intuitively, as a measure of importance, it makes sense to define the centrality of a vertex in the consensus process as its contribution to the final value. This consistency between “consensus” and “random walk” leads us to define the parameterized centrality.
Parameterized centrality
As shown in ‘Similarity transformations,’ the matrices connected through a similarity transformation represent the same linear operator up to a change of basis. For example, the relationship between “consensus” and “random walk” dynamics is captured by Fig. 2.
The above equivalence applies to all state vectors at any time t, including the stationary state. To verify, we first rewrite the initial state vector in terms of the eigenvectors of $\mathcal{L}$ $\left\{\stackrel{\u20d7}{{v}_{1}},\stackrel{\u20d7}{{v}_{2}},\dots ,\stackrel{\u20d7}{{v}_{n}}\right\}$, indexed by their corresponding eigenvalues in ascending order λ_{1} < λ_{2} < ⋯, with the smallest λ_{1} as the dominant eigenvalue. (17)$\mathbf{\theta}\left(t\right)={e}^{-\mathcal{L}t}\cdot \mathbf{\theta}\left(0\right)=\sum _{k=0}^{\infty}\frac{{\left(-t\right)}^{k}}{k!}{\mathcal{L}}^{k}\mathbf{\theta}\left(0\right)=\sum _{i}\sum _{k=0}^{\infty}\frac{{\left(-t\right)}^{k}}{k!}{{\lambda}_{i}}^{k}{z}_{i}\stackrel{\u20d7}{{v}_{i}}=\sum _{i}{z}_{i}{e}^{-{\lambda}_{i}t}\stackrel{\u20d7}{{v}_{i}}=\sum _{i}{\mathit{u}}_{i}^{T}\mathbf{\theta}\left(0\right){e}^{-{\lambda}_{i}t}\stackrel{\u20d7}{{v}_{i}}=\mathbb{V}{e}^{-\Lambda t}{\mathbb{U}}^{T}\mathbf{\theta}\left(0\right),$ where in the last step we used matrices to simplify the notation, with Λ being the diagonal matrix of eigenvalues, $\mathbb{V}$ composed of $\left\{\stackrel{\u20d7}{{v}_{1}},\stackrel{\u20d7}{{v}_{2}},\dots ,\stackrel{\u20d7}{{v}_{n}}\right\}$ as columns and ${\mathbb{U}}^{T}={\mathbb{V}}^{-1}$. One interesting observation is that by left multiplying both sides with ${\mathbb{U}}^{T}$, we have ${\mathbb{U}}^{T}\mathbf{\theta}\left(t\right)={\mathbb{U}}^{T}\mathbb{V}{e}^{-\Lambda t}{\mathbb{U}}^{T}\mathbf{\theta}\left(0\right)={e}^{-\Lambda t}{\mathbb{U}}^{T}\mathbf{\theta}\left(0\right).$ Recall that ${\mathbb{U}}^{T}\mathbf{\theta}$ is a vector in the eigenbasis $\mathbb{V}$. Applying the operator $\mathcal{L}$ to any input vector simply re-scales it according to eigenvalues. 
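For a symmetric formulation (ρ = 0) the eigenbasis is orthonormal, so $\mathbb{U}=\mathbb{V}$ and Eq. (17) can be verified directly (a sketch; the graph and initial state are illustrative, NumPy and SciPy assumed):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative graph; symmetric normalized Laplacian (W = A, T = I, rho = 0).
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 1.],
              [1., 1., 0., 1.],
              [0., 1., 1., 0.]])
dm = np.diag(A.sum(axis=1) ** -0.5)
L = np.eye(4) - dm @ A @ dm

lam, V = np.linalg.eigh(L)               # orthonormal eigenbasis: U^T = V^T
theta0 = np.array([0.7, 0.1, 0.2, 0.0])  # arbitrary initial state
t = 2.0

direct   = expm(-L * t) @ theta0                         # e^{-Lt} theta(0)
spectral = V @ np.diag(np.exp(-lam * t)) @ V.T @ theta0  # V e^{-Lambda t} U^T theta(0)
assert np.allclose(direct, spectral)
```

Applying the operator in the eigenbasis reduces to elementwise exponential decay, which is the content of Eq. (17).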
Since the smallest eigenvalue of the parameterized Laplacian is always 0, we have ${\mathit{u}}_{1}^{T}\mathbf{\theta}\left(t\right)={e}^{-{\lambda}_{1}t}{\mathit{u}}_{1}^{T}\mathbf{\theta}\left(0\right)={\mathit{u}}_{1}^{T}\mathbf{\theta}\left(0\right),$ which states that the state vector is conserved along the direction of the dominant eigenvector $\stackrel{\u20d7}{{v}_{1}}$.
The state vector reaches a stationary distribution π (18)$\pi =\underset{t\to \infty}{lim}\mathbf{\theta}\left(t\right)=\underset{t\to \infty}{lim}{e}^{{\lambda}_{1}t}\mathbf{\theta}\left(t\right)={z}_{1}{\left(\frac{{e}^{{\lambda}_{1}}}{{e}^{{\lambda}_{1}}}\right)}^{t}\stackrel{\u20d7}{{v}_{1}}+{z}_{2}{\left(\frac{{e}^{{\lambda}_{1}}}{{e}^{{\lambda}_{2}}}\right)}^{t}\stackrel{\u20d7}{{v}_{2}}+\cdots +{z}_{n}{\left(\frac{{e}^{{\lambda}_{1}}}{{e}^{{\lambda}_{n}}}\right)}^{t}\stackrel{\u20d7}{{v}_{n}}\approx {z}_{1}\stackrel{\u20d7}{{v}_{1}}.$ Since all other terms vanish as t → ∞, the stationary state vector π depends only on $\stackrel{\u20d7}{{v}_{1}}$. Thus ${z}_{1}\stackrel{\u20d7}{{v}_{1}}$ qualifies as a time-invariant, initialization-independent vertex centrality measure.
Table 2 summarizes the properties of the stationary distributions and centralities associated with different similarity transformation of the parameterized Laplacian. ${\left[\mathbf{\theta}\right]}_{\rho}$ represents the vector $\mathbf{\theta}$ under the basis specified by the ρ parameter, with the random walk vector under the standard basis being $\mathbf{\theta}\left(0\right)$.
Formulations | ${\left[\mathbf{\theta}\left(0\right)\right]}_{\rho}$ | ${\mathit{u}}_{1i}$ | z_{1} | ${\stackrel{\u20d7}{v}}_{1i}$ | [π_{i}]_{ρ} |
---|---|---|---|---|---|
${\mathcal{L}}^{SYM}$ | ${\left(DT\right)}^{-1\u22152}\mathbf{\theta}\left(0\right)$ | $\frac{\sqrt{{d}_{i}{\tau}_{i}}}{\sqrt{{\sum}_{j}{d}_{j}{\tau}_{j}}}$ | $\frac{1}{\sqrt{{\sum}_{j}{d}_{j}{\tau}_{j}}}$ | $\frac{\sqrt{{d}_{i}{\tau}_{i}}}{\sqrt{{\sum}_{j}{d}_{j}{\tau}_{j}}}$ | $\frac{\sqrt{{d}_{i}{\tau}_{i}}}{{\sum}_{j}{d}_{j}{\tau}_{j}}$ |
${\mathcal{L}}^{RW}$ | $\mathbf{\theta}\left(0\right)$ | $\frac{1}{\sqrt{{\sum}_{j}{d}_{j}{\tau}_{j}}}$ | $\frac{1}{\sqrt{{\sum}_{j}{d}_{j}{\tau}_{j}}}$ | $\frac{{d}_{i}{\tau}_{i}}{\sqrt{{\sum}_{j}{d}_{j}{\tau}_{j}}}$ | $\frac{{d}_{i}{\tau}_{i}}{{\sum}_{j}{d}_{j}{\tau}_{j}}$ |
${\mathcal{L}}^{CON}$ | ${\left(DT\right)}^{-1}\mathbf{\theta}\left(0\right)$ | $\frac{{d}_{i}{\tau}_{i}}{\sqrt{{\sum}_{j}{d}_{j}{\tau}_{j}}}$ | $\frac{1}{\sqrt{{\sum}_{j}{d}_{j}{\tau}_{j}}}$ | $\frac{1}{\sqrt{{\sum}_{j}{d}_{j}{\tau}_{j}}}$ | $\frac{1}{{\sum}_{j}{d}_{j}{\tau}_{j}}$ |
The spectral theorem states that any symmetric real matrix, regardless of its rank, has an orthonormal basis $\mathbb{V}$ which consists of its eigenvectors. Under the parameterized Laplacian framework, the symmetric formulation with ρ = 0 falls into this category. In the above table, we have chosen the normalization of the orthonormal basis $\sqrt{{\sum}_{j}{d}_{j}{\tau}_{j}}$ as the common normalization for all formulations.
As the table shows, similarity transformations of the same operator give the same state vector $\mathbf{\theta}$, as long as the input and output vectors are properly transformed into the correct basis. They represent the same dynamics in different coordinate systems. Since centrality is determined by the dynamic process on a given network, it should be unified across these similarity transformations. In theory, any coordinate system can be set as the standard. Here, following the intuitions described earlier, we define the unnormalized stationary state vector of the random walk as the parameterized centrality: (19)${c}_{i}={{d}_{\mathit{W}}}_{i}{\tau}_{i}.$
Another motivation behind this definition is to establish a direct connection between centrality and community measures, as we will later demonstrate with the notion of parameterized volume (23).
Transformations and special cases
Parameterized centrality includes many well known centrality measures as special cases. Below, we summarize the induced special cases discussed in the previous subsection.
Normalized Laplacian.
$\mathit{W}=\mathit{A}$ and $\mathit{T}=\mathit{I}$, and hence the parameterized centrality reduces to degree centrality c_{i} = d_{i}.
(Scaled) Graph Laplacian.
$\mathit{W}=\mathit{A}$ and $\mathit{T}={d}_{max}{\mathit{D}}_{\mathit{A}}^{-1}$, hence the parameterized centrality measure here is uniform with c_{i} = d_{max}. This intuition is easier to see if one considers the unnormalized Laplacian as a consensus operator, as it is often used to calculate the unweighted average of vertex states (Olfati-Saber, Fax & Murray, 2007).
Replicator.
$\mathit{W}={\mathit{V}}_{\mathit{A}}\mathit{A}{\mathit{V}}_{\mathit{A}}$ and $\mathit{T}=\mathit{I}$. Recall that $\stackrel{\u20d7}{{v}_{\mathit{A}}}$ is the eigenvector of $\mathit{A}$ associated with the largest eigenvalue ${\lambda}_{\mathrm{max}}$. The parameterized centrality in this case is ${c}_{i}={\lambda}_{\mathrm{max}}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}^{2}}$, which corresponds to the stationary distribution of a maximum entropy random walk on the original graph $\mathit{A}$. Note that $\stackrel{\u20d7}{{v}_{\mathit{A}}}$, also known as the eigenvector centrality, was introduced by Bonacich (Bonacich & Lloyd, 2001) to explain the importance of actors in a social network based on the importance of the actors to which they were connected.
Unbiased Laplacian.
$\mathit{W}={\mathit{D}}_{\mathit{A}}^{-1\u22152}\mathit{A}{\mathit{D}}_{\mathit{A}}^{-1\u22152}$ and $\mathit{T}={{d}_{\mathit{W}}}_{max}{\mathit{D}}_{\mathit{W}}^{-1}$. Similar to the (scaled) graph Laplacian, the parameterized centrality measure here is uniform with ${c}_{i}={{d}_{\mathit{W}}}_{max}$.
Other transformations.
Besides the above special cases, we can use any transformation introduced in the last section for new dynamics, and the corresponding parameterized centrality will be immediately apparent. Scaling transformations change the τ_{i} terms, while reweighing transformations change ${{d}_{\mathit{W}}}_{i}$. The similarity transformation has no effect on parameterized centrality, by definition.
Parameterized Community Quality
Now we investigate the impact of dynamics on network communities. A community is a subset of vertices that interact more with each other, according to the rules of a dynamic process, than with outside vertices. A quality function measures the degree to which this interaction is confined within communities. Here, in the context of dynamical processes, we use the following considerations to constrain our choice of quality function: (1) community quality should be a global measure of interactions; (2) community quality should be invariant to initial state vectors; (3) the community quality of a subset should be strongly correlated with the change of the state variables of its member vertices.
The above conditions ensure that the quality function is solely determined by the choice of communities, the network structure and the interactions between vertices. We assume that the underlying network structure remains static as the dynamics unfolds. Similar to parameterized centralities, we focus on time-invariant communities. There is a catch, however: by simply placing each vertex into its own community, we would obtain an optimal but trivial community division. Therefore, we need an additional constraint on the size of the communities.
A closely related problem in geometry is the isoperimetric problem, which relates the circumference of a region to its area. Isoperimetric inequalities lie at the heart of the study of expander graphs in graph theory. In graphs, area translates into the size of the vertex subset, and the circumference translates into the size of their boundary (Chung, 1997). In particular, we will focus on the graph bisection (cut) problem, which restricts the number of communities to two. For bisections, the constraint on community sizes becomes a balancing problem.
Just as for centrality, various community measures used in previous literature lead to very different conclusions about community structure (Fortunato, 2010; Newman, 2006; Rosvall & Bergstrom, 2008; Zhu, Yan & Moore, 2014). In this section, we will demonstrate that for graph bisection, some of them are essentially graph isoperimetric solutions under our parameterized Laplacian framework, and more importantly, each one corresponds to a unified community measure for a class of similar operators including seemingly different formulations of “consensus,” “symmetric” and “random walk.”^{6}
Parameterized conductance
Recall that conductance is a community quality measure associated with unbiased random walks. (20)$\varphi \left(S\right)=\frac{{\mathrm{cut}}_{\mathit{A}}\left(S,\stackrel{\u0304}{S}\right)}{min\left(\mathrm{vol}\left(S\right),\mathrm{vol}\left(\stackrel{\u0304}{S}\right)\right)},$ where $\mathrm{vol}\left(S\right)={\sum}_{i\in S}{d}_{i}$ and ${\mathrm{cut}}_{\mathit{A}}={\sum}_{i\in S,j\in \stackrel{\u0304}{S}}{a}_{ij}$.
We generalize this notion with a claim that every dynamic process has an associated function that measures the quality of the cluster with respect to that process. Optimizing the quality function leads to cohesive communities, i.e., groups of vertices that “trap” the specific dynamic process for a long period of time.
Consider a dynamic process defined by the spreading operator $\mathcal{L}={\mathit{T}}^{-1\u22152}{\mathit{D}}_{\mathit{W}}^{-1\u22152}\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right){\mathit{D}}_{\mathit{W}}^{-1\u22152}{\mathit{T}}^{-1\u22152}$. We define the parameterized conductance of a set S with respect to $\mathcal{L}$ as: (21)${h}_{\mathcal{L}}\left(S\right)=\frac{{\mathrm{cut}}_{\mathit{W}}\left(S,\stackrel{\u0304}{S}\right)}{min\left({\mathrm{vol}}_{\mathcal{L}}\left(S\right),{\mathrm{vol}}_{\mathcal{L}}\left(\stackrel{\u0304}{S}\right)\right)}=\frac{\sum _{i\in S,j\in \stackrel{\u0304}{S}}{w}_{ij}}{min\left(\sum _{i\in \mathrm{S}}{{d}_{\mathit{W}}}_{i}{\tau}_{i},\sum _{i\in \stackrel{\u0304}{\mathrm{S}}}{{d}_{\mathit{W}}}_{i}{\tau}_{i}\right)}.$ The minimum over all possible S is the parameterized conductance of the graph, (22)${\varphi}_{\mathcal{L}}\left(G\right)=\underset{S\in V}{min}{h}_{\mathcal{L}}\left(S\right).$ Notice that we have also defined the parameterized volume of a set S⊆V as (23)${\mathrm{vol}}_{\mathcal{L}}\left(S\right)=\sum _{i\in S}{c}_{i}=\sum _{i\in S}{{d}_{\mathit{W}}}_{i}{\tau}_{i},$ which is the sum of parameterized centralities of member vertices. Using the random walk perspective, the numerator measures the random jumps across communities, while the denominator ensures a balanced bisection. As previously pointed out, the presence of a good cut implies that it will take a random walk a long time to cross this boundary and reach its stationary distribution. This corresponds to a small numerator. The parameterized volume can be interpreted as the total time a random walk stays within a community after convergence, as it is proportional to both vertex degrees and vertex delay factors. This interpretation of the denominator coincides with our definition of parameterized centrality (19).
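Eq. (21) is straightforward to compute. The sketch below (NumPy assumed; the graph, two triangles joined by one edge, is an illustrative choice) evaluates the parameterized conductance of a vertex set and brute-forces the graph conductance of Eq. (22); with $\mathit{W}=\mathit{A}$ and $\mathit{T}=\mathit{I}$ it reduces to the classic conductance of Eq. (20):

```python
import numpy as np
from itertools import combinations

def h(W, tau, S):
    """Parameterized conductance of vertex set S (Eq. 21)."""
    n = len(W)
    S = list(S)
    Sbar = [i for i in range(n) if i not in S]
    cut = W[np.ix_(S, Sbar)].sum()        # weight crossing the boundary
    vol = W.sum(axis=1) * tau             # parameterized centralities c_i
    return cut / min(vol[S].sum(), vol[Sbar].sum())

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge 2-3.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0

# Brute-force Eq. (22) over all proper subsets; W = A, T = I gives Eq. (20).
tau = np.ones(6)
phi = min(h(A, tau, S)
          for k in range(1, 6) for S in combinations(range(6), k))
assert np.isclose(phi, 1 / 7)   # best cut is the bridge: cut = 1, vol(S) = 7
```

Exhaustive search is only feasible for tiny graphs; the spectral results below motivate efficient approximations.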
Transformations and special cases
We can use any transformation to produce new dynamics, and the corresponding parameterized conductance will be redefined according to Eq. (21), (24)${h}_{\mathcal{L}}\left(S\right)=\frac{\sum _{i\in S,j\in \stackrel{\u0304}{S}}{w}_{ij}}{min\left(\sum _{i\in \mathrm{S}}{{d}_{\mathit{W}}}_{i}{\tau}_{i},\sum _{i\in \stackrel{\u0304}{\mathrm{S}}}{{d}_{\mathit{W}}}_{i}{\tau}_{i}\right)}.$
However, the effect of transformations on the resulting communities is not as obvious as it is for the parameterized centrality. Below, we elaborate on the effect of transformations on the parameterized conductance measure through cases and examples.
First of all, the similarity transformation keeps both the numerator and the denominator the same, which makes the quality function of the same communities identical. This ultimately leads to identical parameterized conductances, which is the minimum over all possible bisections. Uniform scaling does change the denominator. However, because all possible bisections are scaled uniformly, the relative quality measure remains the same, leading to identical optimal communities.
From the algorithmic perspective, both similarity and uniform scaling transformations preserve spectral properties of the operator. Since the spectrum is the only input information our spectral dynamics clustering Algorithm 1 uses, we always expect to get the same solution after the transformations. This is not the case with non-uniform scaling and reweighing transformations.
With non-uniform scaling, the numerator remains unchanged. Instead, the change in each vertex’s delay time rescales the volume measures in the denominator, which in turn results in different optimal bisections because of the balance constraint.
The reweighing transformation is the most complex of all, changing both the numerator and denominator in Eq. (21). This trade-off between cut and balance can oftentimes be very complicated to analyze (as will be seen with real world networks).
Finally, we summarize the induced special cases.
Normalized Laplacian.
$\mathit{W}=\mathit{A}$ and $\mathit{T}=\mathit{I}$, and hence ${h}_{\mathcal{L}}\left(S\right)$ is the conductance.
(Scaled) Graph Laplacian.
$\mathit{W}=\mathit{A}$ and $\mathit{T}={d}_{max}{\mathit{D}}_{\mathit{A}}^{-1}$, hence ${h}_{\mathcal{L}}\left(S\right)=\frac{{\mathrm{cut}}_{\mathit{A}}\left(S,\stackrel{\u0304}{S}\right)}{min\left({d}_{\mathrm{max}}\left|S\right|,{d}_{max}\left|\stackrel{\u0304}{S}\right|\right)}=\frac{1}{{d}_{\mathrm{max}}}\cdot \frac{\sum _{i\in S,j\in \stackrel{\u0304}{S}}{a}_{ij}}{min\left(\left|S\right|,\left|\stackrel{\u0304}{S}\right|\right)}.$ This is the ratio cut scaled by $1\u2215{d}_{\mathrm{max}}$.
Replicator.
$\mathit{W}={\mathit{V}}_{\mathit{A}}\mathit{A}{\mathit{V}}_{\mathit{A}}$ and $\mathit{T}=\mathit{I}$. Recall $\stackrel{\u20d7}{{v}_{\mathit{A}}}$ is the eigenvector of $\mathit{A}$ associated with the largest eigenvalue ${\lambda}_{\mathrm{max}}$. The redefined cut size is ${\sum}_{i\in S,j\in \stackrel{\u0304}{S}}{w}_{ij}={\sum}_{i\in S,j\in \stackrel{\u0304}{S}}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}}{a}_{ij}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{j}}}$. Therefore, ${h}_{\mathcal{L}}\left(S\right)=\frac{\sum _{i\in S,j\in \stackrel{\u0304}{S}}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}}{a}_{ij}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{j}}}}{{\lambda}_{\mathrm{max}}min\left(\sum _{i\in S}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}^{2}},\sum _{i\in \stackrel{\u0304}{S}}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}^{2}}\right)}.$ Since the degree of a vertex in the interaction graph $\mathit{W}$ is ${{d}_{\mathit{W}}}_{i}={\sum}_{j}{w}_{ij}={\lambda}_{\mathrm{max}}\stackrel{\u20d7}{{v}_{{\mathit{A}}_{i}}^{2}}$, the parameterized conductance of the replicator is simply the conductance of the interaction graph $\mathit{W}$ (Smith et al., 2013).
Unbiased Laplacian.
$\mathit{W}={\mathit{D}}_{\mathit{A}}^{-1\u22152}\mathit{A}{\mathit{D}}_{\mathit{A}}^{-1\u22152}$ and $\mathit{T}={{d}_{\mathit{W}}}_{max}{D}_{\mathit{W}}^{-1}$. The associated quality function is ${h}_{\mathcal{L}}\left(S\right)=\frac{1}{{{d}_{\mathit{W}}}_{max}}\cdot \frac{\sum _{i\in S,j\in \stackrel{\u0304}{S}}\frac{{a}_{ij}}{\sqrt{{d}_{i}{d}_{j}}}}{min\left(\left|S\right|,\left|\stackrel{\u0304}{S}\right|\right)}.$
Notice that the parameterized conductances for the graph Laplacian and the unbiased Laplacian share the same denominator even though they are related through both reweighing and scaling transformations. This is a result of their scaling cancelling out the reweighing effect on volumes (centralities), and it is part of the motivation behind our design of the unbiased Laplacian operator, as it allows for easier comparisons. Another simple observation is that the graph Laplacian shares the same numerator with its normalized counterpart. We will be using these relationships when analyzing the experimental results in the next section.
Parameterized Cheeger inequality
Given the parameterized conductance measure, finding the best community bisection is still a combinatorial problem, which quickly becomes computationally intractable as the network grows in size. In this subsection we will extend the theorems for the classic Laplacian to our parameterized setting, ultimately leading to efficient approximate algorithms with theoretical guarantees. For mathematical convenience we will use the symmetric formulation and assume that ρ = 0 for $\mathcal{L}$. Cheeger inequality (Cheeger, 1970) states that ${\varphi}^{2}\left(G\right)\u22152\le {\lambda}_{2}\le 2\varphi \left(G\right)$ where λ_{2} is the second smallest eigenvalue of the symmetric normalized Laplacian, $\mathcal{L}=\mathit{I}-{\mathit{D}}^{-1\u22152}{\mathit{WD}}^{-1\u22152},$ and ϕ(G) is conductance. The relationship between conductance and spectral properties of the Laplacian enables the use of its eigenvectors for partitioning graphs, particularly the nearest-neighbor graphs and finite-element meshes (Spielman & Teng, 1996).
In this section, we generalize the Cheeger inequality to any spreading operator under our framework and its associated parameterized conductance of the graph (given by Eq. (22)). Compared with classic results on Markov chain mixing times (Jerrum & Sinclair, 1988; Lawler & Sokal, 1988), we generalize the Cheeger inequality to accommodate the asynchronous delay factors in $\mathit{T}$. It also comes with algorithmic consequences, leading to spectral partitioning algorithms that are efficient in finding low conductance cuts for a given operator.
Parameterized Cheeger Inequality
Consider the dynamic process described by a (properly scaled) spreading operator $\mathcal{L}={\mathit{T}}^{-1\u22152}{\mathit{D}}_{\mathit{W}}^{-1\u22152}\left({\mathit{D}}_{\mathit{W}}-\mathit{W}\right){\mathit{D}}_{\mathit{W}}^{-1\u22152}{\mathit{T}}^{-1\u22152}$. Let λ_{1} ≤ λ_{2} ≤ ⋯ ≤ λ_{n} be the eigenvalues of $\mathcal{L}$. Then λ_{1} = 0 and λ_{2} satisfies the following inequalities: ${\varphi}_{\mathcal{L}}{\left(G\right)}^{2}\u22152\le {\lambda}_{2}\le 2{\varphi}_{\mathcal{L}}\left(G\right)$ where ${\varphi}_{\mathcal{L}}\left(G\right)$ is given by Eq. (22).
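Before the proof, a quick numerical sanity check of the theorem (a sketch; the small graph, the delay factors, and the brute-force search over all vertex subsets are illustrative choices, NumPy assumed):

```python
import numpy as np
from itertools import combinations

# Two triangles joined by the bridge edge 2-3, with non-uniform delays >= 1.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
tau = np.array([1.0, 1.0, 2.0, 2.0, 1.0, 1.0])

dW = W.sum(axis=1)
s = 1.0 / np.sqrt(dW * tau)
L = (np.diag(dW) - W) * np.outer(s, s)   # symmetric spreading operator
lam = np.linalg.eigvalsh(L)
assert abs(lam[0]) < 1e-10               # lambda_1 = 0

def h(S):
    """Parameterized conductance of vertex set S (Eq. 21)."""
    Sb = [i for i in range(6) if i not in S]
    vol = dW * tau
    return W[np.ix_(list(S), Sb)].sum() / min(vol[list(S)].sum(), vol[Sb].sum())

# Brute-force the graph conductance of Eq. (22).
phi = min(h(S) for k in range(1, 6) for S in combinations(range(6), k))
assert phi**2 / 2 <= lam[1] <= 2 * phi   # parameterized Cheeger inequality
```

Here the best cut is the bridge, and λ_{2} indeed falls between the two bounds.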
We prove the theorem by following the approach for proving the classic Cheeger inequality (see Chung, 1997).
Let (τ_{1}, …, τ_{n}) be the diagonal entries of $\mathit{T}$, and $\stackrel{\u20d7}{{v}_{1}}$ be the eigenvector associated with λ_{1}. Note that $\stackrel{\u20d7}{{v}_{1}}={\mathit{T}}^{1\u22152}{\mathit{D}}_{\mathit{W}}^{1\u22152}\cdot \stackrel{\u20d7}{1}$, where $\stackrel{\u20d7}{1}$ denotes the column vector of all 1’s, is an eigenvector of $\mathcal{L}$ associated with eigenvalue λ_{1} = 0. Let ${\mathrm{vol}}_{\mathcal{L}}\left(S\right)={\sum}_{i\in S}{d}_{i}{\tau}_{i}$ for S⊆V, where for clarity we abuse the notation d_{i} and use it as ${{d}_{\mathit{W}}}_{i}$.^{7} Suppose $\mathit{f}$ is the eigenvector associated with λ_{2}. Then, $\mathit{f}\perp \stackrel{\u20d7}{{v}_{1}}$. Consider vector $\mathit{g}$ such that $g\left[u\right]=f\left[u\right]\u2215\sqrt{{d}_{u}{\tau}_{u}}$. The fact that $\mathit{f}\perp \stackrel{\u20d7}{{v}_{1}}$ then implies ∑_{v}g[v]d_{v}τ_{v} = 0. Then, ${\lambda}_{2}=\frac{{\mathit{f}}^{T}\mathcal{L}\mathit{f}}{{\mathit{f}}^{T}\mathit{f}}=\frac{\sum _{u,v\in V}{\left(\frac{f\left[u\right]}{\sqrt{{d}_{u}{\tau}_{u}}}-\frac{f\left[v\right]}{\sqrt{{d}_{v}{\tau}_{v}}}\right)}^{2}{w}_{u,v}}{\sum _{v}f{\left[v\right]}^{2}}=\frac{\sum _{u,v\in V}{\left(g\left[u\right]-g\left[v\right]\right)}^{2}{w}_{u,v}}{\sum _{v}g{\left[v\right]}^{2}{d}_{v}{\tau}_{v}}.$ Instead of sweeping the vertices of G according to the eigenvector $\mathit{f}$ itself, we sweep the vertices of the graph G according to $\mathit{g}$ by ordering the vertices of G so that $g\left[{v}_{1}\right]\ge g\left[{v}_{2}\right]\ge \cdots \ge g\left[{v}_{n}\right]$ and consider sets S_{i} = {v_{1}, …, v_{i}} for all 1 ≤ i ≤ n.
Similar to Chung (1997), we will eventually only consider the first “half” of the sets S_{i} during the sweep: let r denote the largest integer such that ${\mathrm{vol}}_{\mathcal{L}}\left({S}_{r}\right)\le {\mathrm{vol}}_{\mathcal{L}}\left(V\right)/2$. Note that
$\sum _{v}{\left(g\left[v\right]-g\left[{v}_{r}\right]\right)}^{2}{d}_{v}{\tau}_{v}=\sum _{v}g{\left[v\right]}^{2}{d}_{v}{\tau}_{v}+g{\left[{v}_{r}\right]}^{2}\sum _{v}{d}_{v}{\tau}_{v}\ge \sum _{v}g{\left[v\right]}^{2}{d}_{v}{\tau}_{v},$
where the first equality follows from ∑_{v}g[v]d_{v}τ_{v} = 0. We denote the positive and negative parts of g − g[v_{r}] as g_{+} and g_{−} respectively:
(25)${g}_{+}\left[v\right]=\begin{cases}g\left[v\right]-g\left[{v}_{r}\right], & \text{if } g\left[v\right]\ge g\left[{v}_{r}\right],\\ 0, & \text{otherwise}.\end{cases}$
(26)${g}_{-}\left[v\right]=\begin{cases}|g\left[v\right]-g\left[{v}_{r}\right]|, & \text{if } g\left[v\right]\le g\left[{v}_{r}\right],\\ 0, & \text{otherwise}.\end{cases}$
Now
${\lambda}_{2}=\frac{\sum _{u,v\in V}{\left(g\left[u\right]-g\left[v\right]\right)}^{2}{w}_{u,v}}{\sum _{v}g{\left[v\right]}^{2}{d}_{v}{\tau}_{v}}\ge \frac{\sum _{u,v\in V}{\left({g}_{+}\left[u\right]-{g}_{+}\left[v\right]\right)}^{2}{w}_{u,v}+{\left({g}_{-}\left[u\right]-{g}_{-}\left[v\right]\right)}^{2}{w}_{u,v}}{\sum _{v}\left({g}_{+}{\left[v\right]}^{2}+{g}_{-}{\left[v\right]}^{2}\right){d}_{v}{\tau}_{v}}\ge \min\left[\frac{\sum {\left({g}_{+}\left[u\right]-{g}_{+}\left[v\right]\right)}^{2}{w}_{u,v}}{\sum _{v}{g}_{+}{\left[v\right]}^{2}{d}_{v}{\tau}_{v}},\frac{\sum {\left({g}_{-}\left[u\right]-{g}_{-}\left[v\right]\right)}^{2}{w}_{u,v}}{\sum _{v}{g}_{-}{\left[v\right]}^{2}{d}_{v}{\tau}_{v}}\right].$
Without loss of generality, we assume the first ratio is at most the second, and will mostly focus on the vertices {v_{1}, …, v_{r}} in the first “half” of the graph in the analysis below. Thus,
${\lambda}_{2}\ge \frac{\sum _{u,v}{\left({g}_{+}\left[u\right]-{g}_{+}\left[v\right]\right)}^{2}{w}_{u,v}}{\sum _{v}{g}_{+}{\left[v\right]}^{2}{d}_{v}{\tau}_{v}}\ge \frac{{\left(\sum _{u,v}\left({g}_{+}^{2}\left[u\right]-{g}_{+}^{2}\left[v\right]\right){w}_{u,v}\right)}^{2}}{\left(\sum _{v}{g}_{+}{\left[v\right]}^{2}{d}_{v}{\tau}_{v}\right)\left(\sum _{u,v}{\left({g}_{+}\left[u\right]+{g}_{+}\left[v\right]\right)}^{2}{w}_{u,v}\right)},$
which follows from the Cauchy–Schwarz inequality.
We now analyze the numerator and the denominator separately. To bound the denominator, we use the following property of τ_{i}: because $\mathcal{L}$ is properly scaled, τ_{i} ≥ 1 for all i ∈ V. Therefore,
$\sum _{u,v}{\left({g}_{+}\left[u\right]+{g}_{+}\left[v\right]\right)}^{2}{w}_{u,v}\le \sum _{u,v}2\left({g}_{+}^{2}\left[u\right]+{g}_{+}^{2}\left[v\right]\right){w}_{u,v}=2\sum _{u\in V}{g}_{+}^{2}\left[u\right]{d}_{u}\le 2\sum _{u\in V}{g}_{+}^{2}\left[u\right]{d}_{u}{\tau}_{u}.$
Hence, the denominator is at most $2{\left(\sum _{u\in V}{g}_{+}^{2}\left[u\right]{d}_{u}{\tau}_{u}\right)}^{2}$.
To bound the numerator, we consider the subsets of vertices S_{i} = {v_{1}, …, v_{i}} for all 1 ≤ i ≤ r and define S_{0} = ∅. First note that
(27)${\mathrm{vol}}_{\mathcal{L}}\left({S}_{i}\right)-{\mathrm{vol}}_{\mathcal{L}}\left({S}_{i-1}\right)={d}_{{v}_{i}}{\tau}_{{v}_{i}}.$
By the definition of ${\varphi}_{\mathcal{L}}\left(G\right)$, we know ${\varphi}_{\mathcal{L}}\left(G\right)\le {h}_{\mathcal{L}}\left({S}_{i}\right)$ for all 1 ≤ i ≤ r, where recall that the function ${h}_{\mathcal{L}}\left(S\right)$ is defined by Eq. (21). Since ${\mathrm{vol}}_{\mathcal{L}}\left({S}_{i}\right)\le {\mathrm{vol}}_{\mathcal{L}}\left(\bar{{S}_{i}}\right)$ for all 1 ≤ i ≤ r, we have
(28)$\mathrm{cut}\left({S}_{i},\bar{{S}_{i}}\right)\ge {\varphi}_{\mathcal{L}}\cdot {\mathrm{vol}}_{\mathcal{L}}\left({S}_{i}\right).$
By ordering the vertices according to (v_{1}, …, v_{n}), we can express the numerator as
$\text{Num}={\left(\sum _{u,v}\left({g}_{+}^{2}\left[u\right]-{g}_{+}^{2}\left[v\right]\right){w}_{u,v}\right)}^{2}={\left(\sum _{i<j}\left(\sum _{k=0}^{j-i-1}{g}_{+}^{2}\left[{v}_{i+k}\right]-{g}_{+}^{2}\left[{v}_{i+k+1}\right]\right){w}_{{v}_{i},{v}_{j}}\right)}^{2}$
(rewriting each difference as a telescoping series)
$={\left(\sum _{i=1}^{n-1}\left({g}_{+}^{2}\left[{v}_{i}\right]-{g}_{+}^{2}\left[{v}_{i+1}\right]\right)\cdot \mathrm{cut}\left({S}_{i},\bar{{S}_{i}}\right)\right)}^{2}$
(collecting the (v_{i}, v_{i+1}) terms)
$\ge {\left(\sum _{i=1}^{n-1}\left({g}_{+}^{2}\left[{v}_{i}\right]-{g}_{+}^{2}\left[{v}_{i+1}\right]\right)\cdot {\varphi}_{\mathcal{L}}\cdot {\mathrm{vol}}_{\mathcal{L}}\left({S}_{i}\right)\right)}^{2}$
(by Eq. (28))
$={\varphi}_{\mathcal{L}}^{2}\cdot {\left(\sum _{i=1}^{n}{g}_{+}^{2}\left[{v}_{i}\right]\cdot \left({\mathrm{vol}}_{\mathcal{L}}\left({S}_{i}\right)-{\mathrm{vol}}_{\mathcal{L}}\left({S}_{i-1}\right)\right)\right)}^{2}$
(by summation by parts and g_{+}[v_{n}] = 0)
$={\varphi}_{\mathcal{L}}{\left(G\right)}^{2}\cdot {\left(\sum _{i=1}^{n}{g}_{+}^{2}\left[{v}_{i}\right]\cdot {d}_{{v}_{i}}{\tau}_{{v}_{i}}\right)}^{2}$
(by Eq. (27)).
Combining the bounds for the numerator and the denominator, we obtain ${\lambda}_{2}\ge {\varphi}_{\mathcal{L}}{\left(G\right)}^{2}/2$, which is the lower bound stated in the theorem. The upper bound ${\lambda}_{2}\le 2{\varphi}_{\mathcal{L}}\left(G\right)$ follows from the same argument as for the standard Cheeger inequality.□
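As a sanity check, the two-sided bound can be verified numerically on a small graph by building $\mathcal{L}$ explicitly and brute-forcing the parameterized conductance over all vertex subsets. The sketch below is illustrative, not from the paper: the toy graph (two triangles joined by one edge), the delay factors, and all helper names are our own choices, with a symmetric interaction matrix, zero diagonal, and τ_{i} ≥ 1 so that the operator is properly scaled.

```python
import itertools
import numpy as np

def parameterized_laplacian(W, tau):
    """L = T^{-1/2} D_W^{-1/2} (D_W - W) D_W^{-1/2} T^{-1/2} for symmetric W."""
    d = W.sum(axis=1)                        # weighted degrees d_W
    s = 1.0 / np.sqrt(d * tau)               # diagonal of (D_W T)^{-1/2}
    return s[:, None] * (np.diag(d) - W) * s[None, :]

def parameterized_conductance(W, tau):
    """phi_L(G): brute-force min over all subsets of
    cut(S, S-bar) / min(vol_L(S), vol_L(S-bar))."""
    n = W.shape[0]
    vol = W.sum(axis=1) * tau                # vol_L weights d_v * tau_v
    best = np.inf
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            Sb = [v for v in range(n) if v not in S]
            cut = W[np.ix_(list(S), Sb)].sum()
            best = min(best, cut / min(vol[list(S)].sum(), vol[Sb].sum()))
    return best

# Toy example: two triangles joined by a single edge, heterogeneous delays.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
W = np.zeros((6, 6))
for u, v in edges:
    W[u, v] = W[v, u] = 1.0
tau = np.array([1.0, 1.5, 1.0, 1.0, 2.0, 1.0])   # all tau_i >= 1

lam = np.linalg.eigvalsh(parameterized_laplacian(W, tau))
phi = parameterized_conductance(W, tau)
assert abs(lam[0]) < 1e-10                        # lambda_1 = 0
assert phi**2 / 2 - 1e-12 <= lam[1] <= 2 * phi + 1e-12   # Cheeger sandwich
```

Exhaustive enumeration is of course only feasible on tiny graphs; its role here is purely to confirm that λ_{2} lands inside the interval the theorem predicts.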
Spectral partitioning for parameterized conductance
The parameterized Cheeger inequality is essential for providing theoretical guarantees for greedy community detection algorithms. In this section, we extend the traditional spectral clustering algorithm to the parameterized Laplacian setting.
Given a weighted graph $G=\left(V,E,A\right)$ and an operator $\mathcal{L}$, we can use the standard sweeping method from the proof of Theorem 1 to find a partition $\left(S,\bar{S}\right)$. This procedure is described in Algorithm 1.
Before stating the quality guarantee of the above algorithm, we briefly discuss its implementation and running time. The most expensive step is the computation of the eigenvector f associated with the second smallest eigenvalue of $\mathcal{L}$. One can use standard numerical methods to find an approximation of this eigenvector; the analysis then depends on the separation between the second and third eigenvalues of $\mathcal{L}$. Since $\mathcal{L}$ is a diagonally scaled normalized Laplacian matrix, one can use nearly-linear-time Laplacian solvers (e.g., those of Spielman & Teng (2004) or Koutis, Miller & Peng (2010)) to solve linear systems in $\mathcal{L}$.
Following Spielman & Teng (2004), let us consider the following notion of spectral approximation of $\mathcal{L}$: let ${\lambda}_{2}\left(\mathcal{L}\right)$ denote the second smallest eigenvalue of $\mathcal{L}$. For ε ≥ 0, $\bar{f}$ is an ε-approximate second eigenvector of $\mathcal{L}$ if $\bar{f}\perp {D}_{A}^{1/2}{T}^{1/2}\cdot \vec{1}$ and $\frac{{\bar{f}}^{T}\mathcal{L}\bar{f}}{{\bar{f}}^{T}\bar{f}}\le \left(1+\epsilon \right)\cdot {\lambda}_{2}\left(\mathcal{L}\right).$
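Any procedure that outputs a vector orthogonal to $T^{1/2}D^{1/2}\vec{1}$ with a small Rayleigh quotient meets this definition. The sketch below is a simple stand-in, not the nearly-linear-time solver cited above: it runs deflated power iteration on $2I-\mathcal{L}$, which works because the spectrum of a properly scaled $\mathcal{L}$ lies in [0, 2], so after deflating the eigenvalue-0 component the dominant eigenvector of $2I-\mathcal{L}$ is the second eigenvector of $\mathcal{L}$. The toy graph and all names are illustrative.

```python
import numpy as np

def approx_second_eigvec(L, v1, iters=1000, seed=0):
    """Deflated power iteration on M = 2I - L: approximates the eigenvector
    of symmetric L for its second smallest eigenvalue, assuming
    spec(L) in [0, 2] and L v1 = 0."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    M = 2.0 * np.eye(n) - L            # 2nd-smallest eigval of L -> 2nd-largest of M
    f = rng.standard_normal(n)
    for _ in range(iters):
        f -= (f @ v1) / (v1 @ v1) * v1 # deflate the lambda = 0 component
        f = M @ f
        f /= np.linalg.norm(f)
    f -= (f @ v1) / (v1 @ v1) * v1     # enforce exact orthogonality on exit
    return f

# Illustrative operator: two triangles joined by one edge, delays tau_i >= 1.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
W = np.zeros((6, 6))
for u, v in edges:
    W[u, v] = W[v, u] = 1.0
tau = np.array([1.0, 1.5, 1.0, 1.0, 2.0, 1.0])
d = W.sum(axis=1)
s = 1.0 / np.sqrt(d * tau)
L = s[:, None] * (np.diag(d) - W) * s[None, :]
v1 = np.sqrt(d * tau)                  # exact eigenvector for eigenvalue 0

f_bar = approx_second_eigvec(L, v1)
rq = (f_bar @ L @ f_bar) / (f_bar @ f_bar)
lam2 = np.linalg.eigvalsh(L)[1]
eps = rq / lam2 - 1.0                  # smallest eps making f_bar eps-approximate
assert rq >= lam2 - 1e-12 and eps < 1e-6
```

The final line recovers the ε realized by the iterate: since $\bar{f}\perp \vec{v}_{1}$, the Rayleigh quotient can never fall below λ_{2}, and it converges to λ_{2} geometrically in the spectral gap.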
The following proposition follows directly from the algorithm and Theorem 7.2 of Spielman & Teng (2004) (using the solver from Koutis, Miller & Peng, 2010).
Input: weighted network $G=\left(V,E,A\right)$, and spreading operator $\mathcal{L}$ defined by the interaction matrix $W$ and the vertex delay factor $T$.
Output: partition $\left(S,\bar{S}\right)$
Algorithm
∙ Find the eigenvector $f$ of $\mathcal{L}=T^{-1/2}D_{W}^{-1/2}\left(D_{W}-W\right)D_{W}^{-1/2}T^{-1/2}$ associated with the second smallest eigenvalue of $\mathcal{L}$.
∙ Let $g$ be the vector with $g\left[u\right]=f\left[u\right]/\sqrt{{{d}_{W}}_{u}{\tau}_{u}}$.
∙ Order the vertices of G into (v_{1}, …, v_{n}) such that g[v_{1}] ≥ g[v_{2}] ≥ ⋯ ≥ g[v_{n}].
∙ Sweeping: for each S_{i} = {v_{1}, …, v_{i}}, compute
${h}_{\mathcal{L}}\left({S}_{i}\right)=\frac{\mathrm{cut}\left({S}_{i},\bar{{S}_{i}}\right)}{\min\left({\mathrm{vol}}_{\mathcal{L}}\left({S}_{i}\right),{\mathrm{vol}}_{\mathcal{L}}\left({\bar{S}}_{i}\right)\right)}.$
∙ Output the S_{i} with the smallest ${h}_{\mathcal{L}}\left({S}_{i}\right)$.
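A direct NumPy rendering of Algorithm 1 might look as follows. This is a sketch under stated assumptions: the adjacency matrix is symmetric with zero diagonal, a dense eigendecomposition stands in for the fast approximate eigensolver discussed above, and the incremental cut update is an implementation convenience, not part of the algorithm's statement. Function and variable names are our own.

```python
import numpy as np

def sweep_partition(W, tau):
    """Algorithm 1: spectral sweep minimizing the parameterized conductance
    h_L(S_i). Returns (best prefix set S_i, its h_L value)."""
    n = W.shape[0]
    d = W.sum(axis=1)                  # weighted degrees d_W
    s = 1.0 / np.sqrt(d * tau)
    L = s[:, None] * (np.diag(d) - W) * s[None, :]
    f = np.linalg.eigh(L)[1][:, 1]     # eigenvector of 2nd smallest eigenvalue
    g = f * s                          # g[u] = f[u] / sqrt(d_u tau_u)
    order = np.argsort(-g)             # g[v1] >= g[v2] >= ... >= g[vn]
    vol = d * tau
    total = vol.sum()
    in_S = np.zeros(n, dtype=bool)
    cut = vol_S = 0.0
    best_h, best_i = np.inf, 0
    for i, v in enumerate(order[:-1]):  # proper prefixes S_1, ..., S_{n-1}
        # adding v to S gains its edges to the outside, loses its edges into S
        cut += W[v, ~in_S].sum() - W[v, in_S].sum()
        in_S[v] = True
        vol_S += vol[v]
        h = cut / min(vol_S, total - vol_S)
        if h < best_h:
            best_h, best_i = h, i
    return set(order[: best_i + 1].tolist()), best_h

# Two triangles joined by one edge: the sweep should recover one triangle.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
W = np.zeros((6, 6))
for u, v in edges:
    W[u, v] = W[v, u] = 1.0
S, h = sweep_partition(W, np.ones(6))   # tau = 1: normalized-Laplacian case
assert S in ({0, 1, 2}, {3, 4, 5}) and abs(h - 1 / 7) < 1e-9
```

With τ ≡ 1 this reduces to classic normalized-Laplacian sweep partitioning; the cut for one triangle has weight 1 against a side volume of 7, giving h = 1/7. Maintaining the cut incrementally keeps the sweep itself at O(|E| + n log n).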
For any interaction graph $G=\left(V,E,W\right)$ and vertex scaling factor $T$, and ε, p > 0, with probability at least 1 − p, one can compute an ε-approximate second eigenvector of the operator $\mathcal{L}$ in time $O\left(|E|\log n\log\log n\log\left(1/p\right)\log\left(1/\epsilon \right)/\epsilon \right)$.
To use this spectral approximation algorithm (and in fact any numerical approximation of the second eigenvector of $\mathcal{L}$) in our spectral partitioning algorithm for the dynamics, we will need a strengthened version of Theorem 1.
Extended Cheeger Inequality with Respect to Rayleigh Quotient
For any interaction graph $G=\left(V,E,W\right)$ and vertex scaling factor $T$ (whose diagonals are (τ_{1}, …, τ_{n})), and for any vector $u$ such that $u\perp {D}_{A}^{1/2}{T}^{1/2}\cdot \vec{1}$: if we order the vertices of G into (v_{1}, …, v_{n}) such that g[v_{1}] ≥ ⋯ ≥ g[v_{n}], where $g={\left(DT\right)}^{-1/2}\cdot u$, then
$\frac{{\left(\min_{i}{h}_{\mathcal{L}}\left({S}_{i}\right)\right)}^{2}}{2}\le \frac{{u}^{T}\mathcal{L}u}{{u}^{T}u},$
where $\mathcal{L}=T^{-1/2}D_{W}^{-1/2}\left(D_{W}-W\right)D_{W}^{-1/2}T^{-1/2}$ and S_{i} = {v_{1}, …, v_{i}}.
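This inequality holds for any admissible test vector, not just an eigenvector, which is exactly what makes approximate eigensolvers usable. The sketch below checks it for a random vector $u$ orthogonal to $T^{1/2}D^{1/2}\vec{1}$, sweeping by $g=(DT)^{-1/2}u$; the toy operator (two triangles joined by an edge) and all variable names are illustrative choices of ours.

```python
import numpy as np

# Illustrative operator: two triangles joined by one edge, delays tau_i >= 1.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
W = np.zeros((n, n))
for a, b in edges:
    W[a, b] = W[b, a] = 1.0
tau = np.array([1.0, 1.5, 1.0, 1.0, 2.0, 1.0])
d = W.sum(axis=1)
s = 1.0 / np.sqrt(d * tau)
L = s[:, None] * (np.diag(d) - W) * s[None, :]
v1 = np.sqrt(d * tau)                  # eigenvector of L for eigenvalue 0

rng = np.random.default_rng(42)
u = rng.standard_normal(n)
u -= (u @ v1) / (v1 @ v1) * v1         # enforce u ⟂ D^{1/2} T^{1/2} 1
rq = (u @ L @ u) / (u @ u)             # Rayleigh quotient of the test vector

g = u * s                              # g = (D T)^{-1/2} u
order = np.argsort(-g)                 # g[v1] >= ... >= g[vn]
vol = d * tau
in_S = np.zeros(n, dtype=bool)
cut = vol_S = 0.0
h_min = np.inf
for v in order[:-1]:                   # sweep the proper prefixes S_i
    cut += W[v, ~in_S].sum() - W[v, in_S].sum()
    in_S[v] = True
    vol_S += vol[v]
    h_min = min(h_min, cut / min(vol_S, vol.sum() - vol_S))

assert h_min ** 2 / 2 <= rq + 1e-12    # the extended Cheeger inequality
```

The better (smaller) the Rayleigh quotient of $u$, the stronger the guarantee on the conductance of the best sweep cut, which is why an ε-approximate second eigenvector loses only a (1 + ε) factor.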
The theorem follows directly from the proof of Theorem 1 if we replace the vector $f$ (the eigenvector associated with the second smallest eigenvalue of $\mathcal{L}$) by $u$. This theorem is the analog of a theorem by Mihail (1989) for Laplacian matrices.□
The next theorem, which guarantees the quality of the algorithm of this subsection, follows directly from Proposition 1, Theorem 2, and the definition of an ε-approximate second eigenvector of $\mathcal{L}$.
For any interaction graph $G=\left(V,E,W\right)$ and vertex delay factor $T$ (whose diagonals are (τ_{1}, …, τ_{n})), one can compute in time $O\left(|E|\log n\log\log n\log\left(1/\epsilon \right)/\epsilon \right)$ a partition $\left(S,\bar{S}\right)$ such that
${h}_{\mathcal{L}}\left(S\right)=\frac{\sum _{v\in S,u\in \bar{S}}{w}_{u,v}}{\min\left(\sum _{v\in S}{{d}_{W}}_{v}{\tau}_{v},\sum _{v\in \bar{S}}{{d}_{W}}_{v}{\tau}_{v}\right)}\le \sqrt{2\left(1+\epsilon \right){\lambda}_{2}\left(\mathcal{L}\right)},$
where $\mathcal{L}=T^{-1/2}D_{W}^{-1/2}\left(D_{W}-W\right)D_{W}^{-1/2}T^{-1/2}$, w_{u,v} is the (u, v)^{th} entry of the interaction matrix $W$, and ${\lambda}_{2}\left(\mathcal{L}\right)$ is the second smallest eigenvalue of $\mathcal{L}$. Consequently,
${h}_{\mathcal{L}}\left(S\right)\le 2\sqrt{\left(1+\epsilon \right){\varphi}_{\mathcal{L}}\left(G\right)}=2\sqrt{\left(1+\epsilon \right)\min_{{S}^{\ast}\subseteq V}\frac{\sum _{v\in {S}^{\ast},u\in \bar{{S}^{\ast}}}{w}_{u,v}}{\min\left(\sum _{v\in {S}^{\ast}}{{d}_{W}}_{v}{\tau}_{v},\sum _{v\in \bar{{S}^{\ast}}}{{d}_{W}}_{v}{\tau}_{v}\right)}}.$
Experiments
We demonstrate that different dynamic processes can lead to divergent views of network structure in several well-studied real-world networks. These networks come from different domains and embody a variety of dynamical processes and interactions, from real-world friendships (Zachary karate club (Zachary, 1977)), to online social networks (Facebook (McAuley & Leskovec, 2012)), to electrical power distribution (Power Grid (Watts & Strogatz, 1998)), to co-voting records (House of Representatives (Poole, 2012)) and hyperlinked weblogs on US politics (Political Blogs (Adamic & Glance, 2005)), to games played between NCAA football teams (Girvan & Newman, 2002). Table 3 lists these networks and their properties. We treat all of them as undirected networks.
Network | #vertices | #edges | Diameter | Clustering | #communities |
---|---|---|---|---|---|
Zachary’s Karate Club | 34 | 78 | 5 | 0.588 | 2 |
College Football | 115 | 613 | 4 | 0.403 | 12 |
House of Representatives | 434 | 51,033 | 4 | 0.882 | 2 |
Political Blogs | 1,490 | 16,714 | 9 | 0.21 | 2 |
Facebook Egonets | 4,039 | 88,234 | 17 | 0.303 | N/A |
Power Grid | 4,941 | 6,594 | 46 | 0.107 | N/A |
To compare the different perspectives on network structure obtained under the parameterized Laplacian framework, we study the centrality and sweep profiles calculated using the four dynamic operators defined in ‘Special Cases.’ The centrality profile gives the parameterized centrality of each vertex under a given operator. To improve visualization, vertices are ordered by their centrality according to the normalized Laplacian, and values are then rescaled to fall within the same range; thus, only relative differences in centrality are relevant. The sweep profile is similar to the community profile used by Leskovec et al. (2008) to study network partitioning. The community profile shows the conductance of the best bisection of the network into two communities of size k and N − k as k is varied. They found that community profiles of real-world networks reveal a “core and whiskers” organization, with a large core and many small peripheral communities, or whiskers, loosely connected to the core. In contrast, the sweep profile gives the parameterized conductance (Eq. (21)) of the bisection of the network into communities of size k and N − k produced by Algorithm 1, not necessarily the best bisection. To improve visualization, we rescale sweep profiles to lie within the same range.
In addition to the sweep profile, we also visualize the best bisection obtained using Algorithm 1 (which corresponds to the minimum of the sweep profile). The visualizations are created using a network layout that combines the “Yifan Hu” and “Force Atlas” algorithms from the Gephi software package (Bastian, Heymann & Jacomy, 2009). Vertices in the same partition have the same color. We also compare with the ground-truth communities, where possible, and report the accuracy of the comparison.
Zachary’s Karate Club
The first network we study is a social network consisting of 34 members of a university karate club, where undirected edges represent friendships (Zachary, 1977). This well-studied network is made up of two assortative blocks centered around the instructor and the club president, each with a high-degree hub and lower-degree peripheral vertices. With a simple community structure, this network often serves as a benchmark for community detection algorithms. The centrality and sweep profiles identified by each operator are shown in Fig. 3, along with visualizations of the best bisection of the network obtained by each operator; the last visualization gives the ground-truth communities.
As with many other community detection algorithms, the four parameterized Laplacians give almost identical optimal bisections of this simple network, all of which are close to the ground-truth communities, with accuracies ranging from 94.1% to 97.1% (Fig. 3G). Furthermore, their centrality and sweep profiles are very similar as well (Figs. 3A and 3B). This is an excellent example showing that most good community measures capture the same fundamental idea of communities: well-interacting subsets of vertices with relatively sparse connections in between. They do differ, however, in the finer details of their mathematical definitions, as we will see for more complicated networks in the following subsections.
College football
The second network represents American football games played between Division IA colleges during the regular season in Fall 2000 (Girvan & Newman, 2002), where two vertices (colleges) are linked if they played in a game. Following the structure of the divisions, the network naturally breaks up into 12 smaller conferences, roughly corresponding to the geographic locations of colleges. Most games are played within each conference which leads to densely connected local clusters. Its centrality and sweep profiles and visualizations of optimal bisections under each operator are shown in Fig. 4.
The centrality profiles show heavy-tailed distributions, which correspond to evenly spread-out degrees across the network (Fig. 4A). This is consistent with the reality of the network, where every football team plays roughly the same number of games each season.
Unlike Karate Club, College Football starts to give us different community divisions under different dynamic operators. Most operators lead to a balanced east–west bisection (Figs. 4C, 4D and 4F). This division is mostly consistent (around 95%) with the bisection produced by merging six conferences (labels 0, 1, 4, 5, 6 and 9 for the east cluster) on each side, as illustrated by the accuracy numbers. The replicator, however, places the “swing” Big Ten Conference (containing mostly colleges in the Midwest) into the east cluster (Fig. 4E). Upon further investigation, we discovered that while both bisections cut almost the same number of cross-community edges, the seemingly more balanced division does lead to a slight imbalance in terms of links within each community. The parameterized centrality under the replicator magnifies this imbalance, ultimately pushing the “swing” conference to the east side.
In fact, the sweep profile (Fig. 4B) clearly shows that all four special cases actually see both bisections as plausible solutions, with closely matched local optima. This phenomenon, where different dynamics agree on multiple local optima but favor different ones as the global solution, is a recurring theme in the following examples. It means that while different special cases of the parameterized conductance can differ in finer details, they will agree on strong community structures that impact all dynamics in similar ways. Figure 4H further illustrates the point: all four special cases agree on the first local optimum in the sweep profiles, and this local cluster corresponds to the Pacific 10 conference (which later became the Pacific 12).
House of representatives
The House of Representatives network is built from the voting records of the members of the 98th United States House of Representatives (Poole, 2012). Unlike previously studied variants (Waugh et al., 2009), here we use a special version taking into account all 908 votes. The resulting network has a dense two-party structure with 166 Republicans and 268 Democrats. This network better differentiates some of the dynamics under our framework. Its centrality and sweep profiles and visualizations of optimal bisections under each operator are given below.
The “House of Representatives” network is an excellent example of how centralities and communities are closely related under our framework. First, the centrality profile of this network looks similar to that of the College Football, but quite different from the other networks in Table 3. Because we have taken into account all votes, this network is very densely connected, and its degree distribution also has a heavy tail as demonstrated by the red curve in Fig. 5A.
Since the degree distribution is relatively uniform, we expect the change in the cut size (numerator) in Eq. (21) to be relatively small. The exception here is the optimal bisection produced by the regular Laplacian (Fig. 5D), which is most prone to “whiskers,” leading to a low accuracy of 38.5%. For the other three special cases, the volume balance (denominator) is the determining factor in the community measures, and all produce fairly “balanced” bisections according to their own parameterized volume measures.
Another observation is that the centrality measures disagree about the importance of vertices. In particular, centralities given by the normalized Laplacian differ from those of the unbiased Laplacian by the degree, but given the relatively uniform degree distribution, the two lead to almost identical optimal bisections (Figs. 5C and 5F). The replicator, on the other hand, scales vertex centrality according to eigenvector centralities, which places more volume on the high-degree vertices in the cyan cluster. The resulting optimal bisection is thus shifted to the right to balance volumes (Fig. 5E). In this case, the ground truth aligns more closely with the former, with over 90% accuracy, as Democrats dominated the 98th Congress.
Political Blogs
The next example is the political blogs network (Adamic & Glance, 2005). Here we focus on the largest component, which consists of 1,222 blogs and 19,087 links between them. The blogs have known political leanings, and were labeled as either liberal or conservative. The network is assortative and has a highly skewed degree distribution. Its centrality and sweep profiles and visualizations of optimal bisections under each special case dynamic are given below.
The Political Blogs network demonstrates a pitfall of many commonly used community quality measures. Many real-world networks have skewed degree distributions, which often correspond to a “core-whiskers” (also known as core–periphery) structure. As shown in Leskovec et al. (2008), such structures have “whisker” cuts that are so cheap that balance constraints can be effectively ignored. The same happens here for three of our special cases, whose optimal bisections are highly unbalanced; they have below 50% accuracy when compared to the ground truth.
Unlike in the House of Representatives, the community measure in Political Blogs is dominated by the cut size (numerator). In particular, the normalized Laplacian and the Laplacian share the same cut size measure and give the same solution (Figs. 6C and 6D), despite their differences in volume/centrality measures (see the curves in Fig. 6A). The unbiased Laplacian produces a different whisker cut, because it has a reweighted cut size measure (Fig. 6F). Further investigation reveals that the unbiased Laplacian cuts off a whisker from two highly connected vertices, which according to Eq. (21) greatly reduces the cut size.
The exception here is the replicator operator (Fig. 6E). By reweighting the adjacency matrix by eigenvector centralities, the parameterized volume measure now considers highly connected vertices near the core to be even more important (see the red curve in the centrality profile). The difference in parameterized volume is now too drastic to be ignored. As a result, the replicator does not fall for the “whisker” cuts and produces balanced communities with a respectable accuracy of 95.3%.
Facebook Egonets
The Facebook Egonets dataset was collected using a Facebook app (McAuley & Leskovec, 2012). Each egonet is a local network consisting of one user’s Facebook “friends,” representing that user’s social circles. We use the combined network that merges all egonets. This network has many typical social network properties, including a heavy-tailed degree distribution. However, it also differs from traditional social networks because of the sampling bias in the data collection process, leading to a lower clustering coefficient and a larger diameter than one might expect. Its centrality and sweep profiles and visualizations of optimal bisections under each special case dynamic are given below.
As with Political Blogs, the overall multi-core structure leads to unbalanced bisections. Due to the network’s larger size and an even more heterogeneous degree distribution (Fig. 7A), all four special cases of the parameterized Laplacian fall for local clusters, each in a different fashion. Again, the ordinary Laplacian finds the smallest local community, with a minimal cut size of 17 links (Fig. 7D). In contrast, the unbiased Laplacian, which has the same volume measure, finds a superset of these vertices as the optimal cut, with 40 inter-community edges (Fig. 7F). The normalized Laplacian measures cut sizes the same way as the Laplacian, but its different volume measure leads to a much more balanced cut (Fig. 7C). Last but not least, the replicator finds a local core structure with an average degree of 85.7 (Fig. 7E). This is consistent with what we observed on the House of Representatives: the eigenvector centrality places more volume in the cyan cluster, and the resulting cut is actually much more balanced than it looks.
Power grid
The last example is an undirected, unweighted network representing the topology of the western United States power grid (Watts & Strogatz, 1998). Among the six datasets in Table 3, Power Grid is the largest network in terms of the number of vertices. However, it is extremely sparse with an average degree of 2.67, leading to a homogeneous connecting pattern across the whole network without core–periphery structure. Its centrality and sweep profiles and visualizations of optimal bisections are given below.
The long tails of the centrality profiles indicate the existence of high-degree vertices, or hubs (Fig. 8A). However, as the visualizations of the network bisections show, these hubs do not usually link to each other directly, resulting in negative degree assortativity (Newman, 2003). This is consistent with the geographic constraints in designing a power grid, where the goal is to distribute power from central stations to end users. These important differences in overall structure prevent cores or whiskers from appearing, and change how the different dynamics behave on Power Grid.
The replicator, which demonstrated the most consistent performance on social networks with core–periphery structure, performs the worst at bisecting the Power Grid. In fact, the visualization shown in Fig. 8E was obtained by manually fixing negative eigenvector centrality entries in ‘Replicator’ (the numerical error comes from the extremely sparse and ill-conditioned adjacency matrix).
The other three special cases all give reasonable results. The Laplacian and the unbiased Laplacian share the same volume measure, and they produce nearly identical solutions with well-balanced communities (Figs. 8D and 8F); their different cut size measures lead only to slightly different boundaries, thanks to the homogeneous connecting pattern. The normalized Laplacian shares the same cut size measure with the regular Laplacian, and its volume balance is usually more robust on social networks with core-whisker structures. On Power Grid, however, it opts for a smaller cut size at the cost of volume imbalance (Fig. 8C). It turns out the volume of the cyan cluster is compensated by its relatively high average degree.
Conclusion
The parameterized Laplacian framework presented in this paper can describe a variety of dynamical processes taking place on a network, including not only random walks and simple epidemics but also new processes, such as the one captured by the unbiased Laplacian. We extended the relationships between centrality, community-quality measures and the properties of the Laplacian operator to this more general setting. Each dynamical process has a stationary distribution that gives the centrality of vertices with respect to that process. In addition, we showed that the parameterized conductance with respect to a dynamical process is related to the eigenvalues of the operator describing that process through a Cheeger-like inequality. We used these relationships to develop an efficient algorithm for spectral bisection.
The parameterized Laplacian framework also provides a tool for comparing different dynamical processes. By making the dynamics explicit, we gain new insights into network structure, including who the central nodes are and what communities exist in the network. By connecting the operators using standard linear transformations, we discovered an equivalence among different dynamical systems. In the future, we plan to investigate their differences based on how the vertex state variables change during the evolution of the dynamic process. In the analysis of massive networks, it is also desirable to identify subsets of vertices whose induced sub-graphs have “enough” community structure without examining the entire network. Chung (2007) and Chung (2009) derived a local version of the Cheeger-like inequality to identify random walk-based local clusters. Similarly, our framework can be adapted to such local clustering procedures.
While our framework is flexible enough to represent several important types of dynamical processes, it does not represent all possible processes, for example, those processes that even after a change of basis, do not conserve the total volume. In order to describe such dynamics, an even more general framework is needed. We speculate, however, that the more general operators will still obey the Cheeger-like inequality, and that other theorems presented in this paper can be extended to these processes.