Scientific Research

An Academic Publisher

Centrality Measures Based on Matrix Functions

**Author(s)**


1. Introduction

Since its introduction by Euler in the eighteenth century, graph theory has found important applications in many different scientific fields. Graphs and linear algebra have long been used to model social interactions, and network models are now commonplace not only in the hard sciences but also in technological, social and biological settings. Networks are used to model a variety of highly interconnected systems, both in nature and in the man-made world of technology. These networks include protein-protein interaction networks, social networks, food webs, scientific collaboration networks, metabolic networks, lexical or semantic networks, neural networks, the World Wide Web and others. Network analysis is used in various situations: determining network structure and communities, describing the interactions between the elements of the network, and investigating dynamical phenomena taking place on the network [1].

One of the foundational questions in network analysis is how to determine the “most important” nodes in a given network. Many centrality measures have been proposed, starting with the simplest of all, node degree centrality. This measure has been criticised as too “local”, since it does not take into account the connectivity of the immediate neighbours of the node under consideration. A number of centrality measures have therefore been introduced that take into account the global connectivity properties of the network. These include various types of eigenvector centrality for both directed and undirected networks, Katz centrality, subgraph centrality and PageRank centrality [1]. Centrality scores provide rankings of the nodes in a network: the higher the ranking of a node, the more important the node is believed to be within the network. There are many different ranking methods in use, and many algorithms have been developed to compute these rankings.

The purpose of this paper is to discuss some of the centrality measures, to analyse the relationship between degree centrality, eigenvector centrality and Katz centrality, and to discuss measures of centrality based on matrix functions, including the logarithmic, sine, cosine, exponential and hyperbolic functions. The main aim is to determine which of the matrix functions is most highly correlated with the “standard” centrality measures. We will use the Kendall correlation coefficient [2] in the experimental work to determine the correlations.

2. Literature Review

Bavelas [3] introduced the application of centrality to human communication networks by measuring the communication within a small group in terms of the relationship between structural centrality and influence in a group process. Afterwards, applications of centrality were studied under the direction of Bavelas at the Group Networks Laboratory, M.I.T., in the late 1940s. Leavitt in 1949 and Smith in 1950 conducted studies on centrality measures, on which Bavelas in 1950 and Bavelas and Barrett in 1951 reported. These experiments all concluded that centrality was related to group efficiency in problem-solving and agreed with the subjective perception of leadership [4].

Centrality measures were then explored in various contexts over the following decade. Cohn and Marriott in 1958 attempted to use centrality to understand political integration in Indian social life [5]. Pitts examined the consequences of centrality in communication paths for urban development [6]. Later, Czepiel used the concept of centrality to explain the pattern of diffusion of a technological innovation in the steel industry [7].

Recently, Bolland analysed the stability of degree (DC), closeness (CC), betweenness (BC) and eigenvector (EC) centrality under random and systematic variations of network structure, and found that betweenness centrality changes significantly with variations in the network structure, while degree and closeness centrality are usually stable. He also found that eigenvector centrality is the most stable of all the indices analysed [8]. Borgatti and Frantz extended the studies on the stability of centrality indices by considering the addition and deletion of nodes and links [9], as well as by differentiating several types of network topology such as uniformly random, small-world, core-periphery, scale-free and cellular. Landherr critically reviewed the role of centrality measures in social networks [10]. Estrada analysed examples of how particular centrality measures are applied in social networks [11].

Benzi and Klymko analysed centrality measures such as degree, eigenvector, Katz and subgraph centrality for both undirected and directed networks. They measured the local and global influence of a given node by the walks of different lengths passing through that node. They analysed the relationship between centrality measures based on the diagonal entries and row sums of the matrix exponential and the resolvent of the adjacency matrix on the one hand, and degree and eigenvector centrality on the other. They showed experimentally that the rankings produced by exponential subgraph centrality, total communicability and resolvent subgraph centrality converge to those produced by degree centrality [1].

Most of the centrality measures considered so far are combinatorial in nature and based on the discrete structure of the underlying network. These studies can be extended by defining centrality measures using spectral techniques from linear algebra. Benzi and Klymko considered the diagonal entries of the matrix exponential $f\left(A\right)={\text{e}}^{A}$ and the Katz (resolvent) function $f\left(A\right)={\left(I-\alpha A\right)}^{-1}$ , where $\alpha >0$ and $A$ is the adjacency matrix of the network [1]. However, none of the previous studies considered other matrix functions, such as the logarithmic, cosine, sine and hyperbolic functions or the generalized Katz centrality, as centrality measures. In this work we develop notions of centrality based on matrix functions, and we use the Kendall correlation coefficient [2] to determine the agreement between the node rankings produced by these matrix functions and those produced by the standard centrality measures.

3. Elements of Graph Theory

Graphs are discrete structures which consist of vertices connected by edges. A graph can be written as $G=\left(V,E\right)$ , where $V\left(G\right)$ is a non-empty set of vertices (also called nodes) and $E\left(G\right)$ is a set of edges. Each edge has two vertices associated with it, called its endpoints. An edge starting and ending at the same vertex is called a loop.

・ Graphs can be represented by using points as nodes (vertices) and joining them using line segments for edges. We write uv to denote an edge between nodes u and v.

・ We can assign numerical values to the edges of a graph in which case the graph is referred to as weighted. In an unweighted graph we assign to every edge the value 1.

・ If the edges of the graph are directed (Figure 1), then the graph is called a directed graph or digraph otherwise it is called an undirected graph. An undirected unweighted (Figure 2) graph without loops is simple if no two edges connect the same pair of vertices.

・ For undirected graphs, if there are multiple edges between a pair of nodes then the graph is called a multi-graph or pseudo-graph. In digraphs, two edges can connect the same pair of nodes, one in each direction.

3.1. Basic Graph-Theoretic Terminology

If $uv\in E$ is an edge in an undirected graph G, then nodes u and v are incident to the edge uv and we say that u and v are adjacent (neighbours) in G; u and v are the endpoints of the edge uv. If G is a directed graph and $uv\in E$ , then u is said to be adjacent to v and v is said to be adjacent from u. We call u the initial node of uv and v the terminal or end node of uv. For an undirected graph G, the degree $deg\left({v}_{i}\right)$ of a node ${v}_{i}$ is the total number of edges incident to ${v}_{i}$ .

The degree $deg\left({v}_{i}\right)$ is the number of “immediate neighbours” of a node ${v}_{i}$ in G. In a regular graph all nodes have the same degree. Nodes in directed graphs have an in-degree and an out-degree. The in-degree of a node ${v}_{i}$ , $de{g}^{-}\left({v}_{i}\right)$ , is the total number of edges with ${v}_{i}$ as their terminal node. Similarly,

Figure 1. Digraph.

Figure 2. Undirected graph.

the out-degree of ${v}_{i}$ , $de{g}^{+}\left({v}_{i}\right)$ , is the total number of edges with ${v}_{i}$ as their initial node. Loops contribute 1 to both the in-degree and out-degree.

A graph $H=\left(W,F\right)$ is a subgraph of $G=\left(V,E\right)$ if $W\subseteq V$ and $F\subseteq E$ . We write $H\subseteq G$ , meaning that H is contained in G (or G contains H). If H contains all edges of G that join two vertices in W, then we say that H is a subgraph induced or spanned by W.

The subgraphs of $G=\left(V,E\right)$ can be obtained by deleting edges and vertices of G. We denote by $G-W\equiv \langle V\backslash W\rangle $ with $W\subset V$ , the subgraph of G obtained by deleting the vertices in W and all the edges incident with those vertices. In a similar manner, we denote by $G-F\equiv \left(V\mathrm{,}E\backslash F\right)$ where $F\subset E$ , the subgraph of G obtained by deleting all the edges in F.

3.2. Walk, Trail and Path

A walk ${W}_{k}$ of length k from node ${v}_{0}$ to node ${v}_{k}$ is a finite sequence ${W}_{k}=\left(V,E\right)$ of the form

$V=\left\{{v}_{0}\mathrm{,}{v}_{1}\mathrm{,}\cdots \mathrm{,}{v}_{k}\right\}\mathrm{,}$

$E=\left\{\left({v}_{0},{v}_{1}\right),\left({v}_{1},{v}_{2}\right),\cdots ,\left({v}_{k-1},{v}_{k}\right)\right\}$

where ${v}_{i}$ and ${v}_{i+1}$ are adjacent.

A trail in G is a walk in which no edges of G appear more than once (a walk with all different edges). A trail which begins and ends at the same node is known as a closed trail or circuit.

A path in G is a walk in which no nodes appear more than once with the exception that ${v}_{k}$ can be equal to ${v}_{0}$ . A cycle or a closed path is a path which begins and ends at the same node.

Two nodes ${v}_{i}$ and ${v}_{j}$ in G are connected if there is a path between them. We say that graph G is connected if for every pair of nodes ${v}_{i}$ and ${v}_{j}$ there exists a path that starts at ${v}_{i}$ and ends at ${v}_{j}$ . An edge ${v}_{i}{v}_{j}\in E$ in a connected graph $G=\left(V,E\right)$ is a bridge if G becomes disconnected if ${v}_{i}{v}_{j}$ is deleted. If ${v}_{i}$ and ${v}_{j}$ are nodes of the directed graph G, then G is said to be strongly connected if for any two nodes ${v}_{i}$ and ${v}_{j}$ we can find a path from ${v}_{i}$ to ${v}_{j}$ and from ${v}_{j}$ to ${v}_{i}$ . It is weakly connected if there is a path between every two nodes in the underlying undirected graph. The undirected graph is obtained by ignoring the directions of the edges in the directed graph. All strongly connected directed graphs are also weakly connected.

The digraph in Figure 3 is strongly connected because there is a path between any two ordered vertices in the directed graph. The digraph in Figure 4 is not strongly connected, since, for example, there is no directed path from S to T, nor from V to T, but it is weakly connected.

4. Matrices in Graphs

This section discusses ways of representing graphs using matrices. There are multiple ways to do this, and any graph $G\left(V\mathrm{,}E\right)$ can be represented using adjacency, incidence or Laplacian matrices.

Figure 3. Strongly connected.

Figure 4. Weakly connected.

4.1. Matrices for Undirected Graph

Let G be an undirected graph with n vertices and m edges. The adjacency matrix of $G\left(V\mathrm{,}E\right)$ is given by $A={\left\{{a}_{ij}\right\}}_{n\times n}$ , where

${a}_{ij}=\{\begin{array}{ll}1\hfill & \text{if node }{v}_{i}\text{ is adjacent to node }{v}_{j}\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}$

The incidence matrix is given by $M={\left\{{m}_{i\mathrm{,}j}\right\}}_{n\times m}$ , where

${m}_{ij}=\{\begin{array}{ll}1\hfill & \text{when edge }{e}_{j}\text{ is incident with node }{v}_{i}\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}$

The Laplacian matrix can be found by using the relation

$L=D-A\mathrm{,}$

where $D$ is a diagonal matrix whose ith diagonal entry is the degree of the ith node and $A$ is the adjacency matrix.

In other words, $L={\left\{{l}_{ij}\right\}}_{n\times n}$ , where

${l}_{ij}=\{\begin{array}{ll}-1\hfill & \text{if node }{v}_{i}\text{ is adjacent to node }{v}_{j}\hfill \\ deg\left({v}_{i}\right)\hfill & \text{if }i=j\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}$

Figure 5 shows an undirected graph with five nodes and six edges in which there is a path from every node to every other node. The adjacency matrix $A$ , the incidence matrix $M$ and the diagonal degree matrix $D$ will be

$A=\left(\begin{array}{ccccc}0& 0& 0& 1& 1\\ 0& 0& 1& 1& 1\\ 0& 1& 0& 1& 0\\ 1& 1& 1& 0& 0\\ 1& 1& 0& 0& 0\end{array}\right)\mathrm{,}\text{\hspace{1em}}M=\left(\begin{array}{cccccc}1& 1& 0& 0& 0& 0\\ 0& 0& 1& 1& 0& 1\\ 0& 0& 0& 0& 1& 1\\ 0& 1& 0& 1& 1& 0\\ 1& 0& 1& 0& 0& 0\end{array}\right)\mathrm{,}$

Figure 5. Undirected Graph.

$D=\left(\begin{array}{ccccc}2& 0& 0& 0& 0\\ 0& 3& 0& 0& 0\\ 0& 0& 2& 0& 0\\ 0& 0& 0& 3& 0\\ 0& 0& 0& 0& 2\end{array}\right)\mathrm{.}$

The Laplacian matrix will be

$L=D-A=\left(\begin{array}{ccccc}2& 0& 0& -1& -1\\ 0& 3& -1& -1& -1\\ 0& -1& 2& -1& 0\\ -1& -1& -1& 3& 0\\ -1& -1& 0& 0& 2\end{array}\right)\mathrm{.}$
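As a quick check, the construction $L=D-A$ can be reproduced in a few lines of Python with NumPy (a sketch; the paper itself uses no code):

```python
import numpy as np

# Adjacency matrix of the undirected graph in Figure 5 (copied from above).
A = np.array([
    [0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
])

# Degree matrix D: diagonal matrix of the row sums of A.
D = np.diag(A.sum(axis=1))

# Laplacian L = D - A; note that every row of L sums to zero.
L = D - A
print(L)
```

Printing `L` reproduces the Laplacian matrix shown above.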

4.2. Matrices for Directed Graph

Let G be a directed graph with n nodes and m edges. The adjacency matrix of $G\left(V\mathrm{,}E\right)$ is given by $A={\left\{{a}_{ij}\right\}}_{n\times n}$ , where

${a}_{ij}=\{\begin{array}{ll}1\hfill & \text{if node }{v}_{i}\text{ is connected to node }{v}_{j}\text{ and the edge is directed from }{v}_{i}\text{ to }{v}_{j}\hfill \\ -1\hfill & \text{if node }{v}_{i}\text{ is connected to node }{v}_{j}\text{ and the edge is directed from }{v}_{j}\text{ to }{v}_{i}\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}$

The incidence matrix is given by $M={\left\{{m}_{i\mathrm{,}j}\right\}}_{n\times m}$ , where

${m}_{ij}=\{\begin{array}{ll}1\hfill & \text{if node }{v}_{i}\text{ is the starting node of edge }{e}_{j}\hfill \\ -1\hfill & \text{if node }{v}_{i}\text{ is the end node of edge }{e}_{j}\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}$

Figure 6 shows a digraph in which there is a path from node A to every other node of the graph, but no path from node C to any other node of the network. The adjacency matrix $A$ and the incidence matrix $M$ will be

$A=\left(\begin{array}{ccccc}0& 1& 0& 1& 0\\ -1& 0& 1& -1& 1\\ 0& -1& 0& 0& -1\\ -1& 1& 0& 0& 0\\ 0& -1& 1& 0& 0\end{array}\right)\mathrm{,}$

Figure 6. Directed Graph.

$M=\left(\begin{array}{cccccc}1& 0& 1& 0& 0& 0\\ -1& 1& 0& -1& 1& 0\\ 0& -1& 0& 0& 0& -1\\ 0& 0& -1& 1& 0& 0\\ 0& 0& 0& 0& -1& 1\end{array}\right)\mathrm{.}$

4.3. Distance in Graphs

The geodesic distance, denoted $d\left({v}_{i}\mathrm{,}{v}_{j}\right)$ , between two nodes ${v}_{i}$ and ${v}_{j}$ in the graph G is defined as the length of the shortest path between ${v}_{i}$ and ${v}_{j}$ . The diameter of the graph $G=\left(V,E\right)$ is given as

$diam\left(G\right)=\underset{{v}_{i}\mathrm{,}{v}_{j}\in V}{max}d\left({v}_{i}\mathrm{,}{v}_{j}\right)\mathrm{.}$

Geodesic distances in digraph Q in Figure 3:

$d\left(D\mathrm{,}C\right)=\mathrm{4,}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}d\left(A\mathrm{,}E\right)=3\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}d\left(D\mathrm{,}D\right)=0.$

The distance matrix of a graph, denoted as $D$ , is the square matrix ${\left\{d\left(i\mathrm{,}j\right)\right\}}_{n\times n}$ , where $d\left(i\mathrm{,}j\right)$ is the length of the shortest path from node ${v}_{i}$ to node ${v}_{j}$ .

Consider the undirected unweighted graph in Figure 7 as an example. Its distance matrix is given as

$D=\left(\begin{array}{cccccc}0& 1& 2& 2& 2& 1\\ 1& 0& 1& 1& 1& 1\\ 2& 1& 0& 1& 2& 2\\ 2& 1& 1& 0& 1& 2\\ 2& 1& 2& 1& 0& 1\\ 1& 1& 2& 2& 1& 0\end{array}\right)\mathrm{.}$
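As a sketch of how such a distance matrix can be computed in practice, the snippet below (Python with NumPy assumed) recovers the edge set of the graph in Figure 7 from the unit entries of its distance matrix and then recomputes all shortest-path lengths with the Floyd-Warshall algorithm:

```python
import numpy as np

# Distance matrix given for Figure 7. Since the graph is unweighted, its
# edges are exactly the pairs at distance 1 (an assumption used here to
# reconstruct the adjacency structure).
D_given = np.array([
    [0, 1, 2, 2, 2, 1],
    [1, 0, 1, 1, 1, 1],
    [2, 1, 0, 1, 2, 2],
    [2, 1, 1, 0, 1, 2],
    [2, 1, 2, 1, 0, 1],
    [1, 1, 2, 2, 1, 0],
])
n = D_given.shape[0]

# Initialise with the direct edges; infinity where no edge exists.
INF = float("inf")
dist = np.where(D_given == 1, 1.0, INF)
np.fill_diagonal(dist, 0.0)

# Floyd-Warshall: relax every pair (i, j) through every intermediate node k.
for k in range(n):
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

print(dist.astype(int))
```

Running this reproduces the distance matrix above, confirming that it is consistent with its own unit-distance edge set.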

4.4. Perron-Frobenius Theorem

We will state (without giving the proof) the Perron-Frobenius theorem which will be used later on in our work.

Theorem 1 (Perron-Frobenius theorem [12]). Let $A\in {\mathbb{R}}^{n\times n}$ be a non-negative irreducible matrix. Then:

・ $A$ has a principal eigenvalue ${\lambda}_{1}$ such that all other eigenvalues ${\lambda}_{i}$ , for $i=2,3,\cdots ,n$ , satisfy $\left|{\lambda}_{1}\right|\ge \left|{\lambda}_{i}\right|\mathrm{.}$

Figure 7. Unweighted-Undirected Graph.

・ The principal eigenvalue ${\lambda}_{1}$ has algebraic and geometric multiplicity 1, and has a right eigenvector $x$ with all positive elements, i.e. ${x}_{i}>0\text{\hspace{0.05em}},\text{\hspace{0.05em}}\forall i=1,2,\cdots ,n$ , and a left eigenvector $v$ with all positive elements, i.e. ${v}_{i}>0,\text{\hspace{0.05em}}\text{\hspace{0.05em}}\forall i=1,2,\cdots ,n$ .

・ Any non-negative right eigenvector is a multiple of $x$ ; any non-negative left eigenvector is a multiple of $v$ .

Furthermore, if $A$ is the adjacency matrix of a directed network with a strongly connected component, then

・ $A$ has a principal eigenvalue ${\lambda}_{1}$ such that all other eigenvalues ${\lambda}_{i}$ , for $i=2,3,\cdots ,n$ , satisfy

$\left|{\lambda}_{1}\right|\ge \left|{\lambda}_{i}\right|\mathrm{.}$

・ The principal eigenvalue ${\lambda}_{1}$ has algebraic and geometric multiplicity equal to 1, and has a left eigenvector $x$ with non-negative elements, i.e. ${x}_{i}>0$ if node i belongs to the strongly connected component of the network or the out-components of the network, and ${x}_{i}=0$ if node i belongs to the in-component of the strongly connected component of the network.

5. Centrality Measures

Centrality of a given node is a measure of the importance and influence of that node in the corresponding network. The identification of which nodes are more important or central than the others is a key issue in network analysis. We can ask the following questions:

・ Which are the most central nodes in a network?

・ Which are the most important nodes in a network?

・ Which are the most influential nodes in a network?

These types of questions can have different interpretations in different networks. For instance;

・ when dealing with a social network, the most central node can be the most popular person,

・ when dealing with a web portal network, the most central node can be a web page with the best quality of content in a specific field,

・ in terms of the internet network, the most central node might be a network gateway (router) with the highest bandwidth.

These ideas can be used to characterize different types of centrality measures for finding the most important nodes in a network in a given context; that is, there are many different centrality measures. When measuring the centrality of a node, we should be sure that:

・ we know what each centrality measure means;

・ we know what it measures well; and

・ we know why a particular centrality measure is the most appropriate for the kind of network we are investigating.

The most common centrality measures include degree centrality, betweenness centrality, eigenvector centrality, Katz centrality, PageRank centrality, closeness centrality and subgraph centrality [11][13].

5.1. Degree Centrality

Degree centrality of a node ${v}_{i}$ in a given network is given by the total degree ${d}_{i}$ of ${v}_{i}$ . The degree centrality measures the ability of a node to communicate directly with other nodes.

In an undirected network, the degree ${d}_{i}$ of a node is given as

${d}_{i}={\displaystyle \underset{j=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}{a}_{ij}={e}_{i}^{\text{T}}Ae\mathrm{,}$ (1)

where $A$ is the adjacency matrix, ${e}_{i}$ is the ith standard basis vector (ith column of the identity matrix) and $e$ is the vector of all entries one.

In a directed network, with ${a}_{ij}=1$ when there is an edge from ${v}_{i}$ to ${v}_{j}$ , we can consider the in-degree of a node, given as

${d}_{i}^{in}={\displaystyle \underset{j=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}{a}_{ji}={e}^{\text{T}}A{e}_{i}\mathrm{,}$ (2)

or the out-degree of the node, given as

${d}_{i}^{out}={\displaystyle \underset{j=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}{a}_{ij}={e}_{i}^{\text{T}}Ae\mathrm{.}$ (3)

In a directed graph, a source is a node with zero in-degree and a sink is a node with zero out-degree.

As an example, consider the undirected graph in Figure 8. We are interested in finding the central node using degree centrality.

The adjacency matrix $A$ for Network-1 in Figure 8 is given as

$A=\left(\begin{array}{cccccccccccc}0& 1& 0& 0& 0& 0& 0& 1& 1& 1& 1& 1\\ 1& 0& 1& 0& 0& 0& 1& 0& 0& 0& 0& 0\\ 0& 1& 0& 1& 0& 0& 1& 0& 0& 0& 0& 0\\ 0& 0& 1& 0& 1& 1& 1& 0& 0& 0& 0& 0\\ 0& 0& 0& 1& 0& 1& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 1& 1& 0& 1& 0& 0& 0& 0& 0\\ 0& 1& 1& 1& 0& 1& 0& 0& 0& 0& 0& 0\\ 1& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0\\ 1& 0& 0& 0& 0& 0& 0& 1& 0& 1& 1& 0\\ 1& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 1\\ 1& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0& 0\\ 1& 0& 0& 0& 0& 0& 0& 0& 0& 1& 0& 0\end{array}\right)$

The degree centralities are contained in the vector $d=Ae$ ;

$d={\left[\begin{array}{cccccccccccc}6& 3& 3& 4& 2& 3& 4& 2& 4& 3& 2& 2\end{array}\right]}^{\text{T}}$

Figure 8. Network-1.

Using the degree centrality measure, node 1 is the most central node because it has the highest degree.
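This computation is easy to verify mechanically. The sketch below (Python with NumPy; node numbering is 1-based in the text but 0-based in the code) forms $d=Ae$ for Network-1:

```python
import numpy as np

# Adjacency matrix of Network-1 (Figure 8), copied from above.
A = np.array([
    [0,1,0,0,0,0,0,1,1,1,1,1],
    [1,0,1,0,0,0,1,0,0,0,0,0],
    [0,1,0,1,0,0,1,0,0,0,0,0],
    [0,0,1,0,1,1,1,0,0,0,0,0],
    [0,0,0,1,0,1,0,0,0,0,0,0],
    [0,0,0,1,1,0,1,0,0,0,0,0],
    [0,1,1,1,0,1,0,0,0,0,0,0],
    [1,0,0,0,0,0,0,0,1,0,0,0],
    [1,0,0,0,0,0,0,1,0,1,1,0],
    [1,0,0,0,0,0,0,0,1,0,0,1],
    [1,0,0,0,0,0,0,0,1,0,0,0],
    [1,0,0,0,0,0,0,0,0,1,0,0],
])

# Degree centrality: d = A e, i.e. the row sums of A.
d = A @ np.ones(A.shape[0], dtype=int)
print(d)                    # [6 3 3 4 2 3 4 2 4 3 2 2]
print(int(d.argmax()) + 1)  # node 1 has the highest degree
```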

5.2. Closeness Centrality

Closeness centrality is based on the average shortest-path distance from a node to all other nodes. It uses the neighbours, and the neighbours of neighbours, of a node ${v}_{i}$ to determine its centrality. Thus, nodes that are not directly connected to ${v}_{i}$ are taken into consideration, as opposed to the degree centrality case.

Letting $d\left(i\mathrm{,}j\right)$ be the length of the shortest path from node i to node j, the mean distance from node i to the other nodes in a network is given by

${l}_{i}=\frac{1}{N-1}{\displaystyle \underset{j\ne i\mathrm{,}j\in V}{\sum}}d\left(i\mathrm{,}j\right)=\frac{1}{N-1}{e}_{i}^{\text{T}}De\mathrm{,}$ (4)

where N is the total number of nodes and $D$ denotes the distance matrix.

In general, we want to associate high centrality score with important nodes. So we will use the reciprocal of ${l}_{i}$ as the value of the centrality. Thus, the closeness centrality ${c}_{i}$ for a node ${v}_{i}$ is given by:

${c}_{i}={l}_{i}^{-1}=\frac{N-1}{{e}_{i}^{\text{T}}De}$ (5)

For example, consider Network-1 in Figure 8.

The distance matrix is given by $D=\left(d\left(i\mathrm{,}j\right)\right)$

$D=\left(\begin{array}{cccccccccccc}0& 1& 2& 3& 4& 3& 2& 1& 1& 1& 1& 1\\ 1& 0& 1& 2& 3& 2& 1& 2& 2& 2& 2& 2\\ 2& 1& 0& 1& 2& 2& 1& 3& 3& 3& 3& 3\\ 3& 2& 1& 0& 1& 1& 1& 4& 4& 4& 4& 4\\ 4& 3& 2& 1& 0& 1& 2& 5& 5& 5& 5& 5\\ 3& 2& 2& 1& 1& 0& 1& 4& 4& 4& 4& 4\\ 2& 1& 1& 1& 2& 1& 0& 3& 3& 3& 3& 3\\ 1& 2& 3& 4& 5& 4& 3& 0& 1& 2& 2& 2\\ 1& 2& 3& 4& 5& 4& 3& 1& 0& 1& 1& 2\\ 1& 2& 3& 4& 5& 4& 3& 2& 1& 0& 2& 1\\ 1& 2& 3& 4& 5& 4& 3& 2& 1& 2& 0& 2\\ 1& 2& 3& 4& 5& 4& 3& 2& 2& 1& 2& 0\end{array}\right)\mathrm{.}$

Then,

${l}_{i}=\frac{{e}_{i}^{\text{T}}De}{N-1}=\frac{{e}_{i}^{\text{T}}d}{N-1}\mathrm{,}$

where $d=De$ is the vector of row sums of the distance matrix,

$d={\left[\begin{array}{cccccccccccc}20& 20& 24& 29& 38& 30& 23& 29& 27& 28& 29& 29\end{array}\right]}^{\text{T}}\mathrm{.}$

Thus, ${c}_{i}=\frac{1}{{l}_{i}}$ , gives

${c}_{1}=\frac{11}{20}=0.55,\text{\hspace{0.17em}}{c}_{2}=\frac{11}{20}=0.55,\text{\hspace{0.17em}}{c}_{3}=\frac{11}{24}=0.458$

${c}_{4}=\frac{11}{29}=0.379,\text{\hspace{0.17em}}{c}_{5}=\frac{11}{38}=0.289,\text{\hspace{0.17em}}{c}_{6}=\frac{11}{30}=0.367$

${c}_{7}=\frac{11}{23}=0.478,\text{\hspace{0.17em}}{c}_{8}=\frac{11}{29}=0.379,\text{\hspace{0.17em}}{c}_{9}=\frac{11}{27}=0.407$

${c}_{10}=\frac{11}{28}=0.393,\text{\hspace{0.17em}}{c}_{11}=\frac{11}{29}=0.379,\text{\hspace{0.17em}}{c}_{12}=\frac{11}{29}=\mathrm{0.379.}$

Now, nodes 1 and 2 are identified as the most important nodes in the network.
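The closeness computation can be sketched the same way, dividing $N-1$ by the row sums of the distance matrix (Python with NumPy assumed; node indices in the code are 0-based):

```python
import numpy as np

# Distance matrix of Network-1 (Figure 8), copied from above.
D = np.array([
    [0,1,2,3,4,3,2,1,1,1,1,1],
    [1,0,1,2,3,2,1,2,2,2,2,2],
    [2,1,0,1,2,2,1,3,3,3,3,3],
    [3,2,1,0,1,1,1,4,4,4,4,4],
    [4,3,2,1,0,1,2,5,5,5,5,5],
    [3,2,2,1,1,0,1,4,4,4,4,4],
    [2,1,1,1,2,1,0,3,3,3,3,3],
    [1,2,3,4,5,4,3,0,1,2,2,2],
    [1,2,3,4,5,4,3,1,0,1,1,2],
    [1,2,3,4,5,4,3,2,1,0,2,1],
    [1,2,3,4,5,4,3,2,1,2,0,2],
    [1,2,3,4,5,4,3,2,2,1,2,0],
])
N = D.shape[0]

row_sums = D.sum(axis=1)   # e_i^T D e for every node i
c = (N - 1) / row_sums     # closeness centralities c_i = (N-1) / e_i^T D e
print(np.round(c, 3))      # nodes 1 and 2 tie at 0.55
```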

5.3. Eigenvector Centrality

In a connected undirected network we can measure centrality using eigenvector centrality. Eigenvector centrality takes into consideration the importance of the neighbours of a node. In degree centrality, a node is awarded one centrality point for each neighbour; eigenvector centrality instead gives each node a score proportional to the sum of the scores of its neighbours. In eigenvector centrality, a node is important if it is linked to other important nodes: the larger a node's entry in the eigenvector, the more important the node is considered to be.

From the Perron-Frobenius theorem, the eigenvector associated with the principal eigenvalue of the adjacency matrix $A$ is unique (up to scaling) if the network is connected. We define the centrality of a node iteratively as the sum of its neighbours’ centralities. We initially assume that every node j has centrality ${x}_{j}^{\left(0\right)}=1$ . Then we calculate a new iterate ${x}_{i}^{\left(1\right)}$ as the sum of the centralities of i’s neighbours.

That is,

${x}_{i}^{\left(1\right)}={\displaystyle \underset{j}{\sum}}\text{\hspace{0.05em}}{a}_{ij}{x}_{j}^{\left(0\right)},$ (6)

where ${a}_{ij}$ are the entries of the adjacency matrix.

In matrix form we write this as:

${x}^{\left(1\right)}=A{x}^{\left(0\right)}\mathrm{.}$ (7)

After k-steps, we have

${x}^{\left(k\right)}={A}^{k}{x}^{\left(0\right)}\mathrm{.}$ (8)

Note that the eigenvector centrality is obtained from the limit of the normalised iterates, $\underset{k\to \infty}{\mathrm{lim}}{x}^{\left(k\right)}/{\lambda}_{1}^{k}$ .

We can write ${x}^{\left(0\right)}$ as a linear combination of eigenvectors ${v}_{i}$ of the adjacency matrix $A$ , that is,

${x}^{\left(0\right)}={\displaystyle \underset{i}{\sum}}\text{\hspace{0.05em}}{c}_{i}{v}_{i}\mathrm{,}$ (9)

where the ${c}_{i}$ are constants.

Then, from Equation (8), we have

${x}^{\left(k\right)}={A}^{k}{\displaystyle \underset{i}{\sum}}\text{\hspace{0.05em}}{c}_{i}{v}_{i}={\displaystyle \underset{i}{\sum}}\text{\hspace{0.05em}}{c}_{i}{A}^{k}{v}_{i}={\displaystyle \underset{i}{\sum}}\text{\hspace{0.05em}}{c}_{i}{\lambda}_{i}^{k}{v}_{i}={\lambda}_{1}^{k}{\displaystyle \underset{i}{\sum}}\text{\hspace{0.05em}}{c}_{i}{\left(\frac{{\lambda}_{i}}{{\lambda}_{1}}\right)}^{k}{v}_{i}\mathrm{.}$ (10)

Therefore,

$\frac{{x}^{\left(k\right)}}{{\lambda}_{1}^{k}}={\displaystyle \underset{i}{\sum}}\text{\hspace{0.05em}}{c}_{i}{\left(\frac{{\lambda}_{i}}{{\lambda}_{1}}\right)}^{k}{v}_{i}\mathrm{,}$ (11)

where ${\lambda}_{i}$ is the eigenvalue associated with the eigenvector ${v}_{i}$ and ${\lambda}_{1}$ is the principal eigenvalue.

Since $\left|\frac{{\lambda}_{i}}{{\lambda}_{1}}\right|<1$ for $i=2,3,\cdots ,n$ , we have

$\underset{k\to \infty}{lim}\frac{{x}^{\left(k\right)}}{{\lambda}_{1}^{k}}=\underset{k\to \infty}{lim}{\displaystyle \underset{i}{\sum}}\text{\hspace{0.05em}}{c}_{i}{\left(\frac{{\lambda}_{i}}{{\lambda}_{1}}\right)}^{k}{v}_{i}={\displaystyle \underset{i}{\sum}}\text{\hspace{0.05em}}{c}_{i}\underset{k\to \infty}{\mathrm{lim}}{\left(\frac{{\lambda}_{i}}{{\lambda}_{1}}\right)}^{k}{v}_{i}={c}_{1}{v}_{1}$

since ${\left(\frac{{\lambda}_{i}}{{\lambda}_{1}}\right)}^{k}\to 0$ as $k\to \infty $ for all $i>1.$

This implies that the limiting centralities are proportional to the principal eigenvector ${v}_{1}$ of the adjacency matrix.

Therefore, in matrix form, the eigenvector centrality $x$ satisfies

$Ax={\lambda}_{1}x\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\Rightarrow \text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}x=\frac{1}{{\lambda}_{1}}Ax$ (12)

${x}_{i}=\frac{1}{{\lambda}_{1}}{\displaystyle \underset{j}{\sum}}\text{\hspace{0.05em}}{a}_{ij}{x}_{j}\mathrm{.}$ (13)

Note that in eigenvector centrality, the higher the centrality of the neighbours of the node, the more important the node is.

For example, consider the network in Figure 9.

The adjacency matrix for Network-2 in Figure 9, is given by

$A=\left(\begin{array}{cccccccccc}0& 1& 1& 1& 0& 0& 0& 0& 0& 0\\ 1& 0& 0& 0& 1& 0& 0& 0& 0& 0\\ 1& 0& 0& 1& 0& 0& 1& 0& 0& 0\\ 1& 0& 1& 0& 1& 0& 0& 0& 0& 0\\ 0& 1& 0& 1& 0& 1& 1& 1& 0& 0\\ 0& 0& 0& 0& 1& 0& 0& 1& 0& 0\\ 0& 0& 1& 0& 1& 0& 0& 1& 1& 1\\ 0& 0& 0& 0& 1& 1& 1& 0& 1& 0\\ 0& 0& 0& 0& 0& 0& 1& 1& 0& 1\\ 0& 0& 0& 0& 0& 0& 1& 0& 1& 0\end{array}\right)\mathrm{.}$

Figure 9. Network-2.

The eigenvalues of the adjacency matrix are

$\begin{array}{l}\rho \left(A\right)=(-2.464,-1.931,-1.543,-0.666,-0.477,\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}-0.332,0.370,1.398,2.116,3.531)\end{array}$

The principal eigenvector corresponding to the principal eigenvalue ${\lambda}_{1}=3.531$ is

$v={\left[\begin{array}{cccccccccc}0.198& 0.181& 0.261& 0.255& 0.443& 0.243& 0.468& 0.416& 0.313& 0.221\end{array}\right]}^{\text{T}}$

Since the eigenvector centralities ( $EVC$ ) correspond to the principal eigenvector, the eigenvector centralities for Network-2 will be

$EVC={\left[\begin{array}{cccccccccc}0.198& 0.181& 0.261& 0.255& 0.443& 0.243& 0.468& 0.416& 0.313& 0.221\end{array}\right]}^{\text{T}}$

Using the eigenvector centrality, we conclude that node 7 is the most important node.
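The power iteration described by Equations (7)-(8) can be sketched as follows (Python with NumPy; 200 iterations is an arbitrary but ample choice for a matrix of this size):

```python
import numpy as np

# Adjacency matrix of Network-2 (Figure 9), copied from above.
A = np.array([
    [0,1,1,1,0,0,0,0,0,0],
    [1,0,0,0,1,0,0,0,0,0],
    [1,0,0,1,0,0,1,0,0,0],
    [1,0,1,0,1,0,0,0,0,0],
    [0,1,0,1,0,1,1,1,0,0],
    [0,0,0,0,1,0,0,1,0,0],
    [0,0,1,0,1,0,0,1,1,1],
    [0,0,0,0,1,1,1,0,1,0],
    [0,0,0,0,0,0,1,1,0,1],
    [0,0,0,0,0,0,1,0,1,0],
])

# Power iteration: x^(k+1) = A x^(k), renormalised at each step so the
# iterates converge to the principal (Perron) eigenvector.
x = np.ones(A.shape[0])
for _ in range(200):
    x = A @ x
    x = x / np.linalg.norm(x)

lam = x @ A @ x                     # Rayleigh quotient: estimate of lambda_1
print(np.round(x, 3), round(lam, 3))  # node 7 (index 6) has the largest entry
```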

5.4. Katz Centrality

Katz centrality takes into consideration both the direct neighbours of a node and its further connections in the network. That is, a node is important in Katz centrality if it is well connected, directly and indirectly, to the other nodes of the network. Katz centrality takes into account all walks of arbitrary length from a node i to the other nodes in the network.

The Katz centrality $k$ is given by

$k=\left({\displaystyle \underset{j=0}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}{\alpha}^{j}{A}^{j}\right)e={\displaystyle \underset{j=0}{\overset{\infty}{\sum}}}{\left(\alpha A\right)}^{j}e\mathrm{,}$ (14)

where $e$ is the column vector of ones, $\alpha $ is called the attenuation factor and $A$ is the adjacency matrix of the network. We can expand Equation (14) as

$k={\displaystyle \underset{j=0}{\overset{\infty}{\sum}}}{\left(\alpha A\right)}^{j}e=\left(I+\alpha A+{\left(\alpha A\right)}^{2}+\cdots \right)e\mathrm{,}$ (15)

and, if the sum converges, then

$k={\left(I-\alpha A\right)}^{-1}e\mathrm{,}$ (16)

where I is $n\times n$ identity matrix and $e$ is the column vector of ones.

To ensure convergence of the series and hence a well-defined Katz centrality, the attenuation factor $\alpha $ must lie in the range $\alpha \in \left(\mathrm{0,}\frac{1}{{\lambda}_{1}}\right)$ .

For example, we compute the Katz centrality of Network-2 in Figure 9. Since the principal eigenvalue is ${\lambda}_{1}=3.531$ , we need $\alpha \in \left(\mathrm{0,}\frac{1}{3.531}\right)$ . Choosing $\alpha =0.25$ , the Katz centrality is

$k={\left(I-0.25A\right)}^{-1}e\mathrm{,}$

$\Rightarrow k={[\begin{array}{cccccccccc}5.826& 5.223& 7.086& 6.992& 11.065& 6.316& 11.526& 10.201& 7.895& 5.855\end{array}]}^{\text{T}}$

Therefore, node 7 is the most important node.
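As a sketch of this computation, Katz centrality can be obtained by solving the linear system $\left(I-\alpha A\right)k=e$ rather than forming the inverse explicitly. The adjacency matrix of Network-2 is not reproduced here, so a small 4-node path graph stands in for it:

```python
import numpy as np

# Katz centrality sketch on a stand-in 4-node path graph 0-1-2-3
# (Network-2 from Figure 9 is not reproduced here).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

lam1 = max(np.linalg.eigvalsh(A))   # principal eigenvalue
alpha = 0.5 / lam1                  # any attenuation factor in (0, 1/lam1)

n = A.shape[0]
e = np.ones(n)
# Solve (I - alpha*A) k = e instead of inverting the matrix.
k = np.linalg.solve(np.eye(n) - alpha * A, e)
ranking = np.argsort(-k)            # nodes from most to least central
```

On this path graph the two interior nodes receive the highest (and, by symmetry, equal) Katz scores.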

5.5. Subgraph Centrality

Subgraph centrality measures the centrality of a node by taking into consideration the participation of each node in all subgraphs of the network. It does this indirectly by counting the number of closed walks in the network which start and end at a given node: a relationship can be shown between subgraphs and these walks.

If $A$ is the adjacency matrix of an unweighted network, we know that

・ ${\left({A}^{k}\right)}_{ii}$ corresponds to the number of closed walks of length k starting at node ${v}_{i}$ .

・ ${\left({A}^{k}\right)}_{ij}$ corresponds to the number of walks of length k that start at node ${v}_{i}$ and end at node ${v}_{j}$ .

We define

${\mu}_{k}\left(i\right)={\left({A}^{k}\right)}_{ii}$ as the local spectral moment of node $i$ .

In a similar way to Katz centrality, subgraph centrality of a node i is a weighted sum of closed walks of different lengths which start and end at node i. The shorter the closed walk, the more the centrality of the node is influenced.

The subgraph centrality of node i in the network is given by

$SC\left(i\right)={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{\mu}_{k}\left(i\right)}{k!}={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{\left({A}^{k}\right)}_{ii}}{k!}={\left[I+\frac{A}{\mathrm{1!}}+\frac{{A}^{2}}{\mathrm{2!}}+\frac{{A}^{3}}{\mathrm{3!}}+\cdots \right]}_{ii}$ (17)

$\Rightarrow \text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}SC\left(i\right)={\left({\text{e}}^{A}\right)}_{ii}$ (18)

Considering the exponential of the adjacency matrix,

${\text{e}}^{A}=I+A+\frac{{A}^{2}}{\mathrm{2!}}+\frac{{A}^{3}}{\mathrm{3!}}+\cdots +\frac{{A}^{k}}{k\mathrm{!}}+\cdots $

we observe that the closed walks of length 2 associated with ${A}^{2}$ are counted twice for every link in the network, while the closed walks of length 3 associated with ${A}^{3}$ are counted $2\times 3=3!=6$ times for every triangle. To compensate for this over-counting, the closed walks of length 2 are penalized by $\mathrm{2!}$ and the closed walks of length 3 by $\mathrm{3!}$ . In general, any circuit of length k can be traversed in 2 directions and started at any of its k nodes, so it is counted $2k$ times. For $k\ge 4$ , the factorial penalization $k!$ in the exponential no longer equals the multiplicity $2k$ of the repeated closed walks; longer walks are simply down-weighted more heavily.

For instance, let us consider Network-2 in Figure 9 and find its subgraph centrality. We have

${\text{e}}^{A}=\left(\begin{array}{cccccccccc}3.962& 2.475& 3.393& 3.532& 2.914& 0.991& 2.345& 1.569& 0.944& 0.687\\ 2.475& 2.79& 1.858& 2.227& 3.306& 1.423& 2.188& 1.997& 1.055& 0.682\\ 3.393& 1.858& 4.283& 3.696& 3.498& 1.369& 4.021& 2.631& 2.037& 1.562\\ 3.532& 2.227& 3.696& 4.345& 4.107& 1.691& 3.297& 2.564& 1.508& 1.044\\ 2.914& 3.306& 3.498& 4.107& 7.713& 4.312& 6.554& 6.404& 3.924& 2.556\\ 0.991& 1.423& 1.369& 1.691& 4.312& 3.423& 3.448& 4.131& 2.24& 1.294\\ 2.345& 2.188& 4.021& 3.297& 6.554& 3.448& 8.37& 6.791& 5.762& 4.327\\ 1.569& 1.997& 2.631& 2.564& 6.404& 4.131& 6.791& 7.02& 4.948& 3.131\\ 0.944& 1.055& 2.037& 1.508& 3.924& 2.24& 5.762& 4.948& 5.016& 3.563\\ 0.687& 0.682& 1.562& 1.044& 2.556& 1.294& 4.327& 3.131& 3.563& 3.32\end{array}\right)\mathrm{.}$

Then the subgraph centrality will consist of the diagonal entries of ${\text{e}}^{A}$ , which is

$SC\left(i\right)={[\begin{array}{cccccccccc}3.962& 2.79& 4.283& 4.345& 7.713& 3.423& 8.37& 7.02& 5.016& 3.32\end{array}]}^{\text{T}}$

We observe that node 7 has the highest subgraph centrality, thus, node 7 is the most central node.
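The same computation can be sketched with SciPy's matrix exponential. Here a small 4-node graph of our own choosing (a triangle with a pendant node) stands in for Network-2:

```python
import numpy as np
from scipy.linalg import expm

# Subgraph centrality: diagonal of e^A. The graph below
# (triangle 0-1-2 plus pendant node 3) is a stand-in example.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

SC = np.diag(expm(A))               # subgraph centralities
most_central = int(np.argmax(SC))   # node 2: in the triangle and on the tail
```

Every diagonal entry of ${\text{e}}^{A}$ is at least 1, since the length-0 walk contributes $\left({A}^{0}\right)_{ii}=1$ and all other terms are nonnegative.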

6. Relationship between Centrality Measures

Among the challenges that arise in determining the importance of a node using centrality measures is that it is not always clear which measure should be used, and it is not obvious whether two centrality measures will give the same ranking of the nodes in a given network. The need to choose the attenuation factor $\alpha $ in Katz centrality adds a further challenge, since different choices of $\alpha $ may lead to different rankings. Experimentally, it has been observed that different centrality measures provide highly correlated rankings [1]. The ranking becomes more stable as $\alpha $ approaches its limits, i.e. as

$\alpha \to 0\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\alpha \to \frac{1}{{\lambda}_{1}}\mathrm{.}$

We will now prove this stability of the ranking, relating Katz centrality to both degree and eigenvector centrality.

Theorem 2. Let $G=\left(V\mathrm{,}E\right)$ be an undirected connected network with adjacency matrix $A$ , and let the Katz centrality $k$ be given as

$k={\left(I-\alpha A\right)}^{-1}e\mathrm{.}$

Then,

・ as $\alpha \to 0$ , the ranking produced by $k$ converges to that produced by degree centrality, and

・ as $\alpha \to \frac{1}{{\lambda}_{1}}$ , the ranking produced by $k$ converges to that produced by eigenvector centrality.

Proof. The Katz centrality $k$ is given as

$k={\left(I-\alpha A\right)}^{-1}e\mathrm{,}$ (19)

which can be written as

$\begin{array}{c}k={\left(I-\alpha A\right)}^{-1}e\\ =\left(I+\alpha A+{\left(\alpha A\right)}^{2}+{\left(\alpha A\right)}^{3}+\cdots \right)e\\ =e+\alpha Ae+{\left(\alpha A\right)}^{2}e+{\left(\alpha A\right)}^{3}e+\cdots \end{array}$ (20)

$k=e+\alpha d+{\left(\alpha A\right)}^{2}e+{\left(\alpha A\right)}^{3}e+\cdots $ (21)

where $d$ is the vector of the degree centralities of the nodes.

Consider the relation

$\psi =\frac{1}{\alpha}\left(k-e\right)\mathrm{.}$ (22)

It is clear that the ranking produced by $\psi $ will be exactly the same as that produced by $k$ , due to the fact that the score of each node has been scaled and shifted in the same way. Thus,

$\psi =\frac{1}{\alpha}\left(k-e\right)=\frac{1}{\alpha}\left(\alpha d+{\left(\alpha A\right)}^{2}e+{\left(\alpha A\right)}^{3}e+\cdots \right)$

$\psi =d+\alpha {A}^{2}e+{\alpha}^{2}{A}^{3}e+\cdots \mathrm{.}$ (23)

Then,

$\underset{\alpha \to 0}{lim}\psi =d$ (24)

where $d$ is the vector of the degree centralities of the nodes.

Therefore, the ranking produced by the Katz centrality reduces to that produced by degree centrality.

To show the second relation, we write the column vector $e$ , as

$e={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}{\beta}_{i}{v}_{i}\mathrm{,}$ (25)

where the ${\beta}_{i}$ are constants and the ${v}_{i}$ are the eigenvectors of matrix $A$ .

Then, we can write the Katz centrality as

$\begin{array}{c}k={\left(I-\alpha A\right)}^{-1}e={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}{\left(\alpha A\right)}^{k}e\\ ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}{\left(\alpha A\right)}^{k}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}{\beta}_{i}{v}_{i}={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}{\alpha}^{k}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}{\beta}_{i}{A}^{k}{v}_{i}\\ ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}{\alpha}^{k}{\displaystyle \underset{i=1}{\overset{n}{\sum}}}\text{\hspace{0.05em}}{\beta}_{i}{\lambda}_{i}^{k}{v}_{i}={\displaystyle \underset{i=1}{\overset{n}{\sum}}}\frac{1}{1-\alpha {\lambda}_{i}}{\beta}_{i}{v}_{i}\end{array}$ (26)

$\begin{array}{l}\Rightarrow k=\frac{1}{1-\alpha {\lambda}_{1}}{\beta}_{1}{v}_{1}+{\displaystyle \underset{i\mathrm{=2}}{\overset{n}{\sum}}}\frac{1}{1-\alpha {\lambda}_{i}}{\beta}_{i}{v}_{i}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}=\frac{1}{1-\alpha {\lambda}_{1}}\left({\beta}_{1}{v}_{1}+{\displaystyle \underset{i=2}{\overset{n}{\sum}}}\frac{1-\alpha {\lambda}_{1}}{1-\alpha {\lambda}_{i}}{\beta}_{i}{v}_{i}\right)\end{array}$ (27)

Consider the relation

$\varphi =\left(1-\alpha {\lambda}_{1}\right)k$ (28)

As before, the ranking produced by $\varphi $ is exactly the same as that produced by $k$ , since the score of each node has been scaled by the same positive factor $1-\alpha {\lambda}_{1}>0$ . This implies that

$\varphi \mathrm{=}{\beta}_{1}{v}_{1}+{\displaystyle \underset{i=2}{\overset{n}{\sum}}}\frac{1-\alpha {\lambda}_{1}}{1-\alpha {\lambda}_{i}}{\beta}_{i}{v}_{i}$ (29)

Then,

$\underset{\alpha \to \frac{1}{{\lambda}_{1}}}{lim}\varphi ={\beta}_{1}{v}_{1}$ (30)

This implies that the limiting centralities are proportional to the principal eigenvector ${v}_{1}$ of the adjacency matrix. Thus, the ranking produced by the Katz centrality reduces to that produced by eigenvector centrality.
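Both limits of Theorem 2 can be checked numerically on a small "lollipop" graph of our own choosing (a triangle 0-1-2 with a tail 2-3-4):

```python
import numpy as np

# Numerical check of Theorem 2 on a lollipop graph: triangle 0-1-2, tail 2-3-4.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

vals, vecs = np.linalg.eigh(A)
lam1 = vals[-1]                      # principal eigenvalue
v1 = np.abs(vecs[:, -1])             # principal (Perron) eigenvector
d = A.sum(axis=1)                    # degree vector
e = np.ones(n)

def katz(alpha):
    return np.linalg.solve(np.eye(n) - alpha * A, e)

k_small = katz(1e-6 / lam1)          # alpha near 0
k_large = katz(0.999 / lam1)         # alpha near 1/lambda_1

# near 0: degrees are non-increasing along the Katz ordering
order = np.argsort(-k_small)
assert all(d[order[t]] >= d[order[t + 1]] for t in range(n - 1))
# near 1/lambda_1: the extremes of the ranking match eigenvector centrality
assert np.argmax(k_large) == np.argmax(v1) == 2
assert np.argmin(k_large) == np.argmin(v1) == 4
```

Node 2 (the junction of triangle and tail) dominates in both limits, while the pendant node 4 is always ranked last.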

7. Matrix Functions

This section discusses some of the matrix functions developed using Taylor series.

Matrix functions have applications throughout applied mathematics and scientific computing; they are used in fields such as control theory and electromagnetism, and can also be used to study complex networks such as social networks.

Let $f\left(z\right)$ be a complex-valued function of the complex variable $z$ that is analytic in the disc $\left|z\right|<R$ , where $R\in \mathbb{R}$ . Using Taylor’s theorem, we can represent $f\left(z\right)$ as a convergent power series

$f\left(z\right)={a}_{0}+{a}_{1}z+{a}_{2}{z}^{2}+\cdots ,$ (31)

for $\left|z\right|<R$ , and ${a}_{k}\mathrm{,}k\ge 0$ are complex-valued constants [12].

Let $A\in {C}^{n\times n}$ be a complex-valued matrix. Then we define the matrix function of $A$ as

$f\left(A\right)={a}_{0}I+{a}_{1}A+{a}_{2}{A}^{2}+\cdots \mathrm{.}$ (32)

The matrix series in Equation (32) converges to the $n\times n$ matrix $f\left(A\right)$ if all ${n}^{2}$ scalar series that make up $f\left(A\right)$ are convergent. It turns out that the series for $f\left(A\right)$ converges if all eigenvalues of $A$ lie in the region of convergence of $f\left(z\right)$ in Equation (31). This is established by the following theorem.

Theorem 3. Suppose that $f\left(z\right)$ has a power series representation, written as

$f\left(z\right)={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}{a}_{k}{z}^{k}$ (33)

in an open disc $\left|z\right|<R$ . Then the series

${\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}{a}_{k}{A}^{k}$ (34)

is convergent if and only if the eigenvalues of $A$ lie in $\left|z\right|<R$ [12].

Proof. We prove this theorem only for diagonalisable matrices; the general case can be handled using the Jordan form of matrix $A$ .

Let $Q$ be a transformation matrix which diagonalizes $A$ . Then we can write

$D=diag\left({\lambda}_{1}\mathrm{,}\cdots \mathrm{,}{\lambda}_{n}\right)={Q}^{-1}AQ\mathrm{.}$ (35)

Thus,

$\begin{array}{c}f\left(A\right)=Qdiag\left(f\left({\lambda}_{1}\right)\mathrm{,}\cdots \mathrm{,}f\left({\lambda}_{n}\right)\right){Q}^{-1}\\ =Qdiag\left({\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}{a}_{k}{\lambda}_{1}^{k},\cdots ,{\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}{a}_{k}{\lambda}_{n}^{k}\right){Q}^{-1}\\ =Q\left({\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}{a}_{k}{D}^{k}\right){Q}^{-1}\\ ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}{a}_{k}{\left(QD{Q}^{-1}\right)}^{k}\\ ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}{a}_{k}{A}^{k}\mathrm{.}\end{array}$ (36)

If $A$ has an eigenvalue $\left|{\lambda}_{i}\right|\ge R$ , then the series in Equation (33) diverges when evaluated at ${\lambda}_{i}$ . It follows that the series in Equation (34) also diverges. That is, if there exist eigenvalues of matrix $A$ which fall outside $\left|z\right|<R$ , then the series in Equation (34) diverges.

Therefore $f\left(A\right)$ converges if and only if $\left|{\lambda}_{i}\right|<R$ , where ${\lambda}_{i}\mathrm{,}\text{\hspace{0.05em}}1\le i\le n$ are the eigenvalues of matrix $A$ .
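The identity $f\left(A\right)=Q\text{diag}\left(f\left({\lambda}_{1}\right)\mathrm{,}\cdots \mathrm{,}f\left({\lambda}_{n}\right)\right){Q}^{-1}$ used in the proof can be checked numerically; here we take $f=\mathrm{exp}$ on a small symmetric (hence diagonalisable) matrix and compare against a truncated Taylor series:

```python
import numpy as np

# f(A) via eigendecomposition vs a truncated Taylor series, for f = exp.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])           # symmetric, hence diagonalisable

vals, Q = np.linalg.eigh(A)          # A = Q diag(vals) Q^T
fA_eig = Q @ np.diag(np.exp(vals)) @ Q.T

fA_series = np.zeros_like(A)         # sum_{k=0}^{24} A^k / k!
term = np.eye(2)
for k in range(25):
    fA_series += term
    term = term @ A / (k + 1)

assert np.allclose(fA_eig, fA_series)
```

For this particular $A$ the result is the matrix with $\mathrm{cosh}1$ on the diagonal and $\mathrm{sinh}1$ off the diagonal.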

In general, if the function $f\left(z\right)$ can be expressed by using Taylor series and it converges in the disc $\left|z\right|<R$ which contains the eigenvalues of $A$ , then $f\left(A\right)$ can be computed by substituting the matrix $A$ for variable z in the function $f\left(z\right)$ . For instance,

$f\left(z\right)=\frac{1+z}{1-z}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\Rightarrow \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}f\left(A\right)={\left(I-A\right)}^{-1}\left(I+A\right)$

The most important matrix functions which can be expressed by using the Taylor series are the following:

・ Exponential function

$f\left(z\right)={\text{e}}^{z}=1+z+\frac{{z}^{2}}{2!}+\frac{{z}^{3}}{3!}+\cdots ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{z}^{k}}{k!}$ (37)

$\Rightarrow f\left(A\right)={\text{e}}^{A}=I+A+\frac{{A}^{2}}{\mathrm{2!}}+\frac{{A}^{3}}{\mathrm{3!}}+\cdots $

$f\left(A\right)={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{A}^{k}}{k!}$ (38)

・ Cosine function

$f\left(z\right)=\mathrm{cos}\left(z\right)=1-\frac{{z}^{2}}{2!}+\frac{{z}^{4}}{4!}+\cdots ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{\left(-1\right)}^{k}{z}^{2k}}{\left(2k\right)!}$

$\Rightarrow f\left(A\right)=cos\left(A\right)=I-\frac{{A}^{2}}{\mathrm{2!}}+\frac{{A}^{4}}{\mathrm{4!}}+\cdots $

$cos\left(A\right)={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{\left(-1\right)}^{k}{A}^{2k}}{\left(2k\right)!}$ (39)

・ Sine function

$f\left(z\right)=\mathrm{sin}\left(z\right)=z-\frac{{z}^{3}}{3!}+\frac{{z}^{5}}{5!}+\cdots ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{\left(-1\right)}^{k}{z}^{2k+1}}{\left(2k+1\right)!}$

$\Rightarrow f\left(A\right)=sin\left(A\right)=A-\frac{{A}^{3}}{\mathrm{3!}}+\frac{{A}^{5}}{\mathrm{5!}}+\cdots ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{\left(-1\right)}^{k}{A}^{2k+1}}{\left(2k+1\right)\mathrm{!}}\mathrm{.}$ (40)

・ Logarithmic function

$f\left(z\right)=log\left(1+z\right)=z-\frac{{z}^{2}}{2}+\frac{{z}^{3}}{3}-\frac{{z}^{4}}{4}+\cdots ={\displaystyle \underset{k=1}{\overset{\infty}{\sum}}}\frac{{\left(-1\right)}^{\left(k+1\right)}{z}^{k}}{k}$ (41)

$\Rightarrow f\left(A\right)=log\left(I+A\right)=A-\frac{{A}^{2}}{2}+\frac{{A}^{3}}{3}-\frac{{A}^{4}}{4}+\cdots ={\displaystyle \underset{k=1}{\overset{\infty}{\sum}}}\frac{{\left(-1\right)}^{\left(k+1\right)}{A}^{k}}{k}\mathrm{.}$ (42)

・ Hyperbolic function

1) sinh function

$f\left(z\right)=\mathrm{sinh}\left(z\right)=z+\frac{{z}^{3}}{3!}+\frac{{z}^{5}}{5!}+\cdots ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{z}^{2k+1}}{\left(2k+1\right)!}$

$\Rightarrow f\left(A\right)=sinh\left(A\right)=A+\frac{{A}^{3}}{\mathrm{3!}}+\frac{{A}^{5}}{\mathrm{5!}}+\cdots ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{A}^{2k+1}}{\left(2k+1\right)\mathrm{!}}$

2) cosh function

$f\left(z\right)=\mathrm{cosh}\left(z\right)=1+\frac{{z}^{2}}{2!}+\frac{{z}^{4}}{4!}+\cdots ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{z}^{2k}}{\left(2k\right)!}$

$\Rightarrow f\left(A\right)=cosh\left(A\right)=I+\frac{{A}^{2}}{\mathrm{2!}}+\frac{{A}^{4}}{\mathrm{4!}}+\cdots ={\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\frac{{A}^{2k}}{\left(2k\right)\mathrm{!}}\mathrm{.}$

Each of these functions can (in theory) be used to define a centrality measure on a network with adjacency matrix $A$ . For example, to obtain the centralities of all the nodes we can compute $f\left(A\right)e$ , where $e$ is the vector of ones, or $\text{diag}\left(f\left(A\right)\right)$ .

We may need to be careful with these raw centrality measures, as $f\left(A\right)$ may contain negative (or even complex) entries. For instance, when computing the logarithmic function $f\left(A\right)$ , we need to take care of the complex entries, since complex entries cannot be used to rank nodes. To avoid them, we compute $\mathrm{log}\left(\gamma I+A\right)$ , where $\gamma $ is a real constant chosen so that $\left(\gamma I+A\right)$ has positive eigenvalues. The constant $\gamma $ differs for different networks.
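A minimal sketch of this shift, assuming a triangle graph whose adjacency matrix has eigenvalues 2, −1, −1, so that any $\gamma >1$ makes $\gamma I+A$ positive definite:

```python
import numpy as np
from scipy.linalg import logm

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # triangle: eigenvalues 2, -1, -1

gamma = 1.5                               # our choice; any gamma > 1 works here
L = np.real(logm(gamma * np.eye(3) + A))  # real, since all shifted eigenvalues > 0
centrality = np.diag(L)                   # log-based centrality scores
```

By symmetry all three nodes get the same score here; on an asymmetric network the diagonal entries differ and can be used for ranking.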

We can also define centrality measures by applying analytic continuations of $f\left(z\right)$ outside its radius of convergence.

Recall that, if the attenuation factor $\alpha <\frac{1}{{\lambda}_{1}}$ , where ${\lambda}_{1}$ is the principal eigenvalue of $A$ , then

${\displaystyle \underset{k=0}{\overset{\infty}{\sum}}}\text{\hspace{0.05em}}{\alpha}^{k}{A}^{k}={\left(I-\alpha A\right)}^{-1}\mathrm{.}$

But

$f\left(A\right)={\left(I-\alpha A\right)}^{-1}$

is also defined for $\left|\alpha \right|\ge \frac{1}{{\lambda}_{1}}$ as long as $\alpha \ne \frac{1}{{\lambda}_{i}}$ where ${\lambda}_{i}$ is an eigenvalue of $A$ . Then we can generalize Katz centrality by the following definition:

$k=\text{diag}{\left(I-\alpha A\right)}^{-1}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{or}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}k={\left(I-\alpha A\right)}^{-1}e\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}\alpha >\frac{1}{\rho (A)}$

where $e$ is the column vector of ones and $\rho \left(A\right)$ is the spectral radius of $A$ .
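A tiny illustration of this analytic continuation, using a 2-node graph with eigenvalues $\pm 1$ and $\alpha =2>\frac{1}{{\lambda}_{1}}$ (note that the resulting entries can be negative, which is one reason care is needed when ranking):

```python
import numpy as np

# Generalised Katz: (I - alpha*A)^{-1} exists for alpha > 1/lambda_1
# as long as 1/alpha is not an eigenvalue of A.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])               # eigenvalues +1 and -1
alpha = 2.0                               # 1/alpha = 0.5 is not an eigenvalue
M = np.linalg.inv(np.eye(2) - alpha * A)  # equals (1/-3) * [[1, 2], [2, 1]]
k_diag = np.diag(M)                       # negative diagonal entries
```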

To determine which of the matrix functions can be used to assess centrality in a network, we will carry out experimental work on a variety of networks in the following section, comparing the rankings based on the common centrality measures discussed in Section 5 with the rankings based on these matrix functions.

8. Experimental Work and Discussion

In this section we aim to analyse experimentally the agreement between the centrality measures discussed in Section 5, and whether the matrix functions discussed in Section 7 can be used to determine the important nodes in a network. The experimental work will compare matrix functions to the common centrality measures.

A variety of techniques can be used to compute the centrality measures discussed in Section 5 and the matrix functions. To compute the exponential of a matrix, the logarithm of a matrix and other matrix functions we will use SciPy matrix functions [14]. In our new measures involving matrix functions and generalisations of Katz centrality, we will calculate centralities using the diagonal entries of these functions. We will use the Kendall correlation coefficient in our experiments to compare the agreement between centrality measures.

8.1. Correlation (Kendall, Pearson, Spearman) Coefficient

Correlation is a bivariate analysis that measures the strength of association between two variables. The value of the correlation coefficient varies between 1 and −1. A positive correlation signifies that the ranks of both variables increase together, while a negative correlation signifies that as the rank of one variable increases, the rank of the other decreases. The association is perfect when the correlation coefficient equals ±1 [15]. The closer the value of the correlation coefficient is to 1 or to −1, the stronger the relationship between the two variables; as the value approaches 0, the relationship becomes weaker. In statistics, we usually measure the strength of association by the Pearson correlation, the Kendall rank correlation or the Spearman correlation.

The Kendall coefficient of correlation is the measure of the degree of correspondence between two set of ranks given to the same set of objects. The Kendall coefficient is interpreted as the difference between the probability of these objects being in the same order and the probability of these objects being in a different order [2].

Let X and Y be two variables, and let $\left({x}_{1},{y}_{1}\right),\left({x}_{2},{y}_{2}\right),\cdots ,\left({x}_{n},{y}_{n}\right)$ be a set of joint observations of X and Y. We assume the values ${x}_{i}$ and the values ${y}_{i}$ are all unique.

・ Any pair of observations $\left({x}_{i}\mathrm{,}{y}_{i}\right)$ and $\left({x}_{j}\mathrm{,}{y}_{j}\right)$ is said to be concordant if

$\frac{{x}_{i}-{x}_{j}}{{y}_{i}-{y}_{j}}>0,$

which implies that either ${x}_{i}>{x}_{j}$ and ${y}_{i}>{y}_{j}$ or ${x}_{i}<{x}_{j}$ and ${y}_{i}<{y}_{j}$ .

・ The pair is said to be discordant if

$\frac{{x}_{i}-{x}_{j}}{{y}_{i}-{y}_{j}}<0,$

which implies that either ${x}_{i}>{x}_{j}$ and ${y}_{i}<{y}_{j}$ or ${x}_{i}<{x}_{j}$ and ${y}_{i}>{y}_{j}$ .

・ The pair is neither concordant nor discordant if ${x}_{i}={x}_{j}$ or ${y}_{i}={y}_{j}$ .

The Kendall $\left(\tau \right)$ correlation coefficient is defined as

$\tau =\frac{\text{number of concordant pairs}-\text{number of discordant pairs}}{\frac{1}{2}n\left(n-1\right)}$

The Kendall rank coefficient can be interpreted as follows: the values of $\tau $ greater than zero show an agreement, being close to one indicates a strong agreement. On the other hand, values less than zero show a disagreement and those close to negative one indicate a strong disagreement [2]. Indeed, if all pairs are concordant, then $\tau =1$ , which implies that the variables are in exactly the same ranking (order). If they are all discordant then $\tau =-1$ , which implies that the variables are in exactly the opposite ranking (order).
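The pair-counting definition above translates directly into code (a sketch assuming no ties, as in the text):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall tau by direct pair counting; assumes all values are unique."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1      # pair ordered the same way in x and y
        elif s < 0:
            discordant += 1      # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

assert kendall_tau([1, 2, 3, 4], [10, 20, 30, 40]) == 1.0   # same ranking
assert kendall_tau([1, 2, 3, 4], [40, 30, 20, 10]) == -1.0  # opposite ranking
```

For real data with possible ties, a library routine such as SciPy's `kendalltau` is preferable.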

Pearson correlation is a measure of the degree of linear relationship between two variables and is denoted by r. It essentially fits a line of best fit through the data of the two variables; r indicates how far the data points lie from this line.

The following formula is used to calculate the Pearson r correlation:

$r=\frac{N{\displaystyle \sum}\text{\hspace{0.05em}}xy-{\displaystyle \sum}\text{\hspace{0.05em}}x{\displaystyle \sum}\text{\hspace{0.05em}}y}{\sqrt{\left(N{\displaystyle \sum}\text{\hspace{0.05em}}{x}^{2}-{\left({\displaystyle \sum}\text{\hspace{0.05em}}x\right)}^{2}\right)\left(N{\displaystyle \sum}\text{\hspace{0.05em}}{y}^{2}-{\left({\displaystyle \sum}\text{\hspace{0.05em}}y\right)}^{2}\right)}}$ (43)

where:

r = Pearson correlation coefficient

N = number of observations in each data set

$\sum xy$ = the sum of the products of paired scores

$\sum x$ = sum of scores of variable x

$\sum y$ = sum of scores of variable y

$\sum {x}^{2}$ = sum of squared scores of variable x

$\sum {y}^{2}$ = sum of squared scores of variable y

To use the Pearson correlation r, the two variables must be measured on either an interval or a ratio scale. However, both variables do not need to be measured on the same scale (for instance, one variable can be ratio and one can be interval). We cannot use the Pearson correlation for ordinal data; instead we use Spearman’s rank correlation or Kendall’s correlation.

Spearman rank correlation is the nonparametric version of the Pearson correlation coefficient and is used to measure the degree of association between two continuous or ordinal variables.

The following formula is used to calculate the Spearman rank correlation:

$\rho =1-\frac{6{\displaystyle \sum}\text{\hspace{0.05em}}{D}_{i}^{2}}{N\left({N}^{2}-1\right)}$ (44)

where ${D}_{i}={R}_{{x}_{i}}-{R}_{{y}_{i}}$ is the difference between the two ranks of each observation and N is the number of observations.

We use the Spearman correlation coefficient when the relationship between variables is not linear.
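Formula (44) is easy to apply directly; the sketch below (assuming no tied values) computes the ranks and then $\rho $:

```python
def ranks(x):
    """Rank positions 1..n of each value (assumes all values are unique)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    """Spearman rho via the rank-difference formula (44), no ties assumed."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

assert spearman([1, 2, 3, 4], [10, 20, 30, 40]) == 1.0
assert spearman([1, 2, 3, 4], [40, 30, 20, 10]) == -1.0
```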

Although both the Spearman and Kendall correlations measure monotonic relationships and have a nice interpretation, in this paper we opt to use the Kendall correlation coefficient for the following reasons [16][17]:

・ The distribution of Kendall’s $\tau $ has better statistical properties.

・ The interpretation of Kendall’s $\tau $ in terms of the probabilities of observing agreeable (concordant) and non-agreeable (discordant) pairs is very direct.

・ The Kendall correlation has a smaller gross error sensitivity (GES) (more robust) and a smaller asymptotic variance (AV) (more efficient), although its naive computation has $O\left({n}^{2}\right)$ complexity compared with $O\left(n\mathrm{log}n\right)$ for the Spearman correlation, where n is the sample size.

8.2. Network Models

Networks have been around us for many years and their study is not new. Graph theorists and mathematicians have long tried to make sense of complex networks. As a result, random network theory was developed, in which the nodes and links of a graph are connected to each other at random. In this paper we will consider three network models due to their significance:

・ Erdös-Rényi model: these networks are formed by completely random interactions between the nodes. Each node chooses its neighbours at random, constrained either by an overall number of relationships that might be assigned in the graph, or by a probability of connecting to a certain neighbour [18]. Mathematically, the node degrees follow a Poisson distribution, so the vast majority of nodes have a similar number of links and it is almost impossible to find outliers.

・ Barabási-Albert model: these are scale-free networks formed by two simple mechanisms, growth and preferential attachment. The main prediction of a scale-free network is the presence of a few outlier nodes with many connections, known as hubs. Preferential attachment is a probabilistic mechanism in which a new node is free to connect to any node in the network, whether it is a hub or has a single link [18][19].

・ Watts-Strogatz model: this model is important because it shows how the “small-world effect” in networks can coexist with other commonly observed features of social networks, such as a high clustering coefficient. More specifically, the model shows how adding a small fraction of random long-range links to an otherwise regular network leads to slow, logarithmic scaling of the typical distance between nodes with network size [20].
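As a minimal illustration of the first of these models, the Erdős-Rényi $G\left(n,p\right)$ construction needs only the standard library: every possible link is included independently with probability p:

```python
import random

def erdos_renyi(n, p, seed=None):
    """G(n, p): include each of the n*(n-1)/2 possible links with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = erdos_renyi(50, 0.1, seed=1)
degrees = [0] * 50
for i, j in edges:
    degrees[i] += 1
    degrees[j] += 1
# degrees concentrate around the mean p*(n-1) = 4.9, with no extreme hubs
```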

8.2.1. First Experiment

We begin our experiments by considering a small network with 20 nodes. The network was randomly generated in a text editor and drawn using Sage. The aim is to determine which of the matrix functions give rankings similar to the common centrality measures. We have many functions to choose from and we want to limit our choice. Note that we will not consider the exponential function, since it yields the subgraph centrality.

The experiment shows that the diagonal entries of the logarithmic function and the cosine function rank the nodes in reverse order compared to the other rankings. Also, we observe that the sine function does not match any other centrality measure. The network in Figure 10, having 20 nodes and 42 edges, gives us a realistic picture of node ranking.

The rankings of the nodes obtained for the graph in Figure 10 by using different centrality measures including matrix functions are shown in Table 1.

Table 1. Rankings of nodes using centrality measures and matrix functions for the network in Figure 10.

Figure 10. Network with 20 nodes and 42 edges.

Note that the ranking is from the most to the least important/central node with respect to the centrality measure used.

To avoid making many comparisons using the Kendall correlation coefficient between the common centrality measures and those produced by matrix functions, we will choose one representative centrality measure. To do this we need to investigate whether the chosen centrality measure agrees with the other centrality measures. In this case, we make a comparison between the closeness centrality (CC) and the degree centrality (DC), eigenvector centrality (EC), Katz centrality (KC) and subgraph centrality (SC) for the graph in Figure 10.

Table 2 shows that there is an agreement between closeness centrality and other centrality measures.

We observe from the graph in Figure 11 that there is an agreement between closeness centrality and the other centrality measures.

In Table 1, we have to modify the cosine and logarithmic functions so that they match with other rankings. The best way of doing this seems to be by reversing the order of their rankings.

To be more confident about the rankings of nodes using matrix functions, we will use the Kendall correlation coefficient to make the comparison between closeness centrality and the matrix functions. We chose closeness centrality among the standard centrality measures to make the comparison with matrix functions inasmuch as it takes into account the neighbours, and the neighbours of neighbours, of a node to determine its centrality. In the comparisons, we denote by $\tau \left(\text{CC}\mathrm{,}f\right)$ the Kendall coefficient between closeness centrality and the centrality measure induced by $f\left(A\right)$ .

In Table 3, we reversed the rankings given by the cosine and the logarithmic functions before calculating the Kendall coefficients. We observe in Table 3 that the Kendall coefficients $\tau \left(\text{CC}\mathrm{,}log\right)$ , $\tau \left(\text{CC}\mathrm{,}cosh\right)$ , $\tau \left(\text{CC}\mathrm{,}\mathrm{sinh}\right)$ and $\tau \left(\text{CC}\mathrm{,}cos\right)$ are all positive. This implies the agreement of these matrix functions with the

Table 2. Kendall coefficients between closeness centrality and other centrality measure applied to graph in Figure 10.

Table 3. Kendall coefficients between closeness centrality and matrix functions applied to graph in Figure 10.

Figure 11. Agreement.

closeness centrality measure and the agreement is quite strong for logarithmic, cosh, and cosine functions since their Kendall coefficients are close to 1. On the other hand, the Kendall coefficient between closeness centrality and sine function is negative, which implies that there is no agreement between their rankings.

8.2.2. Second Experiment

We compare the agreement of centrality measures, by generating 10 random networks using the Barabási-Albert preferential attachment model.

The Barabási-Albert model is a simple scale-free random graph generator. The network begins with an initial set of ${m}_{0}\ge 2$ nodes, each of which should have degree at least 1; otherwise the network will always end up disconnected. At each step, a new node is created and connected to $m\le {m}_{0}$ existing nodes with a probability that is proportional to the number of links that the existing nodes already have. To use this method, we specify the number of nodes in the network (n) and the number of edges each new node forms as it appears (m), so that nodes with higher degree have a higher chance of being selected for attachment [21].
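A minimal sketch of this generator (our own simplified variant, starting from a complete core of $m+1$ nodes) can be written with the standard library, using a list in which each node appears once per link end so that sampling from it is degree-proportional:

```python
import random

def barabasi_albert(n, m, seed=None):
    """Simplified preferential attachment starting from a complete (m+1)-core."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    targets = [v for e in edges for v in e]   # node repeated once per link end
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))   # degree-proportional choice
        for t in chosen:
            edges.append((t, new))
            targets += [t, new]
    return edges

edges = barabasi_albert(100, 2, seed=0)
# the core has m*(m+1)/2 = 3 edges; every later node adds exactly m = 2 edges
assert len(edges) == 3 + 2 * (100 - 3)
```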

The comparisons involve rankings of nodes using centrality measures such as closeness centrality, degree centrality, eigenvector centrality, Katz centrality and subgraph centrality. In this experiment we compute Katz centrality as $k={\left(I-\alpha A\right)}^{-1}e$ , taking $\alpha <\frac{1}{{\lambda}_{1}}$ in all cases. For each choice of n and m, we generate 5 networks and record the mean values of the Kendall coefficients. We denote the Kendall correlation coefficients by ${\tau}_{i}$ according to Table 4.
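In practice $k$ can be obtained from a single linear solve rather than by forming the inverse explicitly. The sketch below (ours; the test graph and the factor 0.85 are illustrative choices satisfying $\alpha <\frac{1}{{\lambda}_{1}}$ ) computes the Katz vector:

```python
import networkx as nx
import numpy as np

# Illustrative network.
G = nx.barabasi_albert_graph(n=50, m=2, seed=1)
A = nx.to_numpy_array(G)
n = A.shape[0]

# Katz centrality requires alpha < 1 / lambda_1 (principal eigenvalue).
lam1 = max(np.linalg.eigvalsh(A))
alpha = 0.85 / lam1

# k = (I - alpha * A)^{-1} e, computed via a linear solve.
e = np.ones(n)
k = np.linalg.solve(np.eye(n) - alpha * A, e)
```

Because ${\left(I-\alpha A\right)}^{-1}=I+\alpha A+{\alpha}^{2}{A}^{2}+\cdots $ is entrywise nonnegative, every entry of the Katz vector is at least 1.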

It is evident from Table 5 that all Kendall coefficients are positive, which indicates an agreement between the rankings.

Table 4. The notations of Kendall correlation coefficients.

Table 5. Kendall coefficients for centrality measures applied to different random networks generated by using the Barabási-Albert method.

n: Number of nodes in the network. m: Number of neighbours to attach to each new node.

The Kendall coefficients show that closeness centrality is highly correlated with the centrality measures corresponding to ${\tau}_{2}$ , ${\tau}_{3}$ and ${\tau}_{4}$ . The eigenvector, Katz and subgraph centralities are also highly related, as indicated by ${\tau}_{8}$ , ${\tau}_{9}$ and ${\tau}_{10}$ . The experiment shows that the agreement becomes stronger as the network becomes more connected. In general, for sufficiently dense (i.e., highly connected) networks, the measures provide almost identical rankings, producing Kendall correlation coefficients close to 1.

8.2.3. Third Experiment

We generate 10 random networks as in the second experiment. This time, we fix the value of n to be 200 and we vary m. We calculate the Katz centrality for each network using different choices of $\alpha $ . Recall that the Katz centrality of the nodes is given by $k={\left(I-{\alpha}_{i}A\right)}^{-1}e$ . We choose

${\alpha}_{1}=\frac{0.1}{{\lambda}_{max}}\text{ for }{k}_{1}\mathrm{,}\quad {\alpha}_{2}=\frac{0.8}{{\lambda}_{max}}\text{ for }{k}_{2}\mathrm{,}$

${\alpha}_{3}=\frac{1.5}{{\lambda}_{max}}\text{ for }{k}_{3}\mathrm{,}\quad \text{and}\quad {\alpha}_{4}=\frac{10}{{\lambda}_{max}}\text{ for }{k}_{4}\mathrm{.}$

We calculate the Kendall correlation coefficients $\tau \left({k}_{i}\mathrm{,}{k}_{j}\right)$ between all pairs of measures as denoted in Table 6.

Note that the Kendall coefficients involving ${k}_{3}$ and ${k}_{4}$ are computed by taking the entrywise absolute value of the inverse, that is, $\left|{\left(I-\alpha A\right)}^{-1}\right|e$ , and the other coefficients are computed by using the standard formula.
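A sketch of this experiment (ours; the test graph is an illustrative Barabási-Albert network, and the entrywise absolute value implements the modification just described for ${k}_{3}$ and ${k}_{4}$ ):

```python
import networkx as nx
import numpy as np
from scipy.stats import kendalltau

G = nx.barabasi_albert_graph(n=200, m=3, seed=7)
A = nx.to_numpy_array(G)
n = A.shape[0]
e = np.ones(n)
lam_max = max(np.linalg.eigvalsh(A))

def katz(alpha, entrywise_abs=False):
    # k = (I - alpha*A)^{-1} e; for alpha >= 1/lam_max we take the
    # entrywise absolute value of the inverse before applying e.
    inv = np.linalg.inv(np.eye(n) - alpha * A)
    if entrywise_abs:
        inv = np.abs(inv)
    return inv @ e

k1 = katz(0.1 / lam_max)                       # ordinary Katz
k2 = katz(0.8 / lam_max)                       # ordinary Katz
k3 = katz(1.5 / lam_max, entrywise_abs=True)   # generalised Katz
k4 = katz(10.0 / lam_max, entrywise_abs=True)  # generalised Katz

# Agreement between the two generalised Katz rankings.
tau34, _ = kendalltau(k3, k4)
```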

We observe in Table 7 that the Kendall coefficient between ${k}_{3}$ and ${k}_{4}$ is exactly 1. Moreover, the Kendall coefficients between ${k}_{1}$ and each of ${k}_{3}$ and ${k}_{4}$ coincide, as do those between ${k}_{2}$ and each of ${k}_{3}$ and ${k}_{4}$ . Since ${k}_{3}$ and ${k}_{4}$ correspond to the generalised Katz centrality (i.e., $\alpha \ge \frac{1}{{\lambda}_{1}}$ ), we conjecture that the rankings provided by the generalised Katz centrality are always the same regardless of the choice of $\alpha \ge \frac{1}{{\lambda}_{1}}$ .

8.2.4. Fourth Experiment

We repeat the second experiment, generating 10 random networks with different numbers of nodes by using the Erdös-Rényi method. We use the same notation for the Kendall coefficients as in the second experiment, see Table 4.

The Erdös-Rényi model is used to generate random networks in which edges are placed between nodes with equal probability. The model can be used to prove the existence of networks satisfying various properties; it can also be used to provide a rigorous definition of what it means for a property to hold for almost all networks [22]. To generate random networks using the Erdös-Rényi model, we need to specify two parameters: the number of nodes in the network, denoted by n, and the probability p that a link is formed between any two nodes [22].

Table 6. The notations of Kendall correlation coefficients.

Table 7. Kendall coefficients for generalised Katz centrality applied to random networks generated by using the Barabási-Albert method.
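A generation sketch with networkx (ours; the parameter values are illustrative):

```python
import networkx as nx

# Erdös-Rényi graph G(n, p): each of the n(n-1)/2 possible links is
# created independently with probability p.
G = nx.erdos_renyi_graph(n=100, p=0.1, seed=3)
```

The expected number of edges is p * n(n-1)/2, here 495.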

The Kendall coefficients in Table 8 are all positive and close to 1. In terms of Kendall coefficients, the rankings of nodes given by the centrality measures for random networks generated by using the Erdös-Rényi method are highly correlated. This implies that there is a strong agreement in their rankings.

We repeat the third experiment but this time, generating 10 random networks with 200 nodes by using the Erdös-Rényi method. We will use the same definition of Katz centrality and the same notation as in Table 6.

Table 9 shows that there is a high agreement between the rankings of ${k}_{1}$ and ${k}_{2}$ . On the other hand, the rankings of the nodes produced by Katz centrality when $\alpha <\frac{1}{{\lambda}_{1}}$ and those produced by the generalised Katz centrality when $\alpha \ge \frac{1}{{\lambda}_{1}}$ disagree: the Kendall coefficients ${\tau}_{{k}_{1}{k}_{3}}$ and ${\tau}_{{k}_{2}{k}_{3}}$ in Table 9 are approximately zero.

8.2.5. Fifth Experiment

We repeat the second experiment, but now with 10 random networks generated by using the Watts-Strogatz method. We use the same notation for the Kendall coefficients as in the second experiment, see Table 4.

Table 8. Kendall coefficients for centrality measures applied to different random networks generated by using the Erdös-Rényi method.

n: Number of nodes in the network. p: Probability of edge creation.

Table 9. Kendall coefficients for generalised Katz centrality applied to 10 random networks with 200 nodes generated by using the Erdös-Rényi method.

n: Number of nodes in the network. p: Probability of edge creation.

The Watts-Strogatz model was developed as a way to impose a high clustering coefficient onto classical random graphs. It produces networks with a small-world property. To generate these networks, we use watts_strogatz_graph(n, k, p) in Sage. Here, n denotes the number of nodes in the network which are arranged in a ring and connected to k nearest neighbours in the ring. Each node is considered independently and, with probability p, a link is added between the node and one of the other nodes in the network, chosen uniformly at random in accordance with experiments detailed in [1].
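An equivalent generator is available in networkx (a sketch of ours; networkx's watts_strogatz_graph rewires each ring edge with probability p, matching the parameters listed in Table 10):

```python
import networkx as nx

# Watts-Strogatz graph: n nodes on a ring, each joined to its k nearest
# neighbours; each ring edge is rewired with probability p.
G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=5)
```

Rewiring moves edges without changing their number, so the graph always has n*k/2 edges.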

In our experiment, we varied n, k and p and, in each case, created five networks. The averages of the Kendall coefficients over these 5 networks for the different centrality measures are given in Table 10. These Kendall coefficients are computed for the complete set of rankings. They show that the agreement between the centrality measures is much weaker than for the networks produced by the Barabási-Albert and Erdös-Rényi methods. The experiment also shows that as the network becomes denser, the correlation between the measures becomes stronger.

We then repeat the third experiment using the Watts-Strogatz method. The same definition of Katz centrality and the same notation were used as in Table 6.

In Table 11 we see a high agreement between the rankings of ${k}_{1}$ and ${k}_{2}$ . The rankings given by ${k}_{3}$ and ${k}_{4}$ are exactly the same. We also see the same pattern as in Table 9, so the same conclusions apply.

8.2.6. Sixth Experiment

The aim of this experiment is to use the three methods (Barabási-Albert, Erdös-Rényi and Watts-Strogatz) for generating random networks and to use the Kendall correlation coefficient to see whether there is an agreement between closeness centrality and matrix functions such as the logarithmic, cosh, sinh, cosine and sine functions. Using each method, we generate 10 random networks; for each parameter setting we create five networks and report the Kendall coefficient averaged over these 5 networks. The aim is to see whether the pattern observed in our first experiment is repeated.

Table 10. Kendall coefficients for centrality measures applied to random networks generated by using the Watts-Strogatz method.

n: Number of nodes in the network. k: Each node is connected to k nearest neighbours. p: Probability of rewiring each edge.

Table 11. Kendall coefficients for generalised Katz centrality applied to 10 random networks with 200 nodes generated by using the Watts-Strogatz method.

Table 12 shows that there is an agreement between closeness centrality and the matrix functions (logarithmic, cosh and sinh) in their rankings for networks generated by using the Barabási-Albert method. On the other hand, the agreement between closeness centrality and the other matrix functions, cosine and sine, is weak.

Table 12. Kendall coefficients between closeness centrality and matrix functions for 10 random networks generated by using the Barabási-Albert method.

We now generate 10 random networks by using the Erdös-Rényi and Watts-Strogatz methods.

Table 13 shows that, for networks generated by using the Erdös-Rényi method, the rankings given by closeness centrality agree with those given by matrix functions such as the logarithmic, cosh and sinh functions.

Table 14 shows that the agreement between closeness centrality and the matrix functions (cosh, sinh, cosine and sine) is weak in many cases. When we use the Watts-Strogatz method to generate the networks, the logarithmic function gives a ranking similar to closeness centrality when $k\ge 10$ .

In general, the agreement between closeness centrality and the hyperbolic functions (cosh and sinh) is not strong for networks generated by using the Watts-Strogatz method. The agreement between closeness centrality and the cosine and sine functions is weak for networks generated by all three methods (Barabási-Albert, Erdös-Rényi and Watts-Strogatz). Among the tested matrix functions, the logarithmic function gives the rankings that agree best with closeness centrality.

8.2.7. Real-World Network Experiments

In this experiment, we study the Kendall correlation coefficient for real-world networks. The networks come from a variety of sources: some of the data were obtained from the Gephi sample datasets [23] and others from the Pajek data sets [24]. We compare only some of the centrality measures (closeness, subgraph and Katz) and the logarithmic matrix function. Note that we are not interested in the meaning of each node within the network; what we really want to know is whether there is an agreement between these centrality measures in real-world networks. We chose the logarithmic function over the other matrix functions since it shows a high agreement with the other centralities when applied to random networks.

Table 13. Kendall coefficients between closeness centrality and matrix functions for 10 random networks generated by using the Erdös-Rényi method.

Table 14. Kendall coefficients between closeness centrality and matrix functions for 10 random networks generated by using the Watts-Strogatz method.

We can clearly see from Table 15 that all Kendall coefficients are positive. This implies that there is an agreement between the rankings of the nodes. We also observe that the agreement between the centrality measures (closeness, Katz, subgraph) and the logarithmic function is high, irrespective of the connectivity of the underlying network. In general, we can say that the logarithmic function is the best of the matrix functions we tried and can be used as a centrality measure, since it gives rankings similar to those of the other centrality measures.

Table 15. Kendall coefficients for the logarithmic function and some standard centrality measures as applied to 10 real-world networks.

9. Conclusions

In this work we examined centrality measures such as closeness, degree, eigenvector, Katz and subgraph centrality. We showed the relationship between Katz centrality and both eigenvector and degree centrality. We developed our notion of a centrality measure by considering rankings of nodes based on matrix functions such as the logarithmic, cosine, sine and hyperbolic functions, and on the generalised Katz centrality. We showed experimentally, using various classes of graphs, that the rankings of the nodes given by closeness, degree, eigenvector, Katz and subgraph centrality are highly correlated. Moreover, we showed experimentally that the rankings of nodes given by different choices of the attenuation factor $\alpha $ for the generalised Katz centrality, in which $\alpha \ge \frac{1}{{\lambda}_{1}}$ where ${\lambda}_{1}$ is the principal eigenvalue of the network adjacency matrix $A$ , are exactly the same. In terms of matrix functions, the experiments show that there is a high agreement between the rankings of nodes given by the logarithmic function and the other common centrality measures discussed in Section 5.

Similar results were found to hold for real-world networks: the rankings given by the logarithmic function and those given by closeness, Katz and subgraph centrality are highly correlated irrespective of the connectivity of the network. In general, we concluded that the logarithmic function, out of the matrix functions we have considered, is the best and can be used as a centrality measure.

In this work, we considered only the diagonal entries of the matrix functions, with some modifications in the calculation of their Kendall correlation coefficients. We found that the logarithmic function gives a relatively good ranking compared to the rankings given by the other centrality measures. In future work, one could consider the row sums of these matrix functions, with or without modification, and examine whether they give rankings similar to the other centrality measures. This paper did not analyse the significance and uses of centrality measures based on matrix functions; this has been left for future work.
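As a starting point for this direction, the sketch below (ours) contrasts the two choices for the matrix exponential, where the diagonal entries give the subgraph centrality and the row sums give the total communicability of [1]:

```python
import networkx as nx
import numpy as np
from scipy.linalg import expm
from scipy.stats import kendalltau

G = nx.karate_club_graph()  # illustrative network
eA = expm(nx.to_numpy_array(G))

diag_scores = np.diag(eA)    # subgraph centrality: closed walks at each node
row_scores = eA.sum(axis=1)  # total communicability: walks to all nodes

# Agreement between the two rankings.
tau, _ = kendalltau(diag_scores, row_scores)
```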

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

*Open Journal of Discrete Mathematics*, **8**, 79-115. doi: 10.4236/ojdm.2018.84008.

[1] Benzi, M. and Klymko, C. (2013) Total Communicability as a Centrality Measure. Journal of Complex Networks, 1, 124-149. https://doi.org/10.1093/comnet/cnt007

[2] Kendall, M.G. (1938) A New Measure of Rank Correlation. Biometrika, 30, 81-93. https://doi.org/10.1093/biomet/30.1-2.81

[3] Bavelas, A. (1948) A Mathematical Model for Group Structures. Human Organization, 7, 16-30. https://doi.org/10.17730/humo.7.3.f4033344851gl053

[4] Bavelas, A. and Barrett, D. (1951) An Experimental Approach to Organizational Communication. American Management Association, New York, 57-62.

[5] Cohn, B.S. and Marriott, M. (1958) Networks and Centres of Integration in Indian Civilization. Journal of Social Research, 1, 1-9.

[6] Pitts, F.R. (1965) A Graph Theoretic Approach to Historical Geography. The Professional Geographer, 17, 15-20. https://doi.org/10.1111/j.0033-0124.1965.015_m.x

[7] Czepiel, J.A. (1974) Word-of-Mouth Processes in the Diffusion of a Major Technological Innovation. Journal of Marketing Research, 11, 172-180. https://doi.org/10.2307/3150555

[8] Bolland, J.M. (1988) Sorting out Centrality: An Analysis of the Performance of Four Centrality Models in Real and Simulated Networks. Social Networks, 10, 233-253. https://doi.org/10.1016/0378-8733(88)90014-7

[9] Borgatti, S.P. and Everett, M.G. (2006) A Graph-Theoretic Perspective on Centrality. Social Networks, 28, 466-484. https://doi.org/10.1016/j.socnet.2005.11.005

[10] Landherr, A., Friedl, B. and Heidemann, J. (2010) A Critical Review of Centrality Measures in Social Networks. Business & Information Systems Engineering, 2, 371-385. https://doi.org/10.1007/s12599-010-0127-3

[11] Estrada, E. (2012) The Structure of Complex Networks: Theory and Applications. Oxford University Press, Oxford.

[12] Higham, N.J. (2008) Functions of Matrices: Theory and Computation. SIAM, Philadelphia. https://doi.org/10.1137/1.9780898717778

[13] Boldi, P. and Vigna, S. (2014) Axioms for Centrality. Internet Mathematics, 10, 222-262. https://doi.org/10.1080/15427951.2013.865686

[14] Higham, N.J. and Deadman, E. (2016) A Catalogue of Software for Matrix Functions. Version 2.0.

[15] Bethea, R.M. (2018) Statistical Methods for Engineers and Scientists. Routledge, Abingdon-on-Thames.

[16] Fredricks, G.A. and Nelsen, R.B. (2007) On the Relationship between Spearman's Rho and Kendall's Tau for Pairs of Continuous Random Variables. Journal of Statistical Planning and Inference, 137, 2143-2150. https://doi.org/10.1016/j.jspi.2006.06.045

[17] Xu, W., Hou, Y., Hung, Y.S. and Zou, Y. (2010) Comparison of Spearman's Rho and Kendall's Tau in Normal and Contaminated Normal Models.

[18] Prettejohn, B.J., Berryman, M.J. and McDonnell, M.D. (2011) Methods for Generating Complex Networks with Selected Structural Properties for Simulations: A Review and Tutorial for Neuroscientists. Frontiers in Computational Neuroscience, 5, 11. https://doi.org/10.3389/fncom.2011.00011

[19] Bayati, M., Kim, J.H. and Saberi, A. (2010) A Sequential Algorithm for Generating Random Graphs. Algorithmica, 58, 860-910. https://doi.org/10.1007/s00453-009-9340-1

[20] Newman, M.E.J. (2010) Networks: An Introduction. Oxford University Press, Oxford.

[21] Albert, R. and Barabási, A.L. (2002) Statistical Mechanics of Complex Networks. Reviews of Modern Physics, 74, 47-97. https://doi.org/10.1103/RevModPhys.74.47

[22] Newman, M.E., Strogatz, S.H. and Watts, D.J. (2001) Random Graphs with Arbitrary Degree Distributions and Their Applications. Physical Review E, 64, Article ID: 026118. https://doi.org/10.1103/PhysRevE.64.026118

[23] Gephi Sample Datasets. https://wiki.gephi.org/index.php/Datasets

[24] Old Pajek Data Sets. http://pajek.imfm.si/doku.php?id=data:pajek:vlado

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.