Improved Approximation of Layout Problems on Random Graphs

Abstract

Inspired by previous work of Diaz, Petit, Serna, and Trevisan (Approximating layout problems on random graphs, Discrete Mathematics, 235, 2001, 245-253), we show that several well-known graph layout problems are, with high probability, approximable to within a factor arbitrarily close to 1 of the optimal for random graphs drawn from an Erdös-Renyi distribution satisfying appropriate sparsity conditions, using only elementary probabilistic analysis. Moreover, we show that the same results hold for the analogous problems on directed acyclic graphs.

Share and Cite:

Cheung, K. and Girardet, P. (2020) Improved Approximation of Layout Problems on Random Graphs. Open Journal of Discrete Mathematics, 10, 13-30. doi: 10.4236/ojdm.2020.101003.

1. Introduction

Many well-known optimization problems on graphs fall into the category of graph layout problems. A layout of a graph on n vertices is a bijection between the vertices of the graph and the set $\left\{1,2,\cdots ,n\right\}$, which can be interpreted as arranging the vertices of the graph in some order on a line. A graph layout problem then consists of optimizing some objective function over the set of possible layouts of a graph. There is an analogous notion of layouts and layout problems for directed acyclic graphs, i.e. directed graphs containing no directed cycle $\left\{\left({v}_{1},{v}_{2}\right),\left({v}_{2},{v}_{3}\right),\cdots ,\left({v}_{n},{v}_{1}\right)\right\}$, where $\left({v}_{i},{v}_{i+1}\right)$ denotes a directed edge from the vertex ${v}_{i}$ to ${v}_{i+1}$ and the indices are taken modulo n. A layout of a directed acyclic graph is simply a topological sort of it, so that the layout respects edge directions. The particular layout problems we consider in this paper are Minimum Cut Linear Arrangement (also known as Cutwidth), Vertex Separation, Edge Bisection, and Vertex Bisection, along with the analogous problems on directed acyclic graphs. These problems find applications in VLSI design, job scheduling, parallel computing, graph drawing, etc. We direct the interested reader to a survey  on the topic.

Graph layout problems are often computationally difficult to solve exactly. The decision versions of both the undirected and directed vertex separation problems are known to be NP-complete . The same is true for the undirected  and directed  minimum cut linear arrangement problems, and for the vertex bisection  and edge bisection (shown in  as a special case of minimum cut into bounded sets) problems on undirected graphs. We do not know of a reference which proves the NP-hardness of the vertex and edge bisection problems on directed graphs, though we have no reason to believe that they are not NP-hard as well. Due to the practical applications of the problems considered, many researchers have sought approximation algorithms for these problems. It is common to analyze the performance of algorithms on random instances as a proxy for their “real” performance, so one might seek to analyze the approximability of layout problems on random graphs. Diaz et al.  showed that for any of the undirected layout problems considered above, if $C>2$, then for large enough random graphs with appropriate sparsity conditions, any solution of the problem has cost within a factor C of the optimal with high probability. Hence, these problems can be trivially approximated to within any factor $C>2$ for large enough random graphs with high probability.

In this paper, in addition to showing that the constant of approximation can be improved to any $C>1$ with slightly weaker sparsity and convergence results, we show that the same result holds for the directed versions of the problems, which were not considered in . Moreover, we only use the Hoeffding inequality for tail bounds of sums of independent and identically distributed (i.i.d.) random variables and some well-known asymptotic estimates to obtain these results, thus avoiding the more technical “mixing graph” framework used in . In summary, for large enough random graphs with appropriate sparsity conditions, any solution of these layout problems will have cost arbitrarily close to optimal with high probability.

2. Definitions

We first recall some terminology in . Given an undirected graph $G=\left(V,E\right)$ with $|V|=n$, a linear arrangement (or a layout) of G is a bijective function $\varphi :V\to \left\{1,\cdots ,n\right\}$. The problems we consider all take the form of optimizing some objective function over the set of linear arrangements of a graph. For a linear arrangement $\varphi$ of G and each $i\in \left\{1,\cdots ,n\right\}$, the two sets

$L\left(i,\varphi ,G\right)=\left\{u\in V|\varphi \left(u\right)\le i\right\}$

$R\left(i,\varphi ,G\right)=\left\{u\in V|\varphi \left(u\right)>i\right\}$

and the two objective functions

$\theta \left(i,\varphi ,G\right)=|\left\{uv\in E|u\in L\left(i,\varphi ,G\right)\wedge v\in R\left(i,\varphi ,G\right)\right\}|,$

$\delta \left(i,\varphi ,G\right)=|\left\{u\in L\left(i,\varphi ,G\right)|\exists v\in R\left(i,\varphi ,G\right)\wedge uv\in E\right\}|$

are defined. We may interpret $\theta \left(i,\varphi ,G\right)$ as the number of edges lying over the i-th “gap” in the arrangement, i.e. edges whose left vertex is in at most the i-th position in the arrangement and whose right vertex is in at least the $\left(i+1\right)$ -th position. Additionally, we may interpret $\delta \left(i,\varphi ,G\right)$ as the number of vertices to the left of the i-th “gap” which are connected by an edge to some vertex to the right of the gap. (In this paper, we sometimes casually refer to $L\left(i,\varphi ,G\right)$ as the set on the left and $R\left(i,\varphi ,G\right)$ as the set on the right.)
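To make the two objective functions concrete, the following minimal Python sketch (our illustration, not part of the original paper) evaluates $\theta$ and $\delta$ directly from their definitions, representing a layout as a dictionary from vertices to positions $1,\cdots ,n$ and the edge set as a list of pairs:

```python
def theta(i, phi, edges):
    # Edges over the i-th "gap": one endpoint at position <= i,
    # the other endpoint at position > i.
    return sum(1 for u, v in edges
               if min(phi[u], phi[v]) <= i < max(phi[u], phi[v]))

def delta(i, phi, edges):
    # Distinct left endpoints of edges crossing the i-th gap, i.e.
    # vertices on the left with at least one neighbour on the right.
    return len({u if phi[u] <= i else v
                for u, v in edges
                if min(phi[u], phi[v]) <= i < max(phi[u], phi[v])})

# A star with centre 1 under the identity layout: all three edges cross
# the first gap, but they share a single left endpoint.
star = [(1, 2), (1, 3), (1, 4)]
layout = {v: v for v in range(1, 5)}
print(theta(1, layout, star))  # 3
print(delta(1, layout, star))  # 1
```

The star example isolates exactly the distinction between the two objectives: $\theta$ counts crossing edges, while $\delta$ counts the left vertices they touch.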

Let ${\Lambda }_{G}$ denote the set of linear arrangements of G. The problems we consider for undirected graphs are given in Table 1. These problems are all known to be NP-hard.

We also consider analogous problems on directed graphs. Given a directed acyclic graph $G=\left(V,E\right)$ with $|V|=n$, a linear arrangement (or layout) of G is a bijective function $\varphi :V\to \left\{1,\cdots ,n\right\}$ such that if $\left(u,v\right)\in E$ is a directed edge from $u\in V$ to $v\in V$ then $\varphi \left(u\right)<\varphi \left(v\right)$. Note that this is simply a topological sort of G, which exists as G is directed acyclic. Again, we let ${\Lambda }_{G}$ denote the set of linear arrangements of G. The definitions of $L\left(i,\varphi ,G\right)$, $R\left(i,\varphi ,G\right)$, $\theta \left(i,\varphi ,G\right)$, and $\delta \left(i,\varphi ,G\right)$ remain unchanged for directed acyclic graphs. The following problems are directed versions of the problems listed in Table 1.

• Directed cutwidth (DCUTWIDTH): Given a directed acyclic graph $G=\left(V,E\right)$, compute

$\text{MINDCW}\left(G\right)=\underset{\varphi \in {\Lambda }_{G}}{\mathrm{min}}\text{DCW}\left(G,\varphi \right),$

where $\text{DCW}\left(G,\varphi \right)=\underset{i\in \left\{1,\cdots ,n\right\}}{\mathrm{max}}\theta \left(i,\varphi ,G\right)$.

Table 1. Undirected layout problems.

• Directed vertex separation (DVERTSEP): Given a directed acyclic graph $G=\left(V,E\right)$, compute

$\text{MINDVS}\left(G\right)=\underset{\varphi \in {\Lambda }_{G}}{\mathrm{min}}\text{DVS}\left(G,\varphi \right),$

where $\text{DVS}\left(G,\varphi \right)=\underset{i\in \left\{1,\cdots ,n\right\}}{\mathrm{max}}\delta \left(i,\varphi ,G\right)$.

• Directed edge bisection (DEDGEBIS): Given a directed acyclic graph $G=\left(V,E\right)$, compute

$\text{MINDEB}\left(G\right)=\underset{\varphi \in {\Lambda }_{G}}{\mathrm{min}}\text{DEB}\left(G,\varphi \right),$

where $\text{DEB}\left(G,\varphi \right)=\theta \left(⌊\frac{n}{2}⌋,\varphi ,G\right)$.

• Directed vertex bisection (DVERTBIS): Given a directed acyclic graph $G=\left(V,E\right)$, compute

$\text{MINDVB}\left(G\right)=\underset{\varphi \in {\Lambda }_{G}}{\mathrm{min}}\text{DVB}\left(G,\varphi \right),$

where $\text{DVB}\left(G,\varphi \right)=\delta \left(⌊\frac{n}{2}⌋,\varphi ,G\right)$.
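To make the directed definitions concrete, here is a short Python sketch (ours, not from the paper) that checks that a layout is a valid directed layout, i.e. a topological sort, and then evaluates the directed cutwidth objective $\text{DCW}\left(G,\varphi \right)$:

```python
def is_topological(phi, arcs):
    # A valid directed layout places every arc's tail before its head.
    return all(phi[u] < phi[v] for u, v in arcs)

def dcw(phi, arcs, n):
    # DCW(G, phi): the largest number of arcs lying over any single gap.
    assert is_topological(phi, arcs)
    return max(sum(1 for u, v in arcs if phi[u] <= i < phi[v])
               for i in range(1, n + 1))

arcs = [(1, 2), (1, 3), (2, 4), (3, 4)]   # a small "diamond" DAG
phi = {1: 1, 2: 2, 3: 3, 4: 4}            # one of its topological orders
print(dcw(phi, arcs, 4))  # 2
```

Because every layout of a directed acyclic graph must respect the arc directions, only the topological orders are searched over in the directed minimization problems.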

For each arrangement problem considered above, we also define the maximum-cost solution of the problem on a graph. For example, for CUTWIDTH, in contrast to $\text{MINCW}\left(G\right)$, we define $\text{MAXCW}\left(G\right)={\mathrm{max}}_{\varphi }\text{CW}\left(G,\varphi \right)$, and similarly for every other problem considered above. Moreover, we define the gap of a problem on a given graph G to be the ratio of the maximum-cost solution to the minimum-cost solution. For example, for CUTWIDTH, the gap is

$\text{GAPCW}\left(G\right)=\frac{\text{MAXCW}\left(G\right)}{\text{MINCW}\left(G\right)},$

and this quantity is defined in the same way for every other arrangement problem considered above.
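On very small graphs the gap can be computed by brute force over all $n!$ layouts; the following Python sketch (our illustration) does this for the cutwidth objective:

```python
from itertools import permutations

def cutwidth_gap(vertices, edges):
    # Enumerate every layout and track the extreme values of CW(G, phi).
    n = len(vertices)
    def cw(phi):
        return max(sum(1 for u, v in edges
                       if min(phi[u], phi[v]) <= i < max(phi[u], phi[v]))
                   for i in range(1, n + 1))
    costs = [cw(dict(zip(perm, range(1, n + 1))))
             for perm in permutations(vertices)]
    return max(costs) / min(costs)

print(cutwidth_gap([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))  # 2.0
```

On the 4-cycle the best layouts cut 2 edges at every gap while the worst cut all 4, so the gap is 2; on a triangle every layout has the same cost and the gap is 1. The main theorems below say that for large sparse random graphs such gaps shrink toward 1.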

Any discussion on random graphs requires a probability distribution on graphs. In this paper, we adopt a variant of the Erdös-Renyi probability distribution  for undirected graphs defined as follows:

For a positive integer n and probability $0\le p\le 1$, the Erdös-Renyi distribution $G\left(n,p\right)$ on the set of n-vertex graphs assigns an n-vertex graph

$G=\left(V,E\right)$ probability ${p}^{m}{\left(1-p\right)}^{\left(\begin{array}{c}n\\ 2\end{array}\right)-m}$, where $|E|=m$. That is, we sample n-vertex graphs by including each possible edge independently with probability p.

For a probability distribution on directed acyclic graphs, we use a variant of the Erdös-Renyi probability distribution  which produces directed acyclic graphs, defined as follows:

For a positive integer n and probability $0\le p\le 1$, the distribution $D\left(n,p\right)$ on the set of n-vertex directed acyclic graphs first samples a random graph from $G\left(n,p\right)$ on the vertex set $\left\{1,\cdots ,n\right\}$ and then orients each edge $\left\{i,j\right\}$ from i to j if $i<j$.

As the edges in the sampled directed graph always point from a lower numbered vertex to a higher numbered vertex, it is clear that the sampled graph is acyclic.
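Both distributions are straightforward to sample; the sketch below (our illustration, with a `seed` parameter added for reproducibility) flips an independent coin for each of the $\left(\begin{array}{c}n\\ 2\end{array}\right)$ potential edges and, for $D\left(n,p\right)$, orients each edge toward its higher-numbered endpoint:

```python
import random

def sample_gnp(n, p, seed=None):
    # G(n, p): include each of the C(n, 2) possible edges independently
    # with probability p; an edge {i, j} is stored as (i, j) with i < j.
    rng = random.Random(seed)
    return [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
            if rng.random() < p]

def sample_dnp(n, p, seed=None):
    # D(n, p): sample G(n, p), then read each pair (i, j), i < j, as the
    # arc i -> j.  Every arc points to a higher-numbered vertex, so the
    # identity order is a topological sort and the result is acyclic.
    return [(i, j) for (i, j) in sample_gnp(n, p, seed)]

arcs = sample_dnp(6, 0.5, seed=0)
print(all(i < j for i, j in arcs))  # True: certifies acyclicity
```
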

3. Preliminary Lemmas

We first list some technical lemmas necessary for carrying out the probabilistic analysis in our main theorems.

Lemma 1 (Hoeffding’s inequality). Suppose that ${X}_{1},\cdots ,{X}_{n}$ are independent identically distributed Bernoulli random variables with mean p, and let $H\left(n\right)={X}_{1}+{X}_{2}+\cdots +{X}_{n}$. Then for $\gamma >0$,

$P\left[H\left(n\right)\le \left(p-\gamma \right)n\right]\le \mathrm{exp}\left(-2{\gamma }^{2}n\right)$

$P\left[H\left(n\right)\ge \left(p+\gamma \right)n\right]\le \mathrm{exp}\left(-2{\gamma }^{2}n\right).$

Proof. This is a special case of Theorem 1 in Hoeffding’s original paper  for Bernoulli random variables.
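As a numerical sanity check (ours; the parameters $n=200$, $p=1/2$, $\gamma =0.1$ are arbitrary choices), the empirical frequency of the lower-tail event $H\left(n\right)\le \left(p-\gamma \right)n$ should sit below the Hoeffding bound $\mathrm{exp}\left(-2{\gamma }^{2}n\right)$:

```python
import math
import random

def lower_tail_freq(n, p, gamma, trials=2000, seed=0):
    # Empirical frequency of H(n) <= (p - gamma) n, where H(n) is a sum
    # of n i.i.d. Bernoulli(p) random variables.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if sum(rng.random() < p for _ in range(n)) <= (p - gamma) * n)
    return hits / trials

n, p, gamma = 200, 0.5, 0.1
hoeffding_bound = math.exp(-2 * gamma ** 2 * n)   # e^{-4} ~ 0.018
print(lower_tail_freq(n, p, gamma), "<=", hoeffding_bound)
```
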

Lemma 2. If $k=o\left(n\right)$, then

$\mathrm{log}\left(\begin{array}{c}n\\ k\end{array}\right)=\left(1+o\left(1\right)\right)k\mathrm{log}\frac{n}{k}$

Proof. Recall the well-known inequalities

${\left(\frac{n}{k}\right)}^{k}\le \left(\begin{array}{c}n\\ k\end{array}\right)\le {\left(\frac{ne}{k}\right)}^{k},$

which can be obtained via Stirling’s approximation. It follows that

$k\mathrm{log}\frac{n}{k}\le \mathrm{log}\left(\begin{array}{c}n\\ k\end{array}\right)\le k\left(\mathrm{log}\frac{n}{k}+1\right).$

If $k=o\left(n\right)$, then $\mathrm{log}\frac{n}{k}\to \infty$. By the above chain of inequalities, we have that

$\mathrm{log}\left(\begin{array}{c}n\\ k\end{array}\right)=\left(1+o\left(1\right)\right)k\mathrm{log}\frac{n}{k},$

as desired.
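Numerically (our illustration), the ratio $\mathrm{log}\left(\begin{array}{c}n\\ k\end{array}\right)/\left(k\mathrm{log}\frac{n}{k}\right)$ indeed drifts toward 1 as n grows with $k=\sqrt{n}=o\left(n\right)$; `math.lgamma` gives the exact log-binomial without overflow:

```python
import math

def log_binom(n, k):
    # log of C(n, k) via log-gamma, avoiding huge intermediate integers.
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def ratio(n):
    k = int(round(n ** 0.5))          # k = sqrt(n), so k = o(n)
    return log_binom(n, k) / (k * math.log(n / k))

# The ratio decreases toward 1 as n grows.
print(ratio(10 ** 3), ratio(10 ** 6), ratio(10 ** 9))
```
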

Lemma 3. Suppose that $f\left(n\right)=\Omega \left({n}^{-c}\right)$ and $g\left(n\right)=\Omega \left({n}^{d}\right)$ where $0<c<d$. If $f\left(n\right)=o\left(1\right)$, then

$\underset{n\to \infty }{\mathrm{lim}}{\left(1-f\left(n\right)\right)}^{g\left(n\right)}=0.$

Proof. Taking logarithms, we find that

$\mathrm{log}\left(\underset{n\to \infty }{\mathrm{lim}}{\left(1-f\left(n\right)\right)}^{g\left(n\right)}\right)=\underset{n\to \infty }{\mathrm{lim}}g\left(n\right)\mathrm{log}\left(1-f\left(n\right)\right)\le \underset{n\to \infty }{\mathrm{lim}}{k}_{1}{n}^{d}\mathrm{log}\left(1-{k}_{2}{n}^{-c}\right)$

for appropriate constants ${k}_{1},{k}_{2}>0$. But then by L’Hopital’s rule,

$\underset{n\to \infty }{\mathrm{lim}}{k}_{1}{n}^{d}\mathrm{log}\left(1-{k}_{2}{n}^{-c}\right)=\underset{n\to \infty }{\mathrm{lim}}\frac{\mathrm{log}\left(1-{k}_{2}{n}^{-c}\right)}{{k}_{1}{n}^{-d}}=-\underset{n\to \infty }{\mathrm{lim}}\frac{c{k}_{2}{n}^{-c-1}}{\left(1-{k}_{2}{n}^{-c}\right)d{k}_{1}{n}^{-d-1}}.$

Since $0<c<d$, we have that

$\frac{c{k}_{2}{n}^{-c-1}}{\left(1-{k}_{2}{n}^{-c}\right)d{k}_{1}{n}^{-d-1}}=\left(\frac{c{k}_{2}}{\left(1-{k}_{2}{n}^{-c}\right)d{k}_{1}}\right){n}^{d-c}\to \infty$

as $n\to \infty$. It follows that

$\mathrm{log}\left(\underset{n\to \infty }{\mathrm{lim}}{\left(1-f\left(n\right)\right)}^{g\left(n\right)}\right)=-\underset{n\to \infty }{\mathrm{lim}}\frac{c{k}_{2}{n}^{-c-1}}{\left(1-{k}_{2}{n}^{-c}\right)d{k}_{1}{n}^{-d-1}}=-\infty .$

Hence,

$\underset{n\to \infty }{\mathrm{lim}}{\left(1-f\left(n\right)\right)}^{g\left(n\right)}=0$

as desired.
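A quick numeric check of the lemma (ours), with $f\left(n\right)={n}^{-1/2}$ and $g\left(n\right)={n}^{d}$ for $d=1$, so that $0<c<d$ holds with $c=1/2$:

```python
import math

def power_term(n, c=0.5, d=1.0):
    # (1 - n^{-c})^{n^d}, computed in log-space for numerical stability.
    return math.exp(n ** d * math.log(1 - n ** (-c)))

# The values plunge toward 0 as n grows, as the lemma predicts.
print(power_term(10 ** 2), power_term(10 ** 4), power_term(10 ** 6))
```
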

4. Main Results

4.1. Undirected Graph Problems

For the theorems that follow, let ${\left\{{G}_{n}\right\}}_{n=1}^{\infty }$ be a sequence of random graphs such that for each $n=1,2,\cdots$, ${G}_{n}$ is drawn from $G\left(n,{p}_{n}\right)$ with edge probability ${p}_{n}$. The following theorems show that for each of the undirected graph arrangement problems, the gap GAPCW, GAPEB, GAPVS, or GAPVB, i.e. the ratio of the maximum value to the minimum value of the corresponding objective function over all arrangements of ${G}_{n}$, is asymptotically close to 1 with high probability, subject to appropriate sparsity conditions.

Theorem 1. Let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right),c<1/2$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{GAPCW}\left({G}_{n}\right)<1+\delta \right]>1-ϵ.$

Theorem 2. Let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right),c<1/2$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{GAPEB}\left({G}_{n}\right)<1+\delta \right]>1-ϵ.$

Theorem 3. Let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right),c<1$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{GAPVS}\left({G}_{n}\right)<1+\delta \right]>1-ϵ.$

Theorem 4. Let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right),c<1$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{GAPVB}\left({G}_{n}\right)<1+\delta \right]>1-ϵ.$

To prove Theorem 1, we first establish the following lemmas:

Lemma 4. Let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right),c<1/2$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{MINCW}\left({G}_{n}\right)\ge \frac{{n}^{2}{p}_{n}\left(1-\delta \right)}{4}\right]>1-ϵ.$

Lemma 5. Suppose that ${p}_{n}=\Omega \left({n}^{-c}\right),c<1/2$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{MAXCW}\left({G}_{n}\right)\le \frac{{n}^{2}{p}_{n}\left(1+\delta \right)}{4}\right]>1-ϵ.$

We will make use of the following definition in our proofs for the above lemmas: For a graph $G=\left(V,E\right)$ and $A\subseteq V$, $c\left(G,A\right)$ is defined to be the number of edges joining a vertex in A and a vertex in $V-A$. That is,

$c\left(G,A\right)=|\left\{uv\in E|u\in A\wedge v\in V-A\right\}|.$

Proof of Lemma 4. For a linear arrangement $\varphi \in {\Lambda }_{G}$, define

${L}_{\varphi }=L\left(⌊n/2⌋,\varphi ,G\right)=\left\{u\in V|\varphi \left(u\right)\le ⌊n/2⌋\right\}.$

Clearly, $\text{CW}\left(G,\varphi \right)\ge c\left(G,{L}_{\varphi }\right)$. It follows that $\text{MINCW}\left(G\right)\ge {\mathrm{min}}_{\varphi }c\left(G,{L}_{\varphi }\right)$. Suppose that $|V|=n$. Observe that ${\mathrm{min}}_{\varphi }c\left(G,{L}_{\varphi }\right)={\mathrm{min}}_{S\in \mathcal{B}}c\left(G,S\right)$ where $\mathcal{B}=\left\{S\subseteq V||S|=⌊n/2⌋\right\}$. Hence, for all $\alpha \ge 0$,

$P\left[\text{MINCW}\left({G}_{n}\right)\ge \alpha \right]\ge P\left[\underset{S\in \mathcal{B}}{\mathrm{min}}c\left({G}_{n},S\right)\ge \alpha \right].$

To prove the lemma, it suffices to show that

$P\left[\underset{S\in \mathcal{B}}{\mathrm{min}}c\left({G}_{n},S\right)\ge \frac{{n}^{2}{p}_{n}\left(1-\delta \right)}{4}\right]>1-ϵ$

for n sufficiently large.

Let S be an arbitrary subset of V with $|S|=⌊n/2⌋$. As ${G}_{n}$ is an Erdös-Renyi random graph with vertex set V and edge probability ${p}_{n}$, $c\left({G}_{n},S\right)$ is a binomial random variable with mean $\mu =⌊n/2⌋⌈n/2⌉{p}_{n}\ge \frac{\left({n}^{2}-1\right){p}_{n}}{4}$.

Applying the first inequality in Lemma 1 with $\gamma ={\gamma }_{n}$ where ${\gamma }_{n}>0$, we obtain that

$P\left[c\left({G}_{n},S\right)\le \mu \left(1-\frac{{\gamma }_{n}}{{p}_{n}}\right)\right]\le \mathrm{exp}\left(-\left({n}^{2}-1\right){\gamma }_{n}^{2}/2\right).$

As ${p}_{n}=\Omega \left({n}^{-c}\right)$ with $c<1/2$, we can choose ${\gamma }_{n}$ to get the desired convergence. Indeed, setting ${\gamma }_{n}={n}^{-l}$ where l satisfies $c<l<1/2$, we have ${\gamma }_{n}=o\left({p}_{n}\right)$ and ${\gamma }_{n}^{2}=\Omega \left({n}^{-1+s}\right)$ for some $s>0$.

$\begin{array}{c}P\left[c\left({G}_{n},S\right)\le \mu \left(1-\frac{{\gamma }_{n}}{{p}_{n}}\right)\right]\le \mathrm{exp}\left(-\left({n}^{2}-1\right){\gamma }_{n}^{2}/2\right)\\ =\mathrm{exp}\left(-\left({n}^{2}-1\right)\Omega \left({n}^{-1+s}\right)\right)\\ =\mathrm{exp}\left(-\Omega \left({n}^{1+s}\right)\right).\end{array}$

Note that $\left(\begin{array}{c}n\\ ⌊n/2⌋\end{array}\right)\le {2}^{n}$. Hence, by the union bound,

$P\left[\underset{S\in \mathcal{B}}{\bigvee }c\left({G}_{n},S\right)\le \mu \left(1-\frac{{\gamma }_{n}}{{p}_{n}}\right)\right]\le \underset{S\in \mathcal{B}}{\sum }P\left[c\left({G}_{n},S\right)\le \mu \left(1-\frac{{\gamma }_{n}}{{p}_{n}}\right)\right]\le \left(\begin{array}{c}n\\ ⌊n/2⌋\end{array}\right)\mathrm{exp}\left(-\Omega \left({n}^{1+s}\right)\right)\le {2}^{n}\mathrm{exp}\left(-\Omega \left({n}^{1+s}\right)\right)<ϵ$

for n sufficiently large. Thus,

$P\left[\underset{S\in \mathcal{B}}{\mathrm{min}}c\left({G}_{n},S\right)\ge \mu \left(1-\frac{{\gamma }_{n}}{{p}_{n}}\right)\right]\ge 1-P\left[\underset{S\in \mathcal{B}}{\bigvee }c\left({G}_{n},S\right)\le \mu \left(1-\frac{{\gamma }_{n}}{{p}_{n}}\right)\right]>1-ϵ$

for n sufficiently large.

Since $\frac{{\gamma }_{n}}{{p}_{n}}=O\left({n}^{c-l}\right)=o\left(1\right)$, for a sufficiently large n,

$\mu \left(1-\frac{{\gamma }_{n}}{{p}_{n}}\right)\ge \frac{\left({n}^{2}-1\right){p}_{n}}{4}\left(1-\frac{{\gamma }_{n}}{{p}_{n}}\right)\ge \frac{{n}^{2}{p}_{n}\left(1-\delta \right)}{4}.$

Hence, for n sufficiently large,

$P\left[\underset{S\in \mathcal{B}}{\mathrm{min}}c\left({G}_{n},S\right)\ge \frac{{n}^{2}{p}_{n}\left(1-\delta \right)}{4}\right]>1-ϵ$

as desired.

Proof of Lemma 5. Let $ϵ,\delta >0$. Observe that for all $\alpha \ge 0$,

$P\left[\text{MAXCW}\left({G}_{n}\right)\le \alpha \right]=P\left[\underset{\begin{array}{c}S\subseteq V\\ |S|\le ⌊n/2⌋\end{array}}{\mathrm{max}}c\left({G}_{n},S\right)\le \alpha \right],$

and that

$P\left[c\left({G}_{n},S\right)\le \alpha \right]\ge P\left[c\left({G}_{n},T\right)\le \alpha \right]$

for all $S,T\subseteq V$ such that $|S|\le ⌊n/2⌋$ and $|T|=⌊n/2⌋$. To see the second inequality, note that $c\left({G}_{n},S\right)$ is the sum of $|S|\left(n-|S|\right)$ i.i.d. Bernoulli random variables with probability of success ${p}_{n}$ and $|S|\left(n-|S|\right)$ is maximized at $|S|=⌊n/2⌋$. Hence, to prove the lemma, it suffices to show that

$P\left[\underset{\begin{array}{c}T\subseteq V\\ |T|=⌊n/2⌋\end{array}}{\mathrm{max}}c\left({G}_{n},T\right)\le \frac{{n}^{2}{p}_{n}\left(1+\delta \right)}{4}\right]>1-ϵ$

for n sufficiently large.

Let $\mathcal{B}$ denote the set $\left\{S\subseteq V||S|=⌊n/2⌋\right\}$. Let $T\in \mathcal{B}$. Then, by the second inequality in Lemma 1 with $\gamma ={\gamma }_{n}$ where ${\gamma }_{n}>0$, we obtain that

$P\left[c\left({G}_{n},T\right)\ge \mu \left(1+\frac{{\gamma }_{n}}{{p}_{n}}\right)\right]\le \mathrm{exp}\left(-\left({n}^{2}-1\right){\gamma }_{n}^{2}/2\right)$

where $\mu =⌊n/2⌋⌈n/2⌉{p}_{n}$ as in the proof of Lemma 4.

As ${p}_{n}=\Omega \left({n}^{-c}\right)$ with $c<1/2$, we again set ${\gamma }_{n}={n}^{-l}$ where l satisfies $c<l<1/2$, so that ${\gamma }_{n}=o\left({p}_{n}\right)$ and ${\gamma }_{n}^{2}=\Omega \left({n}^{-1+s}\right)$ for some $s>0$. Then,

$P\left[c\left({G}_{n},T\right)\ge \mu \left(1+\frac{{\gamma }_{n}}{{p}_{n}}\right)\right]\le \mathrm{exp}\left(-\Omega \left({n}^{1+s}\right)\right).$

Hence,

$\underset{T\in \mathcal{B}}{\sum }P\left[c\left({G}_{n},T\right)\ge \mu \left(1+\frac{{\gamma }_{n}}{{p}_{n}}\right)\right]\le {2}^{n}\mathrm{exp}\left(-\Omega \left({n}^{1+s}\right)\right)<ϵ$

for a sufficiently large n.

As $\frac{{\gamma }_{n}}{{p}_{n}}=O\left({n}^{c-l}\right)=o\left(1\right)$, we have that

$\frac{{n}^{2}{p}_{n}\left(1+\frac{{\gamma }_{n}}{{p}_{n}}\right)}{4}\le \frac{{n}^{2}{p}_{n}\left(1+\delta \right)}{4},$

for a sufficiently large n. Thus, for n sufficiently large,

$\begin{array}{l}P\left[\underset{T\in \mathcal{B}}{\mathrm{max}}c\left({G}_{n},T\right)\le \frac{{n}^{2}{p}_{n}\left(1+\delta \right)}{4}\right]\\ \ge 1-\underset{T\in \mathcal{B}}{\sum }P\left[c\left({G}_{n},T\right)\ge \frac{{n}^{2}{p}_{n}\left(1+\frac{{\gamma }_{n}}{{p}_{n}}\right)}{4}\right]>1-ϵ\end{array}$

as desired.

With these two lemmas, the main theorem for the cut width gap can be readily established.

Proof of Theorem 1. As in the statement of the theorem, let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right)$ for some $c<1/2$, and let $\delta ,ϵ>0$ be given. Since

$\underset{x\to 0}{\mathrm{lim}}\frac{1+x}{1-x}=1,$

there exists ${\delta }^{\prime }\in \left(0,1\right)$ satisfying $\frac{1+{\delta }^{\prime }}{1-{\delta }^{\prime }}<1+\delta$. By Lemma 4, there exists ${N}_{1}$ such that for $n\ge {N}_{1}$,

$P\left[\text{MINCW}\left({G}_{n}\right)\ge \frac{{n}^{2}{p}_{n}\left(1-{\delta }^{\prime }\right)}{4}\right]>1-ϵ/2.$

By Lemma 5, there exists ${N}_{2}$ such that for $n\ge {N}_{2}$,

$P\left[\text{MAXCW}\left({G}_{n}\right)\le \frac{{n}^{2}{p}_{n}\left(1+{\delta }^{\prime }\right)}{4}\right]>1-ϵ/2.$

Hence, if $N=\mathrm{max}\left\{{N}_{1},{N}_{2}\right\}$, then for $n\ge N$ we have that

$\begin{array}{l}1-ϵ=1-2\left(ϵ/2\right)\\ \le P\left[\left(\text{MINCW}\left({G}_{n}\right)\ge \frac{{n}^{2}{p}_{n}\left(1-{\delta }^{\prime }\right)}{4}\right)\wedge \left(\text{MAXCW}\left({G}_{n}\right)\le \frac{{n}^{2}{p}_{n}\left(1+{\delta }^{\prime }\right)}{4}\right)\right].\end{array}$

As $\left(a\ge b>0\right)\wedge \left(0<c\le d\right)$ implies $\frac{c}{a}\le \frac{d}{b}$, it follows that

$P\left[\frac{\text{MAXCW}\left({G}_{n}\right)}{\text{MINCW}\left({G}_{n}\right)}\le \frac{1+{\delta }^{\prime }}{1-{\delta }^{\prime }}\right]>1-ϵ.$

Thus, $P\left[\text{GAPCW}\left({G}_{n}\right)<1+\delta \right]>1-ϵ$ as desired.

Since edge bisection is essentially a restricted version of the cutwidth problem, it is straightforward to carry over the proofs above to prove Theorem 2.

Proof of Theorem 2. Note that for a graph G, $\text{MINEB}\left(G\right)$ is simply the minimization of $c\left(G,S\right)$ over subsets $S\subset V$ with $|S|=⌊n/2⌋$, and $\text{MAXEB}\left(G\right)$ the corresponding maximization. Hence, the proof of Lemma 4 carries through to give that

$P\left[\text{MINEB}\left({G}_{n}\right)\ge \frac{{n}^{2}{p}_{n}\left(1-\delta \right)}{4}\right]>1-ϵ$

for any given $ϵ,\delta >0$ when n is sufficiently large.

Similarly, the proof of Lemma 5 carries through to yield that

$P\left[\text{MAXEB}\left({G}_{n}\right)\le \frac{{n}^{2}{p}_{n}\left(1+\delta \right)}{4}\right]>1-ϵ$

for any given $ϵ,\delta >0$ when n is sufficiently large. Combining these two results in the same manner as in the proof of Theorem 1 gives the desired result.
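Empirically (our illustration; $n=400$ and ${p}_{n}=0.3$ are arbitrary choices), the cost $c\left(G,S\right)$ of random balanced cuts of a sample from $G\left(n,p\right)$ concentrates tightly around ${n}^{2}p/4$, which is exactly the phenomenon driving the gap toward 1:

```python
import random

def balanced_cut_costs(n, p, samples=50, seed=0):
    # Sample one G(n, p), then measure c(G, S) over random balanced S.
    rng = random.Random(seed)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p]
    costs = []
    for _ in range(samples):
        verts = list(range(n))
        rng.shuffle(verts)
        left = set(verts[: n // 2])
        # Count edges with exactly one endpoint on the left.
        costs.append(sum((u in left) != (v in left) for u, v in edges))
    return costs

n, p = 400, 0.3
costs = balanced_cut_costs(n, p)
target = n * n * p / 4
print(min(costs) / target, max(costs) / target)  # both near 1
```
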

The following lemma essentially proves Theorem 3.

Lemma 6. Let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right),c<1$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{MINVS}\left({G}_{n}\right)\ge \left(1-\delta \right)n-1\right]>1-ϵ.$

We will make use of the following definition in our proof for the above lemma. For a graph $G=\left(V,E\right)$ and $A\subseteq V$, $v\left(G,A\right)$ is defined to be the number of vertices in A which are connected to a vertex in $V-A$. That is,

$v\left(G,A\right)=|\left\{u\in A|\exists v\in V-A\wedge uv\in E\right\}|.$

Proof of Lemma 6. So as to obtain the desired convergence, for each positive integer n, set

${\delta }_{n}={n}^{-l},{ϵ}_{n}={n}^{-s},$

where $0<l<s/2$ and $s<1-c$. Consider a linear arrangement $\varphi \in {\Lambda }_{{G}_{n}}$. Note that for any k, $\text{VS}\left({G}_{n},\varphi \right)\ge v\left({G}_{n},L\left(k,\varphi ,{G}_{n}\right)\right)$. In particular, if we define

${S}_{\varphi ,{ϵ}_{n}}=L\left(⌊\left(1-{ϵ}_{n}\right)n⌋,\varphi ,{G}_{n}\right),$

then $\text{VS}\left({G}_{n},\varphi \right)\ge v\left({G}_{n},{S}_{\varphi ,{ϵ}_{n}}\right)$. It then follows that $\text{MINVS}\left({G}_{n}\right)\ge {\mathrm{min}}_{\varphi }v\left({G}_{n},{S}_{\varphi ,{ϵ}_{n}}\right)$. Observe that

${\mathrm{min}}_{\varphi }v\left({G}_{n},{S}_{\varphi ,{ϵ}_{n}}\right)={\mathrm{min}}_{S\in \mathcal{B}}v\left({G}_{n},S\right)$

where $\mathcal{B}=\left\{S\subseteq V||S|=⌊\left(1-{ϵ}_{n}\right)n⌋\right\}$. Hence, for all $\alpha \ge 0$,

$P\left[\text{MINVS}\left({G}_{n}\right)\ge \alpha \right]\ge P\left[\underset{S\in \mathcal{B}}{\mathrm{min}}v\left({G}_{n},S\right)\ge \alpha \right].$

To prove the lemma, it suffices to show that for any $\delta ,ϵ>0$,

$P\left[\underset{S\in \mathcal{B}}{\mathrm{min}}v\left({G}_{n},S\right)\ge \left(1-\delta \right)n-1\right]>1-ϵ$

for n sufficiently large.

Let S be an arbitrary element of $\mathcal{B}$. As there are $⌈{ϵ}_{n}n⌉$ vertices in $V-S$, the probability of a given vertex $v\in S$ not being connected to any vertex in $V-S$ is ${\left(1-{p}_{n}\right)}^{⌈{ϵ}_{n}n⌉}$. Hence, $v\left({G}_{n},S\right)$ is a binomial random variable on $m=⌊\left(1-{ϵ}_{n}\right)n⌋$ trials with event probability $q=1-{\left(1-{p}_{n}\right)}^{⌈{ϵ}_{n}n⌉}$. By Lemma 1, we have that

$P\left[v\left({G}_{n},S\right)\le \left(q-{\delta }_{n}\right)m\right]\le \mathrm{exp}\left(-2{\delta }_{n}^{2}m\right).$

As $|\mathcal{B}|=\left(\begin{array}{c}n\\ ⌈{ϵ}_{n}n⌉\end{array}\right)$,

$P\left[\underset{S\in \mathcal{B}}{\bigvee }v\left({G}_{n},S\right)\le \left(q-{\delta }_{n}\right)m\right]\le \underset{S\in \mathcal{B}}{\sum }P\left[v\left({G}_{n},S\right)\le \left(q-{\delta }_{n}\right)m\right]=\left(\begin{array}{c}n\\ ⌈{ϵ}_{n}n⌉\end{array}\right)P\left[v\left({G}_{n},S\right)\le \left(q-{\delta }_{n}\right)m\right]\le \left(\begin{array}{c}n\\ ⌈{ϵ}_{n}n⌉\end{array}\right)\mathrm{exp}\left(-2{\delta }_{n}^{2}m\right).$

The first and third lines of the above computation follow respectively from the union bound and the preceding inequality. The second line follows from the fact that if $S,{S}^{\prime }$ are any two elements of $\mathcal{B}$, then the distributions of the random variables $v\left({G}_{n},S\right),v\left({G}_{n},{S}^{\prime }\right)$ with ${G}_{n}$ sampled from $G\left(n,{p}_{n}\right)$ are isomorphic via any permutation of the vertex set $\left\{1,2,\cdots ,n\right\}$ which carries S to ${S}^{\prime }$. In particular

$P\left[v\left({G}_{n},S\right)\le \left(q-{\delta }_{n}\right)m\right]=P\left[v\left({G}_{n},{S}^{\prime }\right)\le \left(q-{\delta }_{n}\right)m\right],$

so that the second line follows from the fact that $|\mathcal{B}|=\left(\begin{array}{c}n\\ ⌈{ϵ}_{n}n⌉\end{array}\right)$.

By Lemma 2, we find that

$\mathrm{log}\left(\begin{array}{c}n\\ ⌈{ϵ}_{n}n⌉\end{array}\right)=\left(1+o\left(1\right)\right)⌈{ϵ}_{n}n⌉\mathrm{log}\frac{1}{{ϵ}_{n}}$

since $⌈{ϵ}_{n}n⌉=⌈{n}^{1-s}⌉=o\left(n\right)$, so that

$\begin{array}{l}\mathrm{log}P\left[\underset{S\in \mathcal{B}}{\bigvee }v\left({G}_{n},S\right)\le \left(q-{\delta }_{n}\right)m\right]\le \mathrm{log}\left(\left(\begin{array}{c}n\\ ⌈{ϵ}_{n}n⌉\end{array}\right)\mathrm{exp}\left(-2{\delta }_{n}^{2}m\right)\right)\\ =\mathrm{log}\left(\begin{array}{c}n\\ ⌈{ϵ}_{n}n⌉\end{array}\right)-2{\delta }_{n}^{2}m=\left(1+o\left(1\right)\right)⌈{ϵ}_{n}n⌉\mathrm{log}\frac{1}{{ϵ}_{n}}-2{\delta }_{n}^{2}m.\end{array}$

Substituting in our definitions ${\delta }_{n}={n}^{-l},{ϵ}_{n}={n}^{-s}$, we find that since $⌊\left(1-{ϵ}_{n}\right)n⌋=n\left(1-o\left(1\right)\right)$,

$\begin{array}{l}\mathrm{log}P\left[\underset{S\in \mathcal{B}}{\bigvee }\text{ }v\left({G}_{n},S\right)\le \left(q-{\delta }_{n}\right)m\right]\\ \le \left(1+o\left(1\right)\right)⌈{n}^{1-s}⌉\mathrm{log}\left({n}^{s}\right)-2{n}^{-2l}⌊\left(1-{n}^{-s}\right)n⌋\\ =\left(s+o\left(1\right)\right)⌈{n}^{1-s}⌉\mathrm{log}\left(n\right)-2{n}^{1-2l}\left(1-{n}^{-s}\right)\left(1-o\left(1\right)\right).\end{array}$

This expression tends to negative infinity as $n\to \infty$ since $1-2l>1-s$ by the condition that $l<\frac{s}{2}$, implying that

$P\left[\underset{S\in \mathcal{B}}{\bigvee }\text{ }v\left({G}_{n},S\right)\le \left(q-{\delta }_{n}\right)m\right]$

tends to 0 from above. Thus, for n sufficiently large, we have that

$P\left[\underset{S\in \mathcal{B}}{\bigvee }\text{ }v\left({G}_{n},S\right)\le \left(q-{\delta }_{n}\right)m\right]<ϵ,$

and hence

$P\left[\underset{S\in \mathcal{B}}{\mathrm{min}}v\left({G}_{n},S\right)\ge \left(q-{\delta }_{n}\right)m\right]>1-ϵ.$
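As a numerical sanity check of the exponent computation above, one can evaluate the bound $\left(s+o\left(1\right)\right)⌈{n}^{1-s}⌉\mathrm{log}n-2{n}^{-2l}⌊\left(1-{n}^{-s}\right)n⌋$ directly, dropping the $o\left(1\right)$ terms. This is a sketch with illustrative choices of $s$ and $l$ satisfying $l<s/2$; the helper name is ours and is not part of the proof.

```python
import math

# Sketch: evaluate the upper bound on log P derived above, dropping the
# o(1) terms, for parameters with l < s/2; the bound should be negative
# and decreasing in n, matching the claim that it tends to -infinity.
def log_prob_upper_bound(n, s, l):
    positive = s * math.ceil(n ** (1 - s)) * math.log(n)
    negative = 2 * n ** (-2 * l) * math.floor((1 - n ** (-s)) * n)
    return positive - negative

s, l = 0.5, 0.2  # any choice with l < s/2 works
bounds = [log_prob_upper_bound(n, s, l) for n in (10**3, 10**5, 10**7)]
assert bounds[0] > bounds[1] > bounds[2]  # decreasing in n
assert bounds[-1] < 0                     # eventually negative
```

The negative term $2{n}^{1-2l}$ grows polynomially faster than the positive term $\Theta \left({n}^{1-s}\mathrm{log}n\right)$ precisely because $1-2l>1-s$.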

Moreover, if we substitute in our definitions $q=1-{\left(1-{p}_{n}\right)}^{⌈{ϵ}_{n}n⌉}$, ${\delta }_{n}={n}^{-l}$, $m=⌊\left(1-{ϵ}_{n}\right)n⌋$, we find that

$\left(q-{\delta }_{n}\right)m=\left(1-{\left(1-{p}_{n}\right)}^{⌈{n}^{1-s}⌉}-{n}^{-l}\right)⌊\left(1-{n}^{-s}\right)n⌋.$

Since by definition $s<1-c$, where ${p}_{n}=\Omega \left({n}^{-c}\right)$, we have that $1-s>c$, and hence by Lemma 3, we have that ${\left(1-{p}_{n}\right)}^{⌈{n}^{1-s}⌉}\to 0$ as $n\to \infty$. Thus,

$\begin{array}{c}\left(q-{\delta }_{n}\right)m=\left(1-{\left(1-{p}_{n}\right)}^{⌈{n}^{1-s}⌉}-{n}^{-l}\right)⌊\left(1-{n}^{-s}\right)n⌋\\ =\left(1-o\left(1\right)\right)⌊\left(1-{n}^{-s}\right)n⌋\\ =\left(1-o\left(1\right)\right)n-1,\end{array}$

so that for any $\delta >0$,

$\left(q-{\delta }_{n}\right)m\ge \left(1-\delta \right)n-1$

for n sufficiently large. Hence, for any $\delta ,ϵ>0$, we find that for n sufficiently large,

$\begin{array}{l}P\left[\text{MINVS}\left({G}_{n}\right)\ge \left(1-\delta \right)n-1\right]\\ \ge P\left[\text{MINVS}\left({G}_{n}\right)\ge \left(q-{\delta }_{n}\right)m\right]\\ \ge P\left[\underset{S\in \mathcal{B}}{\mathrm{min}}v\left({G}_{n},S\right)\ge \left(q-{\delta }_{n}\right)m\right]\\ >1-ϵ,\end{array}$

as desired.

Proof of Theorem 3. Observe that $\text{MAXVS}\left({G}_{n}\right)\le n-1$. Let $\delta ,ϵ>0$ be given. Since $\frac{n-1}{\left(1-x\right)n-1}\to 1$ from above as $x\to 0$, choose ${\delta }^{\prime }>0$ so that for large n, $\frac{n-1}{\left(1-{\delta }^{\prime }\right)n-1}<1+\delta$. By Lemma 6, there exists a natural number N such that for $n\ge N$,

$P\left[\text{MINVS}\left({G}_{n}\right)\ge \left(1-{\delta }^{\prime }\right)n-1\right]>1-ϵ.$

Thus, with probability greater than $1-ϵ$,

$\frac{\text{MAXVS}\left({G}_{n}\right)}{\text{MINVS}\left({G}_{n}\right)}\le \frac{n-1}{\left(1-{\delta }^{\prime }\right)n-1}<1+\delta ,$

so that

$P\left[\text{GAPVS}\left({G}_{n}\right)<1+\delta \right]>1-ϵ$

as desired.

Even though our proof of Theorem 4 does not follow quite as readily from the proof of Theorem 3 as in the edge bisection/cutwidth case above, it does not require any new techniques.

Proof of Theorem 4. Let $\delta ,ϵ>0$ be given. Since $\frac{1}{1-x}\to 1$ from above as $x\to 0$, choose ${\delta }^{\prime }>0$ so that $\frac{1}{1-{\delta }^{\prime }}<1+\delta$. As $\text{MAXVB}\left({G}_{n}\right)\le ⌊\frac{n}{2}⌋$ trivially, we focus on the lower bound for $\text{MINVB}\left({G}_{n}\right)$. Set ${\delta }_{n}={n}^{-s}$ with s chosen such that $1-s>c$, where ${p}_{n}=\Omega \left({n}^{-c}\right)$. Since ${\delta }_{n}\to 0$, we have that

$P\left[\text{MINVB}\left({G}_{n}\right)\le ⌊\frac{n}{2}⌋\left(1-{\delta }^{\prime }\right)\right]\le P\left[\text{MINVB}\left({G}_{n}\right)\le ⌊\frac{n}{2}⌋\left(1-{\delta }_{n}\right)\right]$

for all n sufficiently large. We will show that

$P\left[\text{MINVB}\left({G}_{n}\right)\le ⌊\frac{n}{2}⌋\left(1-{\delta }_{n}\right)\right]\to 0,$

which will prove the theorem.

For all $v\in V$, we denote the set of neighbors of v by $N\left(v\right)$ ; that is, $N\left(v\right)=\left\{u\in V|uv\in E\right\}$. If $\varphi \in {\Lambda }_{{G}_{n}}$ is a linear arrangement such that $\text{VB}\left({G}_{n},\varphi \right)\le ⌊\frac{n}{2}⌋\left(1-{\delta }_{n}\right)$, then there must exist a subset $S\subset V$ with $|S|\ge ⌊\frac{n}{2}⌋{\delta }_{n}$ such that

$|\underset{v\in S}{\cap }N{\left(v\right)}^{c}|\ge ⌊\frac{n}{2}⌋,$

where $N{\left(v\right)}^{c}$ is the complement of $N\left(v\right)$ in V. Indeed, if $\text{VB}\left({G}_{n},\varphi \right)\le ⌊\frac{n}{2}⌋\left(1-{\delta }_{n}\right)$, then by definition there exist at least ${\delta }_{n}⌊\frac{n}{2}⌋$ vertices in the left half of the arrangement $\varphi$ which are not connected to any of the $⌊\frac{n}{2}⌋$ vertices in the right half.

We estimate the probability that such a set S can exist. Note that it suffices to bound the probability that such an S with $|S|=⌈{\delta }_{n}⌊\frac{n}{2}⌋⌉$ can exist, since any such S with $|S|\ge {\delta }_{n}⌊\frac{n}{2}⌋$ contains a subset ${S}^{\prime }\subset S$ with $|{S}^{\prime }|=⌈{\delta }_{n}⌊\frac{n}{2}⌋⌉$ satisfying $|{\cap }_{v\in {S}^{\prime }}\text{ }N{\left(v\right)}^{c}|\ge ⌊\frac{n}{2}⌋$. Let $S\subset V$ of size $⌈{\delta }_{n}⌊\frac{n}{2}⌋⌉$ be fixed. Each vertex $u\in {S}^{c}$ lies in $N{\left(v\right)}^{c}$ with probability $1-{p}_{n}$, so that for each $u\in {S}^{c}$,

$P\left[u\in \underset{v\in S}{\cap }N{\left(v\right)}^{c}\right]={\left(1-{p}_{n}\right)}^{|S|}={\left(1-{p}_{n}\right)}^{⌈{\delta }_{n}⌊\frac{n}{2}⌋⌉}={\left(1-{p}_{n}\right)}^{n{\delta }_{n}\left(1+o\left(1\right)\right)/2}.$
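This probability can be checked empirically by simulating the $|S|$ independent potential edges from a fixed u into S. The following Monte Carlo sketch is ours (illustrative parameters, not part of the proof):

```python
import random

# Monte Carlo sketch of the probability computed above: a fixed vertex
# u in S^c avoids the neighborhood of every v in S exactly when all |S|
# potential edges from u into S are absent, each independently with
# probability 1 - p_n.
def avoids_all_of_S(trials, p, size_S, rng):
    hits = 0
    for _ in range(trials):
        if all(rng.random() >= p for _ in range(size_S)):
            hits += 1
    return hits / trials

rng = random.Random(0)
p, size_S = 0.1, 10
empirical = avoids_all_of_S(200_000, p, size_S, rng)
exact = (1 - p) ** size_S
assert abs(empirical - exact) < 0.01
```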

Let ${X}_{n}$ be the random variable over the set of random graphs on n vertices with edge probability ${p}_{n}$ whose value is the cardinality of ${\cap }_{v\in S}\text{ }N{\left(v\right)}^{c}$. (Recall that S has been fixed.) Applying the above reasoning to each vertex $u\in {S}^{c}$, we see that ${X}_{n}$ is a binomial random variable on $N=|{S}^{c}|=n\left(1-\frac{{\delta }_{n}}{2}\right)\left(1+o\left(1\right)\right)$ trials with event probability $q={\left(1-{p}_{n}\right)}^{n{\delta }_{n}\left(1+o\left(1\right)\right)/2}$. Since $1-s>c$ by construction, we have that

$q={\left(1-{p}_{n}\right)}^{n{\delta }_{n}\left(1+o\left(1\right)\right)/2}={\left(1-{p}_{n}\right)}^{{n}^{1-s}\left(1+o\left(1\right)\right)/2}\to 0$

as $n\to \infty$ by Lemma 3. Letting $N,q$ be as before, set

$\gamma =\frac{n}{2N}-q=\left(\frac{1}{2\left(1-{\delta }_{n}/2\right)}-{\left(1-{p}_{n}\right)}^{n{\delta }_{n}/2}\right)\left(1+o\left(1\right)\right),$

so that $\left(q+\gamma \right)N=n/2$, and $\gamma =\frac{1}{2}\left(1+o\left(1\right)\right)$. Using Hoeffding’s inequality in Lemma 1 with $q,N,\gamma$, we obtain that

$\begin{array}{c}P\left[{X}_{n}\ge n/2\right]=P\left[{X}_{n}\ge \left(q+\gamma \right)N\right]\\ \le \mathrm{exp}\left(-2{\gamma }^{2}N\right)\\ =\mathrm{exp}\left(-2{\left(\frac{1}{2}\right)}^{2}N\left(1+o\left(1\right)\right)\right)\\ =\mathrm{exp}\left(-\frac{n}{2}\left(1-\frac{{\delta }_{n}}{2}\right)\left(1+o\left(1\right)\right)\right)\\ =\mathrm{exp}\left(-\frac{n}{2}\left(1+o\left(1\right)\right)\right).\end{array}$
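The Hoeffding step can be illustrated numerically by comparing an exact binomial upper tail with the bound $\mathrm{exp}\left(-2{\gamma }^{2}N\right)$. The parameters below are small illustrative choices of ours, not the asymptotic ones from the proof:

```python
import math

# Illustration of the Hoeffding step above: for X_n ~ Binomial(N, q),
# the upper tail P[X_n >= (q + gamma) * N] is at most exp(-2*gamma^2*N).
def binom_upper_tail(N, q, threshold):
    # exact P[X >= threshold] summed from the binomial pmf
    return sum(math.comb(N, k) * q ** k * (1 - q) ** (N - k)
               for k in range(math.ceil(threshold), N + 1))

N, q, gamma = 200, 0.05, 0.45  # (q + gamma) * N = 100, playing the role of n/2
tail = binom_upper_tail(N, q, (q + gamma) * N)
hoeffding_bound = math.exp(-2 * gamma ** 2 * N)
assert tail <= hoeffding_bound
```

Here the exact tail is far smaller than the Hoeffding bound, which is itself already exponentially small in N.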

By the union bound, the probability that there exists any such S among the subsets of V of size $k=⌈{\delta }_{n}⌊\frac{n}{2}⌋⌉$ is at most

$\left(\begin{array}{c}n\\ k\end{array}\right)\mathrm{exp}\left(-\frac{n}{2}\left(1+o\left(1\right)\right)\right).$

Since $k={n}^{1-s}\left(1+o\left(1\right)\right)/2$, by Lemma 2, we have the estimate

$\begin{array}{c}\mathrm{log}\left(\begin{array}{c}n\\ k\end{array}\right)=\left(1+o\left(1\right)\right)k\mathrm{log}\frac{n}{k}\\ =\left(1+o\left(1\right)\right)\frac{{n}^{1-s}}{2}\mathrm{log}2{n}^{s}\\ =\Theta \left({n}^{1-s}\mathrm{log}n\right),\end{array}$

so that $\left(\begin{array}{c}n\\ k\end{array}\right)=\mathrm{exp}\left(\Theta \left({n}^{1-s}\mathrm{log}n\right)\right)$. Thus, the union bound probability that there exists any such S is at most

$\begin{array}{c}\left(\begin{array}{c}n\\ k\end{array}\right)\mathrm{exp}\left(-\frac{n}{2}\left(1+o\left(1\right)\right)\right)=\mathrm{exp}\left(\Theta \left({n}^{1-s}\mathrm{log}n\right)\right)\mathrm{exp}\left(-\frac{n}{2}\left(1+o\left(1\right)\right)\right)\\ =\mathrm{exp}\left(\Theta \left({n}^{1-s}\mathrm{log}n-\frac{n}{2}\right)\right)\\ \to 0\end{array}$

as $n\to \infty$, since the $-\frac{n}{2}$ term dominates the $\Theta \left({n}^{1-s}\mathrm{log}n\right)$ term.
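The Lemma 2 estimate used above, $\mathrm{log}\left(\begin{array}{c}n\\ k\end{array}\right)=\left(1+o\left(1\right)\right)k\mathrm{log}\frac{n}{k}$ for $k=o\left(n\right)$, can be checked against the exact log-binomial computed via the log-gamma function. This is a sketch of ours with illustrative parameters:

```python
import math

# Sketch checking the Lemma 2 estimate: for k = o(n),
# log C(n, k) = (1 + o(1)) * k * log(n / k).  The exact log-binomial
# is computed via lgamma; the ratio should approach 1 as n grows.
def log_binom(n, k):
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1))

s = 0.5
ratios = []
for n in (10**4, 10**6, 10**8):
    k = math.ceil(n ** (1 - s) / 2)  # k = n^{1-s}(1 + o(1))/2 as in the text
    ratios.append(log_binom(n, k) / (k * math.log(n / k)))

assert ratios[0] > ratios[1] > ratios[2] > 1  # decreasing toward 1
assert ratios[-1] < 1.15
```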

We conclude that

$P\left[\text{MINVB}\left({G}_{n}\right)\le ⌊\frac{n}{2}⌋\left(1-{\delta }^{\prime }\right)\right]\le P\left[\text{MINVB}\left({G}_{n}\right)\le ⌊\frac{n}{2}⌋\left(1-{\delta }_{n}\right)\right]\le ϵ$

for any ${\delta }^{\prime },ϵ>0$ and n sufficiently large. As $P\left[\text{MAXVB}\left({G}_{n}\right)\le ⌊\frac{n}{2}⌋\right]=1$ and we chose ${\delta }^{\prime }$ to satisfy $\frac{1}{1-{\delta }^{\prime }}<1+\delta$, we conclude that

$\begin{array}{l}P\left[\text{GAPVB}\left({G}_{n}\right)\le 1+\delta \right]\\ \ge P\left[\text{MAXVB}\left({G}_{n}\right)\le ⌊\frac{n}{2}⌋\wedge \text{MINVB}\left({G}_{n}\right)\ge ⌊\frac{n}{2}⌋\left(1-{\delta }^{\prime }\right)\right]\\ =P\left[\text{MINVB}\left({G}_{n}\right)\ge ⌊\frac{n}{2}⌋\left(1-{\delta }^{\prime }\right)\right]\\ \ge 1-ϵ\end{array}$

for large enough n, as desired.

4.2. Directed Graph Problems

We now show that the analogous results hold for the directed versions of the above problems. Fortunately, no new techniques are required: the desired results for directed acyclic graphs follow immediately from our work on undirected graphs.

For the theorems that follow, let ${\left\{{G}_{n}\right\}}_{n=1}^{\infty }$ be a sequence of random directed acyclic graphs such that for each $n=1,2,\cdots$, ${G}_{n}$ is independently sampled from a $D\left(n,{p}_{n}\right)$ Erdös-Renyi distribution with edge probability ${p}_{n}$. As before, the following theorems show that for each of the directed arrangement problems GAPDCW, GAPDEB, GAPDVS, and GAPDVB, the ratio of the maximum value to the minimum value of the corresponding objective function over all arrangements of ${G}_{n}$ is asymptotically close to 1 with high probability, subject to appropriate sparsity conditions.

Theorem 5. Let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right),c<1/2$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{GAPDCW}\left({G}_{n}\right)<1+\delta \right]>1-ϵ.$

Theorem 6. Let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right),c<1/2$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{GAPDEB}\left({G}_{n}\right)<1+\delta \right]>1-ϵ.$

Theorem 7. Let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right),c<1$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{GAPDVS}\left({G}_{n}\right)<1+\delta \right]>1-ϵ.$

Theorem 8. Let ${p}_{n}$ satisfy ${p}_{n}=\Omega \left({n}^{-c}\right),c<1$. Then for all $ϵ>0,\delta >0$ there exists an N such that for all $n\ge N$,

$P\left[\text{GAPDVB}\left({G}_{n}\right)<1+\delta \right]>1-ϵ.$

We illustrate the proof for Theorem 5; the proofs of Theorems 6, 7, and 8 follow verbatim.

Proof of Theorem 5. For ${G}_{n}$ a directed acyclic graph sampled from $D\left(n,{p}_{n}\right)$, let

$\text{MAXCW}\left({G}_{n}\right),\text{MINCW}\left({G}_{n}\right),\text{GAPCW}\left({G}_{n}\right)$

denote the quantities MAXCW, MINCW, GAPCW for the undirected graph corresponding to ${G}_{n}$ ; that is, ${G}_{n}$ with every directed edge replaced with an undirected edge. Recall that $\text{MAXCW}\left({G}_{n}\right)$ is the maximum of $\text{CW}\left({G}_{n},\varphi \right)$ over all permutations $\varphi$ of the vertex set V of ${G}_{n}$, whereas $\text{MAXDCW}\left({G}_{n}\right)$ is the maximum of $\text{CW}\left({G}_{n},\varphi \right)$ over all topological sorts $\varphi$ of ${G}_{n}$. Since the set of topological sorts is a subset of the set of permutations, we conclude that

$\text{MAXDCW}\left({G}_{n}\right)\le \text{MAXCW}\left({G}_{n}\right).$

Similarly, we deduce that

$\text{MINDCW}\left({G}_{n}\right)\ge \text{MINCW}\left({G}_{n}\right),$

so that

$\text{GAPDCW}\left({G}_{n}\right)=\frac{\text{MAXDCW}\left({G}_{n}\right)}{\text{MINDCW}\left({G}_{n}\right)}\le \frac{\text{MAXCW}\left({G}_{n}\right)}{\text{MINCW}\left({G}_{n}\right)}=\text{GAPCW}\left({G}_{n}\right).$

Note that this holds for all directed acyclic graphs ${G}_{n}$ drawn from $D\left(n,{p}_{n}\right)$.

In order to derive a gap convergence statement for GAPDCW from the convergence result for GAPCW, we need to relate the distributions $D\left(n,{p}_{n}\right)$ and $G\left(n,{p}_{n}\right)$. Let $\varphi$ be the map from the underlying set of $G\left(n,{p}_{n}\right)$ to the underlying set of $D\left(n,{p}_{n}\right)$ which takes an undirected graph ${G}_{n}$ on the vertex set $\left\{1,2,\cdots ,n\right\}$ to the corresponding directed graph with the oriented edge $\left(i,j\right)$ whenever $i<j$ and $\left\{i,j\right\}$ is an edge of ${G}_{n}$. By the definition of $D\left(n,{p}_{n}\right)$, $\varphi$ is a bijection between the underlying sets of $G\left(n,{p}_{n}\right)$ and $D\left(n,{p}_{n}\right)$, with inverse given by replacing each directed edge with the corresponding undirected edge. Moreover, by the definition of $D\left(n,{p}_{n}\right)$, $\varphi$ preserves probabilities between $G\left(n,{p}_{n}\right)$ and $D\left(n,{p}_{n}\right)$, so that $\varphi$ is an isomorphism between the probability distributions $G\left(n,{p}_{n}\right)$ and $D\left(n,{p}_{n}\right)$.
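The map $\varphi$ can be made concrete in a few lines. The following sketch (function names are ours) samples an undirected $G\left(n,p\right)$ and orients each edge $\left\{i,j\right\}$ with $i<j$ as the directed edge $\left(i,j\right)$; the image is acyclic because every edge points from a smaller label to a larger one, so the identity ordering is a topological sort:

```python
import random

# Illustrative sketch of the bijection phi described above.
def sample_gnp(n, p, rng):
    # undirected edges stored as pairs (i, j) with i < j
    return {(i, j) for i in range(1, n) for j in range(i + 1, n + 1)
            if rng.random() < p}

def phi(undirected_edges):
    # orient each edge from its smaller endpoint to its larger one
    return {(min(e), max(e)) for e in undirected_edges}

rng = random.Random(1)
G = sample_gnp(50, 0.2, rng)
D = phi(G)
assert {(min(e), max(e)) for e in D} == G  # invertible: forget orientation
assert all(i < j for (i, j) in D)          # acyclic: edges respect the ordering
```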

Thus, let ${p}_{n},ϵ,\delta$ be as in the statement of Theorem 5. By Theorem 1, there exists an N such that for all $n\ge N$,

$P\left[\text{GAPCW}\left({G}_{n}\right)<1+\delta \right]>1-ϵ,$

where ${G}_{n}$ is sampled from $G\left(n,{p}_{n}\right)$. By the isomorphism between $D\left(n,{p}_{n}\right)$ and $G\left(n,{p}_{n}\right)$, we have that the same statement holds if ${G}_{n}$ is instead sampled from $D\left(n,{p}_{n}\right)$ and $\text{GAPCW}\left({G}_{n}\right)$ is defined as previously by taking the underlying undirected graph. Since

$\text{GAPDCW}\left({G}_{n}\right)\le \text{GAPCW}\left({G}_{n}\right)$

for all ${G}_{n}$ sampled from $D\left(n,{p}_{n}\right)$, we conclude that

$P\left[\text{GAPDCW}\left({G}_{n}\right)<1+\delta \right]\ge P\left[\text{GAPCW}\left({G}_{n}\right)<1+\delta \right]>1-ϵ$

for all $n\ge N$, where ${G}_{n}$ is sampled from $D\left(n,{p}_{n}\right)$. This proves Theorem 5.

The same observation, that the set of topological sorts of a directed graph is a subset of the set of permutations of its vertices, implies that the quantities GAPDEB, GAPDVS, GAPDVB are bounded above by the corresponding undirected quantities as in the above proof. Thus, Theorems 6, 7, and 8 follow from the corresponding undirected versions, Theorems 2, 3, and 4, in precisely the same way, as desired.

5. Concluding Remarks

In this paper, we have shown that many graph layout problems of interest can be approximated to within a factor arbitrarily close to 1 of the optimal with high probability for large random graphs under appropriate sparsity conditions. We note that there is still room for improvement in our results. The previous factor of 2 approximations of Díaz, Petit, Serna, and Trevisan held for edge probabilities ${p}_{n}=\Omega \left({n}^{-1}\right)$, whereas our results for the layout problems on edges (that is, minimum cut linear arrangement, edge bisection, and their directed versions) only hold for ${p}_{n}=\Omega \left({n}^{-c}\right)$ for $c<1/2$. Thus, we pose the following:

Question: Can the factor of 1 approximation results for MINCW and MINEB be proven for random graphs with ${p}_{n}=\Omega \left({n}^{-1}\right)$, or more generally ${p}_{n}=\Omega \left(f\left(n\right)\right)$ for $f\left(n\right)=o\left({n}^{-1/2}\right)$? If not, can one determine the barrier between ${p}_{n}=\Omega \left({n}^{-1/2}\right)$ and ${p}_{n}=\Omega \left({n}^{-1}\right)$ at which the factor of 1 approximation no longer holds and must be replaced by a factor of 2?

Both outcomes to the above question would be interesting, but we would find it more surprising if there were such a “sparsity barrier” between factor of 1 and factor of 2 approximations. Moreover, the results of Díaz et al. do not experience the same tradeoff between sparsity and speed of convergence that ours do, a seeming consequence of the strength of their “mixing graph” framework. Here, by “speed of convergence” we refer to how quickly $\delta$ shrinks in our statements of the form “ $\text{GAP}\left({G}_{n}\right)\le 1+\delta$ with probability $1-ϵ$ ”, where $\text{GAP}\left({G}_{n}\right)$ is a stand-in for GAPCW, GAPVS, etc. Díaz et al. are able to take $\delta =O\left({n}^{-1}\right)$ independent of ${p}_{n}$, whereas we are typically only able to take $\delta =O\left({n}^{c-l}\right)$ where ${p}_{n}=O\left({n}^{-c}\right)$ and $c<l$. Thus, we ask:

Question: Can the factor of 1 approximation results for MINCW, MINVS, MINEB, MINVB be proven with $\delta =O\left({n}^{-1}\right)$, with $\delta$ as above, or at least for a $\delta$ which does not depend on the asymptotics of ${p}_{n}$?

Our asymptotics for $ϵ$ as above do technically depend on ${p}_{n}$, but regardless of ${p}_{n}$ they remain competitive with the $ϵ={2}^{-\Omega \left(n\right)}$ bound of Díaz et al., in contrast to the situation for $\delta$. Toward a possible solution of the above questions, we note that some of the key results about mixing graphs used in that work call upon the Hoeffding inequality, which was our primary probabilistic tool in this paper. Hence, it would be interesting to see whether the techniques of this paper and those of mixing graphs could be unified somehow to give our improved constant of approximation while retaining the better sparsity and convergence conditions of the mixing graph framework.

Acknowledgements

The authors would like to thank Emmanuel Ruiz and Ashkan Moatamed for conversations on research involving graph layout problems. Additionally, the research in this paper was made possible by the support of the Fields Institute through its 2017 Fields Undergraduate Summer Research Program.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Díaz, J., Petit, J. and Serna, M. (2002) A Survey of Graph Layout Problems. ACM Computing Surveys, 34, 313-356. https://doi.org/10.1145/568522.568523

[2] Lengauer, T. (1981) Black-White Pebbles and Graph Separation. Acta Informatica, 16, 465-475. https://doi.org/10.1007/BF00264496

[3] Gavril, F. (2011) Some NP-Complete Problems on Graphs. 2011 45th Annual Conference on Information Sciences and Systems, Baltimore, MD, 23-25 March 2011.

[4] Cheung, K., Girardet, P., Moatamed, A. and Ruiz, E. (2018) On a Directed Layout Problem. Manuscript in Preparation.

[5] Brandes, U. and Fleischer, D. (2009) Vertex Bisection Is Hard, too. Journal of Graph Algorithms and Applications, 13, 119-131. https://doi.org/10.7155/jgaa.00179

[6] Garey, M.R. and Johnson, D.S. (1979) Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman and Co., New York.

[7] Díaz, J., Petit, J., Trevisan, L. and Serna, M. (2001) Approximating Layout Problems on Random Graphs. Discrete Mathematics, 235, 245-253. https://doi.org/10.1016/S0012-365X(00)00278-8

[8] Gilbert, E.N. (1959) Random Graphs. The Annals of Mathematical Statistics, 30, 1141-1144.

[9] Barak, A.B. and Erdös, P. (1984) On the Maximal Number of Strongly Independent Vertices in a Random Acyclic Directed Graph. SIAM Journal on Algebraic Discrete Methods, 5, 508-514.

[10] Hoeffding, W. (1963) Probability Inequalities for Sums of Bounded Random Variables. Journal of the American Statistical Association, 58, 13-30. https://doi.org/10.1080/01621459.1963.10500830