A Probabilistic Alternative to Coverage Analysis in Uniform Random Wireless Networks

Junaid Farooq and Unnikrishna Pillai

Abstract

The analysis of coverage probability based on the signal-to-interference-plus-noise ratio (SINR) is a classical problem in the study of wireless and cellular networks. Stochastic geometry (SG) has opened up the possibility of accurate coverage characterization for random spatial deployments of base stations. While results obtained using SG are tractable and compact, they usually take the form of incomplete integrals that must be computed numerically. Although this is feasible with the computational capabilities available today, it masks the underlying structure of the analysis, precluding its use in optimization and system design problems. This paper provides an alternative approach to analyzing SINR-based coverage probability using direct probability computation. We analyze a uniformly random wireless network as a test case and compare the analytical results with widely accepted frameworks in the SG literature. Our analytical derivations, validated through simulation studies, agree with well-known results in the literature. The developed approach provides the groundwork for coverage analysis in more complex network scenarios and channel conditions.

Keywords: Coverage probability, Poisson point process, signal-to-interference-plus-noise-ratio, stochastic geometry, wireless networks

I. INTRODUCTION

The theoretical analysis of performance in wireless networks has always been of interest to wireless engineers. Future cellular and wireless networks need to provide ultra-reliability and low latency [1]–[3], which depend on the ability to give performance guarantees and, in turn, require analytical characterization. Various mathematical tools have been used to model wireless networks and to measure the level of performance achieved by a typical user served by the network [4]. A widely accepted performance metric is the signal-to-interference-plus-noise ratio (SINR), which depends on the signal power received at the receiver as well as the combined powers of noise and interference [5]. The received signal and interference powers are random quantities, since for a typical user, the nearest base station (BS) and the interferers can be located at stochastic distances. Furthermore, the wireless channels from the nearest BS and the interferers can also experience fading effects, which add to the randomness [6], [7]. Hence, the SINR itself becomes a random variable, and it is important to understand its distribution function to assess the performance of a typical user probabilistically. Fig. 1 illustrates the scenario under consideration for a typical mobile station in a cellular network, which connects to the nearest available BS at a distance [TeX:] $$r \in \mathbb{R} .$$ The other BSs at distances [TeX:] $$R_i \in \mathbb{R}, i \geq 2 \text {, }$$ in ascending order, are assumed to cause interference to the mobile's communication with its associated BS. The closest of these interferers, at a distance [TeX:] $$R_2 \in \mathbb{R},$$ is marked in red and plays a dominant role in the received SINR at the mobile.

Fig. 1.
Top view of a cellular network displaying the signal and interference powers received by a typical mobile user in the network. The closest or associated BS is shown in green color while the nearest interfering BS is shown in red color. The other interferers are shown in black color.

One possible technique for analyzing SINR-based coverage probability is to use stochastic geometry (SG) [8], [9], which provides a spatial average of the coverage probability over all possible realizations of the BS locations, defined by a point process [10], [11]. Uniform random wireless networks, also referred to as a Poisson point process (PPP) in the SG literature [12], and their variants are widely accepted models for capturing BS locations in cellular networks. At the heart of the SG approach is Campbell's theorem, which helps in evaluating sum-product functionals and is extremely useful in characterizing the interference in a random wireless network [13]. While Campbell's theorem provides a breakthrough in the form of the exact Laplace transform of the aggregate interference, the challenge lies in the fact that it is a double integral over the 2-D region of interferers. Evaluation of such integrals in closed form is possible in a select few cases, and even those cases result in expressions that are nested integrals and therefore have to be computed numerically. This precludes using them as objective functions or constraints in convex optimization problems formulated for the design of wireless systems, e.g., [14].

This paper revisits the classical analysis of downlink coverage probability in uniform random wireless cellular networks and provides an alternative probabilistic approach, which offers additional insights and leads to simpler expressions for special cases. In a cellular network setup as shown in Fig. 1, the interference power received by a typical mobile can be simplistically expressed as the sum [TeX:] $$\sum_{i=2}^N R_i^{-\eta},$$ where N − 1 is the number of interferers and η is the path-loss exponent, ignoring channel distortions and assuming unit transmit power. From a probabilistic viewpoint, it would make sense to apply the central limit theorem (CLT) to the interference term for large enough N and approximate the sum as a Gaussian random variable. However, it turns out that the interference term does not have finite moments, which rules out the applicability of the CLT [15]. Intuitively, the main reasons why the CLT fails are as follows: (i) the interference is a non-negative random variable, lower bounded by zero, so it does not resemble a Gaussian when the mean is small; (ii) the interference is often ill-conditioned, i.e., its variance is much larger than its mean. Interestingly, the ratio [TeX:] $$r / R_i$$ has finite mean and variance, which brightens the prospects of applying the CLT to its summation. However, since [TeX:] $$r / R_i$$ is strictly less than 1, its higher powers tend to be positively skewed, which again limits the applicability of the CLT to [TeX:] $$\sum_{i=2}^N\left(r / R_i\right)^\eta$$ for [TeX:] $$\eta \geq 2 .$$ This difficulty has made direct analytical development intractable, and hence, the SG approach rescues the analysis with the help of the Campbell-Mecke theorem [16] and the probability generating functional (PGFL) of PPPs [17].
Although the SG approach draws a blanket over the finer details of the model on the way to the eventual coverage probability, we show that it is possible to retain the structural details while maintaining tractability. We show that a direct computation of the coverage probability is possible, and we provide expressions for some commonly used cases. The main contributions of this paper are summarized as follows:

· We provide an alternate method to derive the probability of coverage in uniform random wireless networks using probabilistic modeling. This has been achieved by using a tractable modification of the interference characterization in the SINR analysis.

· We break down the analysis of coverage into a modular system where the statistics of interference and noise mixture is determined by considering a finite number of interferers. The resulting moments and joint moments have been used in accurate probabilistic modeling of the SINR.

· We provide closed-form analytical expressions for the coverage probability in commonly used special cases of power loss models, i.e., quartic power loss, cubic power loss, and quadratic power loss.

· Analytical results have been compared against existing well-established results from SG and validated through extensive simulations for different model parameters.

The rest of the paper is organized as follows: Section II provides an overview of related works in the literature, Section III outlines the system model used, Section IV describes the methodology and analysis, Section V provides simulation results and comparisons, and Section VI concludes the paper.

II. RELATED WORK

Performance analysis of wireless networks using stochastic modeling is widely investigated due to its relevance in providing high-quality communication services in the evolving cellular ecosystem involving 5G and IoT technologies. In a bid to generalize the system model, where the channel gains, transmit powers, and distances of the interferers and the associated BS are considered random, there has been strong interest among physical-layer wireless researchers in accurately capturing the coverage probability [18]. However, considering extensive randomness complicates the analysis to the extent that the problem may become intractable [19], [20]. The state of the art in solving these problems relies on SG, which has revolutionized the performance analysis of wireless communication systems in recent years and enabled the analysis of more complex systems such as heterogeneous cellular networks with more structured user and BS locations [21]. The SG-based approach has also been successfully applied to vehicular networks [22], IoT networks [23], and satellite networks [24], to name a few.

The fundamental problem in coverage analysis is dealing with the interference in the network [25]. More specifically, there is a singularity problem due to the nearest interferer, which makes the statistics impossible to evaluate [26]. Moreover, the distribution of the interference power is not Gaussian despite the presence of a large number of interferers [27]. In fact, it is highly skewed, and hence Laplace transforms of the aggregate interference are used for accurate characterization [28]. However, these are in the form of incomplete integrals, similar to the SG analysis. Attempts have been made in the literature to use Gaussian approximations for interference modeling [29]. However, these result in loose bounds, and the approximation may not yield tractable expressions for the coverage, as we show in this paper. We circumvent the singularity problem by avoiding the process of finding the exact density functions related to the interference terms. Despite the existing SG literature in this domain, the problem is worth revisiting due to its key role in the performance analysis of a diverse range of wireless networks.

The seminal work of Andrews et al. [30] provides SG-based analytical results for the PPP and is shown to reasonably model practical deployments of cellular networks. However, SG results are difficult to use in system design problems, and researchers have resorted to alternate definitions of coverage such as [14], where the authors optimize the network for energy consumption. Moreover, SG analysis masks the finer structural details of the coverage probability, which limits its applicability in many decision frameworks. In this work, we offer a modular approach that achieves the accuracy of SG results while breaking down the analysis into per-interferer components. This allows us to investigate how performance changes when specific interferers are considered or ignored. To the best of our knowledge, this is the first time such modularity has been demonstrated in SG-based coverage analysis. Our simplified expressions are not only analytically tractable but also provide a more flexible framework that can be easily applied to optimize system parameters, such as transmission capacity or frequency reuse. Furthermore, our method allows for varying power levels and fading parameters across individual interferers, an extension that traditional SG approaches struggle to accommodate. While our examples focus on uniform random wireless networks, the approach is generalizable to other system models based on the statistical properties of the model under investigation.

III. SYSTEM MODEL

Consider the downlink performance of a typical user in a cellular network, where the BSs are assumed to be distributed uniformly in the 2-D plane according to a homogeneous PPP of intensity [TeX:] $$\lambda \in \mathbb{R} .$$ We use a signal capture model, where the user is connected to the nearest BS, and the other BSs act as interferers. Fig. 1 illustrates the downlink communication scenario where a typical mobile is considered to be placed at the origin, without loss of generality. The distance of a typical mobile to its nearest BS is referred to as [TeX:] $$r \in \mathbb{R},$$ while the distance to the [TeX:] $$i^{\text {th }}$$ interfering BS is referred to as [TeX:] $$R_i \in \mathbb{R}, i \geq 2 .$$ Notice that [TeX:] $$r \lt R_2 \lt R_3 \lt \cdots \lt R_N,$$ where [TeX:] $$N \in \mathbb{N}$$ denotes the total number of BSs in the region of consideration. Note that there are N − 1 interferers in this scenario. Since the BSs are distributed according to a PPP, [TeX:] $$N \sim \operatorname{Poisson}(\lambda \mathcal{A}),$$ where [TeX:] $$\mathcal{A}$$ is the area of the interference region. The distribution of the distance from the mobile to the nearest BS is Rayleigh and is given as follows [30]:

(1)
[TeX:] $$f_r(r)=2 \pi \lambda r e^{-\lambda \pi r^2}, r \geq 0.$$

The distance from the mobile to the [TeX:] $$i^{\text {th }}$$ interfering BS is distributed according to the generalized Gamma distribution as follows [31]:

(2)
[TeX:] $$f_{R_i}(x)=\frac{2(\pi \lambda)^i}{\Gamma(i)} x^{2 i-1} e^{-\lambda \pi x^2}, x \geq 0, i \geq 2 .$$

It can subsequently be shown that [TeX:] $$X=r^2$$ is exponentially distributed with mean [TeX:] $$1 /(\pi \lambda) \text { and } Y_i=R_i^2$$ is distributed according to a Gamma distribution with parameters i and [TeX:] $$\pi \lambda$$, i.e.,

(3)
[TeX:] $$f_X(x)=\lambda \pi e^{-\lambda \pi x}, \quad x \geq 0,$$

(4)
[TeX:] $$f_{Y_i}\left(y_i\right)=\frac{(\lambda \pi)^i y_i^{i-1}}{\Gamma(i)} e^{-\lambda \pi y_i}, \quad y_i \geq 0, i \geq 2 .$$

In the sequel, we use the variables X and [TeX:] $$Y_i,$$ respectively in place of r and [TeX:] $$R_i$$ for notational convenience in the analytical development.
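The distance statistics in (1)–(4) can be verified by direct Monte Carlo simulation of a PPP. The following Python sketch (illustrative only; the intensity λ = 1 and disk radius 10 are arbitrary choices, not values taken from this paper) estimates the means of X = r² and Y₂ = R₂², which should be close to 1/(πλ) and 2/(πλ), respectively.

```python
import numpy as np

# Monte Carlo check of (3)-(4): X = r^2 ~ Exp(pi*lam) with mean 1/(pi*lam),
# and Y_2 = R_2^2 ~ Gamma(2, pi*lam) with mean 2/(pi*lam).
rng = np.random.default_rng(0)
lam, radius, trials = 1.0, 10.0, 4000         # illustrative parameters

X_samples, Y2_samples = [], []
for _ in range(trials):
    n = rng.poisson(lam * np.pi * radius**2)  # Poisson number of BSs in the disk
    d = np.sort(radius * np.sqrt(rng.uniform(size=n)))  # uniform-in-disk distances
    X_samples.append(d[0]**2)                 # squared distance to nearest BS
    Y2_samples.append(d[1]**2)                # squared distance to nearest interferer

print(np.mean(X_samples) * np.pi * lam)       # close to 1
print(np.mean(Y2_samples) * np.pi * lam)      # close to 2
```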

A. Downlink SINR Characterization

Assuming the transmission power of all BSs to be unity, the received power at the mobile receiver follows a power-law decay model. In this model, the channel power gain from the serving BS to the mobile is denoted by [TeX:] $$g_0,$$ and the channel power gain from the [TeX:] $$i^{\text {th }}$$ interfering BS to the mobile is denoted by [TeX:] $$g_i,$$ where [TeX:] $$i \geq 2 .$$ The received SINR at the mobile device is thus expressed as:

(5)
[TeX:] $$\mathrm{SINR}=\frac{g_0 r^{-\eta}}{\sum_{i=2}^N g_i R_i^{-\eta}+\sigma^2}=\frac{g_0 r^{-\eta}}{I+\sigma^2},$$

where [TeX:] $$I=\sum_{i=2}^N g_i R_i^{-\eta}$$ denotes the aggregated interference power and [TeX:] $$\sigma^2$$ represents the noise power. Here, [TeX:] $$g_0 \text { and } g_i$$ are random variables that denote the channel (power) gains, which model the uncertainties in received signal power due to multipath fading and shadowing effects. The path-loss is captured separately by [TeX:] $$r^{-\eta} \text { and } R_i^{-\eta} \text {, }$$ which depend on the distance between the mobile and the respective BSs, and the path-loss exponent [TeX:] $$\eta \in \mathbb{R} .$$ Under the assumption of Rayleigh fading, the channel power gains [TeX:] $$g_0 \text { and } g_i, \forall i \geq 2 \text {, }$$ are exponentially distributed random variables, i.e., [TeX:] $$g_0 \sim \operatorname{Exp}\left(\sigma_0^2\right)$$ and [TeX:] $$g_i \sim \operatorname{Exp}\left(\sigma_i^2\right), i \geq 2,$$ where [TeX:] $$\sigma_0^2 \text { and } \sigma_i^2$$ represent the mean power gains for the respective links. In Appendix A, we have shown that the random variable [TeX:] $$\sum_{i=2}^N g_i R_i^{-\eta}$$ is not well defined in terms of its moments. To overcome this difficulty, we rewrite the SINR in (5) as

(6)
[TeX:] $$\begin{aligned} \mathrm{SINR} & =\frac{g_0}{\sum_{i=2}^N g_i\left(\frac{r}{R_i}\right)^\eta+\sigma^2 r^\eta}=\frac{g_0}{\sum_{i=2}^N g_i W_i^\eta+\sigma^2 r^\eta}, \\ & =\frac{g_0}{S+\sigma^2 r^\eta}. \end{aligned}$$

A key observation here is that the new random variable [TeX:] $$W_i=r / R_i \leq 1, i \geq 2,$$ is a well-behaved random variable with finite moments. This allows the modified interference term [TeX:] $$S=\sum_{i=2}^N g_i W_i^\eta$$ to have finite moments and makes it easy to deal with analytically.
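The rewriting from (5) to (6) is a purely algebraic normalization by [TeX:] $$r^{-\eta}.$$ A minimal numerical sketch (with arbitrary illustrative values for the gains, distances, and noise power) confirms that the two forms coincide:

```python
import random

# Check that the SINR forms (5) and (6) are identical: dividing numerator and
# denominator of (5) by r^(-eta) turns the ill-behaved terms R_i^(-eta) into
# the bounded ratios (r/R_i)^eta.
random.seed(1)
eta, sigma2 = 4.0, 0.01                                   # illustrative values
g0 = random.expovariate(1.0)
r = 0.5
R = sorted(random.uniform(0.6, 5.0) for _ in range(20))   # interferer distances > r
g = [random.expovariate(1.0) for _ in R]

sinr_5 = g0 * r**(-eta) / (sum(gi * Ri**(-eta) for gi, Ri in zip(g, R)) + sigma2)
sinr_6 = g0 / (sum(gi * (r / Ri)**eta for gi, Ri in zip(g, R)) + sigma2 * r**eta)
print(abs(sinr_5 - sinr_6))   # agreement up to floating-point rounding
```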

Remark 1. It is important to note that the current framework is developed primarily for far-field analysis and may not accurately capture ultra-dense base station deployment scenarios. In such cases, the near-field effects and singularity at the origin in the traditional [TeX:] $$r^{-\eta}$$ path-loss model can lead to saturation in SINR-based coverage probability results. While multi-slope [32] and stretched exponential [7] pathloss models have been proposed to address these issues, they introduce significant analytical complexity and are beyond the scope of this work. Future extensions of this framework will aim to incorporate these models for improved accuracy in ultra-dense networks.

B. Coverage Probability

The coverage probability for a typical mobile in the cellular network is defined as the probability of the SINR exceeding an arbitrary threshold [TeX:] $$T \in \mathbb{R},$$ and is written as follows:

(7)
[TeX:] $$\begin{aligned} \mathbb{P}(\mathrm{SINR} \geq T) & =\mathbb{P}\left(\frac{g_0}{S+\sigma^2 r^\eta} \geq T\right) \\ & =\mathbb{P}\left(g_0 \gt T\left(S+\sigma^2 r^\eta\right)\right), \\ & =\mathbb{E}_{r, S}\left[\mathbb{P}\left(g_0 \gt T\left(S+\sigma^2 r^\eta\right) \mid r, S\right)\right] . \end{aligned}$$

Since [TeX:] $$g_0$$ is exponentially distributed with mean [TeX:] $$\sigma_0^2,$$

(8)
[TeX:] $$\mathbb{P}(\text { SINR }\gt T)=\mathbb{E}_{r, S}\left[e^{-\frac{T}{\sigma_0^2}\left(S+\sigma^2 r^\eta\right)}\right].$$

The goal here is to compute the expression in (8) for different values of η. Since S and [TeX:] $$\sigma^2 r^\eta$$ are dependent random variables, the expectation cannot be separated. Hence, we need to determine the moment generating function of the random variable [TeX:] $$S+\sigma^2 r^\eta$$ for a fixed η. If an accurate and tractable model for the probability density function of [TeX:] $$S+\sigma^2 r^\eta$$ is available, the coverage probability follows directly from its moment generating function. This is elaborated in the subsequent section with the help of special cases.
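The step from (7) to (8) uses only the exponential tail of [TeX:] $$g_0$$ conditioned on S and r. The following Monte Carlo sketch illustrates the identity; the Gamma distribution chosen for the denominator is synthetic, since the identity holds for any nonnegative conditioning variable in place of [TeX:] $$S+\sigma^2 r^\eta$$:

```python
import random, math

# Averaging the conditional exponential tail of g0 over the randomness in the
# denominator reproduces P(SINR > T), as in (7)-(8).
random.seed(2)
T, s0 = 1.0, 1.0                                            # threshold and mean gain sigma_0^2
n = 200_000
denom = [random.gammavariate(2.0, 0.5) for _ in range(n)]   # synthetic stand-in for S + sigma^2 r^eta
g0 = [random.expovariate(1.0 / s0) for _ in range(n)]

p_direct = sum(g > T * d for g, d in zip(g0, denom)) / n    # empirical P(g0 > T(...))
p_mgf = sum(math.exp(-T * d / s0) for d in denom) / n       # expectation in (8)
print(p_direct, p_mgf)   # the two estimates agree within Monte Carlo error
```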

IV. METHODOLOGY AND ANALYSIS

In this section, we will provide a generalized procedure for direct computation of the coverage probability and derive exact expressions for some special cases of the path-loss exponent. Consider the modified interference term S as derived in the previous section:

(9)
[TeX:] $$S=\sum_{i=2}^N g_i\left(\frac{r}{R_i}\right)^\eta=\sum_{i=2}^N g_i\left(\frac{r^2}{R_i^2}\right)^{\frac{\eta}{2}}=\sum_{i=2}^N g_i W_i^{\frac{\eta}{2}},$$

where [TeX:] $$W_i=r^2 / R_i^2=X / Y_i .$$ In what follows, only [TeX:] $$X=r^2 \sim \operatorname{Exp}(\pi \lambda) \text { and } Y_i=R_i^2 \sim \operatorname{Gamma}(i, \pi \lambda)$$ appear explicitly, and hence, in the sequel, we deal with (9) using X and [TeX:] $$Y_i, \forall i \geq 2 .$$ While the distances [TeX:] $$R_i$$ are independent, their ordering is important in the computation of the coverage probability, which makes the problem non-trivial. In other words, across realizations an interferer may become the closest BS and the closest BS may become an interferer, depending on the ordering of the distances. Therefore, this factor has to be taken into account in the analysis.

Remark 2. Note that the references to distances of the BSs from a typical point are relative, i.e., in (9), r is the distance to the nearest BS while [TeX:] $$R_i$$ refers to the distance of the [TeX:] $$i^{\text {th }}$$ closest interfering BS. Therefore, in a new realization of the stochastic scenario, the variables exchange roles based on the order of their distances from the typical receiver. Hence, it is necessary to redefine [TeX:] $$W_i \text { with } Z_i$$ as

(10)
[TeX:] $$Z_i=\frac{\min \left(X, Y_i\right)}{\max \left(X, Y_i\right)}, i=2,3, \cdots$$

Notice that [TeX:] $$0 \lt Z_i \leq 1$$ always, which ensures the correct ordering of the random variables. Hence, we replace [TeX:] $$W_i$$ in (9) with [TeX:] $$Z_i$$ so that the probability of coverage in (8) is computed under [TeX:] $$S=\sum_{i=2}^N g_i Z_i^{\frac{\eta}{2}}.$$ The probability density function (PDF) of the variable [TeX:] $$Z_i$$ can be computed explicitly and is provided by the following lemma.

Lemma 1. The probability density function of the random variable [TeX:] $$Z_i=\frac{\min \left(X, Y_i\right)}{\max \left(X, Y_i\right)}$$ can be expressed as:

(11)
[TeX:] $$f_{Z_i}(z)=\frac{i\left(1+z^{i-1}\right)}{(1+z)^{i+1}}, 0 \leq z \leq 1, i \geq 2 .$$

Proof. See Appendix B.
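Lemma 1 can be checked by simulation. Integrating (11) for i = 2 gives the CDF [TeX:] $$F_{Z_2}(z)=2 z /(1+z),$$ so [TeX:] $$\mathbb{P}\left(Z_2 \leq 0.5\right)=2 / 3.$$ The sketch below draws independent X ~ Exp(1) and Y₂ ~ Gamma(2, 1); the ratio [TeX:] $$Z_i$$ is scale-free, so the common rate πλ can be set to 1 without loss of generality:

```python
import random

# Monte Carlo check of Lemma 1 for i = 2:
# f_{Z_2}(z) = 2/(1+z)^2 on [0,1], hence P(Z_2 <= 0.5) = 2/3.
random.seed(3)
n = 200_000
count = 0
for _ in range(n):
    x = random.expovariate(1.0)                            # X ~ Exp(1)
    y = random.expovariate(1.0) + random.expovariate(1.0)  # Y_2 ~ Gamma(2, 1)
    z = min(x, y) / max(x, y)
    count += z <= 0.5
print(count / n)   # close to 2/3
```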

Next, we investigate several commonly used cases of the path-loss exponent to obtain exact expressions for the coverage probability.

A. Special Case - Inverse Quartic Path-Loss (η = 4)

In this case, the SINR can be expressed as

(12)
[TeX:] $$\mathrm{SINR}=\frac{g_0}{\sum_{i=2}^N g_i\left(\frac{r}{R_i}\right)^4+\sigma^2 r^4}=\frac{g_0}{S+\sigma^2 r^4},$$

where [TeX:] $$S=\sum_{i=2}^N g_i Z_i^2 \text { with } Z_i=\frac{\min \left(X, Y_i\right)}{\max \left(X, Y_i\right)} \text {. }$$ The mean [TeX:] $$\mu_S=\mathbb{E}[S]=\sum_{i=2}^N \mathbb{E}\left[g_i\right] \mathbb{E}\left[Z_i^2\right],$$ and the variance is given by [TeX:] $$\sigma_S^2=\mathbb{E}\left[S^2\right]-\mu_S^2,$$ where

(13)
[TeX:] $$\begin{aligned} \mathbb{E}\left[S^2\right]=\sum_{i=2}^N \sum_{j=2}^N \mathbb{E}\left[g_i Z_i^2 g_j Z_j^2\right] & =\sum_{i=2}^N \mathbb{E}\left[g_i^2\right] \mathbb{E}\left[Z_i^4\right] \\ & +\sum_{i=2}^N \sum_{\substack{j=2 \\ j \neq i}}^N \mathbb{E}\left[g_i\right] \mathbb{E}\left[g_j\right] \mathbb{E}\left[Z_i^2 Z_j^2\right] . \end{aligned}$$

Hence, our objective is to evaluate [TeX:] $$\mathbb{E}\left[Z_i^2\right], \quad \mathbb{E}\left[Z_i^4\right],$$ and [TeX:] $$\mathbb{E}\left[Z_i^2 Z_j^2\right], i \neq j \text { for } i, j=2,3, \cdots, N .$$

Lemma 2. For the case when η = 4, the mean [TeX:] $$\mu_S$$ and variance [TeX:] $$\sigma_S^2$$ of the sum S can be computed as follows:

(14)
[TeX:] $$\mu_S=\sum_{n=2}^N \sigma_n^2 \mathbb{E}\left[Z_n^2\right],$$

(15)
[TeX:] $$\sigma_S^2=\sum_{n=2}^N 2 \sigma_n^4 \mathbb{E}\left[Z_n^4\right]+\sum_{n=2}^N \sum_{\substack{m=2 \\ m \neq n}}^N \sigma_m^2 \sigma_n^2 \mathbb{E}\left[Z_m^2 Z_n^2\right]-\mu_S^2,$$

where the second moment, fourth moment, and joint moments of [TeX:] $$Z_n$$, are obtained as follows:

(16)
[TeX:] $$\mathbb{E}\left[Z_n^2\right]= \begin{cases}3-4 \ln (2) & , n=2, \\ n\left(1-(n+1) \ln (2)+\frac{1}{n-2}\left(1-\frac{1}{2^{n-2}}\right)-\right. & \\ \frac{2}{n-1}\left(1-\frac{1}{2^{n-1}}\right)+\frac{1}{n}\left(1-\frac{1}{2^n}\right)+ & \\ \left.\sum_{k=1}^n \frac{(-1)^{k+1}}{k}\binom{n+1}{k+1}\left(1-\frac{1}{2^k}\right)\right) & , n \geq 3,\end{cases}$$

(17)
[TeX:] $$\mathbb{E}\left[Z_n^4\right]= \begin{cases}2 Q(2)+12 \ln (2)-\frac{33}{4} & , n=2, \\ 3 Q(3)-12 \ln (2)+\frac{67}{8} & , n=3, \\ 4 Q(4)+4 \ln (2)-\frac{131}{48} & , n=4, \\ n\left(Q(n)+\frac{1}{n-4}\left(1-\frac{1}{2^{n-4}}\right)-\right. & \\ \frac{4}{n-3}\left(1-\frac{1}{2^{n-3}}\right)+\frac{6}{n-2}\left(1-\frac{1}{2^{n-2}}\right)- & \\ \left.\frac{4}{n-1}\left(1-\frac{1}{2^{n-1}}\right)+\frac{1}{n}\left(1-\frac{1}{2^n}\right)\right) & , n \geq 5,\end{cases}$$

with

(18)
[TeX:] $$\begin{aligned} Q(n) & =\frac{7}{3}-\frac{3(n+3)}{2}+\frac{(n+3)(n+2)}{2}- \\ & \frac{(n+3)(n+2)(n+1)}{6} \ln (2)+ \\ & \sum_{k=1}^n \frac{(-1)^{(k+1)}}{k}\binom{n+3}{k+3}\left(1-\frac{1}{2^k}\right), \end{aligned}$$

and

(19)
[TeX:] $$\mathbb{E}\left[Z_m^2 Z_n^2\right]=I_1+I_2+I_3, \quad m \neq n,$$

where

(20)
[TeX:] $$I_1= \begin{cases}12 \ln \left(\frac{3}{2}\right)-\frac{131}{27}, & m=2, n=3, \\ 12 \ln \left(\frac{3}{2}\right)-\frac{131}{27}, & n=2, m=3, \\ \frac{1}{(n-1)(n-2)}\left(2\left(12 \ln \left(\frac{3}{2}\right)-\frac{131}{27}\right)+\sum_{k=1}^{n-3} \frac{1}{k 2^k} \sum_{l=0}^{k-1} \frac{(l+4)!}{l! 3^5}\left(\frac{2}{3}\right)^l\right), & m=2, n \geq 4, \\ \frac{1}{(m-1)(m-2)}\left(2\left(12 \ln \left(\frac{3}{2}\right)-\frac{131}{27}\right)+\sum_{k=1}^{m-3} \frac{1}{k 2^k} \sum_{l=0}^{k-1} \frac{(l+4)!}{l! 3^5}\left(\frac{2}{3}\right)^l\right), & n=2, m \geq 4, \\ \frac{\Gamma(n-2)}{\Gamma(m) \Gamma(n)} \sum_{j=0}^{n-3} \frac{\Gamma(m+j-2)}{j! 2^{j+m-2}} \sum_{k=0}^{m+j-3} \frac{(k+4)!}{k! 3^5}\left(\frac{2}{3}\right)^k, & n \gt m \geq 3, \\ \frac{\Gamma(m-2)}{\Gamma(m) \Gamma(n)} \sum_{j=0}^{m-3} \frac{\Gamma(n+j-2)}{j! 2^{j+n-2}} \sum_{k=0}^{n+j-3} \frac{(k+4)!}{k! 3^5}\left(\frac{2}{3}\right)^k, & m \gt n \geq 3,\end{cases}$$

(21)
[TeX:] $$I_2=\left\{\begin{array}{l} \frac{1}{\Gamma(m)(n-1)(n-2)} \times \\ \sum_{j=0}^{n-3} \frac{1}{2^{j+1}} \sum_{k=0}^j \frac{2^k}{k!} \frac{\Gamma(m+k+2)}{3^{m+k+2}}, n>m \geq 2, \\ \frac{1}{\Gamma(n)(m-1)(m-2)} \times \\ \sum_{j=0}^{m-3} \frac{1}{2^{j+1}} \sum_{k=0}^j \frac{2^k}{k!} \frac{\Gamma(n+k+2)}{3^{n+k+2}}, m>n \geq 2, \end{array}\right.$$

and

(22)
[TeX:] $$\begin{aligned} I_3= & m n(m+1)(n+1)\left(\frac{1}{36}(5-6 \ln (2))-\sum_{k=0}^{n+1} \frac{k!}{(k+4)! 2^{k+1}}\right) \\ & -\frac{m(m+1)}{\Gamma(n)} \sum_{j=0}^{n+1} \frac{\Gamma(n+j+2)}{j! 2^{n+j+2}}\left(\frac{4}{9}-\frac{1}{6} \ln (2)-\sum_{k=0}^{n+j-3} \frac{2^{k+4} k!}{(k+4)! 3^{k+1}}\right), \quad m, n \geq 2 . \end{aligned}$$

Proof. See Appendix C-C, Appendix C-E, and Appendix D-C for the computation of [TeX:] $$\mathbb{E}\left[Z_n^2\right], \mathbb{E}\left[Z_n^4\right],$$ and [TeX:] $$\mathbb{E}\left[Z_m^2 Z_n^2\right]$$ respectively.

Finally, from (8), [TeX:] $$\mathbb{P}(\text { SINR } \gt T)=\mathbb{E}\left[e^{-\frac{T}{\sigma_0^2}\left(S+\sigma^2 r^4\right)}\right],$$ where [TeX:] $$S=\sum_{i=2}^N g_i Z_i^2 .$$ As we have seen, the components [TeX:] $$g_i \text { and } Z_i^2$$ in S have finite moments, and hence the central limit theorem applies to this modified interference term. Therefore, S can be approximated as [TeX:] $$S \sim \mathcal{N}\left(\mu_S, \sigma_S^2\right).$$ However, S is non-negative and its PDF is skewed towards the origin on the positive side. Moreover, S is also correlated with the variable r. Therefore, it is convenient to use the following model:

(23)
[TeX:] $$\tilde{S}=S+\sigma^2 r^4=U^2,$$

where [TeX:] $$U \sim \mathcal{N}\left(\mu_U, \sigma_U^2\right).$$ Knowing [TeX:] $$\mu_U \text { and } \sigma_U^2$$ accurately, we can compute the coverage probability using the following general lemma for a Gaussian random variable:

Lemma 3. If any random variable [TeX:] $$\tilde{S}$$ can be accurately modeled as [TeX:] $$U^2,$$ where [TeX:] $$U \sim \mathcal{N}\left(\mu_U, \sigma_U^2\right),$$ then the coverage probability for a typical mobile can be expressed as:

(24)
[TeX:] $$\begin{aligned} \mathbb{P}(\text { SINR } \gt T) & =\mathbb{E}\left[e^{-\frac{T}{\sigma_0^2} U^2}\right] \\ & =\frac{1}{\sqrt{1+2 \frac{T \sigma_U^2}{\sigma_0^2}}} \exp \left(-\frac{T \mu_U^2}{\sigma_0^2+2 T \sigma_U^2}\right) . \end{aligned}$$

Proof. See Appendix E.
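The closed form in Lemma 3 rests on the standard Gaussian identity [TeX:] $$\mathbb{E}\left[e^{-s U^2}\right]=\left(1+2 s \sigma_U^2\right)^{-1 / 2} \exp \left(-s \mu_U^2 /\left(1+2 s \sigma_U^2\right)\right)$$ with [TeX:] $$s=T / \sigma_0^2.$$ A quick numerical quadrature (with illustrative parameter values) confirms it:

```python
import math

# Verify E[exp(-s U^2)] for U ~ N(mu, sigma^2) against the closed form
# (1 + 2 s sigma^2)^(-1/2) * exp(-s mu^2 / (1 + 2 s sigma^2)).
mu, sigma, s = 1.3, 0.7, 2.0   # illustrative values

def integrand(u):
    # exp(-s u^2) weighted by the N(mu, sigma^2) density
    return math.exp(-s * u * u) * math.exp(-(u - mu)**2 / (2 * sigma**2)) \
        / (sigma * math.sqrt(2 * math.pi))

# composite Simpson rule over mu +/- 10 sigma (m must be even)
a, b, m = mu - 10 * sigma, mu + 10 * sigma, 20_000
h = (b - a) / m
quad = integrand(a) + integrand(b)
quad += sum((4 if k % 2 else 2) * integrand(a + k * h) for k in range(1, m))
quad *= h / 3

closed = math.exp(-s * mu**2 / (1 + 2 * s * sigma**2)) / math.sqrt(1 + 2 * s * sigma**2)
print(quad, closed)   # agree to many decimal places
```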

The final step is to express [TeX:] $$\mu_U \text { and } \sigma_U^2$$ in terms of [TeX:] $$\mu_S, \sigma_S^2, \text { and } \mathbb{E}\left[S r^4\right],$$ where the joint moment between S and [TeX:] $$r^4$$ can be evaluated using the following lemma.

Lemma 4. The joint moment between the random variables S and [TeX:] $$r^4$$ can be computed as follows:

(25)
[TeX:] $$\mathbb{E}\left[S r^4\right]=\sum_{i=2}^N \mathbb{E}\left[g_i\right] \mathbb{E}\left[\frac{\left(\min \left(X, Y_i\right)\right)^4}{\left(\max \left(X, Y_i\right)\right)^2}\right],$$

where

(26)
[TeX:] $$\begin{aligned} & \mathbb{E}\left[\frac{\left(\min \left(X, Y_i\right)\right)^4}{\left(\max \left(X, Y_i\right)\right)^2}\right]= \\ & \begin{cases}\frac{1}{(\pi \lambda)^2}(67-96 \ln (2)), & i=2, \\ \frac{1}{(\pi \lambda)^2}\left(\frac{4!}{\Gamma(i)}\left(\Gamma(i-2)-\sum_{k=0}^4 \frac{\Gamma(i+k-2)}{k! 2^{i+k-2}}\right)+\right. & \\ \left.\frac{\Gamma(i+4)}{\Gamma(i)}\left(1-\ln (2)-\sum_{k=0}^{i+1} \frac{k!}{(k+2)! 2^{k+1}}\right)\right), & i \geq 3 .\end{cases} \end{aligned}$$

Proof. See Appendix F-C.

The mean and variance of U can then be expressed by the following lemma.

Lemma 5. If the random variables [TeX:] $$\tilde{S}$$ and U can be written as [TeX:] $$\tilde{S}=U^2,$$ then the mean and variance of U can be computed as follows:

(27)
[TeX:] $$\mu_U=\left(\mu_{\tilde{S}}^2-\sigma_{\tilde{S}}^2 / 2\right)^{\frac{1}{4}},$$

(28)
[TeX:] $$\sigma_U^2=\mu_{\tilde{S}}-\sqrt{\mu_{\tilde{S}}^2-\sigma_{\tilde{S}}^2 / 2},$$

where

(29)
[TeX:] $$\mu_{\tilde{S}}=\mu_S+\frac{2 \sigma^2}{(\pi \lambda)^2}$$

(30)
[TeX:] $$\sigma_{\tilde{S}}^2=\sigma_S^2+\frac{20 \sigma^4}{(\pi \lambda)^4}+2 \sigma^2 \mathbb{E}\left[S r^4\right]-\frac{4 \sigma^2 \mu_S}{(\pi \lambda)^2},$$

and the joint moment [TeX:] $$\mathbb{E}\left[S r^4\right]$$ is computed using Lemma 4.

Proof. See Appendix G.
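Lemma 5 is a two-moment matching: for [TeX:] $$U \sim \mathcal{N}\left(\mu_U, \sigma_U^2\right),$$ one has [TeX:] $$\mathbb{E}\left[U^2\right]=\mu_U^2+\sigma_U^2$$ and [TeX:] $$\operatorname{Var}\left(U^2\right)=4 \mu_U^2 \sigma_U^2+2 \sigma_U^4,$$ and (27)-(28) invert these relations. The round trip can be checked directly (illustrative values):

```python
import math

# Round-trip check of Lemma 5: compute the forward moments of U^2 from
# (mu_U, sigma_U^2), then recover the parameters via (27)-(28).
mu_U, var_U = 1.8, 0.4                      # illustrative true parameters

mu_S = mu_U**2 + var_U                      # E[U^2], the mean of S-tilde
var_S = 4 * mu_U**2 * var_U + 2 * var_U**2  # Var(U^2), the variance of S-tilde

delta = mu_S**2 - var_S / 2                 # must be >= 0 for this model
mu_U_hat = delta**0.25                      # (27)
var_U_hat = mu_S - math.sqrt(delta)         # (28)
print(mu_U_hat, var_U_hat)   # recovers (1.8, 0.4) up to rounding
```

Algebraically the inversion is exact: [TeX:] $$\Delta=\mu_{\tilde{S}}^2-\sigma_{\tilde{S}}^2 / 2=\mu_U^4,$$ so (27) returns [TeX:] $$\mu_U$$ and (28) returns [TeX:] $$\sigma_U^2.$$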

Note that a necessary condition for the relationship to hold is that the difference [TeX:] $$\Delta=\mu_{\tilde{S}}^2-\sigma_{\tilde{S}}^2 / 2 \geq 0 .$$ This implies that [TeX:] $$\sigma_{\tilde{S}}^2 \leq 2\mu_{\tilde{S}}^2,$$ i.e., the overall variance of [TeX:] $$\tilde{S}$$ cannot be too large compared to its mean. If the condition is not satisfied, then the model used in (59) cannot satisfy the two moment conditions in (95)-(96), and it becomes necessary to expand this model. Therefore, if [TeX:] $$\Delta=\mu_{\tilde{S}}^2-\sigma_{\tilde{S}}^2 / 2 \lt 0,$$ we remodel [TeX:] $$\tilde{S}$$ as

(31)
[TeX:] $$\tilde{S}=S+\sigma^2 r^4=U^2+V^2,$$

where [TeX:] $$U \sim \mathcal{N}\left(\mu_U, \sigma_U^2\right)$$ as before and [TeX:] $$V \sim \operatorname{Exp}(\beta)$$ is independent of U. Knowing [TeX:] $$\mu_U, \sigma_U^2, \text { and } \beta$$ accurately, we can compute the coverage probability using the following lemma:

Lemma 6. If the random variable [TeX:] $$\tilde{S}=S+\sigma^2 r^4$$ can be accurately modeled as [TeX:] $$U^2+V^2,$$ where [TeX:] $$U \sim \mathcal{N}\left(\mu_U, \sigma_U^2\right)$$ and [TeX:] $$V \sim \operatorname{Exp}(\beta) $$ independent of U, then the coverage probability for a typical mobile can be expressed as:

(32)
[TeX:] $$\mathbb{P}(\operatorname{SINR} \gt T)=\frac{1}{\sqrt{1+2 \frac{T \sigma_U^2}{\sigma_0^2}}} \frac{1}{\beta} \sqrt{\frac{\pi \sigma_0^2}{T}} e^{\frac{\sigma_0^2}{4 T \beta^2}} \operatorname{erfc}\left(\frac{\sigma_0}{\beta \sqrt{2 T}}\right),$$

where [TeX:] $$\operatorname{erfc}(x)=\int_x^{\infty} \frac{1}{\sqrt{2 \pi}} e^{-\frac{y^2}{2}} d y$$ denotes the Gaussian tail (Q-function) form of the complementary error function.

Proof. See Appendix H.

Notice that there are now three unknowns, [TeX:] $$\mu_U, \sigma_U^2, \text { and } \beta,$$ to be determined from the two moment equations. The extra degree of freedom provided by the third unknown can be used to satisfy the moment conditions even when [TeX:] $$\Delta \lt 0 .$$

Lemma 7. If the random variable [TeX:] $$\tilde{S}$$ is expressed as [TeX:] $$\tilde{S}=U^2+V^2,$$ where [TeX:] $$U \sim \mathcal{N}\left(0, \sigma_U^2\right) \text { and } V \sim \operatorname{Exp}(\beta),$$ then the mean and variance of U can be written as follows:

(33)
[TeX:] $$\mu_U=0,$$

(34)
[TeX:] $$\sigma_U^2=\frac{\mu_{\tilde{S}}}{7}\left(5-2 \sqrt{1+\frac{7|\Delta|}{2 \mu_{\tilde{S}}^2}}\right),$$

where [TeX:] $$\Delta=\mu_{\tilde{S}}^2-\frac{\sigma_{\tilde{S}}^2}{2}, \text { and } \beta$$ is selected as [TeX:] $$\beta=\sqrt{\frac{\mu_{\tilde{S}}}{7}\left(1+\sqrt{1+\frac{7|\Delta|}{2 \mu_{\tilde{S}}^2}}\right)} .$$

Proof. See Appendix I.
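Lemma 7 can likewise be verified by a round trip. With [TeX:] $$\mu_U=0$$ and [TeX:] $$\mathbb{E}[V]=\beta,$$ the exponential moments [TeX:] $$\mathbb{E}\left[V^2\right]=2 \beta^2 \text { and } \mathbb{E}\left[V^4\right]=24 \beta^4$$ give [TeX:] $$\mu_{\tilde{S}}=\sigma_U^2+2 \beta^2$$ and [TeX:] $$\sigma_{\tilde{S}}^2=2 \sigma_U^4+20 \beta^4.$$ Choosing [TeX:] $$\sigma_U^2=\beta=1$$ yields [TeX:] $$\Delta=-2 \lt 0,$$ and (34) together with the stated β must recover both parameters (a sketch under these assumptions):

```python
import math

# Round-trip check of Lemma 7: forward moments of S-tilde = U^2 + V^2 with
# mu_U = 0, then recover sigma_U^2 via (34) and beta via the stated formula.
var_U, beta = 1.0, 1.0
mu_S = var_U + 2 * beta**2                  # = 3
var_S = 2 * var_U**2 + 20 * beta**4         # = 22
delta = mu_S**2 - var_S / 2                 # = -2 < 0

root = math.sqrt(1 + 7 * abs(delta) / (2 * mu_S**2))
var_U_hat = (mu_S / 7) * (5 - 2 * root)               # (34)
beta_hat = math.sqrt((mu_S / 7) * (1 + root))         # beta from Lemma 7
print(var_U_hat, beta_hat)   # recovers (1.0, 1.0)
```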

Remark 3. Notice that in our computations, whenever the distance to the closest interferer, [TeX:] $$R_2$$ is involved, all moments and joint-moments originating from it such as [TeX:] $$\mathbb{E}\left[Z_2^2\right], \mathbb{E}\left[Z_2^4\right],$$ [TeX:] $$\mathbb{E}\left[Z_2^2 Z_j^2\right], j \geq 3, \text { and } \mathbb{E}\left[Z_2^2 r^4\right]$$ include logarithmic terms, which indicates the unstable behaviour caused by the dominant interferer. This is also evident in (65) and (66) since the direct moments of [TeX:] $$\frac{g_i}{Y_i^2}$$ are undefined, thus making the interference term I in (5) hard to deal with. We have circumvented this difficulty by defining the modified interference term S in (6) and the logarithmic terms in the moments of [TeX:] $$Z_2$$ are a reminder of the underlying complexities.

Finally, the computation of the coverage probability can be summarized by the following theorem.

Theorem 1. The coverage probability of a typical mobile receiver when the BSs are modeled by a uniform random spatial distribution can be expressed as follows:

(35)
[TeX:] $$\begin{aligned} & \mathbb{P}(\text { SINR } \gt T)=\mathbb{E}\left[e^{-\frac{T}{\sigma_0^2} \tilde{S}}\right] \\ & = \begin{cases}\frac{1}{\sqrt{1+2 \frac{T \sigma_U^2}{\sigma_0^2}}} \exp \left(-\frac{T \mu_U^2}{\sigma_0^2\left(1+2 \frac{T \sigma_U^2}{\sigma_0^2}\right)}\right), & \Delta \geq 0, \\ \frac{1}{\sqrt{1+2 \frac{T \sigma_U^2}{\sigma_0^2}}} \frac{\sigma_0}{\beta} \sqrt{\frac{\pi}{T}} e^{\frac{\sigma_0^2}{4 T \beta^2}} \operatorname{erfc}\left(\frac{\sigma_0}{\beta \sqrt{2 T}}\right), & \Delta \lt 0,\end{cases} \end{aligned}$$

where

(36)
[TeX:] $$\beta=\sqrt{\frac{\mu_{\tilde{S}}}{7}\left(1+\sqrt{1+\frac{7|\Delta|}{2 \mu_{\tilde{S}}^2}}\right)},$$

and the constants [TeX:] $$\mu_{\tilde{S}} \text { and } \sigma_{\tilde{S}}^2$$ can be obtained using (29) and (30), while [TeX:] $$\mu_U \text { and } \sigma_U^2$$ can be obtained using (27) and (28) if [TeX:] $$\Delta \geq 0,$$ and (33) and (34) if [TeX:] $$\Delta \lt 0$$.
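Only the Δ < 0 branch of (35) is fully specified within this section (the Δ ≥ 0 branch relies on (27) and (28) from earlier in the paper), so the sketch below checks that branch against direct quadrature of the expectation defining the coverage probability. The surrogate moments are arbitrary test values, and the paper's erfc(·) is implemented as the Gaussian Q-function.

```python
import math

def q_func(x):
    # The paper's erfc(.) is the Gaussian Q-function: Q(x) = erfc_std(x/sqrt(2))/2.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def surrogate_params(mu_s, var_s):
    # Lemma 7 parameters for the Delta < 0 case.
    root = math.sqrt(1.0 + 7.0 * abs(mu_s**2 - var_s / 2.0) / (2.0 * mu_s**2))
    sigma_u2 = (mu_s / 7.0) * (5.0 - 2.0 * root)
    beta = math.sqrt((mu_s / 7.0) * (1.0 + root))
    return sigma_u2, beta

def coverage_closed(T, mu_s, var_s, sigma0=1.0):
    # Delta < 0 branch of Theorem 1.
    sigma_u2, beta = surrogate_params(mu_s, var_s)
    pre = 1.0 / math.sqrt(1.0 + 2.0 * T * sigma_u2 / sigma0**2)
    return pre * (sigma0 / beta) * math.sqrt(math.pi / T) \
        * math.exp(sigma0**2 / (4.0 * T * beta**2)) \
        * q_func(sigma0 / (beta * math.sqrt(2.0 * T)))

def coverage_quad(T, mu_s, var_s, sigma0=1.0, pts=200000, upper=20.0):
    # E[exp(-T (U^2 + V^2)/sigma0^2)] by midpoint quadrature,
    # with U ~ N(0, sigma_u2) and V ~ Exp(1/beta) independent.
    sigma_u2, beta = surrogate_params(mu_s, var_s)
    a = T / sigma0**2
    h, eu, ev = upper / pts, 0.0, 0.0
    for i in range(pts):
        u = (i + 0.5) * h
        w = math.exp(-a * u * u)
        eu += 2.0 * h * w * math.exp(-u * u / (2.0 * sigma_u2)) \
            / math.sqrt(2.0 * math.pi * sigma_u2)
        ev += h * w * math.exp(-u / beta) / beta
    return eu * ev

# Surrogate moments 1 and 3 give Delta = -1/2 < 0 (assumed test values).
p_closed = coverage_closed(2.0, 1.0, 3.0)
p_quad = coverage_quad(2.0, 1.0, 3.0)
```

The factorization into the two one-dimensional integrals uses the independence of U and V in the surrogate representation.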

B. Special Case - Inverse Third Power Path-Loss (η = 3)

In this case, the SINR can be expressed as

(37)
[TeX:] $$\mathrm{SINR}=\frac{g_0}{\sum_{i=2}^N g_i\left(\frac{r}{R_i}\right)^3+\sigma^2 r^3}=\frac{g_0}{S+\sigma^2 r^3},$$

where [TeX:] $$S=\sum_{i=2}^N g_i Z_i^{\frac{3}{2}} \text { with } Z_i=\frac{\min \left(X, Y_i\right)}{\max \left(X, Y_i\right)} .$$ We need to evaluate [TeX:] $$\mathbb{E}\left[Z_n^{\frac{3}{2}}\right], \mathbb{E}\left[Z_n^3\right], \text { and } \mathbb{E}\left[Z_m^{\frac{3}{2}} Z_n^{\frac{3}{2}}\right]$$ for characterizing the statistics of S. These are provided by the following lemma.

Lemma 8. For the case when η = 3, the mean [TeX:] $$\mu_S$$ and variance [TeX:] $$\sigma_S^2$$ of the sum S can be computed as follows:

(38)
[TeX:] $$\mu_S=\sum_{n=2}^N \sigma_n^2 \mathbb{E}\left[Z_n^{\frac{3}{2}}\right]$$

(39)
[TeX:] $$\sigma_S^2=\sum_{n=2}^N 2 \sigma_n^4 \mathbb{E}\left[Z_n^3\right]+{\sum\sum}_{m \neq n}^N \sigma_m^2 \sigma_n^2 \mathbb{E}\left[Z_m^{\frac{3}{2}} Z_n^{\frac{3}{2}}\right]-\mu_S^2,$$

where the required moments are obtained as follows:

(40)
[TeX:] $$\mathbb{E}\left[Z_n^{\frac{3}{2}}\right]= \begin{cases}5-\frac{3 \pi}{2} & , n=2, \\ 6\left(\frac{1}{16}\left(\frac{\pi}{4}-\frac{1}{3}\right)+1+\right. & \\ \left.\sum_{j=1}^4 \frac{(-1)^j}{2^{j+1}}\binom{4}{j} \sum_{k=0}^{j-1}\binom{j-1}{k} \frac{\Gamma\left(\frac{k+1}{2}\right)}{\Gamma\left(\frac{k}{2}+1\right)} \sqrt{\pi}\right) & , n=3, \\ 2 n\left(1+\frac{3}{2^{n+4}} \sum_{j=0}^{n-4}\binom{n-4}{j} \frac{\Gamma\left(\frac{j+1}{2}\right) \sqrt{\pi}}{\Gamma\left(\frac{j}{2}+3\right)}+\right. & \\ \left.\sum_{j=1}^{n+1} \frac{(-1)^j}{2^{j+1}}\binom{n+1}{j} \sum_{k=0}^{j-1}\binom{j-1}{k} \frac{\Gamma\left(\frac{k+1}{2}\right) \sqrt{\pi}}{\Gamma\left(\frac{k}{2}+1\right)}\right) & , n \geq 4 .\end{cases}$$

(41)
[TeX:] $$\mathbb{E}\left[Z_n^3\right]= \begin{cases}6 \ln (2)-4 & , n=2, \\ 33 \ln (2)-\frac{91}{4} & , n=3, \\ n\left(\frac{1}{n-3}\left(1-\frac{1}{2^{n-3}}\right)-\frac{3}{n-2}\left(1-\frac{1}{2^{n-2}}\right)+\right. & \\ \frac{3}{n-1}\left(1-\frac{1}{2^{n-1}}\right)-\frac{1}{n}\left(1-\frac{1}{2^n}\right)+\frac{3}{2}- & \\ (n+2)+\frac{(n+1)(n+2)}{2} \ln (2)+ \\ \left.\sum_{k=1}^n(-1)^k\binom{n+2}{k+2} \frac{1}{k}\left(1-\frac{1}{2^k}\right)\right) & , n \geq 4 .\end{cases}$$

and

(42)
[TeX:] $$\mathbb{E}\left[Z_m^{\frac{3}{2}} Z_n^{\frac{3}{2}}\right]=I_1+I_2+I_3, \quad m \neq n,$$

where

(43)
[TeX:] $$I_1= \begin{cases}2 \sum_{k=0}^3 \frac{\Gamma(m+n+k-3)}{\Gamma(m) \Gamma(n)} Q(m+k-2, n-2), & n \gt m \geq 2, \\ 2 \sum_{k=0}^3 \frac{\Gamma(m+n+k-3)}{\Gamma(m) \Gamma(n)} Q(n+k-2, m-2), & m \gt n \geq 2,\end{cases}$$

(44)
[TeX:] $$I_2= \begin{cases}\frac{\Gamma(m+n)}{\Gamma(m) \Gamma(n)} \times & \\ (Q(m+1, n-2)-Q(n-2, m+1)) & , n \gt m \geq 2, \\ \frac{\Gamma(m+n)}{\Gamma(m) \Gamma(n)} \times & \\ (Q(n+1, m-2)-Q(m-2, n+1)) & , m \gt n \geq 2,\end{cases}$$

(45)
[TeX:] $$I_3= \begin{cases}\frac{4 \Gamma(m+n+1)}{\Gamma(m) \Gamma(n)} \times & \\ \left(\frac{1}{2^{m+n+1}} \sum_{k=0}^{n-1}\binom{n-m}{k} \frac{\Gamma\left(m+\frac{3}{2}\right) \Gamma\left(\frac{k+1}{2}\right)}{\Gamma\left(m+\frac{k}{2}+2\right)}\right) \times & \\ \left(\frac{3}{4}-\frac{m+n+2}{2}+\frac{(m+n+2)(m+n+1)}{4} \ln (2)+\right. & \\ \left.\sum_{k=1}^{m+n} \frac{(-1)^k}{2 k}\binom{m+n+2}{k+2}\left(1-\frac{1}{2^k}\right)\right) & , n \gt m \geq 2, \\ \frac{4 \Gamma(m+n+1)}{\Gamma(m) \Gamma(n)} \times & \\ \left(\frac{1}{2^{m+n+1}} \sum_{k=0}^{m-1}\binom{m-n}{k} \frac{\Gamma\left(n+\frac{3}{2}\right) \Gamma\left(\frac{k+1}{2}\right)}{\Gamma\left(n+\frac{k}{2}+2\right)}\right) \times & \\ \left(\frac{3}{4}-\frac{m+n+2}{2}+\frac{(m+n+2)(m+n+1)}{4} \ln (2)+\right. & \\ \left.\sum_{k=1}^{m+n} \frac{(-1)^k}{2 k}\binom{m+n+2}{k+2}\left(1-\frac{1}{2^k}\right)\right) & , m \gt n \geq 2,\end{cases}$$

and the function Q(i, j) is defined in Appendix J.

Proof. See Appendix C-B, Appendix C-D, and Appendix D-B for the computation of [TeX:] $$\mathbb{E}\left[Z_n^{\frac{3}{2}}\right], \mathbb{E}\left[Z_n^3\right],$$ and [TeX:] $$\mathbb{E}\left[Z_m^{\frac{3}{2}} Z_n^{\frac{3}{2}}\right]$$ respectively.
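The closed forms for the individual moments can be validated against direct quadrature of the density [TeX:] $$f_{Z_n}(z)=\frac{n\left(1+z^{n-1}\right)}{(1+z)^{n+1}}$$ from Lemma 1. The sketch below (an illustrative check, not part of the derivation) compares the third-moment expression (41) with midpoint integration:

```python
import math

def third_moment_quad(n, pts=200000):
    # E[Z_n^3] by midpoint quadrature of z^3 f_{Z_n}(z) on [0, 1].
    h, s = 1.0 / pts, 0.0
    for i in range(pts):
        z = (i + 0.5) * h
        s += h * z**3 * n * (1.0 + z**(n - 1)) / (1.0 + z)**(n + 1)
    return s

def third_moment_closed(n):
    # The n >= 4 branch of (41).
    s = sum((-1)**k * math.comb(n + 2, k + 2) * (1.0 - 0.5**k) / k
            for k in range(1, n + 1))
    return n * ((1.0 - 0.5**(n - 3)) / (n - 3) - 3.0 * (1.0 - 0.5**(n - 2)) / (n - 2)
                + 3.0 * (1.0 - 0.5**(n - 1)) / (n - 1) - (1.0 - 0.5**n) / n
                + 1.5 - (n + 2) + (n + 1) * (n + 2) * math.log(2.0) / 2.0 + s)
```

The n = 2 and n = 3 cases are compared directly against their special-case values 6 ln 2 - 4 and 33 ln 2 - 91/4.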

The probability of coverage [TeX:] $$\mathbb{P}(\operatorname{SINR} \gt T)=\mathbb{E}\left[e^{-\frac{T}{\sigma_0^2} \tilde{S}}\right]$$ is given by (24). Here, [TeX:] $$\tilde{S}$$ is represented as follows:

(46)
[TeX:] $$\tilde{S}=S+\sigma^2 r^3=U^2,$$

where [TeX:] $$U \sim \mathcal{N}\left(\mu_U, \sigma_U^2\right) .$$ The mean and variance of U can be expressed by (27) and (28) except that [TeX:] $$\mu_{\tilde{S}} \text { and } \sigma_{\tilde{S}}^2$$ can be written as

(47)
[TeX:] $$\mu_{\tilde{S}}=\mu_S+\frac{3 \sqrt{\pi} \sigma^2}{4(\pi \lambda)^{\frac{3}{2}}},$$

(48)
[TeX:] $$\sigma_{\tilde{S}}^2=\sigma_S^2+\frac{\sigma^4(6-9 \pi / 16)}{(\pi \lambda)^3}+2 \sigma^2 \mathbb{E}\left[S r^3\right]-\frac{6 \mu_S \sigma^2 \sqrt{\pi}}{4(\pi \lambda)^{\frac{3}{2}}} .$$
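The new terms in (47) and (48) are moments of [TeX:] $$r^3,$$ which follow from [TeX:] $$r^2$$ being exponentially distributed with rate πλ. They can be checked by quadrature; the value πλ = 0.5 below is an arbitrary test choice.

```python
import math

def exp_power_moment(p, rate, pts=400000, xmax=120.0):
    # E[X^p] for X ~ Exp(rate), via midpoint quadrature of x^p * rate * e^{-rate x}.
    h, s = xmax / pts, 0.0
    for i in range(pts):
        x = (i + 0.5) * h
        s += h * x**p * rate * math.exp(-rate * x)
    return s

pl = 0.5                            # stands for pi*lambda (arbitrary test value)
m3 = exp_power_moment(1.5, pl)      # E[r^3], since r^2 ~ Exp(pi*lambda)
m6 = exp_power_moment(3.0, pl)      # E[r^6]
```

Here m3 matches the [TeX:] $$\frac{3 \sqrt{\pi}}{4(\pi \lambda)^{3 / 2}}$$ term in (47), and m6 - m3² matches the [TeX:] $$\frac{6-9 \pi / 16}{(\pi \lambda)^3}$$ variance term in (48).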

Lemma 9. The joint moment between the random variables S and [TeX:] $$r^3$$ can be computed as follows:

(49)
[TeX:] $$\mathbb{E}\left[S r^3\right]=\sum_{i=2}^N \mathbb{E}\left[g_i\right] \mathbb{E}\left[\frac{\left(\min \left(r, R_i\right)\right)^6}{\left(\max \left(r, R_i\right)\right)^3}\right],$$

where

(50)
[TeX:] $$\begin{aligned} & \mathbb{E}\left[\frac{\left(\min \left(r, R_i\right)\right)^6}{\left(\max \left(r, R_i\right)\right)^3}\right]=\frac{\sqrt{\pi}}{\Gamma(i)(\pi \lambda)^{\frac{3}{2}}} \times \\ & \left(6\left(\frac{(2 i-4)!}{4^{(i-2)}(i-2)!}-\sum_{k=0}^3 \frac{1}{k!} \frac{(2 i+2 k-4)!}{(\sqrt{2}) 8^{i+k-2}(i+k-2)!}\right)+\right. \\ & \left.\Gamma(i+3)\left((\sqrt{2}-1)-\sum_{k=1}^{i+2} \frac{1}{k!} \frac{(2 k-2)!}{(\sqrt{2}) 8^{k-1}(k-1)!}\right)\right), i \geq 2 . \end{aligned}$$

Proof. See Appendix F-B.

C. Special Case - Inverse Quadratic Path-Loss (η = 2)

In the case of η = 2, we get

(51)
[TeX:] $$\mathrm{SINR}=\frac{g_0}{\sum_{i=2}^N g_i\left(\frac{r}{R_i}\right)^2+\sigma^2 r^2}=\frac{g_0}{S+\sigma^2 r^2},$$

where [TeX:] $$S=\sum_{i=2}^N g_i Z_i$$ with [TeX:] $$Z_i=\min \left(X, Y_i\right) / \max \left(X, Y_i\right) .$$ The required quantities for evaluating the statistics of S are [TeX:] $$\mathbb{E}\left[Z_n\right], \mathbb{E}\left[Z_n^2\right], \text { and } \mathbb{E}\left[Z_m Z_n\right], m \neq n \text { for } m, n=2,3, \cdots, N \text {. }$$ These can be summarized in the following lemma.

Lemma 10. For the case when η = 2, the mean [TeX:] $$\mu_S$$ and variance [TeX:] $$\sigma_S^2$$ of the sum S can be computed as follows:

(52)
[TeX:] $$\mu_S=\sum_{n=2}^N \sigma_n^2 \mathbb{E}\left[Z_n\right],$$

(53)
[TeX:] $$\sigma_S^2=\sum_{n=2}^N 2 \sigma_n^4 \mathbb{E}\left[Z_n^2\right]+{\sum\sum}_{m \neq n}^N \sigma_m^2 \sigma_n^2 \mathbb{E}\left[Z_m Z_n\right]-\mu_S^2,$$

where the first moment, second moment, and joint moments of [TeX:] $$Z_n,$$ are obtained as follows:

(54)
[TeX:] $$\mathbb{E}\left[Z_n\right]= \begin{cases}2 \ln (2)-1 & , n=2, \\ n\left(\frac{1}{n-1}\left(1-\frac{1}{2^{n-1}}\right)-\frac{1}{n}\left(1-\frac{1}{2^n}\right)+\ln (2)+\right. & \\ \left.\sum_{k=1}^n(-1)^k\binom{n}{k} \frac{1}{k}\left(1-\frac{1}{2^k}\right)\right) & , n \geq 3,\end{cases}$$

[TeX:] $$\mathbb{E}\left[Z_n^2\right]$$ is as given in (16) and,

(55)
[TeX:] $$\mathbb{E}\left[Z_m Z_n\right]=I_1+I_2+I_3, \quad m \neq n,$$

where

(56)
[TeX:] $$I_1= \begin{cases}\frac{1}{\Gamma(m) \Gamma(n-1)} \sum_{j=0}^{n-2} \frac{\Gamma(j+m-1)}{j!2^{j+m-1}} \times & \\ \sum_{k=0}^{j+m-2} \frac{2^k(k+1)(k+2)}{3^{k+3}} & , n \gt m \geq 2, \\ \frac{1}{\Gamma(n) \Gamma(m-1)} \sum_{j=0}^{m-2} \frac{\Gamma(j+n-1)}{j!2^{j+n-1}} \times & \\ \sum_{k=0}^{j+n-2} \frac{2^k(k+1)(k+2)}{3^{k+3}} & , m \gt n \geq 2,\end{cases}$$

(57)
[TeX:] $$I_2=\left\{\begin{array}{l} \frac{1}{\Gamma(m)(n-1)} \sum_{j=0}^{n-2} \frac{1}{2^{j+1}} \times \\ \sum_{k=0}^j \frac{(k+j)!}{k!} \frac{1}{3^{m+1}}\left(\frac{2}{3}\right)^k, n \gt m \geq 2, \\ \frac{1}{\Gamma(n)(m-1)} \sum_{j=0}^{m-2} \frac{1}{2^{j+1}} \times \\ \sum_{k=0}^j \frac{(k+j)!}{k!} \frac{1}{3^{n+1}}\left(\frac{2}{3}\right)^k, m \gt n \geq 2, \end{array}\right.$$

and

(58)
[TeX:] $$I_3= \begin{cases}m n\left(1-\ln (2)-\sum_{k=0}^{n-2} \frac{1}{(k+1)(k+2) 2^{k+1}}-\right. & \\ \sum_{j=0}^m \frac{\Gamma(n+j+1)}{j!\Gamma(n+1) 2^{n+j+1}} \times & \\ \left.\left(2-\ln (3)-\sum_{k=0}^{n+j-2} \frac{2(2 / 3)^{k+1}}{(k+1)(k+2)}\right)\right) & , n \gt m \geq 2, \\ n m\left((1-\ln (2))-\sum_{k=0}^{m-2} \frac{1}{(k+1)(k+2) 2^{k+1}}-\right. & \\ \sum_{j=0}^n \frac{\Gamma(m+j+1)}{j!\Gamma(m+1) 2^{m+j+1}} \times & \\ \left.\left(2-\ln (3)-\sum_{k=0}^{m+j-2} \frac{2(2 / 3)^{k+1}}{(k+1)(k+2)}\right)\right) & , m \gt n \geq 2,\end{cases}$$

Proof. See Appendix C-A, Appendix C-C, and Appendix D-A for the computation of [TeX:] $$\mathbb{E}\left[Z_n\right], \mathbb{E}\left[Z_n^2\right],$$ and [TeX:] $$\mathbb{E}\left[Z_m Z_n\right]$$ respectively.
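As with the η = 3 case, the closed form (54) can be checked against direct quadrature of z f_{Z_n}(z) using the density from Lemma 1 (an illustrative check only):

```python
import math

def first_moment_quad(n, pts=200000):
    # E[Z_n] via midpoint quadrature of z f_{Z_n}(z), with the density of Lemma 1.
    h, s = 1.0 / pts, 0.0
    for i in range(pts):
        z = (i + 0.5) * h
        s += h * z * n * (1.0 + z**(n - 1)) / (1.0 + z)**(n + 1)
    return s

def first_moment_closed(n):
    # Closed form (54); the general branch also reproduces the n = 2 value.
    if n == 2:
        return 2.0 * math.log(2.0) - 1.0
    s = sum((-1)**k * math.comb(n, k) * (1.0 - 0.5**k) / k
            for k in range(1, n + 1))
    return n * ((1.0 - 0.5**(n - 1)) / (n - 1) - (1.0 - 0.5**n) / n
                + math.log(2.0) + s)
```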

Using a similar argument as before, [TeX:] $$\mathbb{P}(\text { SINR } \gt T)=\mathbb{E}\left[e^{-\frac{T}{\sigma_0^2} \tilde{S}}\right],$$ where [TeX:] $$\tilde{S}$$ is represented as follows:

(59)
[TeX:] $$\tilde{S}=S+\sigma^2 r^2=U^2,$$

where [TeX:] $$U \sim \mathcal{N}\left(\mu_U, \sigma_U^2\right) .$$ Knowing [TeX:] $$\mu_U \text { and } \sigma_U^2$$ in this specific case, [TeX:] $$\mathbb{P}(\text { SINR } \gt T)$$ can be computed as in (24) (Lemma 3). The final step is to express [TeX:] $$\mu_U \text { and } \sigma_U^2$$ in terms of [TeX:] $$\mu_S, \sigma_S^2, \text { and } \mathbb{E}\left[S r^2\right],$$ where the joint moment between S and [TeX:] $$r^2$$ can be evaluated using the following lemma. Although the mean and variance of U in (59) can still be expressed as in (27) and (28), the quantities [TeX:] $$\mu_{\tilde{S}} \text { and } \sigma_{\tilde{S}}^2$$ are different in this case and can be written as

(60)
[TeX:] $$\mu_{\tilde{S}}=\mu_S+\frac{\sigma^2}{\pi \lambda},$$

(61)
[TeX:] $$\sigma_{\tilde{S}}^2=\sigma_S^2+\frac{\sigma^4}{(\pi \lambda)^2}+2 \sigma^2 \mathbb{E}\left[S r^2\right]-\frac{2 \mu_S \sigma^2}{\pi \lambda}.$$

Lemma 11. The joint moment between the random variables S and [TeX:] $$r^2$$ can be computed as follows:

(62)
[TeX:] $$\mathbb{E}\left[S r^2\right]=\sum_{i=2}^N \mathbb{E}\left[g_i\right] \mathbb{E}\left[\frac{\left(\min \left(X, Y_i\right)\right)^2}{\max \left(X, Y_i\right)}\right],$$

where

(63)
[TeX:] $$\begin{gathered} \mathbb{E}\left[\frac{\left(\min \left(X, Y_i\right)\right)^2}{\max \left(X, Y_i\right)}\right]=\frac{2}{\pi \lambda}\left(\frac{1}{(i-1)}-\frac{1}{\Gamma(i)} \sum_{k=0}^2 \frac{\Gamma(i+k-1)}{k!2^{i+k-1}}\right)+ \\ \frac{i(i+1)}{\pi \lambda}\left(\ln (2)-\sum_{k=1}^{i+1} \frac{1}{k 2^k}\right), i \geq 2 \end{gathered}$$

Proof. See Appendix F-A.

V. SIMULATION RESULTS

In this section, we validate our analysis with the help of simulations as well as compare the results with the widely accepted SG expressions.

A. Simulation Setup

We simulate a 100 km × 100 km area in which base stations (BSs) are deployed according to a PPP with density [TeX:] $$\lambda \mathrm{~BS} / \mathrm{km}^2 .$$ In our default setup, we consider BS densities of [TeX:] $$\lambda = 0.01, 0.1,$$ and 1, which are commonly used in the literature for evaluating sparse to moderately dense deployments [15], [30]. For extremely dense networks, where near-field effects are significant, more complex models are required (e.g., multi-slope [32] and stretched-exponential [7] path-loss), which are beyond the scope of this paper. The PPP assumption ensures that BSs are distributed randomly and independently in the simulation area, representing realistic network layouts [33]. Each mobile user is assumed to be associated with its nearest BS. The channel between the user and the serving BS is assumed to experience Rayleigh fading (i.e., exponentially distributed channel power gain) with unit variance [TeX:] $$\sigma_0^2=1,$$ while the channel power gains for interfering BSs are also set to follow independent exponential distributions with variances [TeX:] $$\sigma_i^2=1, \forall i \geq 2$$ [30]. The receiver antenna gain is set to 0 dB, which is typical for mobile handsets. The thermal noise power is modeled at different levels of signal-to-noise ratio (SNR) to evaluate the system under various interference and noise conditions. We use three SNR values: 0 dB, 10 dB, and 20 dB, which correspond to noise powers of 1 W, 0.1 W, and 0.01 W, respectively. These values are chosen to simulate a range from noise-limited to interference-limited scenarios, which are commonly considered in the literature [4]. Assuming a bandwidth of 10 MHz, [TeX:] $$\sigma^2=0.01,0.1,$$ and 1 W correspond to noise power spectral densities of -90 dBW/Hz, -80 dBW/Hz, and -70 dBW/Hz, respectively, which is a standard assumption in the performance analysis of wireless networks [34].
The total number of considered BSs, N, is chosen to be the average number of BSs in the simulation area, i.e., [TeX:] $$N=10^4 \lambda.$$ To capture accurate statistical results, each simulation is repeated for [TeX:] $$10^5$$ iterations, and the CCDF of the received SINR at the typical user is recorded. The coverage probability is then computed as the average of the SINR CCDF over all iterations. Path-loss is modeled using the standard power-law path-loss function with exponents η = 2, 3, and 4, representing different propagation environments from free space to urban settings [20]. We compare the results of our proposed model with both SG-based results and Monte Carlo simulations to validate its accuracy across various settings. We have extensively investigated the accuracy of our proposed model for different [TeX:] $$\lambda, \sigma^2, \text { and } T.$$ Our model is more general and allows the interferers to be considered separately. While the order statistics may complicate the analysis, we circumvent this difficulty by requiring only the moments and the joint moments. Although the proposed approach is computationally intensive, the computations are only required to obtain the constants involved in the model. In practice, these constants can be pre-computed and tabulated for different networks of interest.
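A minimal sketch of the described Monte Carlo procedure is given below. For tractability it approximates the PPP on the finite window by a binomial point process with the mean number of points, uses a disk window instead of the paper's square, and omits noise by default; all parameter values are illustrative rather than the paper's full configuration.

```python
import math, random

def coverage_mc(T, lam=1.0, eta=4, radius=10.0, noise=0.0, trials=2000, seed=1):
    # Monte Carlo SINR coverage for a typical user at the origin.
    # BSs: a fixed count round(lam * pi * radius^2) of uniform points in a disk,
    # a binomial approximation of the PPP on this finite window.
    # Fading: unit-mean exponential (Rayleigh) power gains on every link.
    rng = random.Random(seed)
    n = round(lam * math.pi * radius**2)
    hits = 0
    for _ in range(trials):
        # Uniform points in a disk <=> radii distributed as radius * sqrt(U).
        d = sorted(radius * math.sqrt(rng.random()) for _ in range(n))
        signal = rng.expovariate(1.0) * d[0] ** (-eta)      # nearest BS serves
        interference = sum(rng.expovariate(1.0) * r ** (-eta) for r in d[1:])
        if signal / (interference + noise) > T:
            hits += 1
    return hits / trials
```

For η = 4 without noise, the estimate should sit near the known interference-limited coverage of roughly 0.56 at T = 1 (0 dB).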

B. Results and Discussion

Since the coverage probability curves fundamentally depend on [TeX:] $$\lambda \text { and } \sigma^2,$$ we have to ensure that the results of the proposed approach agree with the simulations across a range of these parameters. In this section, we first investigate the goodness of fit of our proposed expression for the coverage probability in Poisson wireless networks and then compare it with the baseline SG results. Fig. 2 shows the coverage probability for the case of η = 4 without noise. It can be observed that our proposed probabilistic approach asymptotically approaches the simulation and established SG results as more and more interferers are included in the analysis. It is important to note that the analysis of the noise-free case is independent of λ. This is evident from the fact that the parameters [TeX:] $$\sigma^2$$ and λ always appear together in the expressions. Hence, if [TeX:] $$\sigma^2 = 0,$$ then λ has no effect on the coverage probability. In other words, the performance saturates and does not change as the density of nodes in the network increases or decreases.

Fig. 2.
Comparison of coverage expression for varying number of BSs. As more interfering BSs are considered in the calculations, the computed coverage converges asymptotically to the simulated values.

Fig. 3, Fig. 4, and Fig. 5 illustrate the downlink coverage probability of a typical user in a cellular network assuming a path-loss exponent of η = 4, 3, and 2, respectively, for various values of the BS density λ and the noise power [TeX:] $$\sigma^2$$ (or equivalently the SNR). In almost all cases, the results of our proposed approach match very well with the simulations and the well-established SG results. Note that Fig. 3 provides a direct comparison between our approach and the SG results for η = 4. Some of the key observations from the results are as follows:

1) N = 2 provides the best-case scenario, in which only the nearest interferer is considered. This can be viewed as an optimistic upper bound on the average system performance since it accounts for only the most dominant interferer.

2) Noise-free performance does not depend on the density λ. In other words, without noise, the coverage does not change when the base station density increases or decreases. This is evident from the right-column graphs in Fig. 3, Fig. 4, and Fig. 5, which are almost identical due to this saturation.

Fig. 3.
Probability of coverage for the case η = 4 for various BS density and SNR levels.
Fig. 4.
Probability of coverage for the case η = 3 for various BS density and SNR levels.
Fig. 5.
Probability of coverage for the case η = 2 for various BS density and SNR levels.

We notice a slight inaccuracy in the model when both the BS density and the SNR are low. It is only natural to have a lower probability of coverage when the BSs are sparsely located and the noise is high. Nevertheless, our proposed model still closely tracks the simulated values. A detailed analysis of the inaccuracy region, as well as its quantification, is provided in Fig. 6 and Fig. 7, which show the discrepancies with simulation for varying BS density and noise power, respectively.

Fig. 6.
Comparison of coverage probability for varying density of BSs.
Fig. 7.
Comparison of coverage probability for varying channel noise power.
C. Comparison with Existing SG Results

A widely cited result from SG provides an analytical expression for the coverage probability in Poisson wireless networks. This expression, adapted from [30] using consistent notation, is given as:

(64)
[TeX:] $$P_o \stackrel{\eta=4}{=} \lambda \frac{\pi^{\frac{3}{2}}}{\sqrt{T \sigma_0^2 \sigma^2}} \exp \left(\frac{\lambda^2 \pi^2 \kappa^2(T)}{4 T \sigma_0^2 \sigma^2}\right) \operatorname{erfc}\left(\frac{\lambda \pi \kappa(T)}{\sqrt{2 T \sigma_0^2 \sigma^2}}\right),$$

where [TeX:] $$\kappa(T)=1+\sqrt{T}(\pi / 2-\arctan (1 / \sqrt{T})) .$$
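In the interference-limited limit (σ² → 0), the η = 4 benchmark reduces to the well-known closed form [TeX:] $$P_o=1 / \kappa(T)$$ [30]; a minimal evaluation:

```python
import math

def kappa(T):
    # kappa(T) = 1 + sqrt(T) * (pi/2 - arctan(1/sqrt(T))), as defined below (64).
    return 1.0 + math.sqrt(T) * (math.pi / 2.0 - math.atan(1.0 / math.sqrt(T)))

def coverage_no_noise(T):
    # Interference-limited (sigma^2 -> 0) SG coverage for eta = 4: P_o = 1/kappa(T).
    return 1.0 / kappa(T)
```

At T = 1 (0 dB) this evaluates to 1/(1 + π/4), roughly 0.56, the value often quoted for Rayleigh fading with η = 4.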

This expression forms the benchmark for SINR-based coverage analysis in PPP-modeled networks. However, it is characterized by a complex combination of system parameters and nested functional forms, which makes it difficult to use directly in system-level optimization problems (e.g., power allocation, frequency reuse planning). Such expressions are not easily interpretable and often require numerical evaluation, obscuring the structural dependencies between coverage probability and system parameters.

In contrast, our proposed approach yields closed-form expressions based on moment-based approximations. These expressions explicitly reveal the contribution of each interferer through modular computations of their respective moments and joint moments. This modularity enables not only deeper insight into the coverage behavior but also provides a practical means for system-level analysis and optimization, since the required statistical quantities (e.g., means, variances, and covariances) can be precomputed and tabulated for different network configurations. Moreover, our method remains valid across different path-loss exponents without incurring singularities that often arise in SG analysis, especially in the case of η = 2, where the integrals diverge or become undefined. By reformulating the interference term and introducing more stable random variables (e.g., [TeX:] $$Z_i$$), we circumvent such singularities while preserving accuracy, as demonstrated in our derived results and simulations.

VI. CONCLUSION

In this paper, we tackled the classical coverage problem in uniform random wireless networks, a well-studied problem whose methodology extends to multiple wireless and cellular network configurations. We provided an alternative, modular approach to determining the cumulative distribution function of the SINR by accurately capturing the statistics of the modified interference term. We provided complete derivations for uniformly random wireless cellular networks, focusing on three cases of the path-loss exponent (quadratic, cubic, and quartic) to illustrate the approach. Our theoretical results have been validated through simulations and asymptotically agree with the widely accepted stochastic geometry results. The presented methodology can allow wireless engineers to take the next step in designing systems with performance guarantees. The simple evaluation of the coverage probability can be used in optimization problems, and the effect of different system parameters on the performance can be analyzed. In the future, more complex network models, such as multi-tier networks, can also be studied using the probabilistic approach.

APPENDIX A

PROOF OF UNDEFINED MOMENTS

Considering the fourth-power path-loss case, i.e., η = 4 for the SINR, where [TeX:] $$I=\sum_{i=2}^N g_i R_i^{-4}=\sum_{i=2}^N g_i Y_i^{-2},$$ it can be shown that [TeX:] $$g_2 / R_2^4=g_2 / Y_2^2$$ does not have well-defined moments.

(65)
[TeX:] $$\begin{aligned} \mathbb{E}\left[\frac{g_2}{Y_2^2}\right] & =\int_0^{\infty} \frac{g}{\sigma_2^2} e^{-\frac{g}{\sigma_2^2}} d g \int_0^{\infty} \frac{(\lambda \pi)^2}{y^2} y e^{-\lambda \pi y} d y, \\ & =\int_0^{\infty} \frac{g}{\sigma_2^2} e^{-\frac{g}{\sigma_2^2}} d g \int_0^{\infty} \frac{1}{x} e^{-x} d x \rightarrow \infty, \end{aligned}$$

since [TeX:] $$\int_0^{\infty} \frac{1}{x} e^{-x} d x \rightarrow \infty.$$ Similarly,

(66)
[TeX:] $$\begin{aligned} \mathbb{E}\left[\frac{g_2^2}{Y_2^4}\right] & =\int_0^{\infty} \frac{g^2}{\sigma_2^2} e^{-\frac{g}{\sigma_2^2}} d g \int_0^{\infty} \frac{(\lambda \pi)^2}{y^4} y e^{-\lambda \pi y} d y, \\ & =\int_0^{\infty} \frac{g^2}{\sigma_2^2} e^{-\frac{g}{\sigma_2^2}} d g\,(\lambda \pi)^4 \int_0^{\infty} \frac{1}{x^3} e^{-x} d x \rightarrow \infty. \end{aligned}$$

Hence the summation [TeX:] $$\sum_{i=2}^N g_i R_i^{-4}$$ does not have finite moments. The same conclusion holds for general [TeX:] $$\eta \geq 2 .$$
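The logarithmic divergence in (65) can also be observed numerically: the lower-truncated integral behaves like -ln ε - γ as the truncation point ε → 0. A small sketch (the truncation points are arbitrary):

```python
import math

def truncated_tail(eps, pts=200000, upper=40.0):
    # \int_eps^upper e^{-x}/x dx, computed after the substitution x = e^t so
    # that the near-singular region is resolved; grows like -ln(eps) as eps -> 0.
    a, b = math.log(eps), math.log(upper)
    h, s = (b - a) / pts, 0.0
    for i in range(pts):
        t = a + (i + 0.5) * h
        s += h * math.exp(-math.exp(t))
    return s
```

Shrinking the truncation point by a factor of 100 increases the integral by roughly ln(100), confirming the unbounded growth.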

APPENDIX B

PROOF OF LEMMA 1

Consider [TeX:] $$Z_i=\min \left(X, Y_i\right) / \max \left(X, Y_i\right), \text { where } X \sim \operatorname{Exp}(\pi \lambda)$$ is independent of [TeX:] $$Y_i \sim \operatorname{Gamma}(i, \pi \lambda) .$$

(67)
[TeX:] $$\begin{aligned} F_{Z_i}(z) & =\mathbb{P}\left(Z_i \leq z\right)=\mathbb{P}\left(\frac{X}{Y_i} \leq z, X \leq Y_i\right)+\mathbb{P}\left(\frac{Y_i}{X} \leq z, X \gt Y_i\right), \\ & =1-\int_0^{\infty} \int_{y z}^{y / z} f_X(x) f_{Y_i}(y) d x d y. \end{aligned}$$

Differentiating the distribution function and substituting the density functions of X and [TeX:] $$Y_i,$$ we get

(68)
[TeX:] $$\begin{aligned} f_{Z_i}(z)= & \int_0^{\infty} \frac{y}{z^2} \pi \lambda e^{-\frac{\pi \lambda y}{z}} \frac{(\pi \lambda)^i y^{i-1}}{\Gamma(i)} e^{-\pi \lambda y} d y+ \\ & \int_0^{\infty} \pi \lambda y e^{-\pi \lambda y z} \frac{(\pi \lambda)^i y^{i-1}}{\Gamma(i)} e^{-\pi \lambda y} d y, \\ = & \frac{(\pi \lambda)^{i+1}}{\Gamma(i) z^2} \int_0^{\infty} y^i e^{-\pi \lambda y\left(1+\frac{1}{z}\right)} d y+ \\ & \frac{(\pi \lambda)^{i+1}}{\Gamma(i)} \int_0^{\infty} y^i e^{-\pi \lambda y(1+z)} d y . \end{aligned}$$

Substituting [TeX:] $$u=\lambda \pi y\left(1+\frac{1}{z}\right) \text { and } v=\lambda \pi y(1+z)$$ in the above integrals and using the result [TeX:] $$\int_0^{\infty} u^i e^{-u} d u=\Gamma(i+1)=i!,$$ we get [TeX:] $$f_{Z_i}(z)=i\left(\frac{1}{z^2(1+1 / z)^{i+1}}+\frac{1}{(1+z)^{i+1}}\right)=\frac{i\left(1+z^{i-1}\right)}{(1+z)^{i+1}},$$ [TeX:] $$0 \leq z \leq 1, i \geq 2.$$
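As a quick sanity check, the derived density integrates to one on [0, 1] for every i ≥ 2; a short numerical verification (illustrative only):

```python
import math

def density_integral(i, pts=200000):
    # \int_0^1 f_{Z_i}(z) dz for f_{Z_i}(z) = i (1 + z^(i-1)) / (1 + z)^(i+1),
    # evaluated by the midpoint rule.
    h, s = 1.0 / pts, 0.0
    for k in range(pts):
        z = (k + 0.5) * h
        s += h * i * (1.0 + z**(i - 1)) / (1.0 + z)**(i + 1)
    return s
```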

APPENDIX C

PROOF OF MOMENTS COMPUTATION OF [TeX:] $$Z_n$$
A. Proof of [TeX:] $$\mathbb{E}\left[Z_n\right]$$

(69)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Z_n\right] & =n \int_0^1 \frac{z+z^n}{(1+z)^{n+1}} d z=n \int_1^2 \frac{(x-1)+(x-1)^n}{x^{n+1}} d x, \\ & =n \int_1^2\left(\frac{1}{x^n}-\frac{1}{x^{n+1}}+\frac{1}{x}+\sum_{k=1}^n\binom{n}{k} \frac{x^{n-k}(-1)^k}{x^{n+1}}\right) d x, \end{aligned}$$

which, upon term-by-term integration, yields the closed form in (54).

B. Proof of [TeX:] $$\mathbb{E}\left[Z_n^{\frac{3}{2}}\right]$$

(70)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Z_n^{\frac{3}{2}}\right] & =n \int_0^1 \frac{z^{\frac{3}{2}}\left(1+z^{n-1}\right)}{(1+z)^{n+1}} d z, \\ & =n\left(\int_0^1 \frac{z^{\frac{3}{2}}}{(1+z)^{n+1}} d z+\int_0^1 \frac{z^{n+\frac{1}{2}}}{(1+z)^{n+1}} d z\right) \end{aligned}$$

Let [TeX:] $$z=\tan ^2 \theta$$ so that

(71)
[TeX:] $$\mathbb{E}\left[Z_n^{\frac{3}{2}}\right]=2 n\left(\int_0^{\frac{\pi}{4}}\left(\sin ^4 \theta\right)\left(\cos ^{2(n-2)} \theta\right) d \theta+\int_0^{\frac{\pi}{4}} \frac{\sin ^{2 n+2} \theta}{\cos ^2 \theta} d \theta\right).$$

The first integral, i.e., [TeX:] $$\int_0^{\frac{\pi}{4}}\left(\sin ^4 \theta\right)\left(\cos ^{2(n-2)} \theta\right) d \theta,$$ can be computed as

(72)
[TeX:] $$\begin{aligned} & \int_0^{\frac{\pi}{4}}\left(\sin ^4 \theta\right)\left(\cos ^{2(n-2)} \theta\right) d \theta \\ & =\int_0^{\frac{\pi}{4}}\left(\frac{1-\cos (2 \theta)}{2}\right)^2\left(\frac{1+\cos (2 \theta)}{2}\right)^{n-2} d \theta, \\ & =\frac{1}{2^{n+1}} \int_0^{\frac{\pi}{2}} \sin ^4 \psi(1+\cos \psi)^{n-4} d \psi, \\ & =\frac{1}{2^{n+1}} \sum_{k=0}^{n-4}\binom{n-4}{k} \int_0^{\pi / 2} \sin ^4 \theta \cos ^k \theta d \theta, \\ & =\frac{3}{2^{n+4}} \sum_{k=0}^{n-4}\binom{n-4}{k} \frac{\sqrt{\pi} \Gamma\left(\frac{k+1}{2}\right)}{\Gamma\left(\frac{k}{2}+3\right)}, \end{aligned}$$

where we have used the identity [TeX:] $$\int_0^{\pi / 2}\left(\sin ^{m-1} \theta\right)\left(\cos ^{n-1} \theta\right) d \theta=\frac{\Gamma\left(\frac{m}{2}\right) \Gamma\left(\frac{n}{2}\right)}{2 \Gamma\left(\frac{m+n}{2}\right)}.$$ The second integral, [TeX:] $$\int_0^{\frac{\pi}{4}} \frac{\sin ^{2(n+1)} \theta}{\cos ^2 \theta} d \theta,$$ can be computed as follows:

(73)
[TeX:] $$\begin{aligned} & \int_0^{\frac{\pi}{4}} \frac{\sin ^{2(n+1)} \theta}{\cos ^2 \theta} d \theta=\int_0^{\frac{\pi}{4}} \frac{\left(1-\cos ^2 \theta\right)^{(n+1)}}{\cos ^2 \theta} d \theta, \\ & =\int_0^{\frac{\pi}{4}} \frac{1}{\cos ^2 \theta} d \theta+\sum_{k=1}^{n+1}(-1)^k\binom{n+1}{k} \int_0^{\pi / 4} \cos ^{2(k-1)} \theta d \theta, \\ & =1+\sum_{k=1}^{n+1}(-1)^k\binom{n+1}{k}\left(\sum_{m=0}^{k-1} \frac{1}{2^{k-1}}\binom{k-1}{m} \int_0^{\pi / 4} \cos ^m(2 \theta) d \theta\right), \\ & =1+\sum_{k=1}^{n+1} \frac{(-1)^k}{2^{k+1}}\binom{n+1}{k} \sum_{m=0}^{k-1}\binom{k-1}{m} \frac{\Gamma\left(\frac{m+1}{2}\right)}{\Gamma\left(\frac{m}{2}+1\right)} \sqrt{\pi}. \end{aligned}$$

Combining these results, we get the expression provided in Lemma 8. For n = 2 and n = 3, we have to compute the integral separately.

C. Proof of [TeX:] $$\mathbb{E}\left[Z_n^2\right]$$

The second moment of [TeX:] $$Z_n$$ can be computed as follows:

(74)
[TeX:] $$\begin{aligned} & \mathbb{E}\left[Z_n^2\right]=\int_0^1 z^2 f_{Z_n}(z) d z=\int_0^1 \frac{n\left(z^2+z^{n+1}\right)}{(1+z)^{n+1}} d z, \\ & =n \int_1^2\left(\frac{(x-1)^2}{x^{n+1}}+\frac{(x-1)^{n+1}}{x^{n+1}}\right) d x. \end{aligned}$$

Now,

(75)
[TeX:] $$\begin{aligned} \frac{(x-1)^2}{x^{n+1}}+ & \frac{(x-1)^{n+1}}{x^{n+1}}= \\ & \frac{1}{x^{n-1}}-\frac{2}{x^n}+\frac{1}{x^{n+1}}+\sum_{k=0}^{n+1}(-1)^k\binom{n+1}{k} \frac{1}{x^k}, \\ & =\frac{1}{x^{n-1}}-\frac{2}{x^n}+\frac{1}{x^{n+1}}+1-\frac{(n+1)}{x} \\ & \quad+\sum_{m=1}^n(-1)^{m+1}\binom{n+1}{m+1} \frac{1}{x^{m+1}}, \end{aligned}$$

so substituting above and integrating term by term gives

(76)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Z_n^2\right] & =n\left(1-(n+1) \ln (2)+\frac{1}{n-2}\left(1-\frac{1}{2^{n-2}}\right)\right. \\ & -\frac{2}{n-1}\left(1-\frac{1}{2^{n-1}}\right)+\frac{1}{n}\left(1-\frac{1}{2^n}\right) \\ & \left.+\sum_{k=1}^n \frac{(-1)^{k+1}}{k}\binom{n+1}{k+1}\left(1-\frac{1}{2^k}\right)\right), n \geq 3. \end{aligned}$$

For n = 2, [TeX:] $$\mathbb{E}\left[Z_2^2\right]=\int_0^1 \frac{2 z^2}{(1+z)^2} d z=\int_1^2 \frac{2(x-1)^2}{x^2} d x=2\left(\int_1^2\left(1-\frac{2}{x}+\frac{1}{x^2}\right) d x\right)=3-4 \ln (2).$$
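The second-moment expressions can likewise be checked against quadrature of z² f_{Z_n}(z) using the density from Lemma 1 (illustrative only; the general branch of (76) is used for n ≥ 3):

```python
import math

def second_moment_quad(n, pts=200000):
    # E[Z_n^2] by midpoint quadrature of z^2 f_{Z_n}(z) on [0, 1].
    h, s = 1.0 / pts, 0.0
    for i in range(pts):
        z = (i + 0.5) * h
        s += h * z * z * n * (1.0 + z**(n - 1)) / (1.0 + z)**(n + 1)
    return s

def second_moment_closed(n):
    # (76) for n >= 3; the separate n = 2 value is 3 - 4 ln 2.
    if n == 2:
        return 3.0 - 4.0 * math.log(2.0)
    s = sum((-1)**(k + 1) * math.comb(n + 1, k + 1) * (1.0 - 0.5**k) / k
            for k in range(1, n + 1))
    return n * (1.0 - (n + 1) * math.log(2.0) + (1.0 - 0.5**(n - 2)) / (n - 2)
                - 2.0 * (1.0 - 0.5**(n - 1)) / (n - 1) + (1.0 - 0.5**n) / n + s)
```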

D. Proof of [TeX:] $$\mathbb{E}\left[Z_n^3\right]$$

The third moment of [TeX:] $$Z_n$$ can be evaluated as follows:

(77)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Z_n^3\right] & =n \int_0^1 \frac{z^3\left(1+z^{n-1}\right)}{(1+z)^{n+1}} d z, \\ & =n \int_0^1\left(\frac{z^3}{(1+z)^{n+1}}+\frac{z^{n+2}}{(1+z)^{n+1}}\right) d z, \\ & =n \int_1^2\left(\frac{(x-1)^3}{x^{n+1}}+\frac{(x-1)^{n+2}}{x^{n+1}}\right) d x, \\ & =n \int_1^2\left(\frac{x^3-3 x^2+3 x-1}{x^{n+1}}+\right. \\ & \left.\sum_{k=0}^{n+2}\binom{n+2}{k}(-1)^k \frac{x^{n+2-k}}{x^{n+1}}\right) d x . \end{aligned}$$

Integrating the terms above individually results in (41) (Lemma 8) for the general case of [TeX:] $$n \geq 4 .$$ The cases n = 2 and n = 3 need to be computed separately and have been omitted for brevity.

E. Proof of [TeX:] $$\mathbb{E}\left[Z_n^4\right]$$

The fourth moment of [TeX:] $$Z_n$$ can be computed as follows:

(78)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Z_n^4\right] & =n\left(\int_0^1 \frac{z^4+z^{n+3}}{(1+z)^{n+1}} d z\right), \\ & =n\left(\int_1^2 \frac{(x-1)^4}{x^{n+1}} d x+\int_1^2 \frac{(x-1)^{n+3}}{x^{n+1}} d x\right). \end{aligned}$$

Now, the second term integrates to

(79)
[TeX:] $$\begin{aligned} Q(n) & =\int_1^2 \frac{(x-1)^{n+3}}{x^{n+1}} d x=\int_1^2 \sum_{k=0}^{n+3}\binom{n+3}{k}(-1)^k \frac{x^{n+3-k}}{x^{n+1}} d x, \\ & =\sum_{k=0}^{n+3}\binom{n+3}{k}(-1)^k \int_1^2 \frac{1}{x^{k-2}} d x, \\ & =\int_1^2\left(x^2-(n+3) x+\frac{(n+3)(n+2)}{2}\right. \\ & \left.-\frac{(n+3)(n+2)(n+1)}{6 x}+\sum_{k=4}^{n+3}\binom{n+3}{k} \frac{(-1)^k}{x^{k-2}}\right) d x, \\ & =\frac{7}{3}-\frac{3}{2}(n+3)+\frac{(n+3)(n+2)}{2}- \\ & \frac{(n+3)(n+2)(n+1)}{6} \ln (2)+ \\ & \sum_{k=1}^n\binom{n+3}{k+3} \frac{(-1)^{k+1}}{k}\left(1-\frac{1}{2^k}\right). \end{aligned}$$

Q(n) above is generic and appears in [TeX:] $$\mathbb{E}\left[Z_n^4\right]$$ for all n ≥ 2. The first term in (78) differs for [TeX:] $$\mathbb{E}\left[Z_2^4\right], \mathbb{E}\left[Z_3^4\right], \mathbb{E}\left[Z_4^4\right],$$ and [TeX:] $$\mathbb{E}\left[Z_n^4\right],$$ n ≥ 5, and needs to be computed separately; the results are shown in (17).
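The generic term Q(n) in (79) can be checked directly against numerical integration of its defining integral (illustrative only):

```python
import math

def q_quad(n, pts=200000):
    # \int_1^2 (x - 1)^(n+3) / x^(n+1) dx by midpoint quadrature.
    h, s = 1.0 / pts, 0.0
    for i in range(pts):
        x = 1.0 + (i + 0.5) * h
        s += h * (x - 1.0)**(n + 3) / x**(n + 1)
    return s

def q_closed(n):
    # Closed form of Q(n) from (79).
    s = sum(math.comb(n + 3, k + 3) * (-1)**(k + 1) * (1.0 - 0.5**k) / k
            for k in range(1, n + 1))
    return (7.0 / 3.0 - 1.5 * (n + 3) + (n + 3) * (n + 2) / 2.0
            - (n + 3) * (n + 2) * (n + 1) * math.log(2.0) / 6.0 + s)
```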

APPENDIX D

PROOF OF JOINT MOMENT CALCULATIONS
A. Joint Moment of [TeX:] $$Z_n$$

For the case of [TeX:] $$n \gt m, m \neq n,$$ we can write

(80)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Z_m Z_n\right] & =\mathbb{E}\left[\frac{\min \left(X, Y_m\right)}{\max \left(X, Y_m\right)} \frac{\min \left(X, Y_n\right)}{\max \left(X, Y_n\right)}\right] \\ & =\mathbb{E}\left[\left.\frac{X^2}{Y_m Y_n} \right\rvert\, X \lt Y_m \lt Y_n\right]+ \\ & \mathbb{E}\left[\left.\frac{Y_m}{Y_n} \right\rvert\, Y_m \lt X \lt Y_n\right]+\mathbb{E}\left[\left.\frac{Y_m Y_n}{X^2} \right\rvert\, Y_m \lt Y_n \lt X\right] \\ & =I_1+I_2+I_3, \end{aligned}$$

where

(81)
[TeX:] $$\begin{aligned} I_1 & =\mathbb{E}\left(\left.\frac{X^2}{Y_m Y_n} \right\rvert\, X \lt Y_m \lt Y_n\right) \\ & =\int_0^{\infty} x^2 e^{-x} \int_x^{\infty} \frac{y^{m-2}}{\Gamma(m)} e^{-y} \int_y^{\infty} \frac{z^{n-2}}{\Gamma(n)} e^{-z} d x d y d z, \\ & =\frac{\Gamma(n-1)}{\Gamma(m) \Gamma(n)} \int_0^{\infty} x^2 e^{-x} \sum_{j=0}^{n-2} \frac{\Gamma(m+j-1)}{j!2^{m+j-1}} \int_{2 x}^{\infty} y^{m+j-2} e^{-y} d y d x, \\ & =\frac{1}{\Gamma(m) \Gamma(n-1)} \sum_{j=0}^{n-2} \frac{\Gamma(j+m-1)}{k!2^{j+m-1}} \sum_{k=0}^{j+m-2} \frac{2^k(k+1)(k+2)}{3^{k+3}}. \end{aligned}$$

The proofs for [TeX:] $$I_2 \text { and } I_3$$ can be obtained in a similar way and have thus been omitted for brevity.

B. Joint Moment of [TeX:] $$Z_n^{\frac{3}{2}}$$

For the case [TeX:] $$i \lt j$$ (the case [TeX:] $$i \gt j$$ follows by symmetry), we can write

[TeX:] $$\begin{aligned} & \mathbb{E}\left[Z_i^{\frac{3}{2}} Z_j^{\frac{3}{2}}\right] \\ & =\mathbb{E}\left[\left.\frac{r^6}{R_i^3 R_j^3} \right\rvert\, r \lt R_i \lt R_j\right]+\mathbb{E}\left[\left.\frac{R_i^3}{R_j^3} \right\rvert\, R_i \lt r \lt R_j\right] \\ & +\mathbb{E}\left[\left.\frac{R_i^3 R_j^3}{r^6} \right\rvert\, R_i \lt R_j \lt r\right]=I_1+I_2+I_3, \end{aligned}$$

where

(82)
[TeX:] $$\begin{aligned} I_1 & =\frac{4}{\Gamma(i) \Gamma(j)} \int_0^{\infty} z^{2(j-2)} e^{-z^2} \sum_{k=0}^3 \frac{1}{k!} \int_0^z y^{2(i+k-2)} e^{-2 y^2} d y d z, \\ & =\frac{4}{\Gamma(i) \Gamma(j)} \sum_{k=0}^3 \frac{1}{k!} \int_0^{\pi / 4} \cos ^{2(j-2)} \theta \sin ^{2(i+k-2)} \theta \\ & \times \int_0^{\infty} r^{2(i+j+k-4)+1} e^{-r^2\left(1+\sin ^2 \theta\right)} d r d \theta, \\ & =2 \sum_{k=0}^3 \frac{\Gamma(i+j+k-3)}{\Gamma(i) \Gamma(j)} \int_0^{\frac{\pi}{4}} \frac{\cos ^{2(j-2)} \theta \sin ^{2(i+k-2)} \theta}{\left(1+\sin ^2 \theta\right)^{i+j+k-3}} d \theta, \\ & =2 \sum_{k=0}^3 \frac{\Gamma(i+j+k-3)}{\Gamma(i) \Gamma(j)} Q(i+k-2, j-2), \quad j \gt i \geq 2, \end{aligned}$$

with Q(i, j) as defined in Appendix J. Similarly,

(83)
[TeX:] $$\begin{aligned} I_2 & =\int_0^{\infty} \frac{1}{z^3} \frac{2 z^{2 j-1}}{\Gamma(j)} e^{-z^2} d z \int_0^z 2 x e^{-x^2} d x \int_0^x y^3 \frac{2 y^{2 i-1}}{\Gamma(i)} e^{-y^2} d y, \\ & =\frac{4}{\Gamma(i) \Gamma(j)} \int_0^{\infty} \int_0^z z^{2(j-2)} y^{2(i+1)}\left(e^{-\left(2 y^2+z^2\right)}-e^{-\left(y^2+2 z^2\right)}\right) d y d z, \\ & =\frac{4}{\Gamma(i) \Gamma(j)} \int_0^{\frac{\pi}{4}} \cos ^{2(j-2)} \theta \sin ^{2(i+1)} \theta \int_0^{\infty} r^{2(i+j-1)+1} \\ & \times\left(e^{-r^2\left(1+\sin ^2 \theta\right)}-e^{-r^2\left(1+\cos ^2 \theta\right)}\right) d r d \theta, \\ & =\frac{\Gamma(i+j)}{\Gamma(i) \Gamma(j)}\left(\int_0^{\frac{\pi}{4}} \frac{\cos ^{2(j-2)} \theta \sin ^{2(i+1)} \theta}{\left(1+\sin ^2 \theta\right)^{i+j}} d \theta-\right. \\ & \left.\int_0^{\frac{\pi}{2}} \frac{\cos ^{2(i+1)} \theta \sin ^{2(j-2)} \theta}{\left(1+\sin ^2 \theta\right)^{i+j}} d \theta\right), \\ & =\frac{\Gamma(i+j)}{\Gamma(i) \Gamma(j)}(Q(i+1, j-2)-Q(j-2, i+1)), \quad j \gt i \geq 2, \end{aligned}$$

and

(86)
[TeX:] $$\begin{aligned} & I_3= \\ & \frac{8}{\Gamma(i) \Gamma(j)} \int_0^{\infty} \frac{1}{x^5} e^{-x^2} d x \int_0^x z^{2(j+1)} e^{-z^2} d z \int_0^z y^{2(i+1)} e^{-y^2} d y \\ & =\frac{8}{\Gamma(i) \Gamma(j)} \int_0^{\frac{\pi}{4}} \sin ^{2(i+1)} \psi \cos ^{2(j+1)} \psi d \psi \times \\ & \int_0^{\frac{\pi}{2}} \frac{\sin ^{2(i+j+2)+1} \theta}{\cos ^5 \theta} d \theta \int_0^{\infty} r^{2(i+j)+1} e^{-r^2} d r \\ & =\frac{1}{2^{i+j+4}} \sum_{k=0}^{j-i}\binom{j-i}{k} \frac{\Gamma\left(i+\frac{3}{2}\right) \Gamma\left(\frac{k+1}{2}\right)}{\Gamma\left(i+2+\frac{k}{2}\right)}, \quad j \gt i \geq 2. \end{aligned}$$

Combining [TeX:] $$I_1, I_2, \text { and } I_3$$ leads to the result provided in (40).

C. Joint Moments of [TeX:] $$Z_n^2$$

The joint moment of [TeX:] $$Z_m^2 \text { and } Z_n^2$$ for [TeX:] $$m \neq n$$ can be computed as follows:

(87)
[TeX:] $$\begin{aligned} \mathbb{E}\left(Z_m^2 Z_n^2\right) & =\mathbb{E}\left(\left.\frac{X^2}{Y_m^2} \frac{X^2}{Y_n^2} \right\rvert\, X \lt Y_m \lt Y_n\right) \\ & +\mathbb{E}\left(\left.\frac{Y_m^2}{X^2} \frac{X^2}{Y_n^2} \right\rvert\, Y_m \lt X \lt Y_n\right)+\mathbb{E}\left(\left.\frac{Y_m^2}{X^2} \frac{Y_n^2}{X^2} \right\rvert\, Y_m \lt Y_n \lt X\right), \\ & =\mathbb{E}\left(\left.\frac{X^4}{Y_m^2 Y_n^2} \right\rvert\, X \lt Y_m \lt Y_n\right)+\mathbb{E}\left(\left.\frac{Y_m^2}{Y_n^2} \right\rvert\, Y_m \lt X \lt Y_n\right) \\ & +\mathbb{E}\left(\left.\frac{Y_m^2 Y_n^2}{X^4} \right\rvert\, Y_m \lt Y_n \lt X\right)=I_1+I_2+I_3, \end{aligned}$$

where

(88)
[TeX:] $$\begin{aligned} & I_1=\int_0^{\infty} x^4 f_X(x) \int_x^{\infty} \frac{1}{y^2} f_{Y_m}(y) \int_y^{\infty} \frac{1}{z^2} f_{Y_n}(z) d z d y d x, \\ & =\int_0^{\infty} x^4(\lambda \pi) e^{-\lambda \pi x} \int_x^{\infty} \frac{(\lambda \pi)^m y^{m-3}}{\Gamma(m)} e^{-\lambda \pi y} \times \\ & \int_y^{\infty} \frac{(\lambda \pi)^n z^{n-3}}{\Gamma(n)} e^{-\lambda \pi z} d z d y d x, \\ & \stackrel{n \geq 3}{=}(\lambda \pi)^3 \frac{\Gamma(n-2)}{\Gamma(n)} \int_0^{\infty} x^4 e^{-\lambda \pi x} \times \\ & \int_x^{\infty} \frac{(\lambda \pi)^m y^{m-3}}{\Gamma(m)} e^{-\lambda \pi y} \sum_{j=0}^{n-3} \frac{(\lambda \pi y)^j e^{-\lambda \pi y}}{j!} d y d x, \\ & =(\lambda \pi)^3 \frac{\Gamma(n-2)}{\Gamma(n)} \int_0^{\infty} x^4 e^{-\lambda \pi x} \times \\ & \sum_{j=0}^{n-3} \int_x^{\infty} \frac{(\lambda \pi)^{m+j} y^{m+j-3}}{\Gamma(m) j!} e^{-2 \lambda \pi y} d y d x, \\ & \stackrel{m \geq 3}{=}(\lambda \pi)^5 \frac{\Gamma(n-2)}{\Gamma(m) \Gamma(n)} \sum_{j=0}^{n-3} \frac{\Gamma(m+j-2)}{j!2^{m+j-2}} \sum_{k=0}^{m+j-3} \frac{(2 \lambda \pi)^k}{k!} \times \\ & \int_0^{\infty} x^{k+4} e^{-3 \lambda \pi x} d x, \\ & =\frac{\Gamma(n-2)}{\Gamma(m) \Gamma(n)} \sum_{j=0}^{n-3} \frac{\Gamma(m+j-2)}{j!2^{m+j-2}} \sum_{k=0}^{m+j-3} \frac{(k+4)!}{k!3^5}\left(\frac{2}{3}\right)^k. \end{aligned}$$

Similarly,

(89)
[TeX:] $$\begin{aligned} I_2 & =\int_0^{\infty} y^2 f_{Y_m}(y) \int_y^{\infty} f_X(x) \int_x^{\infty} \frac{1}{z^2} f_{Y_n}(z) d z d x d y, \\ & =\int_0^{\infty} y^2 \cdot \frac{y^{m-1} e^{-y}}{\Gamma(m)} \int_y^{\infty} e^{-x} \int_x^{\infty} \frac{z^{n-3} e^{-z}}{\Gamma(n)} d z d x d y, \\ & =\frac{\Gamma(n-2)}{\Gamma(m) \Gamma(n)} \int_0^{\infty} y^{m+1} e^{-y} \int_y^{\infty} e^{-x}\left(\int_x^{\infty} \frac{z^{n-3} e^{-z}}{\Gamma(n-2)} d z\right) d x d y, \\ & =\frac{\Gamma(n-2)}{\Gamma(m) \Gamma(n)} \int_0^{\infty} y^{m+1} e^{-y} \int_y^{\infty} e^{-x}\left(\sum_{k=0}^{n-3} \frac{x^k}{k!} e^{-x}\right) d x d y, \\ & =\frac{\Gamma(n-2)}{\Gamma(m) \Gamma(n)} \int_0^{\infty} y^{m+1} e^{-y} \sum_{j=0}^{n-3} \int_y^{\infty} \frac{x^j}{j!} e^{-2 x} d x d y, \\ & =\frac{1}{\Gamma(m)(n-1)(n-2)} \sum_{j=0}^{n-3} \frac{1}{2^{j+1}} \sum_{k=0}^j \frac{2^k}{k!} \frac{\Gamma(m+k+2)}{3^{m+k+2}}, \end{aligned}$$

and

(90)
[TeX:] $$\begin{aligned} & I_3= \int_0^{\infty} y^2 f_{Y_m}(y) \int_y^{\infty} z^2 f_{Y_n}(z) \int_z^{\infty} \frac{1}{x^4} f_X(x) d x d z d y, \\ &= \int_0^{\infty} y^2 \cdot \frac{y^{m-1} e^{-y}}{\Gamma(m)} \int_y^{\infty} z^2 \cdot \frac{z^{n-1} e^{-z}}{\Gamma(n)} \int_z^{\infty} \frac{1}{x^4} e^{-x} d x d z d y, \\ &= \frac{1}{\Gamma(m) \Gamma(n)} \int_0^{\infty} \frac{1}{x^4} e^{-x} \int_0^x z^{n+1} e^{-z} \int_0^z y^{m+1} e^{-y} d y d z d x, \\ &= \frac{\Gamma(m+2)}{\Gamma(m) \Gamma(n)} \int_0^{\infty} \frac{1}{x^4} e^{-x} \int_0^x z^{n+1} e^{-z} \int_0^z \frac{y^{m+1} e^{-y}}{\Gamma(m+2)} d y d z d x, \\ &= m n(m+1)(n+1) \int_0^{\infty} \frac{1}{x^4}\left(e^{-x}-e^{-2 x}-x e^{-2 x}-\frac{x^2}{2} e^{-2 x}\right. \\ &\left.-\frac{x^3}{6} e^{-2 x}\right) d x-\sum_{k=0}^{n-3} \frac{1}{(k+4)!} \int_0^{\infty} x^k e^{-2 x} d x, \\ &= m n(m+1)(n+1)\left(\frac{1}{36}(5-6 \ln (2))-\sum_{k=0}^{n-3} \frac{k!}{(k+4)!2^{k+1}}\right)- \\ & \frac{m(m+1)}{\Gamma(n)} \sum_{j=0}^{n+1} \frac{\Gamma(n+j+2)}{j!2^{n+j+2}}\left(\frac{4}{9}-\frac{1}{6} \ln (2)-\sum_{k=0}^{n+j-3} \frac{2^{k+4} m!}{(k+4)!3^{k+1}}\right). \end{aligned}$$

The cases m = 2, n = 3 and m = 2, n > 3, together with their counterparts with m and n interchanged, can be computed separately and have been omitted for brevity.

APPENDIX E

PROOF OF LEMMA 3

The coverage probability when [TeX:] $$\tilde{S}$$ is modeled as [TeX:] $$U^2,$$ where [TeX:] $$U \sim \mathcal{N}\left(\mu_U, \sigma_U^2\right),$$ can be computed as follows:

(91)
[TeX:] $$\begin{aligned} \mathbb{P}(\text { SINR } \gt T) & =\mathbb{E}\left[e^{-\frac{T}{\sigma_0^2} U^2}\right] \\ & =\int_{-\infty}^{\infty} e^{-\frac{T}{\sigma_0^2} u^2} \frac{1}{\sqrt{2 \pi \sigma_U^2}} e^{-\left(u-\mu_U\right)^2 /\left(2 \sigma_U^2\right)} d u, \\ & =\frac{e^{-\frac{\mu_U^2}{2 \sigma_U^2}}}{\sqrt{2 \pi \sigma_U^2}} \int_{-\infty}^{\infty} e^{-\frac{1+2 T \sigma_U^2 / \sigma_0^2}{2 \sigma_U^2}\left(u^2-\frac{2 \mu_U u}{1+2 T \sigma_U^2 / \sigma_0^2}\right)} d u, \\ & =\frac{1}{\sqrt{1+2 T \sigma_U^2 / \sigma_0^2}} e^{-\frac{T \mu_U^2}{\sigma_0^2+2 T \sigma_U^2}}. \end{aligned}$$

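Equation (91) reduces to a one-dimensional Gaussian integral, so it can be cross-checked by direct quadrature. A minimal sketch (Python; function names and parameter values are illustrative, with `n0` standing in for [TeX:] $$\sigma_0^2$$):

```python
import math

def p_cov_closed(T: float, mu_u: float, var_u: float, n0: float) -> float:
    """Right-hand side of (91); n0 plays the role of sigma_0^2."""
    denom = 1.0 + 2.0 * T * var_u / n0
    return math.exp(-T * mu_u ** 2 / (n0 * denom)) / math.sqrt(denom)

def p_cov_quadrature(T: float, mu_u: float, var_u: float, n0: float,
                     steps: int = 200_000) -> float:
    """Midpoint evaluation of E[exp(-T U^2 / n0)] over a wide Gaussian truncation."""
    sd = math.sqrt(var_u)
    lo, hi = mu_u - 12.0 * sd, mu_u + 12.0 * sd  # 12-sigma window: negligible tail mass
    h = (hi - lo) / steps
    acc = 0.0
    for i in range(steps):
        u = lo + (i + 0.5) * h
        acc += math.exp(-T * u * u / n0 - (u - mu_u) ** 2 / (2.0 * var_u))
    return acc * h / math.sqrt(2.0 * math.pi * var_u)
```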
APPENDIX F

JOINT MOMENT BETWEEN S AND [TeX:] $$r^\eta$$
A. Derivation of [TeX:] $$\mathbb{E}\left[S r^2\right]$$

The joint moment between S and [TeX:] $$r^2$$ can be computed as [TeX:] $$\mathbb{E}\left[S r^2\right]=\sum_{i=2}^N \mathbb{E}\left[g_i\right] \mathbb{E}\left[\frac{\left(\min \left(X, Y_i\right)\right)^2}{\max \left(X, Y_i\right)}\right],$$ where

(92)
[TeX:] $$\begin{aligned} & \mathbb{E}\left[\frac{\left(\min \left(X, Y_i\right)\right)^2}{\max \left(X, Y_i\right)}\right]=\mathbb{E}\left[\left.\frac{X^2}{Y_i} \right\rvert\, X \lt Y_i\right]+\mathbb{E}\left[\left.\frac{Y_i^2}{X} \right\rvert\, X \gt Y_i\right], \\ & =\int_0^{\infty} \frac{(\pi \lambda)^i y^{i-1}}{y \Gamma(i)} e^{-\lambda \pi y} \int_0^y x^2(\lambda \pi) e^{-\lambda \pi x} d x d y \\ & +\int_0^{\infty} \frac{(\pi \lambda)}{x} e^{-\lambda \pi x} \int_0^x \frac{y^2(\lambda \pi)^i y^{i-1}}{\Gamma(i)} e^{-\lambda \pi y} d y d x, \\ & =\frac{(\pi \lambda)^{i+1}}{\Gamma(i)}\left(\frac{2}{(\pi \lambda)^3} \int_0^{\infty} y^{i-2} e^{-\lambda \pi y}\left(1-\sum_{k=0}^2 \frac{(\pi \lambda y)^k}{k!} e^{-\lambda \pi y}\right) d y\right. \\ & \left.+\frac{\Gamma(i+2)}{(\pi \lambda)^{i+2}} \int_0^{\infty} \frac{1}{x} e^{-\lambda \pi x}\left(1-\sum_{k=0}^{i+1} \frac{(\lambda \pi x)^k}{k!} e^{-\lambda \pi x}\right) d x\right), \\ & =\frac{2}{(\lambda \pi)}\left(\frac{1}{i-1}-\frac{1}{\Gamma(i)} \sum_{k=0}^2 \frac{\Gamma(i+k-1)}{k!2^{i+k-1}}\right) \\ & +\frac{i(i+1)}{\lambda \pi}\left(\ln (2)-\sum_{k=1}^{i+1} \frac{1}{k 2^k}\right), \quad i \geq 2. \end{aligned}$$

B. Derivation of [TeX:] $$\mathbb{E}\left[S r^3\right]$$

The joint moment between S and [TeX:] $$r^3$$ can be computed as [TeX:] $$\mathbb{E}\left[S r^3\right]=\sum_{i=2}^N \mathbb{E}\left[g_i\right] \mathbb{E}\left[\frac{\left(\min \left(r, R_i\right)\right)^6}{\left(\max \left(r, R_i\right)\right)^3}\right],$$ where

(93)
[TeX:] $$\begin{aligned} & \mathbb{E}\left[\frac{\left(\min \left(r, R_i\right)\right)^6}{\left(\max \left(r, R_i\right)\right)^3}\right]=\mathbb{E}\left[\left.\frac{r^6}{R_i^3} \right\rvert\, r \lt R_i\right]+\mathbb{E}\left[\left.\frac{R_i^6}{r^3} \right\rvert\, R_i \lt r\right], \\ & =\int_0^{\infty} \frac{2}{y^3} \frac{(\lambda \pi)^i y^{2 i-1}}{\Gamma(i)} e^{-\lambda \pi y^2} \int_0^y r^6(2 \lambda \pi r) e^{-\lambda \pi r^2} d r d y \\ & +\int_0^{\infty} \frac{1}{r^3}(2 \lambda \pi r) e^{-\lambda \pi r^2} \int_0^r y^6 \frac{2(\lambda \pi)^i y^{2 i-1}}{\Gamma(i)} e^{-\lambda \pi y^2} d y d r, \\ & =\frac{2}{\Gamma(i)(\pi \lambda)^{\frac{3}{2}}} \\ & \times\left(3!\left(\int_0^{\infty} y^{2(i-2)} e^{-y^2} d y-\sum_{k=0}^3 \frac{1}{k!} \int_0^{\infty} y^{2(i+k-2)} e^{-2 y^2} d y\right)\right. \\ & +\Gamma(i+3) \\ & \left.\times\left(\int_0^{\infty} \frac{1}{x^2}\left(e^{-x^2}-e^{-2 x^2}\right) d x-\sum_{k=1}^{i+2} \frac{1}{k!} \int_0^{\infty} x^{2(k-1)} e^{-2 x^2} d x\right)\right). \end{aligned}$$

It can be shown that [TeX:] $$\int_0^{\infty} y^{2(i-2)} e^{-y^2} d y=\frac{\sqrt{\pi}}{2^{2(i-2)+1}} \frac{(2 i-4)!}{(i-2)!},$$ [TeX:] $$\int_0^{\infty} y^{2(i+k-2)} e^{-2 y^2} d y=\frac{\sqrt{2 \pi}}{2^{3(i+k-2)+2}} \frac{(2 i+2 k-4)!}{(i+k-2)!},$$ [TeX:] $$\int_0^{\infty} \frac{e^{-x^2}-e^{-2 x^2}}{x^2} d x=(\sqrt{2}-1) \sqrt{\pi},$$ and [TeX:] $$\int_0^{\infty} x^{2(k-1)} e^{-2 x^2} d x=\frac{\sqrt{2 \pi}(2 k-2)!}{2^{3(k-1)+2}(k-1)!} \text {. }$$ Combining these leads to the result provided in (50).
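These standard Gaussian integrals are easy to misquote, so a numerical spot check is worthwhile. A small sketch (Python; the helper names, truncation point, and step count are our own choices):

```python
import math

def half_gauss_moment(n: int) -> float:
    """Closed form for the half-line Gaussian moment:
    integral of y^{2n} e^{-y^2} over (0, inf) = sqrt(pi) (2n)! / (2^{2n+1} n!)."""
    return math.sqrt(math.pi) * math.factorial(2 * n) / (2 ** (2 * n + 1) * math.factorial(n))

def integrate(f, hi: float = 12.0, steps: int = 300_000) -> float:
    """Midpoint rule on (0, hi); the integrands here decay like exp(-y^2),
    so truncating at 12 leaves a negligible tail."""
    h = hi / steps
    return h * sum(f((i + 0.5) * h) for i in range(steps))
```

With i = 4 (so n = i − 2 = 2), the first identity gives 3√π/8, which the quadrature reproduces; the Frullani-type identity evaluates to (√2 − 1)√π ≈ 0.7342.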

C. Derivation of [TeX:] $$\mathbb{E}\left[S r^4\right]$$

The joint moment between S and [TeX:] $$r^4$$ can be computed as [TeX:] $$\mathbb{E}\left[S r^4\right]=\sum_{i=2}^N \mathbb{E}\left[g_i\right] \mathbb{E}\left[\frac{\left(\min \left(X, Y_i\right)\right)^4}{\left(\max \left(X, Y_i\right)\right)^2}\right],$$ where

(94)
[TeX:] $$\begin{aligned} & \mathbb{E}\left[\frac{\left(\min \left(X, Y_i\right)\right)^4}{\left(\max \left(X, Y_i\right)\right)^2}\right]=\mathbb{E}\left[\left.\frac{X^4}{Y_i^2} \right\rvert\, X \lt Y_i\right]+\mathbb{E}\left[\left.\frac{Y_i^4}{X^2} \right\rvert\, X \gt Y_i\right], \\ & =\int_0^{\infty} \frac{1}{y^2}(\pi \lambda)^i \frac{y^{i-1}}{\Gamma(i)} e^{-\lambda \pi y} d y \int_0^y x^4(\pi \lambda) e^{-\lambda \pi x} d x+ \\ & \int_0^{\infty} \frac{1}{x^2}(\pi \lambda) e^{-\pi \lambda x} d x \int_0^x y^4 \frac{(\pi \lambda)^i y^{i-1}}{\Gamma(i)} e^{-\pi \lambda y} d y, \\ & =\frac{(\pi \lambda)^{i+1}}{\Gamma(i)}\left(\frac{4!}{(\pi \lambda)^5} \int_0^{\infty} y^{i-3} e^{-\pi \lambda y}\left(1-\sum_{k=0}^4 \frac{(\pi \lambda y)^k}{k!} e^{-\pi \lambda y}\right) d y\right.,\\ & \left.+\frac{\Gamma(i+4)}{(\pi \lambda)^{i+4}} \int_0^{\infty} \frac{1}{x^2} e^{-\pi \lambda x}\left(1-\sum_{k=0}^{i+3} \frac{(\pi \lambda x)^k}{k!} e^{-\pi \lambda x}\right) d x\right), \\ & =\frac{1}{(\pi \lambda)^2}\left(\frac{4!}{\Gamma(i)}\left(\Gamma(i-2)-\sum_{k=0}^4 \frac{\Gamma(i+k-2)}{k!2^{i+k-2}}\right)+\right. \\ & \left.\quad \frac{\Gamma(i+4)}{\Gamma(i)}\left(1-\ln (2)-\sum_{k=0}^{i+1} \frac{k!}{(k+2)!2^{k+1}}\right)\right), \quad i \geq 3 . \end{aligned}$$

The special case of i = 2 needs to be computed separately and can be done directly to obtain [TeX:] $$\mathbb{E}\left[\frac{\left(\min \left(X, Y_2\right)\right)^4}{\left(\max \left(X, Y_2\right)\right)^2}\right]=\frac{1}{(\pi \lambda)^2}(67-96 \ln (2)) .$$
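The constant 67 − 96 ln 2 ≈ 0.4579 can be checked by Monte Carlo simulation. In the sketch below (Python; the normalization πλ = 1 is assumed, and the sample count and seed are arbitrary), X ~ Exp(1) plays the role of the serving-distance variable and Y₂ ~ Erlang(2, 1), generated as a sum of two unit exponentials, that of the second-nearest one:

```python
import math
import random

def special_case_mc(samples: int = 500_000, seed: int = 7) -> float:
    """Monte Carlo estimate of E[min(X, Y2)^4 / max(X, Y2)^2] with pi*lambda = 1,
    where X ~ Exp(1) and Y2 ~ Erlang(2, 1)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        x = rng.expovariate(1.0)
        y = rng.expovariate(1.0) + rng.expovariate(1.0)  # Erlang-2 as a sum of exponentials
        lo, hi = (x, y) if x < y else (y, x)
        acc += lo ** 4 / hi ** 2
    return acc / samples
```

The estimate should land within Monte Carlo noise of 67 − 96 ln 2.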

APPENDIX G

PROOF OF LEMMA 5

We can take advantage of the properties of Gaussian moments in (59). Since U is Gaussian with mean [TeX:] $$\mu_U$$ and variance [TeX:] $$\sigma_U^2,$$ the relationship between the means and variances of S and U can be obtained using the following [35]:

(95)
[TeX:] $$\mu_{\tilde{S}}=\mathbb{E}\left[U^2\right]=\sigma_U^2+\mu_U^2,$$

(96)
[TeX:] $$\sigma_{\tilde{S}}^2=\mathbb{E}\left[U^4\right]-\left(\sigma_U^2+\mu_U^2\right)^2=\mu_U^4+6 \mu_U^2 \sigma_U^2+3 \sigma_U^4-\left(\sigma_U^2+\mu_U^2\right)^2=4 \mu_U^2 \sigma_U^2+2 \sigma_U^4.$$

Solving the above set of equations for [TeX:] $$\mu_U \text{ and } \sigma_U^2$$ leads to the results provided in (27) and (28), respectively. Note that [TeX:] $$\mu_{\tilde{S}}=\mathbb{E}[S+\left.\sigma^2 r^4\right]=\mu_S+\sigma^2 \mathbb{E}\left[r^4\right].$$ Since [TeX:] $$r^2$$ is exponentially distributed with mean [TeX:] $$\frac{1}{\pi \lambda},$$ we have [TeX:] $$\mu_{\tilde{S}}=\mu_S+\frac{2 \sigma^2}{(\pi \lambda)^2} .$$ Similarly, [TeX:] $$\sigma_{\tilde{S}}^2=\operatorname{Var}\left(S+\sigma^2 r^4\right)=\sigma_S^2+\sigma^4 \operatorname{Var}\left(r^4\right)+2 \sigma^2 \operatorname{Cov}\left(S, r^4\right).$$ It results in [TeX:] $$\sigma_{\tilde{S}}^2=\sigma_S^2+\frac{20 \sigma^4}{(\pi \lambda)^4}+2 \sigma^2 \mathbb{E}\left[S r^4\right]-\frac{4 \sigma^2 \mu_S}{(\pi \lambda)^2}.$$
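Given [TeX:] $$\mu_{\tilde{S}} \text{ and } \sigma_{\tilde{S}}^2,$$ the two moment equations can be inverted in closed form. The sketch below (Python) implements one consistent branch, using the expansion [TeX:] $$\sigma_{\tilde{S}}^2=2 \sigma_U^4+4 \mu_U^2 \sigma_U^2$$ of (96); the function name and the Δ ≥ 0 guard are our own conventions (the Δ < 0 regime is the subject of Appendix I):

```python
import math

def match_moments(mu_s: float, var_s: float):
    """Invert (95)-(96) for (mu_U, sigma_U^2), taking the branch with
    mu_U >= 0; valid when Delta = mu_s^2 - var_s/2 >= 0."""
    delta = mu_s ** 2 - var_s / 2.0
    if delta < 0:
        raise ValueError("Delta < 0: use the U^2 + V^2 model of Appendix I")
    var_u = mu_s - math.sqrt(delta)  # sigma_U^2
    mu_u = delta ** 0.25             # mu_U (non-negative root)
    return mu_u, var_u
```

A round trip confirms consistency: generating [TeX:] $$\mu_{\tilde{S}} \text{ and } \sigma_{\tilde{S}}^2$$ from chosen [TeX:] $$\left(\mu_U, \sigma_U^2\right)$$ via (95)–(96) and inverting recovers the original pair.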

APPENDIX H

PROOF OF LEMMA 6

The coverage probability when [TeX:] $$\tilde{S}$$ is modeled as [TeX:] $$U^2+V^2,$$ where [TeX:] $$U \sim \mathcal{N}\left(\mu_U, \sigma_U^2\right)$$ and [TeX:] $$V \sim \operatorname{Exp}(\beta)$$ independently of U, can be computed as follows:

(97)
[TeX:] $$\begin{aligned} & \mathbb{P}(\operatorname{SINR} \gt T)=\mathbb{E}\left[e^{-\frac{T}{\sigma_0^2}\left(U^2+V^2\right)}\right]=\mathbb{E}\left[e^{-\frac{T}{\sigma_0^2} U^2}\right] \mathbb{E}\left[e^{-\frac{T}{\sigma_0^2} V^2}\right], \\ & =\frac{e^{-\frac{T \mu_U^2}{\sigma_0^2\left(1+2 T \sigma_U^2 / \sigma_0^2\right)}}}{\sqrt{1+2 T \sigma_U^2 / \sigma_0^2}} \int_0^{\infty} e^{-\frac{T}{\sigma_0^2} v^2} \frac{1}{\beta} e^{-\frac{v}{\beta}} d v, \\ & =\frac{1}{\sqrt{1+2 T \sigma_U^2 / \sigma_0^2}} \times \\ & e^{-\frac{T \mu_U^2}{\sigma_0^2\left(1+2 T \sigma_U^2 / \sigma_0^2\right)}} \frac{\sigma_0}{\beta} \sqrt{\frac{\pi}{T}} e^{\frac{\sigma_0^2}{4 \beta^2 T}} \int_{\frac{\sigma_0}{\beta \sqrt{2 T}}}^{\infty} \frac{1}{\sqrt{2 \pi}} e^{-y^2 / 2} d y . \end{aligned}$$
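The exponential factor in (97) can be validated against direct quadrature of its defining integral, writing the Gaussian tail via the complementary error function. A sketch (Python; `n0` stands in for [TeX:] $$\sigma_0^2,$$ other names are ours):

```python
import math

def exp_factor_closed(T: float, beta: float, n0: float) -> float:
    """The E[exp(-T V^2 / n0)] factor in (97), with V ~ Exp(beta)."""
    s0 = math.sqrt(n0)
    z = s0 / (beta * math.sqrt(2.0 * T))        # lower limit of the Gaussian tail
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))  # standard-normal tail probability at z
    return (s0 / beta) * math.sqrt(math.pi / T) * math.exp(n0 / (4.0 * beta ** 2 * T)) * tail

def exp_factor_quadrature(T: float, beta: float, n0: float,
                          steps: int = 200_000) -> float:
    """Midpoint evaluation of the defining integral over (0, 50*beta)."""
    hi = 50.0 * beta  # exp(-v/beta) tail beyond this is negligible
    h = hi / steps
    acc = 0.0
    for i in range(steps):
        v = (i + 0.5) * h
        acc += math.exp(-T * v * v / n0 - v / beta)
    return acc * h / beta
```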

APPENDIX I

In the case when [TeX:] $$\Delta=\mu_{\tilde{S}}^2-\frac{\sigma_{\tilde{S}}^2}{2} \lt 0$$, we can model [TeX:] $$\tilde{S}=U^2+V^2,$$ where [TeX:] $$U \sim \mathcal{N}\left(\mu_U, \sigma_U^2\right) \text { and } V \sim \operatorname{Exp}(\beta) .$$ To this end, we can write [TeX:] $$\mu_{\tilde{S}}=\mathbb{E}\left[U^2\right]+\mathbb{E}\left[V^2\right]=\mu_U^2+\sigma_U^2+2 \beta^2 .$$ Similarly, [TeX:] $$\mathbb{E}\left[\tilde{S}^2\right]=3 \sigma_U^4+6 \sigma_U^2 \mu_U^2+\mu_U^4+24 \beta^4+4\left(\sigma_U^2+\mu_U^2\right) \beta^2,$$ which results in [TeX:] $$\sigma_{\tilde{S}}^2=2 \sigma_U^4+4 \sigma_U^2 \mu_U^2+20 \beta^4 .$$ From these, we obtain

(98)
[TeX:] $$\sigma_U^2=\mu_{\tilde{S}}-2 \beta^2 \pm \sqrt{\left(\mu_{\tilde{S}}-2 \beta^2\right)^2-\left(\sigma_{\tilde{S}}^2-20 \beta^4\right) / 2}.$$

For this variance to be non-negative, we need [TeX:] $$14 \beta^4-4 \mu_{\tilde{S}} \beta^2+\Delta \geq 0.$$ The acceptable solution can be written as [TeX:] $$\beta_{\min }^2=\frac{\mu_{\tilde{S}}}{7}\left(1+\sqrt{1+\frac{7|\Delta|}{2 \mu_{\tilde{S}}^2}}\right),$$ and we use the minimum possible value of β in our solution. As a result of using [TeX:] $$\beta_{\min },$$ we get [TeX:] $$\mu_U=0 \text{ and } \sigma_U^2=\mu_{\tilde{S}}\left(1-\frac{2}{7}\left(1+\sqrt{1+\frac{7|\Delta|}{2 \mu_{\tilde{S}}^2}}\right)\right) .$$

APPENDIX J

The function Q(i, j) is defined as

(99)
[TeX:] $$\begin{aligned} Q(i, j) & =\int_0^{\pi / 4} \frac{\sin ^{2 i} \theta \cos ^{2 j} \theta}{\left(1+\sin ^2 \theta\right)^{i+j+1}} d \theta, \\ & =\int_0^{\pi / 4} \frac{\tan ^{2 i} \theta \sec ^2 \theta}{\left(1+2 \tan ^2 \theta\right)^{i+j+1}} d \theta, \end{aligned}$$

Using the substitution [TeX:] $$\sqrt{2} \tan \theta=\tan \psi,$$ we get

(100)
[TeX:] $$\begin{aligned} & Q(i, j)=\int_0^{\tan ^{-1} \sqrt{2}} \frac{\left(\frac{\tan \psi}{\sqrt{2}}\right)^{2 i} \frac{\sec ^2 \psi}{\sqrt{2}}}{\left(1+\tan ^2 \psi\right)^{i+j+1}} d \psi, \\ & =\frac{1}{2^{i+\frac{1}{2}}} \int_0^{\tan ^{-1} \sqrt{2}} \tan ^{2 i} \psi \frac{\cos ^{2(i+j+1)} \psi}{\cos ^2 \psi} d \psi, \\ & =\frac{1}{2^{i+\frac{1}{2}}} \int_0^{\tan ^{-1} \sqrt{2}} \sin ^{2 i} \psi \cos ^{2 j} \psi d \psi, \\ & =\frac{1}{2^{i+\frac{1}{2}}} \sum_{m=0}^i(-1)^m\binom{i}{m} \int_0^{\tan ^{-1} \sqrt{2}} \cos ^{2(j+m)} \psi d \psi, \\ & =\frac{1}{2^{i+\frac{1}{2}}} \sum_{m=0}^i\binom{i}{m} \frac{(-1)^m}{4^{m+j}}\left(\binom{2(m+j)}{m+j}\left(\tan ^{-1} \sqrt{2}\right)+\right. \\ & \left.2 \sum_{k=1}^{m+j}\binom{2(m+j)}{m+j-k} \frac{\sin \left(2 k \tan ^{-1} \sqrt{2}\right)}{k}\right), \end{aligned}$$

where we leverage the result [TeX:] $$\int_0^{\theta_0} \cos ^{2 n} \theta d \theta=\frac{1}{4^n}\left(\binom{2 n}{n} \theta_0+2 \sum_{r=1}^n\binom{2 n}{n-r} \frac{\sin \left(2 r \theta_0\right)}{r}\right)$$ with n = m + j. Finally,

(101)
[TeX:] $$\begin{aligned} & \sin \left(2 k \tan ^{-1} \sqrt{2}\right)=\operatorname{Im}\left\{e^{i 2 k \tan ^{-1} \sqrt{2}}\right\}, \\ & =\operatorname{Im}\left\{\left(\cos \left(\tan ^{-1} \sqrt{2}\right)+i \sin \left(\tan ^{-1} \sqrt{2}\right)\right)^{2 k}\right\}, \\ & =\operatorname{Im}\left\{\sum_{r=0}^{2 k}\binom{2 k}{r} i^r \sin ^r\left(\tan ^{-1} \sqrt{2}\right) \cos ^{2 k-r}\left(\tan ^{-1} \sqrt{2}\right)\right\}, \\ & =\sum_{l=0}^{k-1}(-1)^l\binom{2 k}{2 l+1} \frac{2^{l+\frac{1}{2}}}{3^k}. \end{aligned}$$
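Both finite-sum identities used above, the [TeX:] $$\cos^{2n}$$ integral quoted after (100) and the closed form (101) for [TeX:] $$\sin \left(2 k \tan ^{-1} \sqrt{2}\right),$$ can be verified numerically. A short sketch (Python; function names are ours):

```python
import math

def sin_2k_closed(k: int) -> float:
    """Closed form (101) for sin(2k * arctan(sqrt(2)))."""
    return sum((-1) ** l * math.comb(2 * k, 2 * l + 1) * 2 ** (l + 0.5) / 3 ** k
               for l in range(k))

def cos_power_integral(n: int, theta0: float) -> float:
    """Integral of cos^{2n}(theta) over (0, theta0) via the finite-sum identity."""
    total = math.comb(2 * n, n) * theta0
    for r in range(1, n + 1):
        total += 2.0 * math.comb(2 * n, n - r) * math.sin(2.0 * r * theta0) / r
    return total / 4 ** n
```

For instance, with n = 1 and θ₀ = π/2 the cosine-power identity reproduces the textbook value π/4, and for k = 1 the sine identity reduces to 2√2/3.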

Biography

Junaid Farooq

Junaid Farooq (S’15-M’16) received the B.S. degree in electrical engineering from the School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad, Pakistan, in 2013, the M.S. degree in electrical engineering from the King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia, in 2015, and the Ph.D. degree in electrical engineering from the Tandon School of Engineering, New York University, Brooklyn, NY, USA, in 2020. From 2015 to 2016, he was a Research Assistant with the Qatar Mobility Innovations Center (QMIC), Qatar Science and Technology Park (QSTP), Doha, Qatar. He is currently an Assistant Professor with the Department of Electrical and Computer Engineering, College of Engineering and Computer Science, University of Michigan-Dearborn. His research interests include modeling, analysis, and optimization of wireless communication systems, cyber-physical systems, and the Internet of Things.

Biography

Unnikrishna Pillai

Unnikrishna Pillai (S’86-M’90-SM’99-F’15) is a Professor of Electrical Engineering at the Tandon School of Engineering, New York University. He joined the Polytechnic Institute of New York (Brooklyn Poly) in 1985 as an Assistant Professor. He completed his Ph.D. in Systems Engineering from the Moore School of Electrical Engineering, University of Pennsylvania, in 1985, and prior to that an M.S. in Electrical Engineering from IIT Kanpur, India, in 1982. He served as the Electrical Engineering Department Head for one year (1998–1999) during the Polytechnic years. Pillai is the (co)author of several textbooks, including Array Signal Processing (1989), Spectrum Estimation and System Identification (S. U. Pillai and T. I. Shim, 1993), the fourth edition of Probability, Random Variables and Stochastic Processes (A. Papoulis and S. U. Pillai, 2002), and Space Based Radar (S. U. Pillai, K. Y. Li, and B. Himed, 2008). His research interests include system identification, radar signal processing, synthetic aperture radar (SAR) imaging, identifying moving targets from such radar images and from superfast airborne platforms, and machine learning techniques for signal detection and identification in congested and other environments.

References

  • 1 P. Popovski et al., "Wireless access for ultra-reliable low-latency communication: Principles and building blocks," IEEE Netw., vol. 32, no. 2, pp. 16-23, 2018.
  • 2 Y. Wang, L. Xiang, J. Zhang, and X. Ge, "Connectivity analysis for large-scale intelligent reflecting surface aided mmWave cellular networks," in Proc. PIMRC, 2022.
  • 3 S. A. Dahri et al., "Multi-slope path loss model-based performance assessment of heterogeneous cellular network in 5G," IEEE Access, vol. 11, pp. 30473-30485, 2023.
  • 4 Y. Fang, "Modeling and performance analysis for wireless mobile networks: A new analytical approach," IEEE/ACM Trans. Netw., vol. 13, no. 5, pp. 989-1002, 2005.
  • 5 K. A. Hamdi, "On the statistics of signal-to-interference plus noise ratio in wireless communications," IEEE Trans. Commun., vol. 57, no. 11, pp. 3199-3204, 2009.
  • 6 M. Simon and M. Alouini, "A unified approach to the performance analysis of digital communication over generalized fading channels," Proc. IEEE, vol. 86, no. 9, pp. 1860-1877, 1998.
  • 7 A. AlAmmouri, J. G. Andrews, and F. Baccelli, "SINR and throughput of dense cellular networks with stretched exponential path loss," IEEE Trans. Wireless Commun., vol. 17, no. 2, pp. 1147-1160, 2018.
  • 8 M. Haenggi, Stochastic Geometry for Wireless Networks. Cambridge University Press, 2012.
  • 9 Y. Hmamouche, M. Benjillali, S. Saoudi, H. Yanikomeroglu, and M. D. Renzo, "New trends in stochastic geometry for wireless networks: A tutorial and survey," Proc. IEEE, vol. 109, no. 7, pp. 1200-1252, 2021.
  • 10 R. Borralho, A. U. Quddus, A. Mohamed, P. Vieira, and R. Tafazolli, "Coverage and data rate analysis for a novel cell-sweeping-based RAN deployment," IEEE Trans. Wireless Commun., vol. 23, no. 1, pp. 217-230, 2024.
  • 11 M. Soltanpour, H. Zhang, and H. Ding, "Coverage probability and area potential spectral efficiency analysis of 3D dense SCMA cellular networks," IEEE Trans. Wireless Commun., vol. 22, no. 12, pp. 8891-8903, 2023.
  • 12 R. L. Streit, The Poisson Point Process. Boston, MA: Springer US, 2010, pp. 11-55.
  • 13 U. Schilcher et al., "Interference functionals in Poisson networks," IEEE Trans. Inf. Theory, vol. 62, no. 1, pp. 370-383, 2016.
  • 14 M. Di Renzo, A. Zappone, T. T. Lam, and M. Debbah, "Stochastic geometry modeling of cellular networks: A new definition of coverage and its application to energy efficiency optimization," in Proc. EUSIPCO, 2018.
  • 15 H. ElSawy, A. Sultan-Salem, M.-S. Alouini, and M. Z. Win, "Modeling and analysis of cellular networks using stochastic geometry: A tutorial," IEEE Commun. Surveys Tuts., vol. 19, no. 1, pp. 167-203, 2017.
  • 16 D. Stoyan, W. Kendall, and J. Mecke, Stochastic Geometry and Its Applications, 2nd ed. John Wiley and Sons, 1996.
  • 17 M. Haenggi and R. K. Ganti, Interference in Large Wireless Networks. Now Foundations and Trends, 2009.
  • 18 Y. Hmamouche, M. Benjillali, S. Saoudi, H. Yanikomeroglu, and M. D. Renzo, "New trends in stochastic geometry for wireless networks: A tutorial and survey," Proc. IEEE, vol. 109, no. 7, pp. 1200-1252, 2021.
  • 19 M. Haenggi, J. G. Andrews, F. Baccelli, O. Dousse, and M. Franceschetti, "Stochastic geometry and random graphs for the analysis and design of wireless networks," IEEE J. Sel. Areas Commun., vol. 27, no. 7, pp. 1029-1046, 2009.
  • 20 H. ElSawy, E. Hossain, and M. Haenggi, "Stochastic geometry for modeling, analysis, and design of multi-tier and cognitive cellular wireless networks: A survey," IEEE Commun. Surveys Tuts., vol. 15, no. 3, pp. 996-1019, 2013.
  • 21 H. S. Dhillon, R. K. Ganti, F. Baccelli, and J. G. Andrews, "Modeling and analysis of K-tier downlink heterogeneous cellular networks," IEEE J. Sel. Areas Commun., vol. 30, no. 3, pp. 550-560, 2012.
  • 22 M. J. Farooq, H. ElSawy, and M.-S. Alouini, "A stochastic geometry model for multi-hop highway vehicular communication," IEEE Trans. Wireless Commun., vol. 15, no. 3, pp. 2276-2291, 2016.
  • 23 M. J. Farooq, H. ElSawy, Q. Zhu, and M.-S. Alouini, "Optimizing mission critical data dissemination in massive IoT networks," in Proc. WiOpt, 2017.
  • 24 A. Al-Hourani, "An analytic approach for modeling the coverage performance of dense satellite networks," IEEE Wireless Commun. Lett., vol. 10, no. 4, pp. 897-901, 2021.
  • 25 M. Z. Win, P. C. Pinto, and L. A. Shepp, "A mathematical theory of network interference and its applications," Proc. IEEE, vol. 97, no. 2, pp. 205-230, 2009.
  • 26 S. Srinivasa and M. Haenggi, "Distance distributions in finite uniformly random networks: Theory and applications," IEEE Trans. Veh. Technol., vol. 59, no. 2, pp. 940-949, 2010.
  • 27 R. Mathar and J. Mattfeldt, "On the distribution of cumulated interference power in Rayleigh fading channels," Wireless Netw., vol. 1, no. 1, pp. 31-36, 1995.
  • 28 P. C. Pinto and M. Z. Win, "Communication in a Poisson field of interferers - Part I: Interference distribution and error probability," IEEE Trans. Wireless Commun., vol. 9, no. 7, pp. 2176-2186, 2010.
  • 29 S. Ak, H. Inaltekin, and H. V. Poor, "Gaussian approximation for the downlink interference in heterogeneous cellular networks," in Proc. ISIT, 2016.
  • 30 J. G. Andrews, F. Baccelli, and R. K. Ganti, "A tractable approach to coverage and rate in cellular networks," IEEE Trans. Commun., vol. 59, no. 11, pp. 3122-3134, 2011.
  • 31 M. Haenggi, "On distances in uniformly random networks," IEEE Trans. Inf. Theory, vol. 51, no. 10, pp. 3584-3586, 2005.
  • 32 X. Zhang and J. G. Andrews, "Downlink cellular network analysis with multi-slope path loss models," IEEE Trans. Commun., vol. 63, no. 5, pp. 1881-1894, 2015.
  • 33 W. Lu and M. Di Renzo, "Stochastic geometry modeling of cellular networks: Analysis, simulation and experimental validation," in Proc. ACM MSWiM, 2015.
  • 34 K. S. Ali, M. Haenggi, H. ElSawy, A. Chaaban, and M.-S. Alouini, "Downlink non-orthogonal multiple access (NOMA) in Poisson networks," IEEE Trans. Commun., vol. 67, no. 2, pp. 1613-1628, 2019.
  • 35 A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes, 4th ed. Boston: McGraw Hill, 2002.
Top view of a cellular network displaying the signal and interference powers received by a typical mobile user in the network. The closest or associated BS is shown in green color while the nearest interfering BS is shown in red color. The other interferers are shown in black color.
Comparison of coverage expressions for a varying number of BSs. As more interfering BSs are considered in the calculation, the computed value converges asymptotically to the simulated one.
Probability of coverage for the case η = 4 for various BS density and SNR levels.
Probability of coverage for the case η = 3 for various BS density and SNR levels.
Probability of coverage for the case η = 2 for various BS density and SNR levels.
Comparison of coverage probability for varying density of BSs.
Comparison of coverage probability for varying channel noise power.