Ibrahim Ghareeb and Osama Al-Shalali

Statistical Analysis of Cascaded Nakagami-m Fading Channels with Generalized Correlation

Abstract: This paper presents a statistical analysis of cascaded Nakagami-m fading channels that are arbitrarily correlated and not necessarily identically distributed. The probability density function (PDF), cumulative distribution function (CDF), and the nth moment for the product of N correlated Nakagami-m random variables (RVs) are derived and presented in exact-form expressions using the Meijer G-function. The cascaded channels are assumed to exhibit flat and slow fading with arbitrarily non-identical fading severity parameters. Using these results, the impact of channel correlation and of the fading severity parameters on the cascaded Nakagami-m channels is investigated. Furthermore, performance measures, namely the outage probability (OP), average channel capacity, and average bit error probability (BEP) for coherently detected binary PSK and FSK signals, are derived. As a consequence of the versatility of the Nakagami-m distribution, the derived expressions also cover the statistics of other useful multivariate distributions, such as the one-sided Gaussian distribution with m = 1/2 and the Rayleigh distribution with m = 1. To the best of the authors’ knowledge, the derived expressions are novel and have not been reported in the literature. To aid and verify the theoretical analysis, numerical results validated by Monte Carlo simulation are presented.

Keywords: Average bit error probability, average channel capacity, cascaded fading channels, generalized correlation, keyhole channels, outage probability, Nakagami-m distribution

I. INTRODUCTION

THE demand for higher performance and reliability in communication systems has increased significantly. Therefore, accurate channel modeling has become considerably important in system design and development. Generally, the propagation of radio signals in wireless channels is characterized by environmental effects such as path loss, multipath (short-term) fading, and shadowing (long-term) fading. Using experimental radio propagation measurements, the short-term, long-term, and mixed fading channels can be statistically represented by various channel models. Since wireless fading channel characteristics vary widely, the need for accurate channel models that can precisely represent the channel statistical properties has been a continuing concern [1].

Recently, the topic of cascaded “compound” fading channels has been of major interest within the field of 5G wireless communications. As opposed to the classical one-way fading channel, the signal transmitted from the sender to the receiver experiences a cascade of reflections/diffractions. Consequently, a wide range of realistic and efficient communication scenarios can be modeled as cascaded fading channels, for instance, keyhole channels, node-to-node communication channels, dual-hop fading channels, and radio-frequency identification (RFID) keyhole channels [2]–[11]. Furthermore, recent works have made valuable contributions to the field of wireless cascaded re-configurable intelligent surfaces (RISs) [12] and [13]. More specifically, cascaded RIS wireless networks are intended to extend the wireless network’s coverage by efficiently bypassing obstacles within the propagation medium and serving the ground node through a unique cascaded route or link. In [12], under the assumption of perfect knowledge of the end-to-end channel, the performance of a cascaded RIS network affected by imperfect phase estimation in a downlink scenario was investigated, whereas in [13] an approximation to the channel distribution of cascaded RIS-aided wireless networks with phase errors over Nakagami-m fading channels was presented. Generally, the signal sent from the transmitter to the receiver is received after undergoing multiple scattering events. Therefore, the overall end-to-end channel gain is multiplicative and can be represented as the product of the fading coefficients of the sub-channels.

In the literature, early and recent works have been dedicated to studying the statistical properties of cascaded fading channels under the assumption that the sub-channels are statistically independent [14]–[21]. In this respect, using an inverse Mellin transform and a Meijer G-function, the PDF and CDF for the product of N independent Rayleigh distributed RVs were derived and expressed in closed form [14]. In [15], the performance of multihop intervehicular communication systems was introduced, where the authors considered the independent cascaded Rayleigh fading channel as an appropriate multipath fading channel model for vehicle-to-vehicle communication systems. The statistics of cascaded Weibull fading channels were investigated in [16], where the PDF and the average channel capacity of the product of N independent, not necessarily identical, Weibull distributed RVs were derived. In the same manner, the statistical representation of the product of N independent but not necessarily identically distributed Nakagami-m RVs was studied in [17], where the PDF, CDF, moment-generating function, and moments of the so-called N∗Nakagami distribution were derived and expressed in closed form. In order to mitigate fading in such a channel, many diversity techniques have been studied in [18], where the author examined the performance of the N∗Nakagami channel for the selection, equal gain, generalized selection, and maximal ratio combining techniques. In [19], the statistics of a generic fading distribution called the N-product generalized Nakagami-m distribution were investigated. For such a channel, the PDF, CDF, moment-generating function, and moments were derived and expressed in closed form in terms of Fox’s H-function. Furthermore, the authors presented closed-form expressions for the outage probability, amount of fading (AoF), outage capacity, and the average bit error probability. One of the versatile distributions is the α-μ distribution, which can be reduced to many other practical and feasible statistical distributions, such as the Nakagami-m, Weibull, Rayleigh, and Gamma distributions. In this regard, the statistics of cascaded α-μ fading channels were introduced in [20], where exact and closed-form expressions for the PDF, CDF, and OP of independent and non-identically distributed α-μ RVs were studied. Recently, many cascaded fading channel models have been proposed. Within this framework, a statistical analysis for the ratio of products of fluctuating two-ray RVs was introduced in [21], where exact expressions for the PDF, CDF, MGF, and AoF were obtained. The authors of this work showed the value and use of the derived mathematical expressions in investigating the performance of multi-hop communication systems in multiple interference scenarios. In [22], a statistical analysis of cascaded Rician fading channels was studied. Channel modeling in the presence and/or absence of line-of-sight in small-scale and large-scale fading for the cascaded double Beaulieu-Xie fading channel was proposed in [23], where a statistical analysis for the product of two independent but not necessarily identically distributed Beaulieu-Xie RVs was introduced.

Practically, signals propagating in cascaded fading channels can undergo correlated scattering. For instance, the RFID architecture necessitates correlation between the forward and backscatter links [24]. To make this claim clear, in cascaded fading channels the received signal is subjected to N relay levels. In such scenarios, the only direct link is the link between adjacent levels, where each level receives a faded version of the signal from the preceding level. As a result, correlation between successive channels occurs at each hop [25]. Therefore, the dependence among cascaded fading channels needs to be taken into consideration. In the literature, there is ample research concerning correlation between cascaded fading channels [26], [27]. In their early work, Goldman and Sommer examined several modulation techniques over independent and correlated Rayleigh cascaded fading channels [26]. Regardless of the modulation technique used, they showed that reliability is higher in correlated fading than in independent fading. Recently, a statistical analysis for arbitrarily correlated Rayleigh fading channels was presented in [27], where exact expressions for the PDF, the CDF, the PDF and CDF of the instantaneous signal-to-noise ratio, the average channel capacity, and the average bit error probability for coherently detected binary signals were derived.

The Nakagami-m distribution has found widespread application in the modeling of wireless fading channels [28], [29]. In the literature, few attempts have been made to study the statistics of cascaded Nakagami-m fading channels with arbitrary correlation [30], [31]. In this respect, an approximation for the PDF and CDF of N correlated Nakagami-m fading channels was studied in [30]. In [31], the effect of channel correlation between the forward and backscatter links in RFID was investigated. In the aforementioned works, the statistics of correlated Nakagami-m fading channels are not exact (they are approximations) and/or are subject to limitations, either in the fading parameters or in the structure of correlation. Also, these works consider Nakagami-m fading channels with the same severity value m. However, in practical wireless scenarios, fading parameters vary depending on the channel characteristics [32]. Therefore, exact-form expressions for the statistics of cascaded Nakagami-m fading channels with arbitrary correlation and non-identical distributions become an essential demand. However, to the best of the authors’ knowledge, studies that consider the statistical analysis of generalized cascaded Nakagami-m fading channels with arbitrary correlation and non-identical distributions have not been reported in the literature. Motivated to fill this gap, we analyze the PDF, CDF, and the nth moment for the product of N correlated Nakagami-m random variables. Also, the PDF, CDF, and the nth moment of the received instantaneous SNR over slow and flat fading compound channels are obtained. The impact of sub-channel correlation and fading severity parameters is investigated. Furthermore, performance measures, namely the outage probability, average channel capacity, and average bit error probability for coherently detected binary PSK and FSK signals, are also derived to gain more insight into the system. The remainder of this paper is organized as follows. In Section II, the statistics of the end-to-end cascaded Nakagami-m fading channels with arbitrary correlation are introduced. In Section III, applications and performance analysis are presented. In Section IV, numerical and simulation results on the OP, channel capacity, and BEP are introduced. In the last section, the main results are summarized and concluded.

II. STATISTICAL CHARACTERISTICS

A. Representation of Correlated Nakagami-m RVs
We are interested in the statistics of the end-to-end cascaded N correlated Nakagami-m fading channels, which do not necessarily have the same fading parameters m, nor are they necessarily identical. We follow the fading model for the N correlated Nakagami-m RVs described in [33]. For subsequent use we define a complex vector [TeX:] $$\mathbf{G}_k=\left[G_{k 1}, G_{k 2}, \cdots, G_{k m_k}\right]^T$$ where [TeX:] $$[\cdot]^T$$ denotes the transpose operator and [TeX:] $$G_{k \ell}(k=\left.1,2, \cdots, N, \quad \ell=1,2, \cdots, m_k\right)$$ are complex Gaussian random variables (RVs). The RV [TeX:] $$G_{k \ell}$$ can be written as

(1)
[TeX:] $$G_{k \ell}=G_{X_{k \ell}}+j G_{Y_{k \ell}} \text { for } k=1, \cdots, N, \ell=1, \cdots, m_k,$$

where [TeX:] $$G_{X_{k \ell}}=\sigma_k\left(\sqrt{1-\lambda_k^2} X_{k \ell}+\lambda_k X_{0 \ell}\right)$$ and [TeX:] $$G_{Y_{k \ell}}=\sigma_k\left(\sqrt{1-\lambda_k^2} Y_{k \ell}+\lambda_k Y_{0 \ell}\right)$$ are the real and imaginary parts of [TeX:] $$G_{k \ell},$$ respectively. The parameter [TeX:] $$\lambda_k \in(-1,1) \backslash\{0\}$$ is a correlation dependent parameter, [TeX:] $$\sigma_k$$ is a finite real value, and the components [TeX:] $$X_{k \ell}, Y_{k \ell}\left(k=1,2, \cdots, N, \ell=1,2, \cdots, m_k\right)$$ are assumed to be mutually independent Gaussian random variables with mean zero (i.e., [TeX:] $$\mathbb{E}\left[X_{k \ell}\right]=0 \text { and } \mathbb{E}\left[Y_{k \ell}\right]=0$$) and variance equal to 1/2 (i.e., [TeX:] $$\mathbb{E}\left[X_{k \ell}^2\right]=1 / 2 \text { and } \mathbb{E}\left[Y_{k \ell}^2\right]=1 / 2$$), denoted by [TeX:] $$\mathcal{N}(0,1 / 2).$$ Therefore, for any [TeX:] $$k, j \in \{1,2, \cdots, N\}$$ and [TeX:] $$\ell, n \in\left\{1,2, \cdots, m_k\right\}, \mathbb{E}\left[X_{k \ell} Y_{j n}\right]=0,$$ [TeX:] $$\mathbb{E}\left[X_{k \ell} X_{j n}\right]=\frac{1}{2} \delta_{k j} \delta_{\ell n} \text { and } \mathbb{E}\left[Y_{k \ell} Y_{j n}\right]=\frac{1}{2} \delta_{k j} \delta_{\ell n},$$ where [TeX:] $$\delta_{k j}$$ is the Kronecker delta function, which is defined as [TeX:] $$\delta_{k k}=1$$ and [TeX:] $$\delta_{k j}=0 \text { for } k \neq j \text {. }$$ Hence, the cross-correlation coefficient between [TeX:] $$G_{k \ell} \text { and } G_{j n}(k \neq j)$$ may be expressed as

(2)
[TeX:] $$\begin{aligned} \rho_{k \ell, j n} & =\frac{\mathbb{E}\left[G_{k \ell} G_{j n}^*\right]-\mathbb{E}\left[G_{k \ell}\right] \mathbb{E}\left[G_{j n}^*\right]}{\sqrt{\mathbb{E}\left[\left|G_{k \ell}\right|^2\right] \mathbb{E}\left[\left|G_{j n}\right|^2\right]}} \\ & =\left\{\begin{array}{cc} \lambda_k \lambda_j, & k \neq j \text { and } \ell=n \\ 1, & k=j \text { and } \ell=n \\ 0, & \ell \neq n . \end{array}\right. \end{aligned}$$
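
To make the construction in (1) and the cross-correlation structure in (2) concrete, the following Monte Carlo sketch generates the branch Gaussians from the common components [TeX:] $$X_{0 \ell}, Y_{0 \ell}$$ and checks the predicted coefficient [TeX:] $$\lambda_k \lambda_j$$ empirically. The number of branches and the values of [TeX:] $$m_k, \lambda_k, \sigma_k$$ below are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

# Sketch: Monte Carlo check of the construction in (1) and the cross-correlation
# in (2).  The parameters m_k, lambda_k, sigma_k below are illustrative choices.
rng = np.random.default_rng(0)
Ns = 200_000                       # number of Monte Carlo trials
m = [1, 2, 3]                      # fading severity parameters m_k
lam = [0.5, 0.7, 0.9]              # correlation-dependent parameters lambda_k
sig = [1.0, 1.2, 0.8]              # scale parameters sigma_k
m_max = max(m)

# Common components X_0l, Y_0l ~ N(0, 1/2), shared by all branches.
X0 = rng.normal(0, np.sqrt(0.5), (Ns, m_max))
Y0 = rng.normal(0, np.sqrt(0.5), (Ns, m_max))

G = []                             # G[k] holds samples of G_{k,1..m_k}
for k in range(3):
    Xk = rng.normal(0, np.sqrt(0.5), (Ns, m[k]))
    Yk = rng.normal(0, np.sqrt(0.5), (Ns, m[k]))
    gx = sig[k] * (np.sqrt(1 - lam[k]**2) * Xk + lam[k] * X0[:, :m[k]])
    gy = sig[k] * (np.sqrt(1 - lam[k]**2) * Yk + lam[k] * Y0[:, :m[k]])
    G.append(gx + 1j * gy)

# Cross-correlation between G_{1,1} and G_{2,1} (k != j, l = n); (2) predicts
# lambda_1 * lambda_2.
num = np.mean(G[0][:, 0] * np.conj(G[1][:, 0]))
den = np.sqrt(np.mean(np.abs(G[0][:, 0])**2) * np.mean(np.abs(G[1][:, 0])**2))
print("empirical:", (num / den).real, " theory:", lam[0] * lam[1])
```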

Define a new RV [TeX:] $$V_k$$, such that

(3)
[TeX:] $$\begin{aligned} V_k & =\mathbf{G}_k^H \mathbf{G}_k=\sum_{\ell=1}^{m_k}\left|G_{k \ell}\right|^2 \\ & =\sum_{\ell=1}^{m_k} G_{X_{k \ell}}^2+\sum_{\ell=1}^{m_k} G_{Y_{k \ell}}^2, \end{aligned}$$

where [TeX:] $$\mathbf{G}_k^H$$ is the Hermitian (conjugate) transpose of the vector [TeX:] $$\mathbf{G}_k,$$ i.e., [TeX:] $$\mathbf{G}_k^H=\left(\mathbf{G}_k^*\right)^T.$$ It is clear that [TeX:] $$V_k(k=1,2, \cdots, N)$$ is a sum of squares of [TeX:] $$2 m_k$$ mutually independent Gaussian RVs. Consequently, the cross-correlation coefficient between [TeX:] $$V_k \text{ and }V_j$$ [TeX:] $$(k, j=1,2, \cdots, N)$$ can be obtained as

(4)
[TeX:] $$\rho_{V_k, V_j}=\frac{\operatorname{Cov}\left(V_k, V_j\right)}{\sqrt{\operatorname{Var}\left[V_k\right] \operatorname{Var}\left[V_j\right]}},$$

where [TeX:] $$\operatorname{Var}\left[V_k\right]=\mathbb{E}\left[V_k^2\right]-\mathbb{E}^2\left[V_k\right]$$ is the variance of the RV [TeX:] $$V_k$$ and

(5)
[TeX:] $$\operatorname{Cov}\left(V_k, V_j\right)=\mathbb{E}\left[V_k V_j\right]-\mathbb{E}\left[V_k\right] \mathbb{E}\left[V_j\right]$$

is the covariance of the RVs [TeX:] $$V_k \text{ and }V_j.$$ Consequently,

[TeX:] $$\mathbb{E}\left[V_k\right]=\sum_{\ell=1}^{m_k} \mathbb{E}\left[G_{X_{k \ell}}^2\right]+\sum_{\ell=1}^{m_k} \mathbb{E}\left[G_{Y_{k \ell}}^2\right],$$

where

[TeX:] $$\begin{gathered} \mathbb{E}\left[G_{X_{k \ell}}^2\right]=\mathbb{E}\left[\sigma_k^2\left(\sqrt{1-\lambda_k^2} X_{k \ell}+\lambda_k X_{0 \ell}\right)^2\right]=\frac{1}{2} \sigma_k^2, \\ \mathbb{E}\left[G_{Y_{k \ell}}^2\right]=\mathbb{E}\left[\sigma_k^2\left(\sqrt{1-\lambda_k^2} Y_{k \ell}+\lambda_k Y_{0 \ell}\right)^2\right]=\frac{1}{2} \sigma_k^2 . \end{gathered}$$

Therefore,

(6)
[TeX:] $$\mathbb{E}\left[V_k\right]=\sum_{\ell=1}^{m_k} \mathbb{E}\left[G_{X_{k \ell}}^2\right]+\sum_{\ell=1}^{m_k} \mathbb{E}\left[G_{Y_{k \ell}}^2\right]=m_k \sigma_k^2,$$

(7)
[TeX:] $$\mathbb{E}\left[V_j\right]=\sum_{\ell=1}^{m_j} \mathbb{E}\left[G_{X_{j \ell}}^2\right]+\sum_{\ell=1}^{m_j} \mathbb{E}\left[G_{Y_{j \ell}}^2\right]=m_j \sigma_j^2.$$

Hence, the first term of [TeX:] $$\operatorname{Cov}\left(V_k, V_j\right)$$ can be written as

[TeX:] $$\begin{aligned} \mathbb{E}\left[V_k V_j\right]= & \sum_{\ell=1}^{m_k} \sum_{n=1}^{m_j} \mathbb{E}\left[G_{X_{k \ell}}^2 G_{X_{j n}}^2\right]+\sum_{\ell=1}^{m_k} \sum_{n=1}^{m_j} \mathbb{E}\left[G_{X_{k \ell}}^2 G_{Y_{j n}}^2\right] \\ & +\sum_{\ell=1}^{m_k} \sum_{n=1}^{m_j} \mathbb{E}\left[G_{Y_{k \ell}}^2 G_{X_{j n}}^2\right]+\sum_{\ell=1}^{m_k} \sum_{n=1}^{m_j} \mathbb{E}\left[G_{Y_{k \ell}}^2 G_{Y_{j n}}^2\right] \end{aligned}$$

and since [TeX:] $$G_{X_{k \ell}} \text { and } G_{Y_{j n}}$$ are statistically independent and have the same statistics, the above equation may be expressed as

[TeX:] $$\begin{aligned} \mathbb{E}\left[V_k V_j\right]= & 2 \sum_{\ell=1}^{m_k} \sum_{n=1}^{m_j} \mathbb{E}\left[G_{X_{k \ell}}^2 G_{X_{j n}}^2\right] \\ & +2 \sum_{\ell=1}^{m_k} \mathbb{E}\left[G_{X_{k \ell}}^2\right] \sum_{n=1}^{m_j} \mathbb{E}\left[G_{Y_{j n}}^2\right] \\ = & 2 \sum_{\ell=1}^{m_k} \sum_{n=1}^{m_j} \mathbb{E}\left[G_{X_{k \ell}}^2 G_{X_{j n}}^2\right]+\frac{1}{2} m_k m_j \sigma_k^2 \sigma_j^2. \end{aligned}$$

We invoke the result of [41, eq. (7-61)] to write

[TeX:] $$\begin{aligned} \mathbb{E}\left[G_{X_{k \ell}}^2 G_{X_{j n}}^2\right] & =\mathbb{E}\left[G_{X_{k \ell}}^2\right] \mathbb{E}\left[G_{X_{j n}}^2\right]+2 \mathbb{E}^2\left[G_{X_{k \ell}} G_{X_{j n}}\right] \\ & =\frac{1}{4} \sigma_k^2 \sigma_j^2+2 \mathbb{E}^2\left[G_{X_{k \ell}} G_{X_{j n}}\right], \end{aligned}$$

By straightforward mathematical manipulations we can write

[TeX:] $$\mathbb{E}\left[G_{X_{k \ell}} G_{X_{j n}}\right]=\frac{1}{2} \sigma_k \sigma_j\left[\sqrt{1-\lambda_k^2} \sqrt{1-\lambda_j^2} \delta_{k j} \delta_{\ell n}+\lambda_k \lambda_j \delta_{\ell n}\right] .$$

Accordingly, [TeX:] $$\mathbb{E}\left[V_k V_j\right]$$ can be expressed as

(8)
[TeX:] $$\begin{aligned} \mathbb{E}\left[V_k V_j\right]= & \sum_{\ell=1}^{m_k} \sum_{n=1}^{m_j} \sigma_k^2 \sigma_j^2\left[\sqrt{1-\lambda_k^2} \sqrt{1-\lambda_j^2} \delta_{k j} \delta_{\ell n}+\lambda_k \lambda_j \delta_{\ell n}\right]^2 \\ & +m_k m_j \sigma_k^2 \sigma_j^2. \end{aligned}$$

Combining the above results by substituting (6), (7), and (8) into (5), it follows that

[TeX:] $$\operatorname{Cov}\left(V_k, V_j\right)=\sum_{\ell=1}^{m_k} \sum_{n=1}^{m_j} \sigma_k^2 \sigma_j^2\left[\sqrt{1-\lambda_k^2} \sqrt{1-\lambda_j^2} \delta_{k j} \delta_{\ell n}+\lambda_k \lambda_j \delta_{\ell n}\right]^2 .$$

Since [TeX:] $$\mathbb{E}\left[V_k\right]=m_k \sigma_k^2 \text { and } \mathbb{E}\left[V_j\right]=m_j \sigma_j^2,$$ and by using (8) with [TeX:] $$k=j \text { and } \ell=n,$$ it follows that [TeX:] $$\operatorname{Var}\left(V_k\right)=m_k \sigma_k^4$$ and [TeX:] $$\operatorname{Var}\left(V_j\right)=m_j \sigma_j^4.$$ Consequently, the cross-correlation coefficient between [TeX:] $$V_k \text{ and } V_j$$ in (4) can be expressed as

(9)
[TeX:] $$\begin{aligned} \rho_{V_k, V_j} & =\frac{1}{\sqrt{m_k m_j}} \sum_{\ell=1}^{m_k} \sum_{n=1}^{m_j}\left[\sqrt{1-\lambda_k^2} \sqrt{1-\lambda_j^2} \delta_{k j} \delta_{\ell n}+\lambda_k \lambda_j \delta_{\ell n}\right]^2 \\ & =\left\{\begin{array}{cc} \frac{\min \left(m_k, m_j\right)}{\sqrt{m_k m_j}} \lambda_k^2 \lambda_j^2, & k \neq j \text { and } \ell=n \\ 1, & k=j \text { and } \ell=n \\ 0, & \ell \neq n . \end{array}\right. \end{aligned}$$

It can be verified that [TeX:] $$R_{\ell}=\sqrt{V_{\ell}}(\ell=1,2, \cdots, N)$$ are N correlated Nakagami-m RVs with PDFs given by

(10)
[TeX:] $$f_{R_{\ell}}(x)=\frac{2}{\Gamma\left(m_{\ell}\right)}\left(\frac{m_{\ell}}{\Omega_{\ell}}\right)^{m_{\ell}} x^{2 m_{\ell}-1} \exp \left(-\frac{m_{\ell}}{\Omega_{\ell}} x^2\right) \quad x \geq 0,$$

where [TeX:] $$\Omega_{\ell}=\mathbb{E}\left[R_{\ell}^2\right]=m_{\ell} \sigma_{\ell}^2,$$ [TeX:] $$\Gamma(\cdot)$$ is the Euler Gamma function [37, eq. (8.310-1)], and [TeX:] $$m_{\ell}=\Omega_{\ell}^2 / \mathbb{E}\left[\left(R_{\ell}^2-\Omega_{\ell}\right)^2\right] \geq 0.5$$ represents the fading severity parameter. The Nakagami-m PDF can also be expressed as

(11)
[TeX:] $$f_{R_{\ell}}(x)=\frac{2 x^{2 m_{\ell}-1}}{\sigma_{\ell}^{2 m_{\ell}} \Gamma\left(m_{\ell}\right)} \exp \left(-\frac{x^2}{\sigma_{\ell}^2}\right) .$$

The PDF of a Nakagami-m RV reduces to the special cases of the Rayleigh distribution for [TeX:] $$m_{\ell}=1$$ and the one-sided Gaussian distribution for [TeX:] $$m_{\ell}=1/2.$$
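
As a numerical illustration of this subsection, the sketch below builds [TeX:] $$V_k=\mathbf{G}_k^H \mathbf{G}_k$$ and [TeX:] $$R_k=\sqrt{V_k}$$ as above and checks the second moment [TeX:] $$\Omega_k=m_k \sigma_k^2$$ from (10) and the power correlation predicted by (9). The parameter values are again illustrative assumptions of ours.

```python
import numpy as np

# Sketch: R_k = sqrt(G_k^H G_k) generated as above, checking Omega_k = m_k*sigma_k^2
# (below (10)) and the power correlation predicted by (9).  Parameters are illustrative.
rng = np.random.default_rng(1)
Ns = 300_000
m = [2, 3]
lam = [0.6, 0.8]
sig = [1.0, 1.5]
m_max = max(m)

X0 = rng.normal(0, np.sqrt(0.5), (Ns, m_max))
Y0 = rng.normal(0, np.sqrt(0.5), (Ns, m_max))

V = []
for k in range(2):
    Xk = rng.normal(0, np.sqrt(0.5), (Ns, m[k]))
    Yk = rng.normal(0, np.sqrt(0.5), (Ns, m[k]))
    gx = sig[k] * (np.sqrt(1 - lam[k]**2) * Xk + lam[k] * X0[:, :m[k]])
    gy = sig[k] * (np.sqrt(1 - lam[k]**2) * Yk + lam[k] * Y0[:, :m[k]])
    V.append(np.sum(gx**2 + gy**2, axis=1))        # V_k = sum_l |G_kl|^2

for k in range(2):
    print(f"Omega_{k+1}: empirical {np.mean(V[k]):.4f}, theory {m[k]*sig[k]**2:.4f}")

rho_emp = np.corrcoef(V[0], V[1])[0, 1]            # correlation of V_1 = R_1^2 and V_2 = R_2^2
rho_th = min(m) / np.sqrt(m[0] * m[1]) * lam[0]**2 * lam[1]**2
print(f"rho_V1V2: empirical {rho_emp:.4f}, theory {rho_th:.4f}")
```
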
B. Statistics of Cascaded Nakagami-m RVs

Communication technology fuels our interconnected world, and in the last decade a new area of research has emerged from the need to model multi-layer channel networks, known as the end-to-end compound cascaded fading channel. Because these channels go beyond the conventional one-way channel, they display remarkable effects not found in one-way channels and arise in multihop relaying communication links, mobile-to-mobile fading channels, dual-hop fading channels, and radio-frequency identification pinhole channels. For end-to-end compound cascaded keyhole or pinhole fading channels, the correlation between the cascaded sub-channel fades may have some impact on the system performance. This is because the faded signals terminate and originate at the same keyhole or pinhole location.

In wireless communication links, a cascaded fading channel occurs when the transmitter and receiver pair experiences rich and multiple scattering and when the received signal is generated by the product of a group of rays reflected via N scatterers, while the existence of some keyholes or pinholes still makes the transmission possible.

For the cascaded fading channel shown in Fig. 1, it is assumed that each sub-channel undergoes Nakagami-m fading with fading coefficient [TeX:] $$R_n e^{j \theta_n}(n=1,2, \cdots, N)$$ and is characterized by the fading severity parameter [TeX:] $$m_n.$$ In Fig. 1, [TeX:] $$R_n \text{ and } \theta_n$$ represent the sub-channel gain and phase, respectively, and n denotes the complex envelope of the Gaussian noise process with zero mean and [TeX:] $$N_0$$ power spectral density. Therefore, [TeX:] $$Y_N=R_1 \times \cdots \times R_N \text { and } \phi_N=\theta_1+\cdots+\theta_N$$ represent the end-to-end channel gain and phase, respectively. Consequently, the end-to-end channel between the transmitter and receiver can be modeled by the product of the fading coefficients corresponding to each sub-channel. The product of N correlated non-identically distributed Nakagami-m RVs may be expressed as

(12)
[TeX:] $$Y_N=\prod_{k=1}^N R_k.$$

Fig. 1. Cascaded fading channel with N sub-channels.
For subsequent use we define new random vectors [TeX:] $$\mathbf{X}_0=\left[\mathbf{X}_0^{(1) T}, \mathbf{X}_0^{(2) T}, \cdots, \mathbf{X}_0^{(N) T}\right]^T$$ and [TeX:] $$\mathbf{Y}_0=\left[\mathbf{Y}_0^{(1) T}, \mathbf{Y}_0^{(2) T}, \cdots, \mathbf{Y}_0^{(N) T}\right]^T$$ with the random vectors [TeX:] $$\mathbf{X}_0^{(k)}=\left[X_{01}, X_{02}, \cdots, X_{0 m_k}\right]^T$$ and [TeX:] $$\mathbf{Y}_0^{(k)}=\left[Y_{01}, Y_{02}, \cdots, Y_{0 m_k}\right]^T$$ [TeX:] $$(k=1, 2, \cdots, N).$$ The RVs [TeX:] $$X_{0 \ell} \text { and } Y_{0 \ell}\left(\ell=1,2, \cdots, m_k\right)$$ are mutually independent Gaussian random variables with mean zero and variance equal to 1/2. The joint PDF of [TeX:] $$\mathbf{X}_0^{(k)} \text{ and } \mathbf{Y}_0^{(k)}$$ is given by

(13)
[TeX:] $$\begin{aligned} f_{\mathbf{X}_0^{(k)}, \mathbf{Y}_0^{(k)}}\left(\mathbf{X}_0^{(k)}, \mathbf{Y}_0^{(k)}\right) & =f_{\mathbf{X}_0^{(k)}}\left(X_{01}, \cdots, X_{0 m_k}\right) f_{\mathbf{Y}_0^{(k)}}\left(Y_{01}, \cdots, Y_{0 m_k}\right) \\ & =\frac{1}{\pi^{m_k}} \exp \left[-\sum_{\ell=1}^{m_k}\left(X_{0 \ell}^2+Y_{0 \ell}^2\right)\right]. \end{aligned}$$

Without any loss of generality we assume that [TeX:] $$m_1 \leq m_2 \leq \cdots \leq m_N.$$ Therefore, [TeX:] $$\max \left\{m_1, m_2, \cdots, m_N\right\}=m_N.$$ Since the random vectors [TeX:] $$\mathbf{X}_0^{(k)} \text { and } \mathbf{Y}_0^{(k)}(k=1,2, \cdots, N)$$ have some common variables and for a purpose of the analysis, the random vectors [TeX:] $$\mathbf{X}_0 \text { and } \mathbf{Y}_0$$ may be represented as [TeX:] $$\mathbf{X}_0=\left[X_{01}, X_{02}, \cdots, X_{0 m_N}\right]^T$$ and [TeX:] $$\mathbf{Y}_0=\left[Y_{01}, Y_{02}, \cdots, Y_{0 m_N}\right]^T .$$ Therefore, the joint PDF of [TeX:] $$\mathbf{X}_0 \text { and } \mathbf{Y}_0$$ can be written as

(14)
[TeX:] $$\begin{aligned} f_{\mathbf{X}_0, \mathbf{Y}_0}\left(\mathbf{X}_0, \mathbf{Y}_0\right) & =f_{\mathbf{X}_0}\left(X_{01}, \cdots, X_{0 m_N}\right) f_{\mathbf{Y}_0}\left(Y_{01}, \cdots, Y_{0 m_N}\right) \\ & =\frac{1}{\pi^{m_N}} \exp \left[-\sum_{\ell=1}^{m_N}\left(X_{0 \ell}^2+Y_{0 \ell}^2\right)\right]. \end{aligned}$$

In order to obtain the PDF and CDF of [TeX:] $$Y_N,$$ the joint PDF of the correlated RVs [TeX:] $$R_k(k=1,2, \cdots, N)$$ must be obtained. To this end, we first define an auxiliary RV [TeX:] $$D_N,$$ whose PDF is the conditional PDF of [TeX:] $$Y_N \text { given } \mathbf{X}_0 \text { and } \mathbf{Y}_0.$$ Then, the intended PDF of [TeX:] $$Y_N$$ can be directly obtained by averaging the PDF of [TeX:] $$D_N$$ over the joint PDF of [TeX:] $$\mathbf{X}_0 \text { and } \mathbf{Y}_0.$$ In this case, conditioned on [TeX:] $$X_{0 \ell} \text { and } Y_{0 \ell}(\ell=\left.1,2, \cdots, m_k\right),$$ the real and imaginary parts of the RVs [TeX:] $$G_{k \ell}$$ have equal variance [TeX:] $$\sigma_k^2\left(1-\lambda_k^2\right) / 2$$ and means equal to [TeX:] $$\sigma_k \lambda_k X_{0 \ell} \text { and } \sigma_k \lambda_k Y_{0 \ell},$$ respectively. Consequently, from (3), the conditional PDF of [TeX:] $$V_k$$ on [TeX:] $$\mathbf{X}_0^{(k)} \text { and } \mathbf{Y}_0^{(k)}(k=1,2, \cdots, N)$$ follows a non-central chi-square distribution [34, eq. (2.3-29)], that is

(15)
[TeX:] $$\begin{aligned} f_{Q_k}(v) & =f_{V_k \mid \mathbf{X}_0^{(k)}, \mathbf{Y}_0^{(k)}}\left(v \mid \mathbf{X}_0^{(k)}, \mathbf{Y}_0^{(k)}\right) \\ & =\frac{1}{2 \Lambda_k^2}\left(\frac{v}{S_k^2}\right)^{\frac{g_k}{2}} \exp \left(-\frac{S_k^2+v}{2 \Lambda_k^2}\right) I_{g_k}\left(\frac{S_k}{\Lambda_k^2} \sqrt{v}\right), \end{aligned}$$

where [TeX:] $$g_k=m_k-1, S_k^2=\sigma_k^2 \lambda_k^2 \sum_{\ell=1}^{m_k}\left(X_{0 \ell}^2+Y_{0 \ell}^2\right),$$ [TeX:] $$\Lambda_k^2=\frac{1}{2} \sigma_k^2\left(1-\lambda_k^2\right),$$ and [TeX:] $$I_{g_k}(\cdot)$$ is the modified Bessel function of the first kind and order [TeX:] $$g_k$$ [38, p. 374]. By using a simple transformation of random variables, the conditional PDF of [TeX:] $$R_k=\sqrt{V_k} \text { on } X_{0 \ell} \text { and } Y_{0 \ell}\left(\ell=1,2, \cdots, m_k\right)$$ (i.e., [TeX:] $$W_k=\sqrt{Q_k}$$) may be expressed as [34, eq. (2.3-64)]

(16)
[TeX:] $$\begin{aligned} f_{W_k}(r) & =f_{R_k \mid \mathbf{X}_0^{(k)}, \mathbf{Y}_0^{(k)}}\left(r \mid \mathbf{X}_0^{(k)}, \mathbf{Y}_0^{(k)}\right) \\ & =\frac{1}{\Lambda_k^2} \frac{r^{m_k}}{S_k^{g_k}} \exp \left(-\frac{S_k^2+r^2}{2 \Lambda_k^2}\right) I_{g_k}\left(\frac{S_k}{\Lambda_k^2} r\right) . \end{aligned}$$

The auxiliary RV [TeX:] $$\left.D_N \text { (i.e. } f_{D_N}(r)=f_{Y_N \mid \mathbf{X}_0, \mathbf{Y}_0}\left(r \mid \mathbf{X}_0, \mathbf{Y}_0\right)\right)$$ can be rearranged and represented as

(17)
[TeX:] $$D_N=\prod_{k=1}^N W_k.$$

Now, the PDF of [TeX:] $$Y_N$$ may be expressed as

(18)
[TeX:] $$f_{\mathbf{Y}_N}(r)=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{D_N}(r) f_{\mathbf{X}_0, \mathbf{Y}_0}\left(\mathbf{X}_0, \mathbf{Y}_0\right) d \mathbf{X}_0 d \mathbf{Y}_0,$$

where the bold integrals above are [TeX:] $$m_N$$-fold integrals. Following the procedure described in [20], starting with the product of two RVs, the PDF of [TeX:] $$D_2=W_1 W_2$$ can be obtained by employing the method of transformation for the product of two RVs [41], that is

(19)
[TeX:] $$\begin{aligned} & f_{D_2}(r)=\int_{-\infty}^{\infty} \frac{1}{\left|r_2\right|} f_{W_1}\left(\frac{r}{r_2}\right) f_{W_2}\left(r_2\right) d r_2 \\ & =\int_0^{\infty} \frac{r^{m_1} r_2^{m_2-m_1-1}}{S_1^{g_1} S_2^{g_2} \Lambda_1^2 \Lambda_2^2} \exp \left(-\frac{r^2}{2 r_2^2 \Lambda_1^2}-\frac{r_2^2}{2 \Lambda_2^2}\right) \\ & \times \exp \left(-\frac{S_1^2}{2 \Lambda_1^2}-\frac{S_2^2}{2 \Lambda_2^2}\right) I_{g_1}\left(\frac{S_1}{\Lambda_1^2} \frac{r}{r_2}\right) I_{g_2}\left(\frac{S_2}{\Lambda_2^2} r_2\right) d r_2 \\ & =\int_0^{\infty} \frac{4 r^{m_1} r_2^{m_2-m_1-1}}{S_1^{g_1} S_2^{g_2} \mathcal{T}_1 \mathcal{T}_2} \exp \left[-\left(\frac{r^2}{r_2^2 \mathcal{T}_1}+\frac{r_2^2}{\mathcal{T}_2}\right)\right] \\ & \times \exp \left(-h_1 \sum_{\ell=1}^{m_1}\left(x_{0 \ell}^2+y_{0 \ell}^2\right)-h_2 \sum_{\ell=1}^{m_2}\left(x_{0 \ell}^2+y_{0 \ell}^2\right)\right) \\ & \times I_{g_1}\left(\frac{2 \lambda_1 r \sqrt{\sum_{\ell=1}^{m_1}\left(x_{0 \ell}^2+y_{0 \ell}^2\right)}}{\sigma_1\left(1-\lambda_1^2\right) r_2}\right) \\ & \times I_{g_2}\left(\frac{2 \lambda_2 r_2 \sqrt{\sum_{\ell=1}^{m_2}\left(x_{0 \ell}^2+y_{0 \ell}^2\right)}}{\sigma_2\left(1-\lambda_2^2\right)}\right) d r_2, \\ & \end{aligned}$$

where [TeX:] $$\mathcal{T}_i=\sigma_i^2\left(1-\lambda_i^2\right) \text { and } h_i=\lambda_i^2 /\left(1-\lambda_i^2\right).$$ Now by substituting (14) and (19) into (18) and with a simple mathematical manipulation the PDF of [TeX:] $$Y_2$$ can be expressed as

(20)
[TeX:] $$\begin{aligned} f_{Y_2}(r)= & \int_0^{\infty} \int_\mathbf{T} \frac{4 r^{m_1} r_2^{m_2-m_1-1}}{B_1^{g_1} B_2^{g_2} \mathcal{C}_2 \mathcal{E}_2} \exp \left[-\left(\frac{r^2}{r_2^2 \mathcal{T}_1}+\frac{r_2^2}{\mathcal{T}_2}\right)\right] \\ & \times \frac{1}{\pi^{m_2}} \exp \left(-h_1 \sum_{\ell=1}^{2 m_1} t_{\ell}^2-\left(h_2+1\right) \sum_{\ell=1}^{2 m_2} t_{\ell}^2\right) \\ & \times I_{g_1}\left(\frac{2 \lambda_1 r B_1}{\sigma_1\left(1-\lambda_1^2\right) r_2}\right) I_{g_2}\left(\frac{2 \lambda_2 r_2 B_2}{\sigma_2\left(1-\lambda_2^2\right)}\right) d \mathbf{T} d r_2, \end{aligned}$$

where [TeX:] $$B_i=\sqrt{\sum_{\ell=1}^{2 m_i} t_{\ell}^2}, \mathcal{C}_n=\prod_{j=1}^n\left(\sigma_j \lambda_j\right)^{g_j}, \mathcal{E}_n=\prod_{j=1}^n \mathcal{T}_j$$ and [TeX:] $$\mathbf{T}=\left[t_1, t_2, \cdots, t_{2 m_N}\right].$$ In (20) the inner [TeX:] $$2 m_N$$-fold integral on [TeX:] $$\mathbf{T}$$ may be defined as

[TeX:] $$\int_{\mathbf{T}}(\quad) d \mathbf{T}=\int_{t_1} \int_{t_2} \cdots \int_{t_{2 m_N}}(\quad) d t_1 d t_2 \cdots d t_{2 m_N}.$$

We can express (20) in terms of the infinite series representation for the modified Bessel function of the [TeX:] $$g_k$$th order and first kind [34, eq. (2.3.31)], defined by

(21)
[TeX:] $$I_{g_i}(z)=\sum_{k=0}^{\infty} \frac{(z / 2)^{2 k+g_i}}{k ! \Gamma\left(k+g_i+1\right)}, \quad z \geq 0.$$

Therefore, (20) can be expressed as

[TeX:] $$\begin{aligned} f_{Y_2}(r)= & \int_0^{\infty} \int_{\mathbf{T}} \frac{4 r^{m_1} r_2^{m_2-m_1-1}}{B_1^{g_1} B_2^{g_2} \mathcal{C}_2 \mathcal{E}_2} \exp \left[-\left(\frac{r^2}{r_2^2 \mathcal{T}_1}+\frac{r_2^2}{\mathcal{T}_2}\right)\right] \\ & \times \frac{1}{\pi^{m_2}} \exp \left(-h_1 \sum_{\ell=1}^{2 m_1} t_{\ell}^2-\left(h_2+1\right) \sum_{\ell=1}^{2 m_2} t_{\ell}^2\right) \\ & \times \sum_{k_1=0}^{\infty} \frac{1}{k_{1} ! \Gamma\left(k_1+m_1\right)}\left(\frac{\lambda_1 r B_1}{\sigma_1\left(1-\lambda_1^2\right) r_2}\right)^{2 k_1+g_1} \\ & \times \sum_{k_2=0}^{\infty} \frac{1}{k_{2} ! \Gamma\left(k_2+m_2\right)}\left(\frac{\lambda_2 r_2 B_2}{\sigma_2\left(1-\lambda_2^2\right)}\right)^{2 k_2+g_2} d \mathbf{T} d r_2, \end{aligned}$$

by performing the change of variable and after some manipulations we obtain

(22)
[TeX:] $$\begin{aligned} f_{Y_2}(r)= & \int_0^{\infty} \int_\mathbf{T} \frac{4}{\mathcal{C}_2 \mathcal{E}_2} \exp \left[-\left(\frac{r^2}{r_2^2 \mathcal{T}_1}+\frac{r_2^2}{\mathcal{T}_2}\right)\right] \\ & \times \frac{1}{\pi^{m_2}} \exp \left(-h_1 \sum_{\ell=1}^{2 m_1} t_{\ell}^2-\left(h_2+1\right) \sum_{\ell=1}^{2 m_2} t_{\ell}^2\right) \\ & \times \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty}\left(\sum_{\ell=1}^{2 m_1} t_{\ell}^2\right)^{k_1}\left(\sum_{\ell=1}^{2 m_2} t_{\ell}^2\right)^{k_2} \\ & \times r^{2 k_1+2 m_1-1} r_2^{2\left(k_2-k_1\right)+2\left(m_2-m_1\right)-1} \\ & \times\left[\prod_{j=1}^2 \frac{1}{k_{j} ! \Gamma\left(k_j+m_j\right)}\left(\frac{\lambda_j}{\left(1-\lambda_j^2\right) \sigma_j}\right)^{2 k_j+g_j}\right] d \mathbf{T} d r_2 . \end{aligned}$$

The inner [TeX:] $$2 m_N$$-fold integral on [TeX:] $$\mathbf{T}$$ may be written as

(23)
[TeX:] $$\begin{aligned} I_2= & \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \frac{1}{\pi^{m_2}} \prod_{i=1}^2\left[\left(\sum_{\ell=1}^{2 m_i} t_{\ell}^2\right)^{k_i} \exp \left(-\alpha_i \sum_{\ell=1}^{2 m_i} t_{\ell}^2\right)\right] \\ & \times d t_1 d t_2 \cdots d t_{2 m_2}, \end{aligned}$$

where [TeX:] $$\alpha_1=h_1 \text { and } \alpha_2=h_2+1 \text {. }$$ To the best of the authors' knowledge, there is no reported analytical solution for the integral in (23). However, following the procedure given in Appendix A, the solution of the integral in (23) can be obtained as

(24)
[TeX:] $$\begin{aligned} I_2= & \frac{1}{\Gamma\left(m_1\right) \Gamma\left(m_2-m_1\right)} \sum_{i_2=0}^{k_2}\left(\begin{array}{c} k_2 \\ i_2 \end{array}\right) \\ & \times \frac{\Gamma\left(k_1+m_1+i_2\right)}{\left(1+h_1+h_2\right)^{k_1+m_1+i_2}} \frac{\Gamma\left(k_2+m_2-m_1-i_2\right)}{\left(1+h_2\right)^{k_2+m_2-m_1-i_2}} . \end{aligned}$$

Therefore, one can write (22) as

(25)
[TeX:] $$\begin{aligned} f_{Y_2}(r)= & \int_0^{\infty} \frac{4}{\mathcal{C}_2 \mathcal{E}_2} \exp \left[-\left(\frac{r^2}{r_2^2 \mathcal{T}_1}+\frac{r_2^2}{\mathcal{T}_2}\right)\right] \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} I_2 \\ & \times r^{2 k_1+2 m_1-1} r_2^{2\left(k_2-k_1\right)+2\left(m_2-m_1\right)-1} \\ & \times\left[\prod_{j=1}^2 \frac{1}{k_{j} ! \Gamma\left(k_j+m_j\right)}\left(\frac{\lambda_j}{\left(1-\lambda_j^2\right) \sigma_j}\right)^{2 k_j+g_j}\right] d r_2 . \end{aligned}$$

By using the relation [36, eq. (8.4.3.1)], defined by

(26)
[TeX:] $$e^{-z}=G_{0,1}^{1,0}\left(z \left\lvert\, \begin{array}{c} - \\ 0 \end{array}\right.\right),$$

where [TeX:] $$G_{p, q}^{m, n}\left(z \left\lvert\, \begin{array}{l} a_r \\ b_s \end{array}\right.\right)$$ is the Meijer G-function [37, eq. (9.301)], then, (25) can be written as

(27)
[TeX:] $$\begin{aligned} f_{Y_2}(r)= & \int_0^{\infty} \frac{4}{\mathcal{C}_2 \mathcal{E}_2} \exp \left(-\frac{r_2^2}{\mathcal{T}_2}\right) G_{0,1}^{1,0}\left(\left.\frac{r^2}{r_2^2 \mathcal{T}_1} \right\rvert\, \frac{-}{0}\right) \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} I_2 \\ & \times r^{2 k_1+2 m_1-1} r_2^{2\left(k_2-k_1\right)+2\left(m_2-m_1\right)-1} \\ & \times\left[\prod_{j=1}^2 \frac{1}{k_{j} ! \Gamma\left(k_j+m_j\right)}\left(\frac{\lambda_j}{\left(1-\lambda_j^2\right) \sigma_j}\right)^{2 k_j+g_j}\right] d r_2 . \end{aligned}$$

By using the following properties [37, eq. (9.31-2) and eq. (9.31-5)],

(28)
[TeX:] $$G_{p, q}^{m, n}\left(z^{-1} \left\lvert\, \begin{array}{l} a_r \\ b_s \end{array}\right.\right)=G_{q, p}^{n, m}\left(z \left\lvert\, \begin{array}{c} 1-b_s \\ 1-a_r \end{array}\right.\right),$$

(29)
[TeX:] $$z^k G_{p, q}^{m, n}\left(z \left\lvert\, \begin{array}{l} \mathbf{a} \\ \mathbf{b} \end{array}\right.\right)=G_{p, q}^{m, n}\left(z \left\lvert\, \begin{array}{l} k+\mathbf{a} \\ k+\mathbf{b} \end{array}\right.\right)$$

and after making the change of variable [TeX:] $$u_2=r_2^2 \text {, }$$ the integral in (27) becomes

(30)
[TeX:] $$\begin{aligned} f_{Y_2}(r)= & \int_0^{\infty} \frac{2}{\mathcal{C}_2 \mathcal{E}_2} \exp \left(-\frac{u_2}{\mathcal{T}_2}\right) G_{1,0}^{0,1}\left(\left.\frac{u_2 \mathcal{T}_1}{r^2} \right\rvert\, \begin{array}{c} 1 \\ - \end{array}\right) \\ & \times \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} I_2 r^{2 k_1+2 m_1-1} u_2^{-\left(k_1-k_2+m_1-m_2+1\right)} \\ & \times\left[\prod_{j=1}^2 \frac{1}{k_{j} ! \Gamma\left(k_j+m_j\right)}\left(\frac{\lambda_j}{\left(1-\lambda_j^2\right) \sigma_j}\right)^{2 k_j+g_j}\right] d u_2 . \end{aligned}$$

Consequently, and with the aid of [37, eq. (7.813-1)], the single integral in (30) can be evaluated as

(31)
[TeX:] $$\begin{aligned} & f_{Y_2}(r)=\frac{2}{\mathcal{C}_2 \mathcal{E}_2} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} I_2 \frac{r^{2 k_1+2 m_1-1}}{\mathcal{T}_2^{k_1-k_2+m_1-m_2}} \\ & \times G_{2,0}^{0,2}\left(\frac{\mathcal{E}_2}{r^2} \left\lvert\, \begin{array}{c} k_1-k_2+m_1-m_2+1,1 \\ - \end{array}\right.\right) \\ & \times\left[\prod_{j=1}^2 \frac{1}{k_{j} ! \Gamma\left(k_j+m_j\right)}\left(\frac{\lambda_j}{\left(1-\lambda_j^2\right) \sigma_j}\right)^{2 k_j+g_j}\right] . \\ & \end{aligned}$$

After making some mathematical manipulations, (31) can be expressed as

(32)
[TeX:] $$\begin{aligned} f_{Y_2}(r)= & \frac{2}{r} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} I_2\left(\frac{r^2}{\mathcal{E}_2}\right)^{k_1+m_1} \\ & \times G_{2,0}^{0,2}\left(\left.\frac{\mathcal{E}_2}{r^2} \right\rvert\, \begin{array}{c} k_1-k_2+m_1-m_2+1,1 \\ - \end{array}\right) \\ & \times\left[\prod_{j=1}^2 \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] . \end{aligned}$$

Using (28) and (29), the PDF for the product of two arbitrarily correlated Nakagami-m RVs may be written as

(33)
[TeX:] $$\begin{aligned} f_{Y_2}(r)= & \frac{2}{r} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} I_2 G_{0,2}^{2,0}\left(\frac{r^2}{\mathcal{E}_2} \left\lvert\, \begin{array}{c} - \\ \mathbf{M}_2 \end{array}\right.\right) \\ & \times\left[\prod_{j=1}^2 \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right], \end{aligned}$$

where [TeX:] $$\mathbf{M}_N=\left[k_N+m_N, k_{N-1}+m_{N-1}, \cdots, k_1+m_1\right] .$$
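
The double series in (33) is straightforward to evaluate numerically. The sketch below sums the truncated series with mpmath's Meijer G implementation and compares it with a crude Monte Carlo density estimate of [TeX:] $$Y_2=R_1 R_2.$$ The truncation order K_MAX and all channel parameters are illustrative choices of ours, and [TeX:] $$m_1 \lt m_2$$ is assumed so that the coefficient [TeX:] $$I_2$$ in (24) applies.

```python
import numpy as np
from math import comb, factorial
from mpmath import gamma, meijerg

# Sketch: truncated evaluation of the double series (33) for f_{Y_2}(r), compared
# with a Monte Carlo density estimate of Y_2 = R_1 R_2.  K_MAX and the channel
# parameters are illustrative (m_1 < m_2 assumed, as for I_2 in (24)).
m1, m2 = 1, 2
lam1, lam2 = 0.5, 0.7
sig1, sig2 = 1.0, 1.2
h1, h2 = lam1**2 / (1 - lam1**2), lam2**2 / (1 - lam2**2)   # h_j in (19)
E2 = sig1**2 * (1 - lam1**2) * sig2**2 * (1 - lam2**2)      # E_2 = T_1 T_2
K_MAX = 25

def I2(k1, k2):
    # coefficient I_2 of (24)
    s = 0
    for i2 in range(k2 + 1):
        s += (comb(k2, i2)
              * gamma(k1 + m1 + i2) / (1 + h1 + h2)**(k1 + m1 + i2)
              * gamma(k2 + m2 - m1 - i2) / (1 + h2)**(k2 + m2 - m1 - i2))
    return s / (gamma(m1) * gamma(m2 - m1))

def pdf_Y2(r):
    # truncated series (33); the Meijer G is G^{2,0}_{0,2}(r^2/E_2 | - ; k2+m2, k1+m1)
    total = 0
    for k1 in range(K_MAX):
        for k2 in range(K_MAX):
            w = (h1**k1 / (factorial(k1) * gamma(k1 + m1))
                 * h2**k2 / (factorial(k2) * gamma(k2 + m2)))
            total += I2(k1, k2) * w * meijerg([[], []], [[k2 + m2, k1 + m1], []],
                                              r**2 / E2)
    return float(2 / r * total)

# Monte Carlo reference, using the construction of Section II-A.
rng = np.random.default_rng(2)
Ns = 200_000
X0 = rng.normal(0, np.sqrt(0.5), (Ns, m2)); Y0 = rng.normal(0, np.sqrt(0.5), (Ns, m2))
R = []
for mk, lamk, sigk in ((m1, lam1, sig1), (m2, lam2, sig2)):
    Xk = rng.normal(0, np.sqrt(0.5), (Ns, mk)); Yk = rng.normal(0, np.sqrt(0.5), (Ns, mk))
    gx = sigk * (np.sqrt(1 - lamk**2) * Xk + lamk * X0[:, :mk])
    gy = sigk * (np.sqrt(1 - lamk**2) * Yk + lamk * Y0[:, :mk])
    R.append(np.sqrt(np.sum(gx**2 + gy**2, axis=1)))
Y2 = R[0] * R[1]

for r in (0.5, 1.0, 2.0):
    emp = np.mean(np.abs(Y2 - r) < 0.05) / 0.10        # crude density estimate at r
    print(f"r={r}: series {pdf_Y2(r):.4f}, Monte Carlo ~{emp:.4f}")
```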

By extending the analysis described above for N = 2 into the case of N = 3, the PDF of [TeX:] $$D_3$$ may be expressed as

(34)
[TeX:] $$f_{D_3}(r)=\int_{-\infty}^{\infty} \frac{1}{\left|r_3\right|} f_{D_2}\left(\frac{r}{r_3}\right) f_{W_3}\left(r_3\right) d r_3 .$$

Now by substituting (14) and (34) into (18) and following the same procedure for the case of N = 2, the PDF of [TeX:] $$Y_3$$ can be expressed as

(35)
[TeX:] $$\begin{aligned} & f_{Y_3}(r)=\int_0^{\infty} \int_0^{\infty} \int_{\Gamma} \frac{8}{\mathcal{C}_3 \mathcal{E}_3} \exp \left[-\left(\frac{r^2}{r_2^2 r_3^2 \mathcal{T}_1}+\frac{r_2^2}{\mathcal{T}_2}+\frac{r_3^2}{\mathcal{T}_3}\right)\right] \\ & \times \frac{1}{\pi^{m_3}} \exp \left(-h_1 \sum_{\ell=1}^{2 m_1} t_{\ell}^2-h_2 \sum_{\ell=1}^{2 m_2} t_{\ell}^2-\left(h_3+1\right) \sum_{\ell=1}^{2 m_3} t_{\ell}^2\right) \\ & \times \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \sum_{k_3=0}^{\infty}\left(\sum_{\ell=1}^{2 m_1} t_{\ell}^2\right)^{k_1}\left(\sum_{\ell=1}^{2 m_2} t_{\ell}^2\right)^{k_2}\left(\sum_{\ell=1}^{2 m_3} t_{\ell}^2\right)^{k_3} \\ & \times r^{2 k_1+2 m_1-1} r_2^{2\left(k_2-k_1\right)+2\left(m_2-m_1\right)-1} \\ & \times r_3^{2\left(k_3-k_1\right)+2\left(m_3-m_1\right)-1} \\ & \times\left[\prod_{j=1}^3 \frac{1}{k_{j} ! \Gamma\left(k_j+m_j\right)}\left(\frac{\lambda_j}{\left(1-\lambda_j^2\right) \sigma_j}\right)^{2 k_j+g_j}\right] \\ & \times \quad d \mathbf{T} d r_2 d r_3 . \\ & \end{aligned}$$

Then, the PDF of the product of three arbitrarily correlated Nakagami-m RVs can be similarly found as

(36)
[TeX:] $$\begin{aligned} f_{Y_3}(r)= & \frac{2}{r} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \sum_{k_3=0}^{\infty} I_3 G_{0,3}^{3,0}\left(\left.\frac{r^2}{\mathcal{E}_3} \rvert\, \begin{array}{c} - \\ \mathbf{M}_3 \end{array}\right.\right)\\ & \times\left[\prod_{j=1}^3 \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right], \end{aligned}$$

where by following the procedure given in Appendix A, [TeX:] $$I_3$$ can be obtained as

[TeX:] $$\begin{aligned} I_3= & \frac{1}{\Gamma\left(m_1\right) \Gamma\left(m_2-m_1\right) \Gamma\left(m_3-m_2\right)} \sum_{i_2=0}^{k_3} \sum_{i_3=0}^{k_2+i_2}\left(\begin{array}{l} k_3 \\ i_2 \end{array}\right) \\ & \times\left(\begin{array}{c} k_2+i_2 \\ i_3 \end{array}\right) \frac{\Gamma\left(k_1+m_1+i_3\right)}{\left(1+h_1+h_2+h_3\right)^{k_1+m_1+i_3}} \\ & \times \frac{\Gamma\left(k_2+m_2-m_1+i_2-i_3\right)}{\left(1+h_2+h_3\right)^{k_2+m_2-m_1+i_2-i_3}} \\ & \times \frac{\Gamma\left(k_3+m_3-m_2-i_2\right)}{\left(1+h_3\right)^{k_3+m_3-m_2-i_2} }. \end{aligned}$$

Thus, using the same approach recursively, we can find the PDF for [TeX:] $$Y_4, Y_5, \cdots, Y_N,$$ resulting in

(37)
[TeX:] $$\begin{aligned} f_{Y_N}(r)= & \frac{2}{r} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \ldots \sum_{k_N=0}^{\infty} I_N G_{0, N}^{N, 0}\left(\left.\frac{r^2}{\mathcal{E}_N} \rvert\, \begin{array}{c}-\\ \mathbf{M}_N \end{array}\right.\right)\\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right], \end{aligned}$$

where as given in Appendix A, [TeX:] $$I_N$$ can be obtained as

[TeX:] $$\begin{aligned} I_N= & c_N \sum_{i_2=0}^{k_N} \sum_{i_3=0}^{k_{N-1}+i_2} \sum_{i_4=0}^{k_{N-2}+i_3} \cdots \sum_{i_N=0}^{k_2+i_{N-1}} \\ & \times\left(\begin{array}{c} k_N \\ i_2 \end{array}\right)\left(\begin{array}{c} k_{N-1}+i_2 \\ i_3 \end{array}\right)\left(\begin{array}{c} k_{N-2}+i_3 \\ i_4 \end{array}\right) \cdots\left(\begin{array}{c} k_2+i_{N-1} \\ i_N \end{array}\right) \\ & \times \frac{\Gamma\left(k_1+n_1+i_N\right)}{\mu_1^{k_1+n_1+i_N}} \frac{\Gamma\left(k_2+n_2+i_{N-1}-i_N\right)}{\mu_2^{k_2+n_2+i_{N-1}-i_N}} \cdots \\ & \times \cdots \frac{\Gamma\left(k_{N-1}+n_{N-1}+i_2-i_3\right)}{\mu_{N-1}^{k_{N-1}+n_{N-1}+i_2-i_3}} \frac{\Gamma\left(k_N+n_N-i_2\right)}{\mu_N^{k_N+n_N-i_2}} \end{aligned}$$

with

[TeX:] $$\begin{gathered} n_{\ell}= \begin{cases}m_1 & \ell=1 \\ m_{\ell}-m_{\ell-1} & \ell=2,3, \ldots, N\end{cases} \quad,\\ \mu_{\ell}=1+\sum_{n=\ell}^N h_n \quad \text { and } \quad c_N=\prod_{\ell=1}^N \frac{1}{\Gamma\left(n_{\ell}\right)} . \end{gathered}$$
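
Because the nested sums defining [TeX:] $$I_N$$ are cumbersome to write out for large N, a direct numerical evaluation is sketched below. The function name and argument conventions are ours, N ≥ 2 is required, and strictly increasing [TeX:] $$m_1 \lt m_2 \lt \cdots \lt m_N$$ is assumed so that every [TeX:] $$\Gamma\left(n_{\ell}\right)$$ is finite.

```python
from math import comb
from mpmath import gamma

def coeff_I_N(k, m, h):
    """Numerical evaluation of the coefficient I_N appearing in (37), for N >= 2.

    k : series indices [k_1, ..., k_N]
    m : fading severities [m_1, ..., m_N], assumed strictly increasing here so
        that every Gamma(n_l) is finite
    h : [h_1, ..., h_N] with h_j = lambda_j^2 / (1 - lambda_j^2)
    (Function name and argument conventions are ours, not the paper's.)
    """
    N = len(k)
    n = [m[0]] + [m[l] - m[l - 1] for l in range(1, N)]     # n_l
    mu = [1 + sum(h[l:]) for l in range(N)]                  # mu_l = 1 + sum_{i >= l} h_i
    cN = 1
    for nl in n:
        cN /= gamma(nl)                                      # c_N = prod 1/Gamma(n_l)

    total = 0
    def nest(t, prev_i, weight):
        # t = 0 handles the index i_2 (paired with branch N); t = N-2 handles i_N.
        nonlocal total
        upper = k[N - 1] if t == 0 else k[N - 1 - t] + prev_i
        for i in range(upper + 1):
            w = weight * comb(upper, i)
            if t == 0:
                e = k[N - 1] + n[N - 1] - i                  # factor for branch N
            else:
                l = N - 1 - t                                # 0-based branch index
                e = k[l] + n[l] + prev_i - i                 # factor for branch l+1
            w *= gamma(e) / mu[N - 1 - t]**e
            if t == N - 2:
                e1 = k[0] + n[0] + i                         # closing factor, branch 1
                total += w * gamma(e1) / mu[0]**e1
            else:
                nest(t + 1, i, w)
    nest(0, 0, cN)
    return total

# Sanity checks: with all lambda_k = 0 (h = 0) and all k_j = 0 the coefficient is 1,
# matching the independent special case discussed below.
print(coeff_I_N([0, 0, 0], [1, 2, 3], [0.0, 0.0, 0.0]))   # -> 1.0
print(coeff_I_N([1, 2], [1, 2], [1/3, 0.96]))             # a generic N = 2 value
```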

We note that, for a set of identical fading severity parameters [TeX:] $$m_k=m(k=1,2, \cdots, N), I_N$$ can be expressed as [TeX:] $$I_N=\Gamma^{-1}(m) \Gamma\left(\mathcal{K}_N+m\right) \mathcal{H}_N^{-\left(\mathcal{K}_N+m\right)}$$ with [TeX:] $$\mathcal{K}_N=\sum_{n=1}^N k_n$$ and [TeX:] $$\mathcal{H}_N=1+\sum_{n=1}^N h_n .$$ For the special case of a product of N independent Nakagami-m RVs, it can be easily shown that [TeX:] $$I_N=1.$$ In particular, independent Nakagami-m RVs can be considered as a special case of the correlated Nakagami-m RVs considered here, obtained by setting [TeX:] $$\lambda_k=0(k=1,2, \cdots, N);$$ in this case (37) simplifies to

[TeX:] $$f_{Y_N}(r)=\frac{2}{r \prod_{j=1}^N \Gamma\left(m_j\right)} G_{0, N}^{N, 0}\left(\left.r^2 \prod_{j=1}^N \frac{m_j}{\Omega_j}\right\rvert\, \begin{array}{c}-\\ {m_N, \cdots, m_1} \end{array}\right),$$

with [TeX:] $$\Omega_j=m_j \sigma_j^2,$$ which is identical to [17, eq. (4)].

Obviously, if we consider [TeX:] $$m_k=1(k=1,2, \cdots, N)$$ in (37) and use the property (29), we obtain an expression for the PDF of the product of N arbitrarily correlated Rayleigh RVs, hence

[TeX:] $$\begin{aligned} f_{Y_N}(r)= & \frac{2 r}{\mathcal{E}_N} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} G_{0, N}^{N, 0}\left(\left.\frac{r^2}{\mathcal{E}_N} \right\rvert\, \begin{array}{c}-\\ \mathbf{K}_N \end{array}\right) \\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{\left(k_{j} !\right)^2}\right] \Gamma\left(\mathcal{K}_N+1\right) \mathcal{H}_N^{-\left(\mathcal{K}_N+1\right)}, \end{aligned}$$

where [TeX:] $$\mathbf{K}_N=\left[k_N, k_{N-1}, \cdots, k_1\right] \text {, }$$ which is equivalent to [27, eq. (17)]. Other choices of the parameters can, of course, be considered in order to verify our result.

Using the standard definition of the cumulative distribution function (CDF), the CDF for the product of N arbitrarily correlated Nakagami-m RVs is obtained as

(38)
[TeX:] $$\begin{aligned} F_{Y_N}(r)= & \int_0^r f_{Y_N}(y) d y \\ = & \int_0^r \frac{2}{y} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N G_{0, N}^{N, 0}\left(\left.\frac{y^2}{\mathcal{E}_N} \right\rvert\, \begin{array}{c} -\\ {\mathbf{M}}_N \end{array}\right) \\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] d y . \end{aligned}$$

Obviously, by using the change of variable [TeX:] $$x=y^2,$$ the integral in (38) becomes

(39)
[TeX:] $$\begin{aligned} F_{Y_N}(r)= & \int_0^{r^2} \frac{1}{x} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N G_{0, N}^{N, 0}\left(\left.\frac{x}{\mathcal{E}_N} \right\rvert\, \begin{array}{c} - \\ \mathbf{M}_N \end{array}\right)\\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] d x, \end{aligned}$$

and with the aid of [39, eq. (26)], the CDF of the product of N arbitrarily correlated Nakagami-m RVs can be evaluated as

(40)
[TeX:] $$\begin{aligned} F_{Y_N}(r)= & \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N G_{1, N+1}^{N, 1}\left(\frac{r^2}{\mathcal{E}_N} \left\lvert\, \begin{array}{c} 1 \\ \mathbf{M}_N, 0 \end{array}\right.\right) \\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] . \end{aligned}$$

For the special case of a product of N independent Nakagami-m RVs, i.e., [TeX:] $$\lambda_k=0(k=1,2, \cdots, N),$$ and with some mathematical manipulations, it can be easily shown that (40) reduces to

[TeX:] $$F_{Y_N}(r)=\frac{1}{\prod_{j=1}^N \Gamma\left(m_j\right)} G_{1, N+1}^{N, 1}\left(\left.r^2 \prod_{j=1}^N \frac{m_j}{\Omega_j}\right\rvert \begin{array}{c} 1 \\ {m_N, \cdots, m_1, 0} \end{array}\right),$$

which is identical to the expansion for the distribution function given by [17, eq. (7)].

For the special case of [TeX:] $$m_k=1(k=1,2, \cdots, N),$$ the expression in (40) reduces to the CDF of the product of N arbitrarily correlated Rayleigh RVs, that is

[TeX:] $$\begin{aligned} F_{Y_N}(r)= & \sum_{k_1=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} G_{1, N+1}^{N, 1}\left(\left.\frac{r^2}{\mathcal{E}_N}\right\rvert \begin{array}{c} 1 \\ {k_N+1, \cdots, k_1+1,0} \end{array}\right) \\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{\left(k_{j} !\right)^2}\right] \Gamma\left(\mathcal{K}_N+1\right) \mathcal{H}_N^{-\left(\mathcal{K}_N+1\right)}, \end{aligned}$$

which is equivalent to [27, eq. (21)].

III. APPLICATIONS AND PERFORMANCE ANALYSIS

Consider a digitally modulated signal transmitted over cascaded Nakagami-m fading channels. Due to the variety of distinct environments in wireless channels, the N sub-channels are assumed to have distinct distributions. It is assumed that the N sub-channels are arbitrarily correlated and non-identically distributed. Moreover, they are assumed to exhibit slow and flat fading. Therefore, if a signal s(t) with an average symbol energy E is transmitted over the cascaded channels, then the complex envelope of the received signal can be represented as [TeX:] $$r(t)=Y_N e^{-j \phi_N} s(t)+n(t),$$ where [TeX:] $$Y_N \text { and } \phi_N$$ represent the channel gain and channel phase, respectively, and n(t) denotes the complex envelope of the Gaussian noise process with zero mean and [TeX:] $$N_0$$ power spectral density. Consequently, the instantaneous SNR can be expressed as [TeX:] $$\gamma=Y_N^2 E / N_0$$ and the average SNR may be written as

(41)
[TeX:] $$\bar{\gamma}=\frac{E}{N_0} \mathbb{E}\left[Y_N^2\right]=\frac{E}{N_0} \mathbb{E}\left[\prod_{k=1}^N R_k^2\right].$$

Following the procedure given in [27], we obtain the second moment of [TeX:] $$Y_N$$ with the aid of the conditional RV [TeX:] $$D_N$$ as

(42)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Y_N^2\right] & =\int_{X_{0 \ell}} \int_{Y_{0 \ell}} \mathbb{E}\left[D_N^2\right] f_{X_{0 \ell}, Y_{0 \ell}}\left(x_{0 \ell}, y_{0 \ell}\right) \prod_{\ell=1}^{m_N} d x_{0 \ell} d y_{0 \ell} \\ & =\int_{X_{0 \ell}} \int_{Y_{0 \ell}} \mathbb{E}\left[\prod_{k=1}^N W_k^2\right] f_{X_{0 \ell}, Y_{0 \ell}}\left(x_{0 \ell}, y_{0 \ell}\right) \prod_{\ell=1}^{m_N} d x_{0 \ell} d y_{0 \ell} \\ & =\int_{X_{0 \ell}} \int_{Y_{0 \ell}} \prod_{k=1}^N \mathbb{E}\left[W_k^2\right] f_{X_{0 \ell}, Y_{0 \ell}}\left(x_{0 \ell}, y_{0 \ell}\right) \prod_{\ell=1}^{m_N} d x_{0 \ell} d y_{0 \ell} . \end{aligned}$$

It is well known that the second moment of [TeX:] $$W_k(k=1,2, \cdots, N)$$ can be obtained as [34, eq. (2.3-66)]

(43)
[TeX:] $$\mathbb{E}\left[W_k^2\right]=\sigma_k^2 \lambda_k^2 \sum_{\ell=1}^{m_k}\left(x_{0 \ell}^2+y_{0 \ell}^2\right)+m_k \sigma_k^2\left(1-\lambda_k^2\right).$$

Consequently, substituting (43) and (14) into (42), and as a result of some mathematical simplifications it is now possible to write the second moment of [TeX:] $$Y_N$$ in the form

(44)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Y_N^2\right]= & \varsigma_N \int_{\Gamma} \frac{1}{\pi^{m_N}} \exp \left(-\sum_{\ell=1}^{2 m_N} t_{\ell}^2\right) \\ & \times \prod_{k=1}^N\left[\lambda_k^2 \sum_{\ell=1}^{2 m_k} t_{\ell}^2+m_k\left(1-\lambda_k^2\right)\right] d t_1 d t_2 \cdots d t_{2 m_N}, \end{aligned}$$

where [TeX:] $$\varsigma_N=\prod_{n=1}^N \sigma_n^2.$$ Depending on the fading severity parameters, it suffices to find the solution of (44) in two cases. For the case of equal fading severity parameters, with [TeX:] $$m_1=m_2=\cdots=m_N=m \text {, }$$ the second moment of [TeX:] $$Y_N$$ in (44) becomes

(45)
[TeX:] $$\mathbb{E}\left[Y_N^2\right]=\frac{2 \varsigma_N}{\Gamma(m)} \int_0^{\infty} e^{-t^2} t^{2 m-1} \prod_{k=1}^N\left[\lambda_k^2 t^2+m\left(1-\lambda_k^2\right)\right] d t.$$

The solution to the integral given in (45) is obtained by using the hyper-spherical coordinate system transformation given in Appendix A. Letting [TeX:] $$u=t^2$$, the second moment of [TeX:] $$Y_N$$ in (45) reduces to

(46)
[TeX:] $$\mathbb{E}\left[Y_N^2\right]=\frac{\varsigma_N}{\Gamma(m)} \int_0^{\infty} e^{-u} u^{m-1} \prod_{k=1}^N\left[\lambda_k^2 u+m\left(1-\lambda_k^2\right)\right] d u.$$

The integral in (46) can be solved with the aid of [37, eq. (2.323)], that is

(47)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Y_N^2\right] & =\frac{-\varsigma_N}{\Gamma(m)} e^{-u} \\ & \times\left.\sum_{n=0}^{N+m-1} \frac{d^n}{d u^n}\left[u^{(m-1)} \prod_{k=1}^N\left[\lambda_k^2 u+m\left(1-\lambda_k^2\right)\right]\right]\right|_{u=0} ^{\infty} \\ & =\left.\frac{\varsigma_N}{\Gamma(m)} \sum_{n=0}^{N+m-1} \frac{d^n}{d u^n}\left[u^{(m-1)} \prod_{k=1}^N\left[\lambda_k^2 u+m\left(1-\lambda_k^2\right)\right]\right]\right|_{u=0} \\ & =\mathcal{A}_N \varsigma_N, \end{aligned}$$

where

[TeX:] $$\mathcal{A}_N=\left.\frac{1}{\Gamma(m)} \sum_{n=0}^{N+m-1} \frac{d^n}{d u^n}\left[u^{(m-1)} \prod_{k=1}^N\left[\lambda_k^2 u+m\left(1-\lambda_k^2\right)\right]\right]\right|_{u=0}.$$
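
For integer and equal fading severity m, the quantity [TeX:] $$\mathcal{A}_N$$ above can be obtained symbolically; the sketch below uses sympy (as a stand-in for the Mathematica/Maple step mentioned in the next paragraph), evaluating the equivalent integral form in (46). For N = 2 and m = 1 it yields [TeX:] $$\lambda_1^2 \lambda_2^2+1,$$ in agreement with the closed form of [TeX:] $$\mathcal{A}_2$$ listed below for [TeX:] $$m_1=m_2=1.$$

```python
import sympy as sp

# Sketch: symbolic evaluation of A_N for equal, integer fading severity m, using
# the integral in (46) (equivalently, the derivative form in (47)); sympy is used
# here in place of the Mathematica/Maple step mentioned in the text.
def A_N_equal_m(m, lams):
    u = sp.symbols('u', nonnegative=True)
    P = u**(m - 1)
    for lk in lams:
        P *= lk**2 * u + m * (1 - lk**2)
    # A_N = (1/Gamma(m)) * Integral_0^oo e^{-u} P(u) du
    return sp.simplify(sp.integrate(sp.exp(-u) * P, (u, 0, sp.oo)) / sp.gamma(m))

l1, l2 = sp.symbols('lambda_1 lambda_2', positive=True)
# For N = 2, m = 1 this prints lambda_1**2*lambda_2**2 + 1, which matches the
# closed form A_2 = m_1*lambda_1^2*lambda_2^2 + m_1*m_2 with m_1 = m_2 = 1.
print(A_N_equal_m(1, [l1, l2]))
```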

We now proceed to solve (44) in the case of unequal fading severity parameters, with [TeX:] $$m_1\lt m_2\lt \cdots\lt m_N.$$ In this case, we utilize the result obtained in Appendix A and employ it in mathematical programs such as Mathematica and Maple to individually obtain [TeX:] $$\mathcal{A}_N$$ for each N. It can, in fact, be shown that the solution of the second moment of [TeX:] $$Y_N$$ satisfies

[TeX:] $$\mathbb{E}\left[Y_N^2\right]=\mathcal{A}_N \varsigma_N.$$

As a result of the above mentioned method in obtaining the values of [TeX:] $$\mathcal{A}_N$$ for different N, here and as the special cases of interest the value of [TeX:] $$\mathcal{A}_N$$ for N = 2, 3 and 4 may be written as

[TeX:] $$\begin{aligned} \mathcal{A}_2= & m_1 \lambda_1^2 \lambda_2^2+m_1 m_2, \\ \mathcal{A}_3= & 2 m_1 \lambda_1^2 \lambda_2^2 \lambda_3^2+m_1 m_3 \lambda_1^2 \lambda_2^2+m_1 m_2 \lambda_1^2 \lambda_3^2 \\ & +m_1 m_2 \lambda_2^2 \lambda_3^2+m_1 m_2 m_3, \\ \mathcal{A}_4= & m_1\left(2 m_2+m_3+6\right) \lambda_1^2 \lambda_2^2 \lambda_3^2 \lambda_4^2+2 m_1 m_4 \lambda_1^2 \lambda_2^2 \lambda_3^2 \\ & +2 m_1 m_3 \lambda_1^2 \lambda_2^2 \lambda_4^2+2 m_1 m_2 \lambda_1^2 \lambda_3^2 \lambda_4^2+2 m_1 m_2 \lambda_2^2 \lambda_3^2 \lambda_4^2 \\ & +m_1 m_3 m_4 \lambda_1^2 \lambda_2^2+m_1 m_2 m_4 \lambda_1^2 \lambda_3^2+m_1 m_2 m_3 \lambda_1^2 \lambda_4^2 \\ & +m_1 m_2 m_4 \lambda_2^2 \lambda_3^2+m_1 m_2 m_3 \lambda_2^2 \lambda_4^2+m_1 m_2 m_3 \lambda_3^2 \lambda_4^2 \\ & +m_1 m_2 m_3 m_4 . \end{aligned}$$

Clearly, the previous formulas of [TeX:] $$\mathcal{A}_N$$ for N = 2, 3 and 4 are valid for both identical and non-identical fading severity parameters. Indeed, it is possible to consider more values of N by the same method described above, but the present article considers the cases for N = 2, 3 and 4. For independent Nakagami-m RVs, by substituting [TeX:] $$\lambda_k=0(k=1,2, \cdots, N)$$ in (44), it is easy to obtain

[TeX:] $$\mathbb{E}\left[Y_N^2\right]=\prod_{k=1}^N m_k \sigma_k^2.$$
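
The closed forms of [TeX:] $$\mathcal{A}_2 \text{ and } \mathcal{A}_3$$ above can be checked directly by Monte Carlo, since [TeX:] $$\mathbb{E}\left[Y_N^2\right]=\mathbb{E}\left[V_1 \cdots V_N\right].$$ The sketch below does so with illustrative parameter values satisfying [TeX:] $$m_1 \leq m_2 \leq m_3.$$

```python
import numpy as np

# Sketch: Monte Carlo check of E[Y_N^2] = A_N * varsigma_N for N = 2 and N = 3,
# using the closed forms of A_2 and A_3 above.  Parameters are illustrative and
# satisfy m_1 <= m_2 <= m_3 as assumed in Section II.
rng = np.random.default_rng(3)
Ns = 500_000
m = [1, 2, 3]
lam = [0.4, 0.6, 0.8]
sig = [1.0, 1.3, 0.9]
m_max = max(m)

X0 = rng.normal(0, np.sqrt(0.5), (Ns, m_max))
Y0 = rng.normal(0, np.sqrt(0.5), (Ns, m_max))
V = []
for k in range(3):
    Xk = rng.normal(0, np.sqrt(0.5), (Ns, m[k]))
    Yk = rng.normal(0, np.sqrt(0.5), (Ns, m[k]))
    gx = sig[k] * (np.sqrt(1 - lam[k]**2) * Xk + lam[k] * X0[:, :m[k]])
    gy = sig[k] * (np.sqrt(1 - lam[k]**2) * Yk + lam[k] * Y0[:, :m[k]])
    V.append(np.sum(gx**2 + gy**2, axis=1))               # V_k = R_k^2

A2 = m[0]*lam[0]**2*lam[1]**2 + m[0]*m[1]
A3 = (2*m[0]*lam[0]**2*lam[1]**2*lam[2]**2 + m[0]*m[2]*lam[0]**2*lam[1]**2
      + m[0]*m[1]*lam[0]**2*lam[2]**2 + m[0]*m[1]*lam[1]**2*lam[2]**2
      + m[0]*m[1]*m[2])

print("N=2:", np.mean(V[0]*V[1]), "vs", A2 * sig[0]**2 * sig[1]**2)
print("N=3:", np.mean(V[0]*V[1]*V[2]), "vs", A3 * sig[0]**2 * sig[1]**2 * sig[2]**2)
```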

A. An Infinite Series Representation for the Moment of [TeX:] $$Y_N$$

An alternative expression for the second moment of [TeX:] $$Y_N$$ can be derived in terms of an infinite-series representation. Indeed, the nth moment of cascaded Nakagami-m fading channels that are arbitrarily correlated and not necessarily identically distributed may be expressed as

(48)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Y_N^n\right]= & \int_0^{\infty} r^n f_{Y_N}(r) d r \\ & =\int_0^{\infty} 2 r^{n-1} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N G_{0, N}^{N, 0}\left(\left.\frac{r^2}{\mathcal{E}_N} \right\rvert\, \begin{array}{c} - \\ \mathbf{M}_N \end{array}\right) \\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] d r . \end{aligned}$$

Letting [TeX:] $$u=r^2$$, the nth moment may be written as

(49)
[TeX:] $$\begin{aligned} \mathbb{E}\left[Y_N^n\right]= & \int_0^{\infty} u^{\frac{n}{2}-1} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \ldots \sum_{k_N=0}^{\infty} I_N G_{0, N}^{N, 0}\left(\left.\frac{u}{\mathcal{E}_N} \right\rvert\, \begin{array}{c} - \\ {\mathbf{M}}_N \end{array}\right) \\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] d u, \end{aligned}$$

with the aid of [37, eq. (7.811-4)], it is now possible to write (49) in the form

(50)
[TeX:] $$\mathbb{E}\left[Y_N^n\right]=\mathcal{E}_N^{\frac{n}{2}} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N\left[\prod_{j=1}^N \frac{\Gamma\left(k_j+m_j+\frac{n}{2}\right)}{k_{j} ! \Gamma\left(k_j+m_j\right)} h_j^{k_j}\right].$$

In order to verify the previous result, the general moments of the product of N independent Nakagami-m RVs, given by [17, eq. (9)], can be easily obtained by substituting [TeX:] $$\lambda_k=0(k=1,2, \cdots, N)$$ in (50). A further verification is afforded by the well-known result that the area under a valid PDF integrates to unity. Thus, by substituting n = 0 in (50), the following identity is obtained

(51)
[TeX:] $$\sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} !}\right]=1.$$

The proof of this identity is given in Appendix B. The second moment of cascaded Nakagami-m fading channels that are arbitrarily correlated and not necessarily identically distributed can be obtained by letting n = 2 in (50). Thus,

(52)
[TeX:] $$\mathbb{E}\left[Y_N^2\right]=\mathcal{E}_N \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N\left[\prod_{j=1}^N \frac{k_j+m_j}{k_{j} !} h_j^{k_j}\right].$$
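
Both the normalization identity (51) and the second-moment series (52) can be verified numerically by truncating the series. The sketch below does so for N = 2 and compares (52) with [TeX:] $$\mathcal{A}_2 \varsigma_2;$$ the truncation order K_MAX and the channel parameters are illustrative choices of ours, with [TeX:] $$m_1 \lt m_2$$ assumed as for [TeX:] $$I_2$$ in (24).

```python
from math import comb, factorial
from mpmath import gamma

# Sketch: truncated check of the identity (51) and of the second-moment series (52)
# against A_2 * varsigma_2 for N = 2.  K_MAX and the parameters are illustrative.
m1, m2 = 1, 2
lam1, lam2 = 0.5, 0.7
sig1, sig2 = 1.0, 1.2
h1, h2 = lam1**2 / (1 - lam1**2), lam2**2 / (1 - lam2**2)
E2 = sig1**2 * (1 - lam1**2) * sig2**2 * (1 - lam2**2)     # E_2 = varsigma_2 * P_2
K_MAX = 40

def I2(k1, k2):
    s = 0
    for i2 in range(k2 + 1):
        s += (comb(k2, i2)
              * gamma(k1 + m1 + i2) / (1 + h1 + h2)**(k1 + m1 + i2)
              * gamma(k2 + m2 - m1 - i2) / (1 + h2)**(k2 + m2 - m1 - i2))
    return s / (gamma(m1) * gamma(m2 - m1))

unit = second = 0
for k1 in range(K_MAX):
    for k2 in range(K_MAX):
        w = h1**k1 / factorial(k1) * h2**k2 / factorial(k2)
        unit += I2(k1, k2) * w                              # terms of (51)
        second += I2(k1, k2) * w * (k1 + m1) * (k2 + m2)    # terms of (52)

A2 = m1 * lam1**2 * lam2**2 + m1 * m2
print("identity (51):   ", float(unit))                    # -> approx. 1
print("series (52):     ", float(E2 * second))
print("A_2 * varsigma_2:", A2 * sig1**2 * sig2**2)          # should match (52)
```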

B. Statistics of the SNR

It is now possible to derive the CDF and PDF of the SNR for arbitrarily correlated, non-identically distributed cascaded Nakagami-m fading channels. It is well known that the CDF of the SNR can be obtained as

(53)
[TeX:] $$\begin{aligned} F_\gamma(\gamma) & =P_r\left(Y_N \leq \sqrt{\frac{\gamma}{E / N_0}}\right)=P_r\left(Y_N \leq \sqrt{\frac{\gamma}{\bar{\gamma}} \mathcal{A}_N \prod_{i=1}^N \sigma_i^2}\right) \\ & =F_{Y_N}\left(\sqrt{\frac{\gamma}{\bar{\gamma}} \mathcal{A}_N \prod_{i=1}^N \sigma_i^2}\right). \end{aligned}$$

By substituting (40) into (53), the CDF of the SNR is then

(54)
[TeX:] $$\begin{aligned} F_\gamma(\gamma)= & \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N G_{1, N+1}^{N, 1}\left(\left.\frac{\gamma}{\bar{\gamma}} \frac{\mathcal{A}_N}{\mathcal{P}_N}\right\rvert \begin{array}{c} 1 \\ {\mathbf{M}_N, 0} \end{array}\right) \\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right], \end{aligned}$$

where [TeX:] $$\mathcal{P}_N=\prod_{j=1}^N\left(1-\lambda_j^2\right).$$ By differentiating (54) with respect to [TeX:] $$\gamma$$, one finds that the PDF of the SNR, [TeX:] $$\gamma$$, is

(55)
[TeX:] $$f_\gamma(\gamma)=\frac{1}{2} \sqrt{\frac{1}{\gamma \bar{\gamma}} \mathcal{A}_N \prod_{i=1}^N \sigma_i^2} f_{Y_N}\left(\sqrt{\frac{\gamma}{\bar{\gamma}} \mathcal{A}_N \prod_{i=1}^N \sigma_i^2}\right) .$$

Substituting (37) into (55), the PDF of the SNR can be obtained as

(56)
[TeX:] $$\begin{aligned} f_\gamma(\gamma)= & \frac{1}{\gamma} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} G_{0, N}^{N, 0}\left(\left.\frac{\gamma}{\bar{\gamma}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \right\rvert\, \begin{array}{c} - \\ \mathbf{M}_N \end{array}\right) \\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] I_N . \end{aligned}$$

Using the property (29), it is now possible to write (56) in the form

(57)
[TeX:] $$\begin{aligned} f_\gamma(\gamma)= & \frac{1}{\bar{\gamma}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N G_{0, N}^{N, 0}\left(\left.\frac{\gamma}{\bar{\gamma}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \right\rvert\, \begin{array}{c} -\\ \mathbf{J}_N \end{array}\right) \\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right], \end{aligned}$$

where [TeX:] $$\mathbf{J}_N=\left[k_N+m_N-1, \cdots, k_1+m_1-1\right] .$$ Consequently, with the aid of [37, eq. (7.811-4)], the nth moment of the SNR can be obtained as

(58)
[TeX:] $$\begin{aligned} & \mathbb{E}\left[\gamma^n\right]=\left(\frac{\bar{\gamma} \mathcal{P}_N}{\mathcal{A}_N}\right)^n \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N \\ & \times\left[\prod_{j=1}^N \frac{\Gamma\left(k_j+m_j+n\right)}{k_{j} ! \Gamma\left(k_j+m_j\right)} h_j^{k_j}\right] . \end{aligned}$$

The expressions derived here reduce to the previously reported results in [17] and [27] as special cases.
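As a quick consistency check, for independent branches ([TeX:] $$\lambda_k=0,$$ so that [TeX:] $$h_j=0, I_N=1, \mathcal{P}_N=1 \text { and } \mathcal{A}_N=\prod_{j=1}^N m_j$$), only the [TeX:] $$k_1=k_2=\cdots=k_N=0$$ term of (58) survives and

[TeX:] $$\mathbb{E}\left[\gamma^n\right]=\bar{\gamma}^n \prod_{j=1}^N \frac{\Gamma\left(m_j+n\right)}{m_j^n \Gamma\left(m_j\right)},$$

so that setting n = 1 gives [TeX:] $$\mathbb{E}[\gamma]=\bar{\gamma},$$ as expected for the average SNR.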

C. Outage Probability
In a communication system, an outage occurs when the instantaneous SNR, [TeX:] $$\gamma$$, falls below a specified threshold, [TeX:] $$\gamma_{th}$$. The outage probability is therefore

(59)
[TeX:] $$\begin{aligned} P_{\text {out }} & =P_r\left(\gamma \leq \gamma_{t h}\right)=F_\gamma\left(\gamma_{t h}\right) \\ = & \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N G_{1, N+1}^{N, 1}\left(\left.\frac{\gamma_{t h}}{\bar{\gamma}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \right\rvert\, \begin{array}{c} 1 \\ \mathbf{M}_N, 0 \end{array}\right) \\ & \times\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] . \end{aligned}$$

For the special case of independent cascaded Nakagami-m fading channels, [TeX:] $$\lambda_k=0(k=1,2, \cdots, N)$$, it can be shown with some mathematical manipulation that the outage probability in (59) reduces to

[TeX:] $$P_{\text {out }}=\frac{1}{\prod_{j=1}^N \Gamma\left(m_j\right)} G_{1, N+1}^{N, 1}\left(\left.\frac{\gamma_{t h}}{\bar{\gamma}} \prod_{j=1}^N m_j\right\rvert \begin{array}{c} 1\\ {m_N, \cdots, m_1, 0} \end{array}\right),$$

which is identical to the expression given in [17, eq. (17)].
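For numerical work, the independent-case expression above can be evaluated directly with the Meijer G-function routines available in standard software and cross-checked by Monte Carlo simulation. The following Python sketch (not part of the original analysis) does this with mpmath and NumPy for arbitrarily chosen severity parameters, unit-power branches, and an assumed threshold and average SNR; the severity parameters are deliberately non-integer spaced so that the G-function parameters are non-coincident.

```python
# Minimal sketch: independent cascaded Nakagami-m outage probability.
# Closed form above (with unit-power branches, E[R_j^2] = 1) vs. a Monte Carlo estimate.
import numpy as np
from mpmath import meijerg, gamma, fprod, mp

mp.dps = 30                        # working precision for the Meijer G-function
m = [0.6, 1.3, 2.7]                # assumed severity parameters (non-integer spacing keeps G parameters non-coincident)
gamma_bar = 10 ** (15 / 10)        # average SNR/bit = 15 dB (assumed)
gamma_th = 3.0                     # assumed outage threshold

# P_out = G^{N,1}_{1,N+1}( (gamma_th/gamma_bar) * prod(m_j) | 1 ; m_N,...,m_1, 0 ) / prod(Gamma(m_j))
z = float(gamma_th / gamma_bar * np.prod(m))
P_out = meijerg([[1], []], [list(reversed(m)), [0]], z) / fprod(gamma(mj) for mj in m)

# Monte Carlo: gamma = gamma_bar * prod_j R_j^2, with R_j^2 ~ Gamma(shape=m_j, scale=1/m_j)
rng = np.random.default_rng(0)
snr = gamma_bar * np.prod([rng.gamma(mj, 1 / mj, size=1_000_000) for mj in m], axis=0)
print(float(P_out), np.mean(snr <= gamma_th))   # the two estimates should be close
```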

D. Average Channel Capacity

In an AWGN channel, the instantaneous capacity normalized to the channel bandwidth B for a communication system can be obtained as

(60)
[TeX:] $$C_n(\gamma)=\frac{C}{B}=\log _2(1+\gamma) \quad \text { bit } / \mathrm{sec} / \mathrm{Hz}.$$

The average channel capacity in a frequency-nonselective slow-fading environment can be written as

(61)
[TeX:] $$\bar{C}_n=\int_0^{\infty} \log _2(1+\gamma) f_\gamma(\gamma) d \gamma.$$

Referring to the Meijer G-function property [36, eq. (8.4.6.5)]

[TeX:] $$\log _2(1+\gamma)=\frac{1}{\ln (2)} G_{2,2}^{1,2}\left(\gamma \left\lvert\, \begin{array}{l} 1,1 \\ 1,0 \end{array}\right.\right)$$

and by substituting (57) into (61), one finds that the average channel capacity is

(62)
[TeX:] $$\begin{aligned} \bar{C}_n= & \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N \int_0^{\infty} \frac{1}{\bar{\gamma} \ln (2)} \frac{\mathcal{A}_N}{\mathcal{P}_N} G_{2,2}^{1,2}\left(\gamma \left\lvert\, \begin{array}{l} 1,1 \\ 1,0 \end{array}\right.\right) \\ & \times G_{0, N}^{N, 0}\left(\left.\frac{\gamma}{\bar{\gamma}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \right\rvert\, \begin{array}{c} -\\ \mathbf{J}_N \end{array}\right) \left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] d \gamma . \end{aligned}$$

Consequently, with the aid of [37, eq. (7.811-1)], the integral in (62) is evaluated as

(63)
[TeX:] $$\begin{aligned} \bar{C}_n= & \frac{1}{\bar{\gamma} \ln (2)} \frac{\mathcal{A}_N}{\mathcal{P}_N} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N \\ & \times G_{2, N+2}^{N+2,1}\left(\left.\frac{1}{\bar{\gamma}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \right\rvert\, \begin{array}{c} -1,0 \\ \mathbf{J}_N,-1,-1 \end{array}\right)\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right], \end{aligned}$$

a further simplification is afforded by referring to the Meijer G-function property in (29), thus

(64)
[TeX:] $$\begin{aligned} \bar{C}_n= & \frac{1}{\ln (2)} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} I_N \\ & \times G_{2, N+2}^{N+2,1}\left(\left.\frac{1}{\bar{\gamma}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \right\rvert\, \begin{array}{c} 0,1 \\ \mathbf{M}_N, 0,0 \end{array}\right)\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] . \end{aligned}$$

For the special case of independent cascaded Nakagami-m fading channels, [TeX:] $$\lambda_k=0(k=1,2, \cdots, N),$$ it can be shown that the average channel capacity in (64) becomes

[TeX:] $$\bar{C}_n=\frac{1}{\ln (2) \prod_{j=1}^N \Gamma\left(m_j\right)} G_{2, N+2}^{N+2,1}\left(\left.\frac{1}{\bar{\gamma}} \prod_{j=1}^N m_j\right\rvert \begin{array}{c} {0,1}\\ {m_N, \cdots, m_1, 0,0} \end{array}\right) .$$
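Similarly, the independent-case capacity above involves a single Meijer G-function and can be cross-checked against a Monte Carlo average of [TeX:] $$\log _2(1+\gamma).$$ A hedged Python sketch with assumed parameters is given below; the two coincident zero parameters of the G-function are separated by a tiny offset purely to keep the series evaluation away from the logarithmic (coincident-parameter) case, which changes the value only negligibly.

```python
# Minimal sketch: independent-case average channel capacity (bit/s/Hz).
# Closed form above vs. Monte Carlo estimate of E[log2(1 + gamma)].
import numpy as np
from mpmath import meijerg, gamma, fprod, log, mp

mp.dps = 30
m = [0.6, 1.3, 2.7]                # assumed severity parameters
gamma_bar = 10 ** (15 / 10)        # average SNR/bit = 15 dB (assumed)
eps = 1e-6                         # separates the two coincident zero parameters of the G-function

# C = G^{N+2,1}_{2,N+2}( prod(m_j)/gamma_bar | 0,1 ; m_N,...,m_1,0,0 ) / ( ln(2) * prod(Gamma(m_j)) )
z = float(np.prod(m) / gamma_bar)
b = list(reversed(m)) + [eps, 0]
C = meijerg([[0], [1]], [b, []], z) / (log(2) * fprod(gamma(mj) for mj in m))

rng = np.random.default_rng(0)
snr = gamma_bar * np.prod([rng.gamma(mj, 1 / mj, size=1_000_000) for mj in m], axis=0)
print(float(C), np.mean(np.log2(1.0 + snr)))    # the two estimates should be close
```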

E. Average Bit Error Probability

The average bit error probability (BEP) of a system provides an integrated measure of its performance. In this section, we derive an exact expression for the average BEP of coherently detected binary modulated signals over arbitrarily correlated, non-identically distributed cascaded Nakagami-m fading channels, and study the effect of the cascaded level, the correlation between cascaded paths, and the fading severity parameters on the average BEP. The average BEP for a modulated signal transmitted over a fading channel with AWGN may be written as [34, eq. (13.3-4)]

(65)
[TeX:] $$P_b=\int_0^{\infty} P_b(\gamma) f_\gamma(\gamma) d \gamma,$$

where [TeX:] $$P_b(\gamma)$$ is the conditional BEP for a fixed [TeX:] $$\gamma$$.

It is well known that the conditional BEP of arbitrarily correlated binary signals with cross-correlation coefficient [TeX:] $$\rho$$ ([TeX:] $$\rho=-1$$ for antipodal BPSK and [TeX:] $$\rho=0$$ for orthogonal BFSK) may be expressed as

(66)
[TeX:] $$P_b(\gamma)=\frac{1}{2} \operatorname{erfc}\left(\sqrt{\frac{\gamma}{2}(1-\rho)}\right),$$

where [TeX:] $$\text{erfc}(\cdot)$$ denotes the complementary error function [34, eq. (2.2.18)]. We now proceed to find the average BEP for binary signals. Substituting (57) and (66) into (65), we can write (65) in the form

(67)
[TeX:] $$\begin{aligned} P_b= & \frac{1}{\bar{\gamma}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \ldots \sum_{k_N=0}^{\infty} I_N\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] \\ & \times \int_0^{\infty} \frac{1}{2} \operatorname{erfc}\left(\sqrt{\frac{\gamma}{2}(1-\rho)}\right) G_{0, N}^{N, 0}\left(\left.\frac{\gamma}{\bar{\gamma}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \right\rvert\, \begin{array}{c} - \\ \mathbf{J}_N \end{array}\right) d \gamma, \end{aligned}$$

using the relation [36, eq.(8.4.14.2)]

[TeX:] $$\operatorname{erfc}(\sqrt{\gamma})=\frac{1}{\sqrt{\pi}} G_{1,2}^{2,0}\left(\left.\gamma\right\rvert \begin{array}{c}1\\ {0,1 / 2} \end{array}\right)$$

and with a simple simplification it is now possible to write (67) in the form

(68)
[TeX:] $$\begin{aligned} P_b= & \frac{1}{\bar{\gamma}(1-\rho) \sqrt{\pi}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \ldots \sum_{k_N=0}^{\infty} I_N\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] \\ & \times \int_0^{\infty} G_{1,2}^{2,0}\left(\gamma \left\lvert\, \begin{array}{c} 1 \\ 0,1 / 2 \end{array}\right.\right) G_{0, N}^{N, 0}\left(\left.\frac{2 \gamma}{\bar{\gamma}(1-\rho)} \frac{\mathcal{A}_N}{\mathcal{P}_N} \right\rvert\, \begin{array}{c} - \\ \mathbf{J}_N \end{array}\right) d \gamma . \end{aligned}$$

The integral in (68) can be evaluated with the aid of [37, eq. (7.811)], then

(69)
[TeX:] $$\begin{aligned} P_b= & \frac{1}{\bar{\gamma}(1-\rho) \sqrt{\pi}} \frac{\mathcal{A}_N}{\mathcal{P}_N} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \ldots \sum_{k_N=0}^{\infty} I_N\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] \\ & \times G_{2, N+1}^{N, 2}\left(\left.\frac{2}{\bar{\gamma}(1-\rho)} \frac{\mathcal{A}_N}{\mathcal{P}_N} \right\rvert\, \begin{array}{l} 0,-1 / 2 \\ \mathbf{J}_N,-1 \end{array}\right) . \end{aligned}$$

A further simplification can be obtained by referring to the Meijer G-function property in (29), thus

(70)
[TeX:] $$\begin{aligned} P_b= & \frac{1}{2 \sqrt{\pi}} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \ldots \sum_{k_N=0}^{\infty} I_N\left[\prod_{j=1}^N \frac{h_j^{k_j}}{k_{j} ! \Gamma\left(k_j+m_j\right)}\right] \\ & \times G_{2, N+1}^{N, 2}\left(\left.\frac{2}{\bar{\gamma}(1-\rho)} \frac{\mathcal{A}_N}{\mathcal{P}_N} \right\rvert\, \begin{array}{c} 1,1 / 2 \\ \mathbf{M}_N, 0 \end{array}\right) . \end{aligned}$$

To validate the obtained average bit error probability expression, consider a special case of a product of N independent Nakagami-m RVs, [TeX:] $$\lambda_k=0(k=1,2, \cdots, N) .$$ In this case [TeX:] $$h_i=\lambda_i^2 /\left(1-\lambda_i^2\right)=0$$ and [TeX:] $$\alpha_1=h_1=0, \alpha_2=h_2=0, \cdots, \alpha_N=h_N+1=1,$$ hence [TeX:] $$I_N=1, \mathcal{A}_N=\prod_{i=1}^N m_i \text{ and } \mathcal{P}_N=1.$$ Therefore (70) simplifies to

[TeX:] $$\begin{aligned} P_b= & \frac{1}{2 \sqrt{\pi}}\left[\prod_{j=1}^N \frac{1}{\Gamma\left(m_j\right)}\right] \\ & \times G_{2, N+1}^{N, 2}\left(\left.\frac{2}{\bar{\gamma}(1-\rho)} \prod_{i=1}^N m_i\right\rvert \begin{array}{c}{1,1/2}\\ {m_N, m_{N-1}, \cdots, m_1, 0} \end{array}\right), \end{aligned}$$

which agrees with [17, eq. (22)].
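For reference, this special case is also easy to evaluate numerically. The hedged Python sketch below (assumed severity parameters and average SNR, unit-power branches, antipodal BPSK with [TeX:] $$\rho=-1$$) compares the closed form with a Monte Carlo average of the conditional BEP in (66).

```python
# Minimal sketch: independent-case average BEP of coherently detected BPSK (rho = -1).
# Closed form above (equivalent to [17, eq. (22)]) vs. a Monte Carlo average of (66).
import numpy as np
from scipy.special import erfc
from mpmath import meijerg, gamma, fprod, sqrt, pi, mp

mp.dps = 30
m = [0.6, 1.3, 2.7]                # assumed severity parameters (non-integer spacing keeps G parameters non-coincident)
gamma_bar = 10 ** (15 / 10)        # average SNR/bit = 15 dB (assumed)
rho = -1.0                         # antipodal BPSK; rho = 0 would give orthogonal BFSK

# P_b = G^{N,2}_{2,N+1}( 2*prod(m_j)/(gamma_bar*(1-rho)) | 1,1/2 ; m_N,...,m_1, 0 ) / (2*sqrt(pi)*prod(Gamma(m_j)))
z = float(2.0 / (gamma_bar * (1.0 - rho)) * np.prod(m))
Pb = meijerg([[1, 0.5], []], [list(reversed(m)), [0]], z) / (2 * sqrt(pi) * fprod(gamma(mj) for mj in m))

rng = np.random.default_rng(0)
snr = gamma_bar * np.prod([rng.gamma(mj, 1 / mj, size=1_000_000) for mj in m], axis=0)
print(float(Pb), np.mean(0.5 * erfc(np.sqrt(snr * (1.0 - rho) / 2.0))))   # should be close
```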

For the special case of a product of N correlated Rayleigh RVs, [TeX:] $$m_i=1(i=1,2, \cdots, N),$$ it can be shown after some mathematical manipulation that [TeX:] $$I_N$$ reduces to

[TeX:] $$I_N=\Gamma\left(\mathcal{K}_N+1\right) \mathcal{H}_N^{-\left(\mathcal{K}_N+1\right)},$$

where [TeX:] $$\mathcal{K}_N=\sum_{n=1}^N k_n \text{ and } \mathcal{H}_N=1+\sum_{n=1}^N h_n.$$ In this case [TeX:] $$\mathcal{A}_N \text { and } \mathcal{P}_N$$ may be expressed as

[TeX:] $$\mathcal{A}_N=\left.\sum_{n=0}^N \frac{d^n}{d u^n}\left[\prod_{k=1}^N\left(1-\lambda_k^2+\lambda_k^2 u\right)\right]\right|_{u=0}$$

and

[TeX:] $$\mathcal{P}_N=\prod_{j=1}^N\left(1-\lambda_j^2\right).$$

Therefore the bit error probability can be written as

[TeX:] $$\begin{aligned} P_b= & \frac{1}{2 \sqrt{\pi}} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} \Gamma\left(\mathcal{K}_N+1\right) \mathcal{H}_N^{-\left(\mathcal{K}_N+1\right)}\left[\prod_{j=1}^N \frac{h_j^{k_j}}{\left(k_{j} !\right)^2}\right] \\ & \times G_{2, N+1}^{N, 2}\left(\left.\frac{2}{\bar{\gamma}(1-\rho)} \frac{\mathcal{A}_N}{\mathcal{P}_N}\right| \begin{array}{c}{1,1/2}\\ {k_N+1, k_{N-1}+1, \cdots, k_1+1,0} \end{array}\right), \end{aligned}$$

which is identical to the bit error probability of binary signals over correlated cascaded Rayleigh fading channels given in [27].

F. Computational Complexity

This paper provides expressions for the PDF, CDF, outage probability, average channel capacity, and average bit error probability over generalized cascaded Nakagami-m fading channels with arbitrary correlation and non-identical fading severity parameters m. As is common for multivariate distributions, the obtained statistical expressions involve the evaluation of Meijer G-functions and multiple infinite series. Owing to its single-fold integral representation, the Meijer G-function has a fixed and low computational complexity, and it is a standard built-in function in well-known mathematical software packages such as Mathematica and Matlab. In numerical evaluations, the multiple infinite sums cannot be computed exactly, and the complexity of evaluating them grows linearly with the total number of summation terms. They are therefore truncated as

[TeX:] $$\sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} F(\cdots) \approx \sum_{k_1=0}^{J_1} \sum_{k_2=0}^{J_2} \cdots \sum_{k_N=0}^{J_N} F(\cdots),$$

where the truncation orders [TeX:] $$J_1, J_2, \cdots, J_N$$ are chosen so that the truncated sums achieve an acceptable tolerance. The required number of terms depends on the software precision, the fading severity parameters, the correlation coefficients, and the number of cascaded subchannels N. Indeed, the larger the number of cascaded subchannels N, the more terms the truncated series requires to reach a given tolerance.
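As a concrete illustration of this truncation, identity (51) can be evaluated with finite truncation orders for the correlated Rayleigh special case ([TeX:] $$m_i=1$$), for which [TeX:] $$I_N$$ takes the closed form [TeX:] $$\Gamma\left(\mathcal{K}_N+1\right) \mathcal{H}_N^{-\left(\mathcal{K}_N+1\right)}$$ given in the previous subsection; the truncated sums should approach unity as the [TeX:] $$J_i$$ grow. A minimal Python sketch with arbitrarily chosen correlation parameters:

```python
# Minimal sketch: truncated evaluation of identity (51) for the Rayleigh case (m_i = 1),
# where I_N = K! / H^(K+1) with K = k_1 + ... + k_N and H = 1 + h_1 + ... + h_N.
import itertools
import math

lam = [math.sqrt(0.5)] * 3                     # assumed correlation parameters lambda_i
h = [l ** 2 / (1 - l ** 2) for l in lam]       # h_j = lambda_j^2 / (1 - lambda_j^2)
H = 1 + sum(h)

def truncated_identity(J):
    """Partial nested sum of (51), truncating every index at k_j = J."""
    total = 0.0
    for ks in itertools.product(range(J + 1), repeat=len(h)):
        K = sum(ks)
        I_N = math.factorial(K) / H ** (K + 1)                 # Rayleigh-case I_N
        term = I_N
        for kj, hj in zip(ks, h):
            term *= hj ** kj / math.factorial(kj)
        total += term
    return total

for J in (2, 5, 10, 20):
    print(J, truncated_identity(J))            # approaches 1 as the truncation order grows
```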

The truncation orders [TeX:] $$J_i$$ of the ith sum (i = 1, 2, · · ·, N) required to maintain an acceptable tolerance of [TeX:] $$1 \times 10^{-9}$$ for fading severity parameters [TeX:] $$m=\left(m_1, m_2, \cdots, m_N\right)$$ and correlation coefficient parameters [TeX:] $$\lambda=\left(\lambda_1, \lambda_2, \cdots, \lambda_N\right)$$ are tabulated in Tables I-VI. They are computed for the average bit error probability expression in (70), the outage probability expression in (59), and the average channel capacity expression in (64), each for average SNR/bit values of 15 dB and 30 dB.

Tables I and II show the truncation orders [TeX:] $$J_i$$ of the ith sum for the average bit error probability expression in (70) with different fading severity parameters m and correlation coefficient parameters λ, computed for average SNR/bit values of 15 dB and 30 dB, respectively.

TABLE I

VALUES OF [TeX:] $$J_i$$ FOR DIFFERENT CASCADED LEVEL N WITH [TeX:] $$m=\left(m_1, m_2, m_3\right) \text { AND } \lambda=\left(\lambda_1, \lambda_2, \lambda_3\right)$$ REQUIRED TO MAINTAIN AN ACCEPTABLE TOLERANCE OF [TeX:] $$1 \times 10^{-9}$$ FOR THE AVERAGE BEP.
Average SNR/bit [TeX:] $$\bar{\gamma}=15 \mathrm{~dB}$$
[TeX:] $$m=\left(0.5, 1, 2\right) \lambda=(\sqrt{0.5}, \sqrt{0.5}, \sqrt{0.5})$$ [TeX:] $$m=\left(0.5, 1, 2\right) \lambda=(\sqrt{0.8}, \sqrt{0.8}, \sqrt{0.8})$$
N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$ N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$
1 6 - - 1 10 - -
2 22 22 - 2 62 62 -
3 27 27 27 3 78 78 78

TABLE II

VALUES OF [TeX:] $$J_i$$ FOR DIFFERENT CASCADED LEVEL N WITH [TeX:] $$m=\left(m_1, m_2, m_3\right) \text { AND } \lambda=\left(\lambda_1, \lambda_2, \lambda_3\right)$$ REQUIRED TO MAINTAIN AN ACCEPTABLE TOLERANCE OF [TeX:] $$1 \times 10^{-9}$$ FOR THE AVERAGE BEP.
Average SNR/bit [TeX:] $$\bar{\gamma}=30 \mathrm{~dB}$$
[TeX:] $$m=\left(0.5, 1, 2\right) \lambda=(\sqrt{0.5}, \sqrt{0.5}, \sqrt{0.5})$$ [TeX:] $$m=\left(0.5, 1, 2\right) \lambda=(\sqrt{0.8}, \sqrt{0.8}, \sqrt{0.8})$$
N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$ N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$
1 5 - - 1 7 - -
2 21 21 - 2 61 61 -
3 26 26 26 3 77 77 77

It is observed that, for average SNR/bit values of 15 dB and 30 dB, the nested infinite sums in the average bit error probability expression converge after the 27th term when [TeX:] $$m=(0.5,1,2) \text { and } \lambda=(\sqrt{0.5}, \sqrt{0.5}, \sqrt{0.5}) \text {, }$$ whereas for the same m = (0.5, 1, 2) but with [TeX:] $$\lambda=(\sqrt{0.8}, \sqrt{0.8}, \sqrt{0.8}) ,$$ they converge after the 78th term.

For the outage probability expression in (59), Tables III and IV show the truncation orders [TeX:] $$J_i$$ of the ith sum for different fading severity parameters m and correlation coefficient parameters λ, computed for a threshold of [TeX:] $$\gamma_{t h}=3$$ and average SNR/bit values of 15 dB and 30 dB, respectively. It is observed that, for average SNR/bit values of 15 dB and 30 dB, the nested infinite sums in the outage probability expression converge after the 29th term when [TeX:] $$m=(1,2,3) \text { and } \lambda=(\sqrt{0.5}, \sqrt{0.5}, \sqrt{0.5}) \text {, }$$ whereas for the same m = (1, 2, 3) but with [TeX:] $$\lambda=(\sqrt{0.8}, \sqrt{0.8}, \sqrt{0.8}) \text {, }$$ they converge after the 83rd term.

TABLE III

VALUES OF [TeX:] $$J_i$$ FOR DIFFERENT CASCADED LEVEL N WITH [TeX:] $$m=\left(m_1, m_2, m_3\right) \text { AND } \lambda=\left(\lambda_1, \lambda_2, \lambda_3\right)$$ REQUIRED TO MAINTAIN AN ACCEPTABLE TOLERANCE OF [TeX:] $$1 \times 10^{-9}$$ FOR THE OUTAGE PROBABILITY.
[TeX:] $$\gamma_{t h}=3 \mathrm{~dB}, \bar{\gamma}=15 \mathrm{~dB}$$
[TeX:] $$m=\left(1, 2, 3\right) \lambda=(\sqrt{0.5}, \sqrt{0.5}, \sqrt{0.5})$$ [TeX:] $$m=\left(1, 2, 3\right) \lambda=(\sqrt{0.8}, \sqrt{0.8}, \sqrt{0.8})$$
N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$ N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$
1 6 - - 1 10 - -
2 25 25 - 2 70 70 -
3 29 29 29 3 83 83 83

TABLE IV

VALUES OF [TeX:] $$J_i$$ FOR DIFFERENT CASCADED LEVEL N WITH [TeX:] $$m=\left(m_1, m_2, m_3\right) \text { AND } \lambda=\left(\lambda_1, \lambda_2, \lambda_3\right)$$ REQUIRED TO MAINTAIN AN ACCEPTABLE TOLERANCE OF [TeX:] $$1 \times 10^{-9}$$ FOR THE OUTAGE PROBABILITY.
[TeX:] $$\gamma_{t h}=3 \mathrm{~dB}, \bar{\gamma}=30 \mathrm{~dB}$$
[TeX:] $$m=\left(1, 2, 3\right) \lambda=(\sqrt{0.5}, \sqrt{0.5}, \sqrt{0.5})$$ [TeX:] $$m=\left(1, 2, 3\right) \lambda=(\sqrt{0.8}, \sqrt{0.8}, \sqrt{0.8})$$
N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$ N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$
1 5 - - 1 8 - -
2 24 24 - 2 69 69 -
3 27 27 27 3 81 81 81

For the average channel capacity expression in (64), Tables V and VI show the truncation orders [TeX:] $$J_i$$ of the ith sum for different fading severity parameters m and correlation coefficient parameters λ, computed for average SNR/bit values of 15 dB and 30 dB, respectively. It is observed that, for average SNR/bit values of 15 dB and 30 dB, the nested infinite sums in the average channel capacity expression converge after the 35th term when [TeX:] $$m=(0.5,1,1.5) \text { and } \lambda=(\sqrt{0.5}, \sqrt{0.5}, \sqrt{0.5}),$$ whereas for the same m = (0.5, 1, 1.5) but with [TeX:] $$\lambda=(\sqrt{0.8}, \sqrt{0.8}, \sqrt{0.8}),$$ they converge after the 114th term.

TABLE V

VALUES OF [TeX:] $$J_i$$ FOR DIFFERENT CASCADED LEVEL N WITH [TeX:] $$m=\left(m_1, m_2, m_3\right) \text { AND } \lambda=\left(\lambda_1, \lambda_2, \lambda_3\right)$$ REQUIRED TO MAINTAIN AN ACCEPTABLE TOLERANCE OF [TeX:] $$1 \times 10^{-9}$$ FOR THE AVERAGE CHANNEL CAPACITY.
[TeX:] $$\bar{\gamma}=15 \mathrm{~dB}$$
[TeX:] $$m=\left(0.5, 1, 1.5\right) \lambda=(\sqrt{0.5}, \sqrt{0.5}, \sqrt{0.5})$$ [TeX:] $$m=\left(0.5, 1, 1.5\right) \lambda=(\sqrt{0.8}, \sqrt{0.8}, \sqrt{0.8})$$
N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$ N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$
1 28 - - 1 80 - -
2 32 32 - 2 95 95 -
3 35 35 35 3 112 112 112

TABLE VI

VALUES OF [TeX:] $$J_i$$ FOR DIFFERENT CASCADED LEVEL N WITH [TeX:] $$m=\left(m_1, m_2, m_3\right) \text { AND } \lambda=\left(\lambda_1, \lambda_2, \lambda_3\right)$$ REQUIRED TO MAINTAIN AN ACCEPTABLE TOLERANCE OF [TeX:] $$1 \times 10^{-9}$$ FOR THE AVERAGE CHANNEL CAPACITY.
[TeX:] $$\bar{\gamma}=30 \mathrm{~dB}$$
[TeX:] $$m=\left(0.5, 1, 1.5\right) \lambda=(\sqrt{0.5}, \sqrt{0.5}, \sqrt{0.5})$$ [TeX:] $$m=\left(0.5, 1, 1.5\right) \lambda=(\sqrt{0.8}, \sqrt{0.8}, \sqrt{0.8})$$
N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$ N [TeX:] $$J_1$$ [TeX:] $$J_2$$ [TeX:] $$J_3$$
1 29 - - 1 82 - -
2 32 32 - 2 96 96 -
3 35 35 35 3 114 114 114

IV. NUMERICAL RESULTS

In this section, numerical results verified by Monte Carlo simulation are presented for the probability density function, the outage probability versus the average SNR [TeX:] $$\bar{\gamma},$$ the average channel capacity in bit/sec/Hz, and the average bit error probability. Throughout this section, [TeX:] $$m=\left[\begin{array}{llll} m_1 & m_2 & \cdots & m_N \end{array}\right],$$ [TeX:] $$\lambda=\left[\begin{array}{llll} \lambda_1 & \lambda_2 & \cdots & \lambda_N \end{array}\right] \text { and } \sigma^2=\left[\begin{array}{llll} \sigma_1^2 & \sigma_2^2 & \cdots & \sigma_N^2 \end{array}\right]$$ are used to denote the sets [TeX:] $$\left\{m_i\right\},\left\{\lambda_i\right\} \text { and }\left\{\sigma_i^2\right\},$$ respectively. The impact of the severity parameters m on the PDF of double cascaded Nakagami-m fading channels (N = 2) with the same correlation parameters [TeX:] $$\lambda=[\sqrt{0.5}, \sqrt{0.5}]$$ is illustrated in Fig. 2. It is evident that as the value of m increases, the PDF becomes more spread out and its tails decay more slowly. As a special case, the PDF of double Rayleigh fading channels is shown for m = [1, 1], which exhibits more severe fading than the other cases. Fig. 3 illustrates the impact of correlation between the fading channels for N = 2 with identical severity parameters, i.e., m = [2, 2], and non-identical severity parameters, i.e., m = [2, 3], for [TeX:] $$\lambda=[0,0], \lambda=[\sqrt{0.5}, \sqrt{0.5}] \text { and } \lambda=[\sqrt{0.8}, \sqrt{0.8}].$$ In both scenarios, as the correlation parameters [TeX:] $$\left\{\lambda_i\right\}$$ increase, the PDF mass shifts toward the origin, with lower peaks and slower tail decay. Fig. 4 illustrates the impact of the fading parameters [TeX:] $$\sigma^2$$ on the PDF of cascaded Nakagami-m fading channels for N = 3 with [TeX:] $$\sigma^2=[0.3,0.3,0.3], \sigma^2=[0.5,0.5,0.5] \text { and } \sigma^2=[0.5,1,1.5];$$ the correlation and fading severity parameters are the same in each case, i.e., [TeX:] $$\lambda=[\sqrt{0.3}, \sqrt{0.4}, \sqrt{0.5}] \text { and } m=[1,2,3] \text {. }$$ As noticed, the larger the values of [TeX:] $$\sigma^2,$$ the more spread out the PDF becomes.
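The Monte Carlo verification amounts to direct simulation of the product channel. For the independent special case ([TeX:] $$\lambda_i=0$$), a minimal Python sketch of such a simulation is shown below (not the authors' code; all parameter values are assumed for illustration): each [TeX:] $$R_i$$ is generated through its Gamma-distributed power, and the histogram of the product is compared with the analytical PDF, which in this case collapses to a single Meijer G-function term (the N*Nakagami PDF of [17]). Here [TeX:] $$\Omega_i=\mathbb{E}\left[R_i^2\right]$$ is used as the power parameter of each branch.

```python
# Minimal sketch: Monte Carlo PDF of Y = prod_i R_i for independent branches (lambda_i = 0),
# compared with the N*Nakagami analytical PDF of [17]. Omega_i = E[R_i^2].
import numpy as np
from mpmath import meijerg, gamma, fprod

m = [1.2, 2.5]                     # assumed severity parameters (non-integer spacing keeps G parameters non-coincident)
Omega = [1.0, 1.0]                 # assumed mean-square values E[R_i^2]

rng = np.random.default_rng(0)
# R_i^2 ~ Gamma(shape=m_i, scale=Omega_i/m_i)  =>  R_i is Nakagami-m distributed
Y = np.prod([np.sqrt(rng.gamma(mi, Oi / mi, size=500_000)) for mi, Oi in zip(m, Omega)], axis=0)

def pdf_independent(y):
    """Analytical PDF of the product of N independent Nakagami-m RVs."""
    c = float(np.prod([mi / Oi for mi, Oi in zip(m, Omega)]))
    g = meijerg([[], []], [list(m), []], c * y * y)
    return float(2.0 / (y * fprod(gamma(mi) for mi in m)) * g)

hist, edges = np.histogram(Y, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for yc, emp in zip(centers[::10], hist[::10]):
    print(f"y={yc:.2f}  simulated={emp:.4f}  analytical={pdf_independent(yc):.4f}")
```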

Fig. 2.

The PDF of the cascaded Nakagami-m fading channels with N = 2, correlation parameters [TeX:] $$\lambda=[\sqrt{0.5}, \sqrt{0.5}]$$, fading parameters [TeX:] $$\sigma^2=[0.5, 0.5]$$ and different fading severity parameters m = [1, 1], m = [1, 2], m = [1.5, 2], m = [2, 3] and m = [3, 4].

Fig. 3.

The PDF of the cascaded Nakagami-m fading channels for N = 2 with different correlation parameters [TeX:] $$\lambda=[0,0], \lambda=[\sqrt{0.5}, \sqrt{0.5}] \text { and } \lambda=[\sqrt{0.8}, \sqrt{0.8}]$$ and fading parameters [TeX:] $$\sigma^2=[0.5, 0.5]$$ with different fading severity parameters m = [2, 2] and m = [2, 3].

Fig. 4.

The PDF of the cascaded Nakagami-m fading channels for N = 3 with correlation parameters [TeX:] $$\lambda=[\sqrt{0.3}, \sqrt{0.4}, \sqrt{0.5}],$$ fading severity parameters m = [1, 2, 3] and different fading parameters [TeX:] $$\sigma^2=[0.3, 0.3, 0.3],$$ [TeX:] $$\sigma^2=[0.5, 0.5, 0.5] \text{ and } \sigma^2=[0.5, 1, 1.5]$$

Fig. 5 illustrates the outage probability for uncorrelated cascaded Nakagami-m fading channels for different cascaded-levels N = {1, 2, 3, 4}, different severity parameters m, and the same threshold [TeX:] $$\gamma_{t h}=3$$ for each cascaded-level. For a given cascaded-level, it is noticeable that increasing the severity parameters m decreases the outage probability. Also, for given fading parameters, the outage probability increases as the cascaded-level increases. Fig. 6 shows the impact of correlation on the outage probability for correlated cascaded Nakagami-m fading channels with different cascaded-levels N. It is clear that, for each cascaded-level, the outage probability increases considerably as the correlation parameters increase.

Fig. 5.

Outage Probability for independent cascaded Nakagami-m fading channels for different cascaded-levels N = 1, 2, 3, 4 with [TeX:] $$\gamma_{t h}=3$$ and different severity fading parameters m.

Fig. 6.

Outage Probability for correlated cascaded Nakagami-m fading channels for different cascaded-levels N = 1, 2, 3, 4 with [TeX:] $$\gamma_{t h}=3$$, different correlation parameter λ and different severity fading parameters m.

In Figs. 7 and 8, the impact of the cascaded-level N and the correlation parameters on the average channel capacity in bit/sec/Hz is illustrated. Fig. 7 shows the impact of the cascaded-level N = {1, 2, 3, 4} on the average channel capacity of uncorrelated cascaded Nakagami-m fading channels for different values of the severity parameters m. For a given cascaded-level N, the average channel capacity increases as the fading severity parameters m increase, whereas it decreases as the cascaded-level N increases. Fig. 8 shows the impact of correlation on the average channel capacity of cascaded Nakagami-m fading channels. Clearly, for a given cascaded-level N and the same severity parameters m, increasing the correlation parameters decreases the average channel capacity.

Fig. 7.

Normalized average channel capacity for independent cascaded Nakagami-m fading channels with different value of cascaded-levels N = 1, 2, 3, 4 and different values of severity fading parameters m.

Fig. 8.

Normalized average channel capacity for correlated cascaded Nakagami-m fading channels with different value of cascaded-levels N = 1, 2, 3, different correlation parameters and different values of severity fading parameters m.

Figs. 9–12 show the impact of the cascaded-level N and the correlation parameters on the average bit error probability of coherently detected BPSK and BFSK. In Figs. 9 and 10, the impact of the severity parameters m and the cascaded-level N on the average bit error probability versus the average SNR [TeX:] $$\bar{\gamma}$$ of uncorrelated cascaded Nakagami-m fading channels for coherently detected BPSK and BFSK is illustrated. For the same cascaded-level N, the average bit error probability decreases as the severity parameters m increase. The impact of the correlation parameters on the average bit error probability of coherently detected BPSK and BFSK signals is illustrated in Figs. 11 and 12. For the same cascaded-level N and the same severity parameters m, the average bit error probability increases as the correlation parameters increase. Finally, for the same cascaded fading channels with the same fading and correlation parameters, the BPSK scheme outperforms the BFSK scheme.

Fig. 9.

Average probability of a bit in error for BPSK modulation over uncorrelated cascaded Nakagami-m fading channels with N = 1, 2, 3, 4 and different severity parameters m.

Fig. 10.

Average probability of a bit in error for BFSK modulation over uncorrelated cascaded Nakagami-m fading channels with N = 1, 2, 3, 4 and different severity parameters m.

Fig. 11.

Average probability of a bit in error for BPSK modulation over correlated cascaded Nakagami-m fading channels with N = 1, 2, 3, different correlation parameters and different severity parameters m.

Fig. 12.

Average probability of a bit in error for BFSK modulation over correlated cascaded Nakagami-m fading channels with N = 1, 2, 3, different correlation parameters and different severity parameters m.

V. CONCLUSION

In this paper, statistics of cascaded Nakagami-m fading channels with arbitrary correlation were presented. The compound end-to-end cascaded channels are constructed as the product of N arbitrarily correlated Nakagami-m RVs that are not necessarily identically distributed. Novel expressions for the PDF, CDF, and the nth moment of the cascaded channels were derived in terms of the Meijer G-function. Also, the PDF, CDF, and the nth moment of the received instantaneous SNR over slow and flat fading compound channels were obtained. Furthermore, to examine the performance over the compound channels, the outage probability, average channel capacity, and average bit error probability for coherently detected binary modulation schemes were studied. Finally, numerical results for the derived expressions were illustrated and validated through Monte Carlo simulations. It was shown that increasing the cascaded-level N degrades the system performance; as expected, this is because the transmitted signal undergoes more severe fading than in conventional one-way wireless channels. The results have also shown that the system performance deteriorates as the correlation between the compound channels increases. Further research on this topic could study the performance of this communication scenario with multiple communication users and explore diversity and channel coding schemes to mitigate fading in such channels.

APPENDIX A

EVALUATION OF [TeX:] $$I_N$$

The aim of this appendix is to find the value of [TeX:] $$I_N$$, where [TeX:] $$I_N$$ is a [TeX:] $$2 m_N$$-fold integral defined as

(71)
[TeX:] $$\begin{aligned} I_N= & \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \frac{1}{\pi^{m_N}} \prod_{i=1}^N\left[\left(\sum_{\ell=1}^{2 m_i} t_{\ell}^2\right)^{k_i} \exp \left(-\alpha_i \sum_{\ell=1}^{2 m_i} t_{\ell}^2\right)\right] \\ & \times d t_1 d t_2 \cdots d t_{2 m_N} . \end{aligned}$$

In evaluating the integral in (71), the value of [TeX:] $$I_N$$ is obtained first for N = 2 and N = 3, and then generalized to arbitrary N. For N = 2, with a straightforward mathematical manipulation, [TeX:] $$I_2$$ can be written as

(72)
[TeX:] $$\begin{aligned} I_2= & \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \frac{1}{\pi^{m_1}} \frac{1}{\pi^{m_2-m_1}} z_1^{k_1}\left(z_1+z_2\right)^{k_2} \\ & \times e^{-\left(\alpha_1+\alpha_2\right) z_1} e^{-\alpha_2 z_2} d t_1 d t_2 \cdots d t_{2 m_2}, \end{aligned}$$

where [TeX:] $$z_1=\sum_{\ell=1}^{2 m_1} t_{\ell}^2 \text { and } z_2=\sum_{\ell=2 m_1+1}^{2 m_2} t_{\ell}^2 .$$ Applying the binomial series

(73)
[TeX:] $$(a+b)^n=\sum_{i=0}^n\left(\begin{array}{l} n \\ i \end{array}\right) a^i b^{n-i}$$

to the quantity [TeX:] $$\left(z_1+z_2\right)^{k_2}$$ in (72), [TeX:] $$I_2$$ can be expressed as the product of two separable sets of integrals, that is,

(74)
[TeX:] $$\begin{aligned} I_2 & =\sum_{i_2=0}^{k_2}\left(\begin{array}{c} k_2 \\ i_2 \end{array}\right) \underbrace{\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \frac{1}{\pi^{m_1}} z_1^{k_1+i_2} e^{-\left(\alpha_1+\alpha_2\right) z_1} d t_1 \cdots d t_{2 m_1}}_{G_1} \\ & \times \underbrace{\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \frac{1}{\pi^{m_2-m_1}} z_2^{k_2-i_2} e^{-\alpha_2 z_2} d t_{2 m_1+1} \cdots d t_{2 m_2}}_{G_2} \\ & =\sum_{i_2=0}^{k_2}\left(\begin{array}{c} k_2 \\ i_2 \end{array}\right) G_1 G_2 . \end{aligned}$$

In evaluating the multiple integrals [TeX:] $$G_1 \text { and } G_2$$ in (74), a hyperspherical coordinate transformation [40] will be used.

Applying this transformation to [TeX:] $$G_1,$$ the variables may be written as

(75)
[TeX:] $$\begin{aligned} t_1 & =\tau \cos \left(\theta_1\right) \\ t_2 & =\tau \sin \left(\theta_1\right) \cos \left(\theta_2\right) \\ t_3 & =\tau \sin \left(\theta_1\right) \sin \left(\theta_2\right) \cos \left(\theta_3\right) \\ & \vdots \\ t_{2 m_1-1} & =\tau \sin \left(\theta_1\right) \sin \left(\theta_2\right) \cdots \cos \left(\theta_{2 m_1-1}\right) \\ t_{2 m_1} & =\tau \sin \left(\theta_1\right) \sin \left(\theta_2\right) \cdots \sin \left(\theta_{2 m_1-1}\right) . \end{aligned}$$

Then, [TeX:] $$\tau^2=\sum_{\ell=1}^{2 m_1} t_{\ell}^2$$ and the Jacobian of the transformation can be evaluated as

[TeX:] $$|J|=\tau^{2 m_1-1} \sin ^{2 m_1-2}\left(\theta_1\right) \sin ^{2 m_1-3}\left(\theta_2\right) \cdots \sin \left(\theta_{2 m_1-2}\right) \text {. }$$

Hence, it is now possible to write [TeX:] $$G_1$$ in the form

(76)
[TeX:] $$\begin{aligned} G_1= & \int_0^{\infty} \int_0^\pi \cdots \int_0^\pi \int_0^{2 \pi} \frac{1}{\pi^{m_1}} \tau^{2\left(k_1+i_2\right)+2 m_1-1} e^{-\mu_1 \tau^2} \\ & \times \sin ^{2 m_1-2}\left(\theta_1\right) \sin ^{2 m_1-3}\left(\theta_2\right) \cdots \sin \left(\theta_{2 m_1-2}\right) \\ & \times d \theta_1 d \theta_2 \cdots d \theta_{2 m_1-2} d \theta_{2 m_1-1} d \tau, \end{aligned}$$

where [TeX:] $$\mu_1=\alpha_1+\alpha_2=1+h_1+h_2.$$ The inner integrals over the angles [TeX:] $$\theta_i \text { for } i=1,2, \cdots, 2 m_1-1 \text {, }$$ can be evaluated successively, yielding the general result. Thus,

(77)
[TeX:] $$G_1=\frac{2}{\Gamma\left(m_1\right)} \int_0^{\infty} \tau^{2\left(k_1+i_2\right)+2 m_1-1} e^{-\mu_1 \tau^2} d \tau .$$

Now let [TeX:] $$v=\tau^2$$ and with the aid of [37, eq. (3.351-3)], [TeX:] $$G_1$$ in (77) reduces to

(78)
[TeX:] $$G_1=\frac{1}{\Gamma\left(m_1\right)} \frac{\Gamma\left(k_1+m_1+i_2\right)}{\left(1+h_1+h_2\right)^{k_1+m_1+i_2}} .$$

The same method as that described above in evaluating [TeX:] $$G_1$$ is applied to evaluate the [TeX:] $$2\left(m_2-m_1\right)$$-fold integral in [TeX:] $$G_2$$. Therefore, [TeX:] $$G_2$$ can be expressed by

(79)
[TeX:] $$G_2=\frac{1}{\Gamma\left(m_2-m_1\right)} \frac{\Gamma\left(k_2+m_2-m_1-i_2\right)}{\left(1+h_2\right)^{k_2+m_2-m_1-i_2}} .$$

Substituting [TeX:] $$G_1 \text { and } G_2$$ in (74) yields

[TeX:] $$\begin{aligned} I_2= & \frac{1}{\Gamma\left(m_1\right) \Gamma\left(m_2-m_1\right)} \sum_{i_2=0}^{k_2}\left(\begin{array}{c} k_2 \\ i_2 \end{array}\right) \\ & \times \frac{\Gamma\left(k_1+m_1+i_2\right)}{\left(1+h_1+h_2\right)^{k_1+m_1+i_2}} \frac{\Gamma\left(k_2+m_2-m_1-i_2\right)}{\left(1+h_2\right)^{k_2+m_2-m_1-i_2}} . \end{aligned}$$
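A hedged numerical cross-check of this closed form is straightforward: using the same hyperspherical steps as in (76)-(78), each block of [TeX:] $$2\left(m_{\ell}-m_{\ell-1}\right)$$ coordinates in (72) contributes a factor [TeX:] $$z_{\ell}^{m_{\ell}-m_{\ell-1}-1} / \Gamma\left(m_{\ell}-m_{\ell-1}\right) d z_{\ell}$$ (with [TeX:] $$m_0=0$$), so [TeX:] $$I_2$$ collapses to a two-dimensional radial integral over [TeX:] $$z_1 \text { and } z_2$$ that can be compared with the finite sum above. The Python sketch below does this for arbitrarily chosen parameters, assuming [TeX:] $$m_2>m_1$$ so that [TeX:] $$\Gamma\left(m_2-m_1\right)$$ is defined.

```python
# Minimal sketch: numerical cross-check of the closed form for I_2 (N = 2).
# Left: radial two-dimensional form of (72); right: the binomial-sum closed form above.
from mpmath import mp, quad, gamma, binomial, inf

mp.dps = 20
m1, m2 = 1.0, 2.5          # assumed severity parameters with m2 > m1
h1, h2 = 0.4, 0.9          # assumed h_j = lambda_j^2 / (1 - lambda_j^2)
k1, k2 = 2, 3              # assumed summation indices
a1, a2 = h1, h2 + 1        # alpha_1 = h_1, alpha_2 = h_2 + 1

# Radial form of (72):
# I_2 = 1/(Gamma(m1) Gamma(m2-m1)) * Int Int z1^(k1+m1-1) (z1+z2)^k2 z2^(m2-m1-1)
#       * exp(-(a1+a2) z1 - a2 z2) dz1 dz2
radial = quad(lambda z2: quad(
    lambda z1: z1 ** (k1 + m1 - 1) * (z1 + z2) ** k2 * z2 ** (m2 - m1 - 1)
               * mp.exp(-(a1 + a2) * z1 - a2 * z2), [0, inf]), [0, inf])
radial /= gamma(m1) * gamma(m2 - m1)

# Closed form derived in the text
closed = sum(binomial(k2, i2)
             * gamma(k1 + m1 + i2) / (1 + h1 + h2) ** (k1 + m1 + i2)
             * gamma(k2 + m2 - m1 - i2) / (1 + h2) ** (k2 + m2 - m1 - i2)
             for i2 in range(k2 + 1)) / (gamma(m1) * gamma(m2 - m1))

print(radial, closed)      # the two values should agree
```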

As before, let N = 3 and with some mathematical manipulation, [TeX:] $$I_3$$ can be written as

(80)
[TeX:] $$\begin{aligned} I_3= & \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \frac{1}{\pi^{m_1}} \frac{1}{\pi^{m_2-m_1}} \frac{1}{\pi^{m_3-m_2}} z_1^{k_1}\left(z_1+z_2\right)^{k_2} \\ & \times\left(z_1+z_2+z_3\right)^{k_3} e^{-\left(\alpha_1+\alpha_2+\alpha_3\right) z_1} e^{-\left(\alpha_2+\alpha_3\right) z_2} e^{-\alpha_3 z_3} \\ & \times d t_1 d t_2 \cdots d t_{2 m_3}, \end{aligned}$$

where [TeX:] $$z_3=\sum_{\ell=2 m_2+1}^{2 m_3} t_{\ell}^2, \alpha_1=h_1, \alpha_2=h_2 \text { and } \alpha_3=h_3+1.$$ Applying the binomial series expansion to the quantities [TeX:] $$\left(z_1+z_2\right)^{k_2} \text { and }\left(z_1+z_2+z_3\right)^{k_3}$$ in (80), [TeX:] $$I_3$$ can be expressed as the product of three separable sets of integrals, that is,

(81)
[TeX:] $$\begin{aligned} I_3= & \sum_{i_2=0}^{k_3} \sum_{i_3=0}^{k_2+i_2}\left(\begin{array}{c} k_3 \\ i_2 \end{array}\right)\left(\begin{array}{c} k_2+i_2 \\ i_3 \end{array}\right) \\ & \times \underbrace{\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \frac{1}{\pi^{m_1}} z_1^{k_1+i_3} e^{-\left(\alpha_1+\alpha_2+\alpha_3\right) z_1} d t_1 \cdots d t_{2 m_1}}_{A_1} \\ & \times \underbrace{\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \frac{1}{\pi^{m_2-m_1}} z_2^{k_2+i_2-i_3} e^{-\left(\alpha_2+\alpha_3\right) z_2} d t_{2 m_1+1} \cdots d t_{2 m_2}}_{A_2} \\ & \times \underbrace{\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \frac{1}{\pi^{m_3-m_2}} z_3^{k_3-i_2} e^{-\alpha_3 z_3} d t_{2 m_2+1} \cdots d t_{2 m_3}}_{A_3} \\ = & \sum_{i_2=0}^{k_3} \sum_{i_3=0}^{k_2+i_2}\left(\begin{array}{c} k_3 \\ i_2 \end{array}\right)\left(\begin{array}{c} k_2+i_2 \\ i_3 \end{array}\right) A_1 A_2 A_3 . \end{aligned}$$

The same method as that described in evaluating [TeX:] $$G_1$$ is applied to evaluate [TeX:] $$A_1, A_2 \text{ and } A_3.$$ Therefore, these expressions reduce to

[TeX:] $$\begin{aligned} A_1 & =\frac{1}{\Gamma\left(m_1\right)} \frac{\Gamma\left(k_1+m_1+i_3\right)}{\left(1+h_1+h_2+h_3\right)^{k_1+m_1+i_3}}, \\ A_2 & =\frac{1}{\Gamma\left(m_2-m_1\right)} \frac{\Gamma\left(k_2+m_2-m_1+i_2-i_3\right)}{\left(1+h_2+h_3\right)^{k_2+m_2-m_1+i_2-i_3}}, \\ A_3 & =\frac{1}{\Gamma\left(m_3-m_2\right)} \frac{\Gamma\left(k_3+m_3-m_2-i_2\right)}{\left(1+h_3\right)^{k_3+m_3-m_2-i_2}} . \end{aligned}$$

As a result, it is now possible to write (81) in the form

[TeX:] $$\begin{aligned} I_3= & \frac{1}{\Gamma\left(m_1\right) \Gamma\left(m_2-m_1\right) \Gamma\left(m_3-m_2\right)} \sum_{i_2=0}^{k_3} \sum_{i_3=0}^{k_2+i_2}\left(\begin{array}{c} k_3 \\ i_2 \end{array}\right) \\ & \times\left(\begin{array}{c} k_2+i_2 \\ i_3 \end{array}\right) \frac{\Gamma\left(k_1+m_1+i_3\right)}{\left(1+h_1+h_2+h_3\right)^{k_1+m_1+i_3}} \\ & \times \frac{\Gamma\left(k_2+m_2-m_1+i_2-i_3\right)}{\left(1+h_2+h_3\right)^{k_2+m_2-m_1+i_2-i_3}} \frac{\Gamma\left(k_3+m_3-m_2-i_2\right)}{\left(1+h_3\right)^{k_3+m_3-m_2-i_2}} . \end{aligned}$$

Consequently, using the same approach recursively, it can be shown that the solution of [TeX:] $$I_N$$ in (71) satisfies

[TeX:] $$\begin{aligned} I_N= & c_N \sum_{i_2=0}^{k_N} \sum_{i_3=0}^{k_{N-1}+i_2} \sum_{i_4=0}^{k_{N-2}+i_3} \cdots \sum_{i_N=0}^{k_2+i_{N-1}} \\ & \times\left(\begin{array}{c} k_N \\ i_2 \end{array}\right)\left(\begin{array}{c} k_{N-1}+i_2 \\ i_3 \end{array}\right)\left(\begin{array}{c} k_{N-2}+i_3 \\ i_4 \end{array}\right) \cdots\left(\begin{array}{c} k_2+i_{N-1} \\ i_N \end{array}\right) \\ & \times \frac{\Gamma\left(k_1+n_1+i_N\right)}{\mu_1^{k_1+n_1+i_N}} \frac{\Gamma\left(k_2+n_2+i_{N-1}-i_N\right)}{\mu_2^{k_2+n_2+i_{N-1}-i_N}} \cdots\\ & \times \cdots \frac{\Gamma\left(k_{N-1}+n_{N-1}+i_2-i_3\right)}{\mu_{N-1}^{k_{N-1}+n_{N-1}+i_2-i_3}} \frac{\Gamma\left(k_N+n_N-i_2\right)}{\mu_N^{k_N+n_N-i_2}}, \end{aligned}$$

with

[TeX:] $$\begin{gathered} n_{\ell}=\left\{\begin{array}{ll} m_1 & \ell=1 \\ m_{\ell}-m_{\ell-1} & \ell=2,3, \ldots, N \end{array}\quad,\right. \\ \mu_{\ell}=1+\sum_{n=\ell}^N h_n \quad \text { and } \quad c_N=\prod_{\ell=1}^N \frac{1}{\Gamma\left(n_{\ell}\right)} . \end{gathered}$$

APPENDIX B

VERIFICATION OF THE IDENTITY (51)

The aim of this appendix is to verify the identity given by (51). Using the integral definition of [TeX:] $$I_N$$ given by (71), (51) can be expressed as

(82)
[TeX:] $$\begin{aligned} 1 \stackrel{?}{=} & \frac{1}{\pi^{m_N}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \cdots \sum_{k_N=0}^{\infty} \\ & \times \prod_{j=1}^N\left[z_j^{k_j} e^{-\alpha_j z_j} \frac{h_j^{k_j}}{k_{j} !}\right] d t_1 d t_2 \cdots d t_{2 m_N}, \end{aligned}$$

where [TeX:] $$z_j=\sum_{\ell=1}^{2 m_j} t_{\ell}^2.$$ The nested sums in (82) can be reorganized; therefore, (82) can be expressed as

(83)
[TeX:] $$\begin{aligned} 1 & \stackrel{?}{=} \frac{1}{\pi^{m_N}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \\ & \times \prod_{j=1}^N\left[e^{-\alpha_j z_j} \sum_{k_j=0}^{\infty} \frac{\left(z_j h_j\right)^{k_j}}{k_{j} !}\right] d t_1 d t_2 \cdots d t_{2 m_N} . \end{aligned}$$

Obviously, the infinite series in (83) represents the Taylor series expansion of the exponential function [37, eq. (1.211-1)], thus

(84)
[TeX:] $$1 \stackrel{?}{=} \frac{1}{\pi^{m_N}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \prod_{j=1}^N\left[e^{\left(h_j-\alpha_j\right) z_j}\right] d t_1 d t_2 \cdots d t_{2 m_N}.$$

Since [TeX:] $$\alpha_1=h_1, \alpha_2=h_2, \cdots, \alpha_N=h_N+1,$$ (84) reduces to

(85)
[TeX:] $$1 \stackrel{?}{=} \frac{1}{\pi^{m_N}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{-z_N} d t_1 d t_2 \cdots d t_{2 m_N} .$$

Similar to the procedure used in evaluating [TeX:] $$G_1$$ in Appendix A, we can express (85) as

(86)
[TeX:] $$1 \stackrel{?}{=} \frac{2}{\Gamma\left(m_N\right)} \int_0^{\infty} \tau^{2 m_N-1} e^{-\tau^2} d \tau \text {. }$$

Let [TeX:] $$u=\tau^2$$ and with the aid of [37, eq. (3.351-3)], the equality is verified.
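Explicitly, the substitution [TeX:] $$u=\tau^2$$ turns the right-hand side of (86) into

[TeX:] $$\frac{1}{\Gamma\left(m_N\right)} \int_0^{\infty} u^{m_N-1} e^{-u} d u=1,$$

so both sides of (86) are indeed equal.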

Biography

Ibrahim Ghareeb

Ibrahim Ghareeb obtained the B.Sc. degree in Electrical Engineering from Yarmouk University, Jordan, in 1985, the M.Sc. degree in Electrical Engineering from Jordan University of Science and Technology (JUST), Jordan, in 1988, and the Ph.D. degree in Electrical Engineering from the University of Ottawa, Canada, in 1995. His Ph.D. research was focused on hybrid frequency phase modulation and its applications over wireless fading channels. He worked as a Lecturer and a System Engineer at the Ministry of Higher Education in Jordan between 1988 and 1991. In 1995 he joined the Department of Electrical Engineering at JUST, where he is currently an Associate Professor. Between 2006 and 2009 he was Vice Dean of the Faculty of Graduate Studies at JUST. Between 2010 and 2011 he was Vice Dean and Chair of the Electrical Engineering Department at Alzaytoonah University. Between 2011 and 2013 he was the Chairman of the Department of Electrical Engineering at JUST. In 1997 he spent a summer term as a research associate at the School of Information Technology and Engineering at the University of Ottawa, Canada. His area of research is wireless communications with emphasis on wireless networks, cognitive radio enabling techniques, and 5G wireless networks. He collaborates closely with the local industry and research institutions. He has been active in organizing several workshops and international conferences on communications. He is a professional engineer in Jordan and a member of IEEE.

Biography

Osama Al-Shalali

Osama Al-Shalali obtained the B.Sc. degree in Electrical Engineering from the University of Science and Technology, Yemen, in 2014, and the M.Sc. degree in Electrical Engineering from Jordan University of Science and Technology (JUST), Jordan, in 2022. His M.Sc. research was focused on Statistics of Cascaded Fading Channels with Arbitrary Correlation. In September 2023 he joined the Department of Electrical and Computer Engineering at Memorial University of Newfoundland, Canada, where he is currently a Ph.D. student. During 2017, he worked as an assistant manager in the Information Systems Department, Faculty of Commerce and Economics, Sana'a University. His research interests focus on wireless communications with artificial intelligence and machine learning, as well as mmWave and sub-THz communications.

References

  • 1 L. J. Greenstein et al., "Guest editorial channel and propagation models for wireless system design I," IEEE J. Sel. Areas Commun., vol. 20, no. 3, pp. 493-495, Apr. 2002. doi: 10.1109/JSAC.2002.995507.
  • 2 D. Chizhik et al., "Keyholes, correlations, and capacities of multielement transmit and receive antennas," IEEE Trans. Wireless Commun., vol. 1, no. 2, pp. 361-368, Apr. 2002. doi: 10.1109/7693.994830.
  • 3 G. Levin and S. Loyka, "Multi-keyhole MIMO channels: Asymptotic analysis of outage capacity," in Proc. IEEE ISIT, Jul. 2006. doi: 10.1109/ISIT.2006.262037.
  • 4 H. Shin and J. H. Lee, "Performance analysis of space-time block codes over keyhole Nakagami-m fading channels," IEEE Trans. Veh. Technol., vol. 53, no. 2, pp. 351-362, Mar. 2004. doi: 10.1109/TVT.2004.823540.
  • 5 B. Talha and M. Patzold, "Channel models for mobile-to-mobile cooperative communication systems: A state of the art review," IEEE Veh. Technol. Mag., vol. 6, no. 2, pp. 33-43, Jun. 2011. doi: 10.1109/MVT.2011.940793.
  • 6 R. He et al., "Geometrical-based modeling for millimeter-wave MIMO mobile-to-mobile channels," IEEE Trans. Veh. Technol., vol. 67, no. 4, pp. 2848-2863, Apr. 2018. doi: 10.1109/TVT.2017.2774808.
  • 7 G. Makhoul et al., "On the modeling of time correlation functions for mobile-to-mobile fading channels in indoor environments," IEEE Antennas Wireless Propag. Lett., vol. 16, pp. 549-552, Mar. 2017. doi: 10.1109/LAWP.2017.2682959.
  • 8 A. Behnad, N. C. Beaulieu, and B. Maham, "Multi-hop amplify-and-forward relaying on Nakagami-0.5 fading channels," IEEE Wireless Commun. Lett., vol. 1, no. 3, pp. 173-176, Jun. 2012. doi: 10.1109/WCL.2012.030912.110222.
  • 9 K. P. Peppas, G. C. Alexandropoulos, and P. T. Mathiopoulos, "Performance analysis of dual-hop AF relaying systems over mixed η − µ and κ − µ fading channels," IEEE Trans. Veh. Technol., vol. 62, no. 7, pp. 3149-3163, Sep. 2013.
  • 10 N. Hajri, R. Khedhiri, and N. Youssef, "On selection combining diversity in dual-hop relaying systems over double Rice channels: Fade statistics and performance analysis," IEEE Access, vol. 8, pp. 72188-72203, Apr. 2020. doi: 10.1109/ACCESS.2020.2986142.
  • 11 A. Bekkali et al., "Performance analysis of passive UHF RFID systems under cascaded fading channels and interference effects," IEEE Trans. Wireless Commun., vol. 14, no. 3, pp. 1421-1433, Mar. 2015. doi: 10.1109/TWC.2014.2366142.
  • 12 D. Tyrovolas et al., "Performance analysis of cascaded reconfigurable intelligent surface networks," IEEE Wireless Commun. Lett., vol. 11, no. 9, pp. 1855-1859, Sep. 2022. doi: 10.1109/LWC.2022.3184635.
  • 13 Z. Zhakipov et al., "Accurate approximation to channel distributions of cascaded RIS-aided systems with phase errors over Nakagami-m channels," IEEE Wireless Commun. Lett., vol. 12, no. 5, pp. 922-926, May 2023. doi: 10.1109/LWC.2023.3251647.
  • 14 J. Salo, H. M. El-Sallabi, and P. Vainikainen, "The distribution of the product of independent Rayleigh random variables," IEEE Trans. Antennas Propag., vol. 54, no. 2, pp. 639-643, Feb. 2006. doi: 10.1109/TAP.2005.863087.
  • 15 Y. Alghorani et al., "On the performance of multihop-intervehicular communications systems over n*Rayleigh fading channels," IEEE Wireless Commun. Lett., vol. 5, no. 2, pp. 116-119, Apr. 2016. doi: 10.48550/arXiv.1609.00142.
  • 16 N. C. Sagias and G. S. Tombras, "On the cascaded Weibull fading channel model," J. Franklin Inst., vol. 344, no. 1, pp. 1-11, Jan. 2007.
  • 17 G. K. Karagiannidis, N. C. Sagias, and P. T. Mathiopoulos, "N*Nakagami: A novel stochastic model for cascaded fading channels," IEEE Trans. Commun., vol. 55, no. 8, pp. 1453-1458, Aug. 2007.
  • 18 P. M. Shankar, "Diversity in cascaded N*Nakagami channels," Ann. Telecommun., vol. 68, pp. 477-483, Aug. 2013.
  • 19 F. Yilmaz and M. S. Alouini, "Product of the powers of generalized Nakagami-m variates and performance of cascaded fading channels," in Proc. IEEE GLOBECOM, Nov. 2009. doi: 10.1109/GLOCOM.2009.5426254.
  • 20 E. J. Leonardo and M. D. Yacoub, "Product of α − µ variates," IEEE Wireless Commun. Lett., vol. 4, no. 6, pp. 637-640, Dec. 2015.
  • 21 O. S. Badarneh et al., "Ratio of products of fluctuating two-ray variates," IEEE Commun. Lett., vol. 23, no. 11, pp. 1944-1948, Nov. 2019. doi: 10.1109/LCOMM.2019.2937959.
  • 22 I. Ghareeb and D. Tashman, "Statistical analysis of cascaded Rician fading channels," Int. J. Electron. Lett., vol. 8, no. 1, pp. 46-59, 2020. doi: 10.1080/21681724.2018.1545925.
  • 23 H. S. Silva et al., "Cascaded double Beaulieu-Xie fading channels," IEEE Commun. Lett., vol. 24, no. 10, pp. 2133-2136, Oct. 2020. doi: 10.1109/LCOMM.2020.3004540.
  • 24 J. D. Griffin and G. D. Durgin, "Link envelope correlation in the backscatter channel," IEEE Commun. Lett., vol. 11, no. 9, pp. 735-737, Sep. 2007. doi: 10.1109/LCOMM.2007.070686.
  • 25 N. Fawaz et al., "Asymptotic capacity and optimal precoding in MIMO multi-hop relay networks," IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 2050-2069, Apr. 2011. doi: 10.1109/TIT.2011.2111830.
  • 26 H. D. Goldman and R. C. Sommer, "An analysis of cascaded binary communication links," IRE Trans. Commun. Syst., vol. 10, no. 3, pp. 291-299, Sep. 1962. doi: 10.1109/TCOM.1962.1088660.
  • 27 I. Ghareeb and J. Darwish, "Statistics of cascaded Rayleigh fading channels with arbitrary correlation," IET Commun., vol. 14, no. 16, pp. 2849-2857, Oct. 2020. doi: 10.1049/iet-com.2019.1067.
  • 28 M. Nakagami, "The m-distribution—A general formula of intensity distribution of rapid fading," in Statistical Methods in Radio Wave Propagation, W. G. Hoffman, Ed., Oxford: Pergamon, 1960, pp. 3-35. doi: 10.1016/B978-0-08-009306-2.50005-4.
  • 29 W. R. Braun and U. Dersch, "A physical mobile radio channel model," IEEE Trans. Veh. Technol., vol. 40, no. 2, pp. 472-482, May 1991. doi: 10.1109/25.289429.
  • 30 Z. Zheng, L. Wei, and J. Hämäläinen, "Novel approximations to the statistics of general cascaded Nakagami-m channels and their applications in performance analysis," in Proc. IEEE ICC, May 2013. doi: 10.1109/ICC.2013.6655528.
  • 31 Y. Zhang et al., "Backscatter communications over correlated Nakagami-m fading channels," IEEE Trans. Commun., vol. 67, no. 2, pp. 1693-1704, Feb. 2019. doi: 10.1109/TCOMM.2018.2879611.
  • 32 L. Rubio et al., "The use of semi-deterministic propagation models for the prediction of the short-term fading statistics in mobile channels," in Proc. IEEE VTS, Sep. 1999. doi: 10.1109/VETECF.1999.801504.
  • 33 N. C. Beaulieu and K. T. Hemachandra, "Novel simple representations for Gaussian class multivariate distributions with generalized correlation," IEEE Trans. Inf. Theory, vol. 57, no. 12, pp. 8072-8083, Dec. 2011. doi: 10.1109/TIT.2011.2170133.
  • 34 J. G. Proakis, "Digital Communications," 5th ed., New York, NY, USA: McGraw-Hill, 2008.
  • 35 V. K. Rohatgi and A. K. Md. Ehsanes Saleh, "An Introduction to Probability and Statistics," 3rd ed., Wiley Series in Probability and Statistics, 2015.
  • 36 A. P. Prudnikov, Y. A. Brychkov, O. I. Marichev, and G. G. Gould, "Integrals and Series, Vol. 3: More Special Functions," 1st ed., Amsterdam: Gordon and Breach Science Publishers, 1990.
  • 37 I. Gradshteyn and I. Ryzhik, "Table of Integrals, Series, and Products," 8th ed., USA: Elsevier, 2014.
  • 38 M. Abramowitz and I. A. Stegun, "Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables," Applied Mathematics Series 55, New York: Dover Publications, 1965.
  • 39 V. S. Adamchik and O. I. Marichev, "The algorithm for calculating integrals of hypergeometric type functions and its realization in REDUCE system," in Proc. ISSAC, Jul. 1990. doi: 10.1145/96877.96930.
  • 40 L. E. Blumenson, "A derivation of n-dimensional spherical coordinates," Amer. Math. Monthly, vol. 67, no. 1, pp. 63-66, Jan. 1960.
  • 41 A. Papoulis and S. U. Pillai, "Probability, Random Variables and Stochastic Processes," 4th ed., New York: McGraw-Hill, 2002.