Age of Information in a Decentralized Network of Parallel Queues with Routing and Packets Losses

Josu Doncel and Mohamad Assaad

Abstract

The paper deals with the age of information (AoI) in a network of multiple sources and parallel queues with buffering capabilities, preemption in service, and losses of served packets. The queues do not communicate with each other, and packets are dispatched to the queues according to a predefined probabilistic routing. By making use of the stochastic hybrid system (SHS) method, we derive the average AoI of a system of two parallel queues (with and without buffering capabilities) and compare the results with those of a single queue. We show that known results on packet delay in queueing theory do not carry over to the AoI. Unfortunately, the complexity of computing the average AoI with the SHS method grows rapidly with the number of queues. We therefore provide an upper bound on the average AoI in a system of an arbitrary number of M/M/1/(N+1)* queues and show its tightness in various regimes. This upper bound yields a tight approximation of the average AoI at very low complexity. We then introduce a game framework that allows each source to determine its best probabilistic routing decision. Using mean field games, we analyze this routing game framework, propose an efficient iterative method for finding the routing decision of each source, and prove its convergence to the desired equilibrium.

Keywords: age of information, probabilistic routing, stochastic hybrid system

I. INTRODUCTION

AGE of information (AoI) is a relatively new metric that measures the freshness of information in a network. AoI is gaining interest in many areas (e.g., control and communication networks) due to the proliferation of applications in which a monitor is interested in having timely updates about a process of interest. As a typical example, AoI captures the timeliness of information in a sensor network where the status of a sensor is frequently monitored. Since its introduction in the seminal papers [1], [2], AoI has attracted the attention of many researchers in different fields.

A main part of the AoI literature focuses on the computation of the average AoI and its minimization, where the channel/network over which the updates are sent to the monitor is modeled as a queueing system. The computation of the AoI in various queueing models has therefore been investigated. For instance, the authors in [2] considered M/M/1, M/D/1, and D/M/1 queue models, and the authors in [3], [4] studied an M/M/2 queue model. Other queueing models can be found in [5]–[6]. While in the aforementioned works the status updates of the system are assumed to be sent under a predefined transmission policy, the design of the update policy has been considered in some papers as well, e.g., [10]–[12]. Furthermore, the problem of scheduling and random access design with the aim of minimizing the average age of the network has recently been considered in several papers [13]–[18]. Besides, since single-server queue models are not representative of networks in which packets can be sent through multiple paths, the average AoI has also been studied in networks with parallel servers [19]–[21], and in more complex networks such as multihop systems [22], [23]. For instance, the scheduling of a single packet flow in multihop queueing networks was studied in [22], [23], where it was shown that the preemptive last-generated first-served (P-LGFS) policy is age-optimal if service times are i.i.d. exponentially distributed. In [24], a multihop scenario in which each node is both a source and a monitor is considered, and fundamental age limits and near-optimal scheduling policies are provided. Recently, the AoI has been analyzed in a multihop multicast context [25]. For a more detailed and comprehensive review of recent work on AoI, one can refer to [26].

In this paper, the network model differs from the aforementioned previous work in that we consider multiple sources that can send their status updates through a system of parallel queues. The queues are assumed to be decentralized in the sense that they cannot communicate with each other. The incoming traffic from each source is dispatched to the parallel queues according to a predefined probabilistic routing. Note that we also develop a framework to optimize this probabilistic routing decision, as we will see later in this paper. Furthermore, we make the realistic assumption that transmissions through the parallel queues are not perfect and that packets can be lost. This assumption holds in many scenarios, e.g., when transmission occurs over wireless links (which induce errors and hence packet losses), or even in wired networks when the service provider breaks down. We aim to analyze the average AoI of this system of parallel queues and, for this purpose, we use the stochastic hybrid systems (SHS) method introduced in [27] (we explain it in detail in Section II). A work related to ours is [21], where the authors use the SHS method to compute the average AoI of a system formed by multiple sources and an arbitrary number of homogeneous M/M/1/1 queues (i.e., with no buffer), as well as two heterogeneous M/M/1/1 queues, where in both cases preemption in service is allowed. In our work, we compute the average AoI using the SHS method for several configurations, including a system formed by two heterogeneous M/M/1/2* queues with preemption in service and packet losses. Due to the buffering capability at the different queues, the analysis is more challenging and complex than in [21]. In addition, we assume that the queues are decentralized in the sense that they do not communicate with each other. This makes our model different from that of [21], where all the queues are assumed to know where the freshest update is.
Besides, we provide an upper bound on the average AoI in a system with an arbitrary number of M/M/1/(N+1)* queues. This allows obtaining an approximation of the AoI with low complexity. Finally, we provide in this paper a game framework to optimize the probabilistic routing decision of each source, which, to the best of our knowledge, has not been considered before in the AoI literature.

The main contributions of this work are twofold. First, we consider a system with multiple sources where the packets in service can be lost and preemption is allowed. The packets are sent to the parallel queues according to a predefined probabilistic routing. We compute the average AoI of a system with two parallel queues and compare it with that of a single queue. On one hand, in Table 1, we present the results obtained when comparing the following systems: (i) two parallel M/M/1/1 queues, each with arrival rate $\lambda/2$, service rate $\mu$, and loss rate $\theta/2$ (denoted by SERVER-ROUTING); (ii) one M/M/1/1 queue with arrival rate $\lambda/2$, loss rate $\theta/2$, and service rate $\mu$ (denoted by SERVER-HALF); and (iii) one M/M/1/1 queue with arrival rate $\lambda$, loss rate $\theta$, and service rate $2\mu$ (denoted by SERVER-DOUBLE). On the other hand, in Table 2, we show results comparing the following systems: (i) two parallel M/M/1/2* queues, each with arrival rate $\lambda/2$, service rate $\mu$, and loss rate $\theta$ (denoted by QUEUE-ROUTING); (ii) one M/M/1/3* queue with arrival rate $\lambda/2$, service rate $\mu$, and loss rate $\theta/2$ (denoted by QUEUE-HALF); and (iii) one M/M/1/3* queue with arrival rate $\lambda$, service rate $2\mu$, and loss rate $\theta$ (denoted by QUEUE-DOUBLE). The M/M/1/3* and M/M/1/2* queues are described in Section III. The main conclusion of this part of the work is that known results on packet delay in queueing theory do not hold for the average AoI. For instance, we know that the delay of the systems SERVER-ROUTING and SERVER-HALF is the same, whereas, according to our results, the average AoI of SERVER-ROUTING is smaller.
This property also holds when we compare the systems QUEUE-ROUTING and QUEUE-HALF. Besides, we also conclude that the average AoI of SERVER-ROUTING is very close to that of SERVER-DOUBLE, and likewise the average AoI of QUEUE-ROUTING is very close to that of QUEUE-DOUBLE.

Besides, since the complexity of computing the exact average AoI with the SHS method grows rapidly with the number of parallel queues, the second contribution of this work consists of providing an upper bound on the average AoI of a system composed of multiple sources and an arbitrary number of parallel M/M/1/(N+1)* queues. We also study the accuracy of this upper bound numerically and conclude that it is very tight when the arrival rate is large or when there are multiple sources. The interest of the upper bound lies in the fact that it yields the average AoI at low complexity. The last contribution of this work consists in using the derived upper bound on the average AoI to optimize the probabilistic routing decision. Specifically, we formulate a distributed framework in which each source optimizes its own routing decision using game theory. Using mean field games, we then provide a modification/simplification of the framework and derive a simple iterative algorithm allowing each source to find its own routing decision separately. We also provide a theoretical proof of convergence of the iterative algorithm to the desired fixed point (equilibrium) of the game.

The rest of the paper is organized as follows. In Section II, we formulate the problem of calculating the average AoI and we present how the SHS can be used. In Section III we focus on the average AoI derivation in the different systems under consideration. We present the upper bound of the AoI in Section IV and, finally, we provide the main conclusion of our work in Section V.

II. AOI AND SHS

We consider a transmitter sending status updates to a monitor. Packet $i$ is generated at time $s_{i}$ and is received by the monitor at time $s_{i}^{\prime}$. We denote by $N(t)$ the index of the last received update at time $t$, i.e., $N(t)=\max \left\{i \mid s_{i}^{\prime} \leq t\right\}$, and by $U(t)=s_{N(t)}$ the time stamp of the last received update at time $t$. The AoI, or simply the age, is defined as

$$\Delta(t)=t-U(t).$$

We are interested in calculating the average of the stochastic process $\Delta(t)$, that is, the average age, which is defined as

$$\Delta=\lim _{\tau \rightarrow \infty} \frac{1}{\tau} \int_{0}^{\tau} \Delta(t)\, d t.$$
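On a sample path, this integral is the area under the familiar age "sawtooth": between consecutive receptions the age grows linearly, and at each reception it drops to the system time of the received update. A minimal sketch of this computation, with hypothetical update times chosen for illustration:

```python
def time_average_age(gen, rec):
    """Time-average of Delta(t) over [rec[0], rec[-1]], given the
    generation times gen[i] and reception times rec[i] of the updates
    received by the monitor, in reception order (gen[i] <= rec[i])."""
    area = 0.0
    for k in range(len(rec) - 1):
        a0 = rec[k] - gen[k]              # age right after the kth reception
        a1 = rec[k + 1] - gen[k]          # age just before the next reception
        area += (a1**2 - a0**2) / 2.0     # trapezoid under the sawtooth
    return area / (rec[-1] - rec[0])

# updates generated every 1.0 time unit, each delivered 0.5 later:
print(time_average_age([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # 1.0
```

The printed value matches the intuition for a deterministic stream: delivery delay (0.5) plus half the interarrival time (0.5).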

The computation of the average age in a general setting is known to be a challenging task, since the interarrival times and the system times are dependent random variables. To overcome this difficulty, the authors in [27] introduced the SHS method. For completeness, we describe this method hereinafter; for further details, one can refer to [28].

In SHS, the system is modeled as a hybrid state $(q(t), \mathbf{x}(t))$, where $q(t)$ is the state of a continuous-time Markov chain and $\mathbf{x}(t)$ is a vector whose components belong to $\mathbb{R}_{0}^{+}$ and capture the evolution of the age in the system.

A link $l$ of the Markov chain represents a transition between two states $q$ and $q^{\prime}$ with rate $\lambda^{l}$. The interest of SHS is that each transition $l$ triggers a reset mapping of the continuous process $\mathbf{x}$. In other words, in each transition $l$, the vector $\mathbf{x}$ is transformed into $\mathbf{x}^{\prime}$ by a linear mapping whose transformation matrix is $\mathbf{A}_{l}$; that is, for every link $l$ we have $\mathbf{x}^{\prime}=\mathbf{x} \mathbf{A}_{l}$. Throughout this paper, we denote by $x_{i}^{\prime}$ the $i$th element of the vector $\mathbf{x}^{\prime}$.

Furthermore, each state of the Markov chain determines which elements of the system have an age that increases at unit rate. In other words, for each state $q$, we define $\mathbf{b}_{q}$ as a vector whose elements are zero or one, and the evolution of the vector $\mathbf{x}(t)$ in state $q$ is given by $\dot{\mathbf{x}}(t)=\mathbf{b}_{q}$.

Table 1.
Summary of average AoI comparison of the systems without buffer (see Section III.A).
Table 2.
Summary of average AoI comparison of the systems with buffer (see Section III.B).

We assume that the Markov chain is ergodic and denote by $\pi_{q}$ the stationary probability of state $q$. Let $\mathcal{L}_{q}$ be the set of links out of state $q$ and $\mathcal{L}_{q}^{\prime}$ the set of links into state $q$. The following theorem characterizes the average AoI:

Theorem 1 ([27, Thm. 4]): Let $v_{q}(i)$ denote the $i$th element of the vector $\mathbf{v}_{q}$. If, for each state $q$, $\mathbf{v}_{q}$ is a non-negative solution of the following system of equations

(1)
$$\mathbf{v}_{q} \sum_{l \in \mathcal{L}_{q}} \lambda^{l}=\mathbf{b}_{q} \pi_{q}+\sum_{l \in \mathcal{L}_{q}^{\prime}} \lambda^{l} \mathbf{v}_{q_{l}} \mathbf{A}_{l},$$

then the average AoI is $\Delta=\sum_{q} v_{q}(0)$.

In the following section, we use the above result to characterize the average AoI of several systems. In Section IV, we show that this method can also be used to obtain an upper bound on the average AoI of very complex systems.

III. AVERAGE AOI OF ROUTING SYSTEMS VERSUS A SINGLE QUEUE

In this section, we study the average AoI of different configurations using the SHS method. We first focus on a system formed by queues without buffer and then consider several cases of queues with buffer. We consider two main scenarios: (i) a system with a single queue and (ii) a system with multiple parallel queues. In the latter, multiple sources dispatch their packets to the different parallel queues according to a predefined probabilistic routing. Such routing policies are used in practice and are widely considered in the routing literature, since they can be implemented without knowing the instantaneous states of the network or of the servers (a realistic assumption, as in real-life networks such information cannot be known at the sources). In more detail, the routing policy is as follows: each job/packet of source $i$ is routed to queue $j$ with probability $p_{i j}$. The arrival rate from source $i$ to queue $j$ is therefore $\lambda_{i} p_{i j}$, and clearly $\sum_{j=1}^{K} p_{i j}=1$. Besides, we also consider that transmission through the queues is not reliable; the loss rate of queue $j$ is denoted by $\theta_{j}$. We also assume that the queues are decentralized in the sense that they do not communicate with each other.

A. Queues without Buffer

In this section, we study the age when the queues do not have a buffer. We first focus on a system formed by one M/M/1/1 queue and then on a system with two parallel M/M/1/1 queues.

A.1 The M/M/1/1 Queue

We consider a system formed by one M/M/1/1 queue that receives traffic from $n$ different sources, where preemption of the packet in service is permitted. The arrival process from each source is Poisson; hence, the aggregate arrival process is also Poisson and two update arrivals cannot occur simultaneously. We are interested in calculating the average AoI of any source and, without loss of generality, we focus on source 1. Thus, updates arrive to the system according to a Poisson process, where the rate of updates of source 1 is denoted by $\lambda_{1}$ and that of the rest of the sources by $\sum_{k>1} \lambda_{k}$. The total arrival rate to the system is denoted by $\lambda$, i.e., $\lambda=\sum_{k=1}^{n} \lambda_{k}$. We assume that the service time is exponentially distributed with rate $\mu$. We also assume that an update in service is lost after an exponentially distributed time with rate $\theta$.

We use the SHS method to compute the age of this system. The continuous state of the SHS is $\mathbf{x}(t)=[x_{0}(t) \ x_{1}(t)]$, where the correspondence between $x_{i}(t)$ and the elements of the system is as follows: $x_{0}$ is the age at the monitor and $x_{1}$ is the age the monitor would have if the packet in service were delivered; accordingly, $x_{0}(t)$ becomes $x_{1}(t)$ when the packet/update in service is delivered. The discrete state of the SHS is a two-state Markov chain, where state 0 represents that the system is empty and state 1 that an update is being served. As explained in the previous section, in each transition of the Markov chain, the continuous state $\mathbf{x}$ of the SHS changes to $\mathbf{x}^{\prime}$. The Markov chain is represented in Fig. 1 and the SHS transitions are given in Table 3, in which we state explicitly the transitions from $\mathbf{x}$ to $\mathbf{x}^{\prime}$.

Table 3.
Table of transitions of Fig. 1.
Fig. 1.
The SHS Markov chain for the one M/M/1/1 queue system with multiple sources and losses of packets in service.

We now explain each transition l:

$l = 0$: There is an arrival from source 1 and the queue is idle. The age at the monitor does not change, i.e., $x_{0}^{\prime}=x_{0}$, and the age of the packet in service is set to zero since a fresh update has arrived.

$l = 1$: There is an arrival from one of the other sources while the queue is idle. Since we are interested in the age of source 1, an arrival from the rest of the sources does not bring a fresh update of the status of source 1; hence the age of the update in service is set to that of the monitor, i.e., $x_{1}^{\prime}=x_{0}$. Therefore, when this update ends its service, the age of the monitor remains unchanged.

$l = 2$: The update under execution ends its service and the age of the monitor is updated to that of this update, i.e., $x_{0}^{\prime}=x_{1}$.

$l = 3$: The update in service is lost and, therefore, the age of the monitor does not change.

$l = 4$: There is an arrival from source 1 while the queue is serving an update. In this case, the update in service is replaced (preempted) by the fresh one; the age of the monitor does not change, i.e., $x_{0}^{\prime}=x_{0}$, but the age of the update in service is set to zero.

$l = 5$: There is an arrival from another source while the queue is serving an update. In this case, the update in service is replaced by the fresh one and the age of the update in service is set to that of the monitor, i.e., $x_{1}^{\prime}=x_{0}$.

The stationary distribution of the Markov chain of Fig. 1 is $\pi_{0}=(\mu+\theta) /(\lambda+\mu+\theta)$ and $\pi_{1}=\lambda /(\lambda+\mu+\theta)$.

Besides, for state $q = 0$, we have $\mathbf{b}_{0}=[1 \ 0]$, since the age of the monitor is the only quantity that grows at unit rate and the age of the update in service is irrelevant, whereas for state $q = 1$ we have $\mathbf{b}_{1}=[1 \ 1]$, as both the age of the monitor and that of the update in service grow at unit rate. On the other hand, we write $\mathbf{v}_{0}=[v_{0}(0) \ v_{0}(1)]$ and $\mathbf{v}_{1}=[v_{1}(0) \ v_{1}(1)]$. From Theorem 1, the age of this system is $v_{0}(0)+v_{1}(0)$, where

$$\begin{aligned} \lambda \mathbf{v}_{0}=& \left[\pi_{0}\ 0\right]+\mu\left[v_{1}(1)\ 0\right]+\theta\left[v_{1}(0)\ 0\right], \\ (\lambda+\mu+\theta) \mathbf{v}_{1}=& \left[\pi_{1}\ \pi_{1}\right]+\lambda_{1}\left[v_{1}(0)\ 0\right]+\sum_{k>1} \lambda_{k}\left[v_{1}(0)\ v_{1}(0)\right] \\ &+\lambda_{1}\left[v_{0}(0)\ 0\right]+\sum_{k>1} \lambda_{k}\left[v_{0}(0)\ v_{0}(0)\right]. \end{aligned}$$

The above expressions can be written as the following system of equations:

(2a)
$$\lambda v_{0}(0)=\pi_{0}+\mu v_{1}(1)+\theta v_{1}(0),$$

(2b)
$$(\mu+\theta) v_{1}(0)=\pi_{1}+\lambda v_{0}(0),$$

(2c)
$$(\lambda+\mu+\theta) v_{1}(1)=\pi_{1}+\sum_{k>1} \lambda_{k} v_{1}(0)+\sum_{k>1} \lambda_{k} v_{0}(0).$$

The solution of the above system of equations is

$$\begin{aligned} &v_{0}(0)=\frac{1}{\lambda_{1}}+\frac{\theta}{\lambda_{1} \mu}-\frac{\pi_{1}}{\lambda+\mu+\theta}, \\ &v_{1}(0)=\frac{\lambda}{\lambda_{1} \mu}+\frac{\pi_{1}}{\lambda+\mu+\theta}, \\ &v_{1}(1)=\frac{\sum_{k>1} \lambda_{k}}{\lambda_{1} \mu}+\frac{\pi_{1}}{\lambda+\mu+\theta}. \end{aligned}$$

Therefore, since, by Theorem 1, the average AoI for this case is $v_{0}(0)+v_{1}(0)$, the next result follows:

Proposition 1: The average AoI of source 1 in the aforementioned system is given by

$$\frac{1}{\lambda_{1}}+\frac{\theta}{\lambda_{1} \mu}+\frac{\lambda}{\lambda_{1} \mu}.$$
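As a quick numerical sanity check, the linear system (2a)–(2c) can be solved directly and compared against the closed form of Proposition 1. A minimal sketch with illustrative (hypothetical) rates:

```python
import numpy as np

# illustrative rates: source 1, aggregate of other sources, service, loss
lam1, lam_rest, mu, th = 2.0, 3.0, 1.5, 0.8
lam = lam1 + lam_rest
pi0 = (mu + th) / (lam + mu + th)
pi1 = lam / (lam + mu + th)

# unknowns [v0(0), v1(0), v1(1)]; rows are equations (2a), (2b), (2c)
A = np.array([
    [ lam,      -th,      -mu           ],
    [-lam,       mu + th,  0.0          ],
    [-lam_rest, -lam_rest, lam + mu + th],
])
b = np.array([pi0, pi1, pi1])
v0_0, v1_0, v1_1 = np.linalg.solve(A, b)

aoi = v0_0 + v1_0                                    # Delta = v0(0) + v1(0)
closed_form = 1/lam1 + th/(lam1*mu) + lam/(lam1*mu)  # Proposition 1
print(aoi, closed_form)  # the two values agree
```

Note how the $\pi_{1}/(\lambda+\mu+\theta)$ terms of $v_{0}(0)$ and $v_{1}(0)$ cancel in the sum, which is why the closed form is free of $\pi_{1}$.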

Remark 1: We remark that, when $\theta=0$, the expression above coincides with the result of Theorem 2(a) in [27]. In that work, the authors consider a Markov chain with a single state, but their model no longer applies when updates can be lost. Therefore, we believe that the model presented above is the simplest one for studying the average AoI with the SHS method in the presence of update losses.

A.2 Two Parallel M/M/1/1 Queues

We now consider a system formed by two parallel M/M/1/1 queues receiving traffic from $n$ different sources, where preemption of packets in service is permitted. We aim to calculate the average AoI of source 1. As in the previous case, the arrivals are Poisson; the rate of source 1 is denoted by $\lambda_{1}$ and that of the rest of the sources by $\sum_{k>1} \lambda_{k}$, so that $\lambda=\sum_{k=1}^{n} \lambda_{k}$. The packets are dispatched according to a predefined probabilistic routing, where $p_{1 j}$ (resp. $p_{k j}$) is the probability that a job of source 1 (resp. of another source $k \neq 1$) is routed to queue $j$. Hence, the arrival rates from source 1 to queue 1 and to queue 2 are $\lambda_{1} p_{11}$ and $\lambda_{1} p_{12}$, respectively. Likewise, $\sum_{k>1} \lambda_{k} p_{k 1}$ and $\sum_{k>1} \lambda_{k} p_{k 2}$ denote the arrival rates from the rest of the sources to queue 1 and to queue 2, respectively. The service rate and the loss rate of queue $j$ are $\mu_{j}$ and $\theta_{j}$, respectively, where $j = 1, 2$. We assume that the queues are decentralized in the sense that they do not communicate with each other.

Fig. 2.
The SHS Markov chain for the system with two parallel M/M/1/1 queues with multiple sources and losses of packets in service.

Remark 2: The latter assumption makes the model studied here different from that of [21], where it is assumed that the servers know where the freshest update is.

We compute the average AoI using the SHS method. First, we define the continuous state as $\mathbf{x}(t)=[x_{0}(t) \ x_{1}(t) \ x_{2}(t)]$, where the correspondence between $x_{i}(t)$ and each element of the system is as follows: $x_{0}$ is the age at the monitor, and $x_{1}$ (resp. $x_{2}$) is the age the monitor would have if the update in queue 1 (resp. queue 2) were delivered. The discrete state is a Markov chain with four states, represented in Fig. 2, where state $k_{1} k_{2}$ indicates that queue 1 holds $k_{1}$ updates, with $k_{1} \in\{0,1\}$, and queue 2 holds $k_{2}$ updates, with $k_{2} \in\{0,1\}$. We also present the SHS transitions in Table 4.

We now explain each transition l:

$l = 0$: There is an arrival from source 1 to queue 2 when the system is empty. Therefore, $x_{0}$ and $x_{1}$ do not change, and the age of the update in service in queue 2 is set to zero due to the fresh update arrival.

$l = 1$: There is an arrival of an update from the other sources to queue 2 when it is idle. Therefore, we set $x_{2}^{\prime}=x_{0}$, which means that, when this update ends its service, the age of the monitor is again $x_{0}$.

$l = 2$: The update under execution in queue 2 is delivered and the age of the monitor is updated to that of this update, i.e., $x_{0}^{\prime}=x_{2}$.

$l = 3$: The update in service in queue 2 is lost; therefore, the age of the monitor does not change and queue 2 becomes empty.

$l = 4$: There is an arrival from source 1 to queue 2 while it is serving an update. Since preemption is permitted, the update in service is replaced by the fresh one and, therefore, the age of the update in service in queue 2 is set to zero.

$l = 5$: There is an arrival from another source to queue 2 while it has an update in service. In this case, the update in service is replaced by the fresh one and the age of the update in service in queue 2 is set to that of the monitor, i.e., $x_{2}^{\prime}=x_{0}$.

Table 4.
Table of transitions of Fig. 2.

The transitions 6–11 are symmetric to 0–5, respectively.

$l = 12$: When there is an update in queue 2, if an update of source 1 arrives at queue 1, the ages of the monitor and of the update in queue 2 do not change, whereas that of queue 1 is set to zero, i.e., $x_{1}^{\prime}=0$.

$l = 13$: When there is an update in queue 2, if there is an arrival from one of the other sources at queue 1, the age of the update in queue 1 is set to the age of the monitor, i.e., $x_{1}^{\prime}=x_{0}$, whereas $x_{0}$ and $x_{2}$ do not change.

$l = 14$: When there are updates in both queues, the update of queue 1 is delivered and the age of the monitor changes to $x_{1}$, i.e., $x_{0}^{\prime}=x_{1}$.

$l = 15$: When there are updates in both queues, if the update of queue 1 is lost, the age of the monitor does not change and queue 1 becomes idle.

$l = 16$: When both queues have updates in service, if an update of source 1 arrives at queue 1, we set $x_{1}^{\prime}$ to zero and nothing else changes.

$l = 17$: When both queues have updates in service, if an update from the other sources arrives at queue 1, we set $x_{1}^{\prime}$ to the age of the monitor, i.e., $x_{1}^{\prime}=x_{0}$.

The transitions 18–23 are symmetric to 12–17, respectively.

The stationary distribution of the Markov chain in Fig. 2 is given by

$$\pi_{k_{1} k_{2}}=\frac{\rho_{1}^{k_{1}} \rho_{2}^{k_{2}}}{\left(1+\rho_{1}\right)\left(1+\rho_{2}\right)}, \quad k_{1}, k_{2}=0,1,$$

where $\rho_{j}=\left(\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}\right)/\left(\mu_{j}+\theta_{j}\right)$, $j=1,2$. Moreover, we define the vector $\mathbf{b}_{q}$ for each state $q \in\{00,10,01,11\}$ as follows: $\mathbf{b}_{00}=[1 \ 0 \ 0]$, $\mathbf{b}_{10}=[1 \ 1 \ 0]$, $\mathbf{b}_{01}=[1 \ 0 \ 1]$, and $\mathbf{b}_{11}=[1 \ 1 \ 1]$. We also define, for $q \in\{00,10,01,11\}$, the vector $\mathbf{v}_{q}=[v_{q}(0) \ v_{q}(1) \ v_{q}(2)]$.
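The product form of this stationary distribution can be verified by checking that $\pi Q = 0$ for the generator $Q$ of the four-state chain (self-loop transitions can be omitted, as they do not affect the stationary distribution). A small sketch with illustrative rates, writing $a_j$ for the total arrival rate to queue $j$:

```python
import numpy as np

# illustrative rates: a[j] is the total arrival rate to queue j+1
a = [1.2, 0.7]
mu = [1.0, 1.5]
th = [0.4, 0.2]
d = [mu[j] + th[j] for j in (0, 1)]      # departure rate: service or loss

states = [(0, 0), (1, 0), (0, 1), (1, 1)]
Q = np.zeros((4, 4))
for i, (k1, k2) in enumerate(states):
    for j, (m1, m2) in enumerate(states):
        if (m1, m2) == (k1 + 1, k2): Q[i, j] = a[0]   # arrival at queue 1
        if (m1, m2) == (k1, k2 + 1): Q[i, j] = a[1]   # arrival at queue 2
        if (m1, m2) == (k1 - 1, k2): Q[i, j] = d[0]   # departure from queue 1
        if (m1, m2) == (k1, k2 - 1): Q[i, j] = d[1]   # departure from queue 2
    Q[i, i] = -Q[i].sum()

rho = [a[j] / d[j] for j in (0, 1)]
Z = (1 + rho[0]) * (1 + rho[1])
pi = np.array([rho[0]**k1 * rho[1]**k2 / Z for (k1, k2) in states])
print(pi @ Q)  # ~ [0 0 0 0]: the product form is stationary
```

The check works because the two queues evolve independently, each as a two-state birth-death chain with load $\rho_{j}=a_{j}/(\mu_{j}+\theta_{j})$.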

Let $\hat{\mu}=\mu_{1}+\mu_{2}$ and $\hat{\theta}=\theta_{1}+\theta_{2}$. By Theorem 1, the average AoI of this system is $\sum_{q} v_{q}(0)$, where the $v_{q}(0)$ are obtained from the solution of the following system of equations:

(3)
$$\begin{aligned} \lambda \mathbf{v}_{00}=& \left[\pi_{00}\ 0\ 0\right]+\mu_{1}\left[v_{10}(1)\ 0\ 0\right]+\theta_{1}\left[v_{10}(0)\ 0\ 0\right] \\ &+\mu_{2}\left[v_{01}(2)\ 0\ 0\right]+\theta_{2}\left[v_{01}(0)\ 0\ 0\right], \end{aligned}$$

(4)
$$\begin{aligned} \left(\lambda+\mu_{1}+\theta_{1}\right) \mathbf{v}_{10}=& \left[\pi_{10}\ \pi_{10}\ 0\right]+\lambda_{1} p_{11}\left[v_{00}(0)\ 0\ 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{00}(0)\ v_{00}(0)\ 0\right] \\ &+\mu_{2}\left[v_{11}(2)\ v_{11}(1)\ 0\right] \\ &+\theta_{2}\left[v_{11}(0)\ v_{11}(1)\ 0\right]+\lambda_{1} p_{11}\left[v_{10}(0)\ 0\ 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{10}(0)\ v_{10}(0)\ 0\right], \end{aligned}$$

(5)
$$\begin{aligned} \left(\lambda+\mu_{2}+\theta_{2}\right) \mathbf{v}_{01}=& \left[\pi_{01}\ 0\ \pi_{01}\right]+\lambda_{1} p_{12}\left[v_{00}(0)\ 0\ 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{00}(0)\ 0\ v_{00}(0)\right] \\ &+\mu_{1}\left[v_{11}(1)\ 0\ v_{11}(2)\right] \\ &+\theta_{1}\left[v_{11}(0)\ 0\ v_{11}(2)\right] \\ &+\lambda_{1} p_{12}\left[v_{01}(0)\ 0\ 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{01}(0)\ 0\ v_{01}(0)\right], \end{aligned}$$

(6)
$$\begin{aligned} (\lambda+\hat{\mu}+\hat{\theta}) \mathbf{v}_{11}=& \left[\pi_{11}\ \pi_{11}\ \pi_{11}\right]+\lambda_{1} p_{11}\left[v_{01}(0)\ 0\ v_{01}(2)\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{01}(0)\ v_{01}(0)\ v_{01}(2)\right] \\ &+\lambda_{1} p_{11}\left[v_{11}(0)\ 0\ v_{11}(2)\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{11}(0)\ v_{11}(0)\ v_{11}(2)\right] \\ &+\lambda_{1} p_{12}\left[v_{10}(0)\ v_{10}(1)\ 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{10}(0)\ v_{10}(1)\ v_{10}(0)\right] \\ &+\lambda_{1} p_{12}\left[v_{11}(0)\ v_{11}(1)\ 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{11}(0)\ v_{11}(1)\ v_{11}(0)\right]. \end{aligned}$$

Since the first equation has two irrelevant variables and the second and third each have one, the above expressions can alternatively be written as a system of 8 equations.

Proposition 2: The average AoI of source 1 in the aforementioned system is given by $v_{00}(0)+v_{10}(0)+v_{01}(0)+v_{11}(0)$, where, for $q \in\{00,10,01,11\}$, $v_{q}(0)$ is obtained from the solution of (3)–(6).
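Proposition 2 does not admit a compact closed form, but (3)–(6) is a small linear system and can be solved numerically. The sketch below (a hypothetical helper, assuming the other sources share one common routing vector `pr`) builds the twelve equations from the transition structure and returns $\sum_{q} v_{q}(0)$; when all traffic is routed to queue 1, the system reduces to the single-queue case, which gives a check against Proposition 1:

```python
import numpy as np

def avg_aoi_two_mm11(lam1, lam_rest, mu, th, p1, pr):
    """Average AoI of source 1 in two parallel M/M/1/1 queues with
    preemption and losses, by solving (3)-(6).  mu, th, p1, pr are
    length-2 sequences: per-queue service rates, loss rates, and the
    routing probabilities of source 1 and of the other sources."""
    S = [(0, 0), (1, 0), (0, 1), (1, 1)]            # (queue 1, queue 2) occupancy
    idx = {(q, i): 3 * S.index(q) + i for q in S for i in range(3)}

    # product-form stationary distribution of the discrete chain
    rho = [(lam1 * p1[j] + lam_rest * pr[j]) / (mu[j] + th[j]) for j in (0, 1)]
    Z = (1 + rho[0]) * (1 + rho[1])
    pi = {q: rho[0] ** q[0] * rho[1] ** q[1] / Z for q in S}
    bq = {(0, 0): (1, 0, 0), (1, 0): (1, 1, 0),
          (0, 1): (1, 0, 1), (1, 1): (1, 1, 1)}

    # transitions: (q_from, q_to, rate, m) with x'_i = x_{m[i]};
    # m[i] = None means x'_i is reset to 0 or irrelevant in q_to
    T = []
    for j in (0, 1):                                 # arrivals to queue j+1
        a, o = 1 + j, 2 - j                          # own / other age component
        for q in S:
            qt = list(q); qt[j] = 1; qt = tuple(qt)
            keep = o if q[1 - j] else None           # other queue unchanged
            T.append((q, qt, lam1 * p1[j], {0: 0, a: None, o: keep}))   # fresh
            T.append((q, qt, lam_rest * pr[j], {0: 0, a: 0, o: keep}))  # stale
    for j in (0, 1):                                 # departures from queue j+1
        a, o = 1 + j, 2 - j
        for q in S:
            if q[j]:
                qt = list(q); qt[j] = 0; qt = tuple(qt)
                keep = o if q[1 - j] else None
                T.append((q, qt, mu[j], {0: a, a: None, o: keep}))  # delivery
                T.append((q, qt, th[j], {0: 0, a: None, o: keep}))  # loss

    A = np.zeros((12, 12)); b = np.zeros(12)
    for q in S:
        out = sum(r for (qf, _, r, _) in T if qf == q)
        for i in range(3):
            A[idx[(q, i)], idx[(q, i)]] += out
            b[idx[(q, i)]] = bq[q][i] * pi[q]
    for (qf, qt, r, m) in T:
        for i in range(3):
            if m[i] is not None:
                A[idx[(qt, i)], idx[(qf, m[i])]] -= r
    v = np.linalg.solve(A, b)
    return sum(v[idx[(q, 0)]] for q in S)            # Delta = sum_q v_q(0)

# single source, everything routed to queue 1: reduces to Proposition 1
aoi = avg_aoi_two_mm11(1.0, 0.0, [1.0, 1.0], [0.5, 0.9], [1.0, 0.0], [1.0, 0.0])
print(aoi)  # ~ 1/lam1 + th1/(lam1*mu1) + lam/(lam1*mu1) = 2.5
```

By construction the solver is symmetric in the two queues, so swapping the per-queue parameters leaves the result unchanged, which provides a second sanity check.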

A.3 Average AoI Comparison

We now compare the average AoI of source 1 for the models studied in this section. For this purpose, we consider three systems. The first one consists of a single M/M/1/1 queue with arrival rate $\lambda_{1}/2$ for source 1 and $(1/2) \sum_{k>1} \lambda_{k}$ for the rest of the sources, loss rate $\theta/2$, and service rate $\mu$ (see Fig. 3(a)); its average AoI is represented with a solid line. The second system is a single M/M/1/1 queue with arrival rate $\lambda_{1}$ for source 1 and $\sum_{k>1} \lambda_{k}$ for the rest of the sources, loss rate $\theta$, and service rate $2\mu$ (see Fig. 3(b)); its average AoI is represented with a dotted line. Finally, we consider a system with two parallel M/M/1/1 queues with arrival rate $\lambda_{1}$ for source 1 and $\sum_{k>1} \lambda_{k}$ for the rest of the sources, where $p_{k j}=1/2$ for all $k=1, \cdots, n$ and $j=1,2$, and the loss rate and service rate of both servers are $\theta/2$ and $\mu$, respectively (see Fig. 3(c)); its average AoI is represented with a dashed line. Our goal is to determine which system has the smallest average AoI when $\lambda_{1}$ varies and the rest of the parameters are fixed. To this end, we have solved numerically the systems of equations (2) and (3)–(6). We set $\mu=1$ in these simulations. When we study the system with multiple sources, we consider $\sum_{k>1} \lambda_{k}=10$, and in the case of losses of packets in service, we set $\theta=10$.

In Fig. 4, we plot the average AoI of these systems as a function of [TeX:] $$\lambda_{1}$$ when there is a single source and there are no losses in the queues. We observe that the smallest age is achieved for the single M/M/1/1 queue system with service rate [TeX:] $$2 \mu.$$ We also observe that the age of the two parallel M/M/1/1 queues is the same as the latter when [TeX:] $$\lambda_{1}$$ is either very small or very large.

In Figs. 5–7, we show that the average AoI of a system with two parallel M/M/1/1 queues coincides with that of a single M/M/1/1 queue with service rate [TeX:] $$2 \mu$$ when [TeX:] $$\lambda_{1}$$ is either very small or very large.

In queueing theory, it is known that, among the systems under consideration in this section, the one that minimizes the delay is the single M/M/1/1 queue with service rate [TeX:] $$2 \mu .$$ These illustrations show that the AoI verifies this property as well. On the other hand, for classical queueing-theory metrics such as the delay, the performance of two parallel M/M/1/1 queues coincides with that of a single M/M/1/1 queue with arrival rate [TeX:] $$\lambda / 2$$ and loss rate [TeX:] $$\theta / 2.$$ However, as far as the average AoI is concerned, the figures presented in this section show that this is not the case (the solid and dashed lines do not coincide in these figures).

B. Queues with Buffer

We now focus on queues with buffer. For this case, an update starts getting served upon its arrival to a queue, if the queue is idle. However, if the queue is busy, the incoming update is put in the last position of the queue and, if the queue is full, the last update of the buffer is replaced by the new one. In this section, we aim to compare the average AoI of a system with one queue and buffer size two with that of two parallel queues with buffer size one. We first compute the average AoI of the former system and then of the latter one.

B.1 The M/M/1/3* Queue

We concentrate on a system formed by a queue with a buffer of size two. When an update arrives and the system is empty, it gets served immediately. However, if an update arrives when another packet is in service, it replaces the last update in the queue if the buffer is full and, otherwise, it is put in the last position of the queue. This system will be denoted in the remainder of the paper as the M/M/1/3* queue.

Fig. 3.
Representation of the models under comparison in Section III.A.3 for two sources: (a) M/M/1/1 queue with arrival rate [TeX:] $$\lambda / 2,$$ (b) M/M/1/1 queue with service rate [TeX:] $$2 \mu,$$ and (c) Two parallel M/M/1/1 queues.
Fig. 4.
Average AoI of source 1 when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with a single source and without losses [TeX:] $$\left(\lambda_{k}=0 \text { for all } k>1 \text { and } \theta=0\right) . \mu=1.$$
Fig. 5.
Average AoI of source 1 when [TeX:] $$\lambda_{1}$$ varies from [TeX:] $$0.1 \text { to } 10^{3}$$ with multiple sources and without losses [TeX:] $$\left(\sum_{k>1} \lambda_{k}=10 \text { and } \theta=0\right) . \mu=1.$$
Fig. 6.
Average AoI of source 1 when [TeX:] $$\lambda_{1}$$ varies from [TeX:] $$0.1 \text { to } 10^{3}$$ with a single source and losses [TeX:] $$\left(\lambda_{k}=0 \text { for all } k>1 \text { and } \theta=10\right) . \mu=1.$$
Fig. 7.
Average AoI of source 1 when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with multiple sources and losses [TeX:] $$\left(\sum_{k>1} \lambda_{k}=10 \text { and } \theta=10\right) . \mu=1.$$
Fig. 8.
The SHS Markov chain for the M/M/1/3* queue with multiple sources and losses of packets in service.

When traffic comes from n different sources, we are interested in computing the average AoI of source 1. Updates of source 1 arrive to the queue according to a Poisson process of rate [TeX:] $$\lambda_{1}$$ and those of the rest of the sources according to a Poisson process of rate [TeX:] $$\sum_{k>1} \lambda_{k}.$$ We assume that the updates waiting in the queue are served according to the FCFS discipline, that the service time is exponentially distributed with rate [TeX:] $$\mu,$$ and that the update in service is lost after an exponentially distributed time with rate [TeX:] $$\theta \text {. }$$

We employ the SHS method to calculate the average AoI of this system. The continuous state is given by [TeX:] $$\mathrm{x}(t)=\left[x_{0}(t)\, x_{1}(t)\, x_{2}(t)\, x_{3}(t)\right],$$ where the correspondence between [TeX:] $$x_{i}(t)$$ and each element in the system is as follows: [TeX:] $$x_{0}$$ is the age of the monitor, [TeX:] $$x_{1}$$ is the age if the update in service is delivered, and [TeX:] $$x_{2} \text { and } x_{3}$$ are, respectively, the ages if the updates in the first and second positions of the queue are delivered. The discrete state is a four-state Markov chain, where state k represents that there are k updates present in the system, with k = 0, 1, 2, and 3. The Markov chain under consideration and the SHS transitions are represented, respectively, in Fig. 8 and Table 5.

Table 5.
Table of transitions of Fig. 8.

We now explain each transition l:

l = 0: The system is empty and an update of source 1 arrives. The age of the monitor is not modified and we set [TeX:] $$x_{1}^{\prime}=0.$$

l = 1: The system is empty and an update of another source arrives. The age of the monitor is not modified and the age [TeX:] $$x_{1}$$ changes to [TeX:] $$x_{0}, \text { i.e., } x_{1}^{\prime}=x_{0}.$$

l = 2: There is an update in service and the queue is empty, and the update in service is delivered. The age of the monitor changes to [TeX:] $$x_{1} \text {, i.e., } x_{0}^{\prime}=x_{1}.$$

l = 3: The queue is empty and the update in service is lost. For this case, the age of the monitor does not change and the age of [TeX:] $$x_{1}$$ is replaced by zero.

l = 4: The queue is busy and an update of source 1 arrives. The age of the monitor and that of [TeX:] $$x_{1}$$ are not modified and we set [TeX:] $$x_{2}^{\prime}=0.$$

l = 5: The queue is busy and an update of another source arrives. The age of the monitor and that of [TeX:] $$x_{1}$$ are not modified and the age [TeX:] $$x_{2}$$ changes to [TeX:] $$x_{1}, \text { i.e., } x_{2}^{\prime}=x_{1}.$$

l = 6: There are two updates in the system and the update in service is delivered. Therefore, the age of the monitor changes to [TeX:] $$x_{1}$$ and the age [TeX:] $$x_{1} \text { to } x_{2}, \text { i.e., } x_{0}^{\prime}=x_{1} \text { and } x_{1}^{\prime}=x_{2}.$$

l = 7: There are two updates in the system and the update in service is lost. For this case, the age of the monitor does not change, but the age [TeX:] $$x_{1}$$ is replaced by [TeX:] $$x_{2}, \text { i.e., } x_{1}^{\prime}=x_{2},$$ since the update that was waiting starts being served.

l = 8: There are two updates in the system and an update of source 1 arrives. The ages of the updates that are present in the system do not change and we set [TeX:] $$x_{3}^{\prime}=0.$$

l = 9: There are two updates in the system and an update of another source arrives. The ages of the updates that are present in the system do not change and the age [TeX:] $$x_{3}$$ changes to [TeX:] $$x_{2}, \text { i.e., } x_{3}^{\prime}=x_{2} .$$

l = 10: The system is full and the update in service is delivered. For this case, the age of the monitor changes to [TeX:] $$x_{1},$$ the age of [TeX:] $$x_{1} \text { to } x_{2}$$ and the age of [TeX:] $$x_{2} \text { to } x_{3}.$$

l = 11: The system is full and the update in service is lost. For this case, the age of the monitor does not change, but the age of [TeX:] $$x_{1}$$ changes to [TeX:] $$x_{2}$$ and the age of [TeX:] $$x_{2} \text { to } x_{3}.$$

l = 12: The system is full and an update of source 1 arrives. The ages of the monitor, [TeX:] $$x_{1}, \text { and } x_{2}$$ are not modified and we set [TeX:] $$x_{3}^{\prime}=0$$.

l = 13: The system is full and an update of another source arrives. The ages of the monitor, of [TeX:] $$x_{1} \text { and of } x_{2}$$ do not change, but the age of [TeX:] $$x_{3} \text { is set to } x_{2}, \text { i.e., } x_{3}^{\prime}=x_{2}.$$

Let [TeX:] $$\lambda=\sum_{k=1}^{n} \lambda_{k} \text { and } \rho=\lambda /(\mu+\theta).$$ The stationary distribution of the Markov chain in Fig. 8 is

[TeX:] $$\pi_{j}=\frac{\rho^{j}}{1+\rho+\rho^{2}+\rho^{3}}, j=0,1,2,3.$$
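As a quick sanity check, the distribution above is the stationary law of a birth-death chain with up-rate [TeX:] $$\lambda$$ and down-rate [TeX:] $$\mu+\theta.$$ A minimal sketch (the rates below are arbitrary example values, not taken from the paper) verifies normalization and balance:

```python
# Stationary distribution of the M/M/1/3* chain: pi_j proportional to rho^j.
# Example rates only; any positive values work.
lam, mu, theta = 1.2, 1.0, 0.5
rho = lam / (mu + theta)
Z = sum(rho**j for j in range(4))
pi = [rho**j / Z for j in range(4)]

assert abs(sum(pi) - 1.0) < 1e-12
# Birth-death balance: lam * pi_j = (mu + theta) * pi_{j+1}
for j in range(3):
    assert abs(lam * pi[j] - (mu + theta) * pi[j + 1]) < 1e-12
```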

We now define the vector [TeX:] $$\mathrm{b}_{\mathrm{q}}$$ for each state q of the Markov chain of Fig. 8: [TeX:] $$b_{0}=\left[\begin{array}{llll} 1 & 0 & 0 & 0 \end{array}\right], b_{1}=\left[\begin{array}{llll} 1 & 1 & 0 & 0 \end{array}\right], b_{2}=\left[\begin{array}{llll} 1 & 1 & 1 & 0 \end{array}\right]$$ and [TeX:] $$\mathbf{b}_{3}=\left[\begin{array}{llll} 1 & 1 & 1 & 1 \end{array}\right].$$ Besides, for each state [TeX:] $$q \in\{0,1,2,3\}, \mathbf{v}_{\mathbf{q}}=\left[v_{q}(0)\, v_{q}(1)\, v_{q}(2)\, v_{q}(3)\right].$$ From Theorem 4 in [27], we have that the average AoI of the M/M/1/3* queue is [TeX:] $$v_{0}(0)+ v_{1}(0)+v_{2}(0)+v_{3}(0), \text { where } v_{q}(0)$$ is the solution of the following system of equations:

(7)
[TeX:] $$\lambda \mathbf{v}_{\mathbf{0}}=\left[\begin{array}{llll} \pi_{0} & 0 & 0 & 0 \end{array}\right]+\mu\left[\begin{array}{llll} v_{1}(1) & 0 & 0 & 0 \end{array}\right]+\theta\left[\begin{array}{llll} v_{1}(0) & 0 & 0 & 0 \end{array}\right],$$

(8)
[TeX:] $$\begin{aligned} (\lambda+\mu+\theta) \mathrm{v}_{1}=& {\left[\pi_{1} \pi_{1} 00\right]+\lambda_{1}\left[v_{0}(0) 000\right] } \\ &+\sum_{k>1} \lambda_{k}\left[v_{0}(0) v_{0}(0) 00\right] \\ &+\mu\left[v_{2}(1) v_{2}(2) 00\right] \\ &+\theta\left[v_{2}(0) v_{2}(2) 00\right], \end{aligned}$$

(9)
[TeX:] $$\begin{aligned} (\lambda+\mu+\theta) \mathrm{v}_{2}=& {\left[\pi_{2} \pi_{2} \pi_{2} 0\right]+\lambda_{1}\left[v_{1}(0) v_{1}(1) 00\right] } \\ &+\sum_{k>1} \lambda_{k}\left[v_{1}(0) v_{1}(1) v_{1}(1) 0\right] \\ &+\mu\left[v_{3}(1) v_{3}(2) v_{3}(3) 0\right] \\ &+\theta\left[v_{3}(0) v_{3}(2) v_{3}(3) 0\right], \end{aligned}$$

(10)
[TeX:] $$\begin{aligned} (\lambda+\mu+\theta) \mathrm{v}_{3}=& {\left[\pi_{3} \pi_{3} \pi_{3} \pi_{3}\right]+\lambda_{1}\left[v_{2}(0) v_{2}(1) v_{2}(2) 0\right] } \\ &+\sum_{k>1} \lambda_{k}\left[v_{2}(0) v_{2}(1) v_{2}(2) v_{2}(2)\right] \\ &+\lambda_{1}\left[v_{3}(0) v_{3}(1) v_{3}(2) 0\right] \\ &+\sum_{k>1} \lambda_{k}\left[v_{3}(0) v_{3}(1) v_{3}(2) v_{3}(2)\right]. \end{aligned}$$

Since the first equation has three irrelevant variables and the second and third equations have respectively two and one irrelevant variables, the above expression can be alternatively written as a system of 10 equations.

Proposition 3: The average AoI of source 1 in the aforementioned system is given by [TeX:] $$v_{0}(0)+v_{1}(0)+v_{2}(0)+v_{3}(0),$$ where for [TeX:] $$q \in\{0,1,2,3\}, v_{q}(0)$$ is the solution of (7)–(10).

B.2 Two Parallel M/M/1/2* Queues

We consider a system formed by two parallel queues with buffer size equal to one and n different sources. The packets are dispatched according to a predefined probabilistic routing. An update of source 1 arrives to the system with rate [TeX:] $$\lambda_{1}$$ and it is sent to queue j with probability [TeX:] $$p_{1 j}.$$ Therefore, the arrival rate of source 1 to queue j is [TeX:] $$\lambda_{1} p_{1 j}.$$ Besides, an update of the rest of the sources arrives to the system with rate [TeX:] $$\sum_{k>1} \lambda_{k}$$ and an update of source [TeX:] $$k \neq 1$$ is sent to queue j with probability [TeX:] $$p_{k j}.$$ Therefore, the arrival rate of the rest of the sources to queue j is [TeX:] $$\sum_{k>1} \lambda_{k} p_{k j}.$$ If an update finds the queue full, it replaces the update that is waiting in the queue, whereas if the queue is idle, it starts being served immediately. This system will be denoted as two parallel M/M/1/2* queues.

Fig. 9.
The SHS Markov chain for the system with two parallel M/M/1/2* queues with multiple sources and losses of packets in service.

We assume that the service time of queue j is exponentially distributed with rate [TeX:] $$\mu_{j}$$ and that updates/packets in service in queue j are lost with exponential time of rate [TeX:] $$\theta_{j}, j=1,2.$$ We assume that the queues are decentralized in the sense that they do not communicate between each other.

We seek to compute the average AoI of this system using the SHS method. The continuous state is [TeX:] $$\mathrm{x}(t)=\left[x_{0}(t)\, x_{11}(t)\, x_{12}(t)\, x_{21}(t)\, x_{22}(t)\right],$$ where the correspondence between each component and each element of the system is as follows: [TeX:] $$x_{0}$$ is the age of the monitor, [TeX:] $$x_{j 1}$$ is the age if the update/packet in service in queue j is delivered, and [TeX:] $$x_{j 2}$$ is the age if the update waiting for service in queue j is delivered. The discrete state is described by a Markov chain, where state [TeX:] $$k_{1} k_{2}$$ denotes that there are [TeX:] $$k_{1}$$ updates in queue 1 and [TeX:] $$k_{2}$$ in queue 2, with [TeX:] $$k_{1}, k_{2} \in\{0,1,2\}.$$ The Markov chain we study is depicted in Fig. 9. We note that some of the links are unified to avoid heavy notation. The SHS transitions for this model are reported in Appendix VI.

Let [TeX:] $$\rho_{1}=\left(\lambda_{1} p_{11}+\sum_{k>1} \lambda_{k} p_{k 1}\right) /\left(\mu_{1}+\theta_{1}\right)$$ and [TeX:] $$\rho_{2}= \left(\lambda_{1} p_{12}+\sum_{k>1} \lambda_{k} p_{k 2}\right) /\left(\mu_{2}+\theta_{2}\right).$$ The stationary distribution of the Markov chain in Fig. 9 is

[TeX:] $$\pi_{k_{1} k_{2}}=\frac{\rho_{1}^{k_{1}} \rho_{2}^{k_{2}}}{\left(1+\rho_{1}+\rho_{1}^{2}\right)\left(1+\rho_{2}+\rho_{2}^{2}\right)}, k_{1}, k_{2} \in\{0,1,2\}.$$
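Because the two decentralized queues are fed by independent Poisson streams, the joint chain factorizes into two independent birth-death chains, which is what the product form above expresses. A small sketch (the helper name and loads are illustrative) checks normalization and the factorization:

```python
def pi_joint(rho1, rho2):
    """Product-form stationary distribution of the two parallel
    M/M/1/2* queues: pi_{k1 k2} = (rho1^k1 / z1) * (rho2^k2 / z2)."""
    z1 = 1 + rho1 + rho1**2
    z2 = 1 + rho2 + rho2**2
    return {(k1, k2): rho1**k1 * rho2**k2 / (z1 * z2)
            for k1 in range(3) for k2 in range(3)}

pi = pi_joint(0.8, 1.5)                      # example loads, arbitrary
assert abs(sum(pi.values()) - 1.0) < 1e-12   # proper distribution
# marginal of queue 1 is the single-queue distribution rho1^k / z1
m1 = [sum(pi[k1, k2] for k2 in range(3)) for k1 in range(3)]
z1 = 1 + 0.8 + 0.8**2
assert all(abs(m1[k] - 0.8**k / z1) < 1e-12 for k in range(3))
```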

Let [TeX:] $$\hat{\mu}=\mu_{1}+\mu_{2}, \hat{\theta}=\theta_{1}+\theta_{2}, \text { and } \mathcal{Q}=\{00,10,20,01,11,21,02,12,22\}.$$ For every [TeX:] $$q \in \mathcal{Q},$$ we define [TeX:] $$\mathbf{v}_{\mathbf{q}}=\left[v_{q}(0)\, v_{q}(1)\, v_{q}(2)\, v_{q}(3)\, v_{q}(4)\right]$$ and the vector [TeX:] $$\mathrm{b}_{\mathrm{q}}$$ as follows: [TeX:] $$b_{00}=\left[\begin{array}{lllll} 1 & 0 & 0 & 0 & 0 \end{array}\right], b_{10}=\left[\begin{array}{lllll} 1 & 1 & 0 & 0 & 0 \end{array}\right], b_{20}=\left[\begin{array}{lllll} 1 & 1 & 1 & 0 & 0 \end{array}\right], b_{01}=\left[\begin{array}{lllll} 1 & 0 & 0 & 1 & 0 \end{array}\right], b_{11}=\left[\begin{array}{lllll} 1 & 1 & 0 & 1 & 0 \end{array}\right], \mathrm{b}_{21}=\left[\begin{array}{lllll} 1 & 1 & 1 & 1 & 0 \end{array}\right], \mathrm{b}_{02}=\left[\begin{array}{lllll} 1 & 0 & 0 & 1 & 1 \end{array}\right], \mathrm{b}_{12}=\left[\begin{array}{lllll} 1 & 1 & 0 & 1 & 1 \end{array}\right],$$ and [TeX:] $$b_{22}=\left[\begin{array}{lllll} 1 & 1 & 1 & 1 & 1 \end{array}\right].$$ We use the result of Theorem 4 in [27], which shows that the average AoI of this system is given by [TeX:] $$\sum_{q \in \mathcal{Q}} v_{q}(0), \text { where } v_{q}(0)$$ is the solution to the following system of equations:

(11)
[TeX:] $$\begin{aligned} \lambda \mathbf{v}_{\mathbf{00}}=& {\left[\pi_{00} 0000\right]+\mu_{1}\left[v_{10}(1) 0000\right] } \\ &+\theta_{1}\left[v_{10}(0) 0000\right]+\mu_{2}\left[v_{01}(3) 0000\right] \\ &+\theta_{2}\left[v_{01}(0) 0000\right], \end{aligned}$$

(12)
[TeX:] $$\begin{aligned} \left(\lambda+\mu_{1}+\theta_{1}\right) \mathrm{v}_{10}=& {\left[\pi_{10} \pi_{10} 000\right] } \\ &+\lambda_{1} p_{11}\left[v_{00}(0) 0000\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{00}(0) v_{00}(0) 000\right] \\ &+\mu_{2}\left[v_{11}(3) v_{11}(1) 000\right] \\ &+\theta_{2}\left[v_{11}(0) v_{11}(1) 000\right] \\ &+\mu_{1}\left[v_{20}(1) v_{20}(2) 000\right] \\ &+\theta_{1}\left[v_{20}(0) v_{20}(2) 000\right], \end{aligned}$$

(13)
[TeX:] $$\begin{aligned} \left(\lambda+\mu_{1}+\theta_{1}\right) \mathrm{v}_{\mathbf{20}}=& {\left[\pi_{20} \pi_{20} \pi_{20} 00\right] } \\ &+\mu_{2}\left[v_{21}(3) v_{21}(1) v_{21}(2) 00\right] \\ &+\theta_{2}\left[v_{21}(0) v_{21}(1) v_{21}(2) 00\right] \\ &+\lambda_{1} p_{11}\left[v_{10}(0) v_{10}(1) 000\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{10}(0) v_{10}(1) v_{10}(1) 00\right] \\ &+\lambda_{1} p_{11}\left[v_{20}(0) v_{20}(1) 000\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{20}(0) v_{20}(1) v_{20}(1) 00\right], \end{aligned}$$

(14)
[TeX:] $$\begin{aligned} \left(\lambda+\mu_{2}+\theta_{2}\right) \mathrm{v}_{01}=& {\left[\pi_{01} 00 \pi_{01} 0\right] } \\ &+\lambda_{1} p_{12}\left[v_{00}(0) 0000\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{00}(0) 00 v_{00}(0) 0\right] \\ &+\mu_{2}\left[v_{02}(3) 00 v_{02}(4) 0\right] \\ &+\theta_{2}\left[v_{02}(0) 00 v_{02}(4) 0\right] \\ &+\mu_{1}\left[v_{11}(1) 00 v_{11}(3) 0\right] \\ &+\theta_{1}\left[v_{11}(0) 00 v_{11}(3) 0\right], \end{aligned}$$

(15)
[TeX:] $$\begin{aligned} (\lambda+\hat{\mu}+\hat{\theta}) \mathrm{v}_{\mathbf{11}}=& {\left[\pi_{11} \pi_{11} 0 \pi_{11} 0\right] } \\ &+\lambda_{1} p_{11}\left[v_{01}(0) 00 v_{01}(3) 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{01}(0) v_{01}(0) 0 v_{01}(3) 0\right] \\ &+\lambda_{1} p_{12}\left[v_{10}(0) v_{10}(1) 000\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{10}(0) v_{10}(1) 0 v_{10}(0) 0\right] \\ &+\mu_{2}\left[v_{12}(3) v_{12}(1) 0 v_{12}(4) 0\right] \\ &+\theta_{2}\left[v_{12}(0) v_{12}(1) 0 v_{12}(4) 0\right] \\ &+\mu_{1}\left[v_{21}(1) v_{21}(2) 0 v_{21}(3) 0\right] \\ &+\theta_{1}\left[v_{21}(0) v_{21}(2) 0 v_{21}(3) 0\right], \end{aligned}$$

(16)
[TeX:] $$\begin{aligned} (\lambda+\hat{\mu}+\hat{\theta}) \mathbf{v}_{\mathbf{2 1}} &=\left[\pi_{21} \pi_{21} \pi_{21} \pi_{21} 0\right] \\ &+\lambda_{1} p_{11}\left[v_{11}(0) v_{11}(1) 0 v_{11}(3) 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{11}(0) v_{11}(1) v_{11}(1) v_{11}(3) 0\right] \\ &+\lambda_{1} p_{11}\left[v_{21}(0) v_{21}(1) 0 v_{21}(3) 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{21}(0) v_{21}(1) v_{21}(1) v_{21}(3) 0\right] \\ &+\lambda_{1} p_{12}\left[v_{20}(0) v_{20}(1) v_{20}(2) 00\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{20}(0) v_{20}(1) v_{20}(2) v_{20}(0) 0\right] \\ &+\mu_{2}\left[v_{22}(3) v_{22}(1) v_{22}(2) v_{22}(4) 0\right] \\ &+\theta_{2}\left[v_{22}(0) v_{22}(1) v_{22}(2) v_{22}(4) 0\right], \end{aligned}$$

(17)
[TeX:] $$\begin{aligned} \left(\lambda+\mu_{2}+\theta_{2}\right) \mathbf{v}_{\mathbf{02}}=& {\left[\pi_{02} 00 \pi_{02} \pi_{02}\right] } \\ &+\lambda_{1} p_{12}\left[v_{01}(0) 00 v_{01}(3) 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{01}(0) 00 v_{01}(3) v_{01}(3)\right] \\ &+\lambda_{1} p_{12}\left[v_{02}(0) 00 v_{02}(3) 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{02}(0) 00 v_{02}(3) v_{02}(3)\right] \\ &+\mu_{1}\left[v_{12}(1) 00 v_{12}(3) v_{12}(4)\right] \\ &+\theta_{1}\left[v_{12}(0) 00 v_{12}(3) v_{12}(4)\right], \end{aligned}$$

(18)
[TeX:] $$\begin{aligned} (\lambda+\hat{\mu}+\hat{\theta}) \mathbf{v}_{12} &=\left[\pi_{12} \pi_{12} 0 \pi_{12} \pi_{12}\right] \\ &+\lambda_{1} p_{11}\left[v_{02}(0) 00 v_{02}(3) v_{02}(4)\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{02}(0) v_{02}(0) 0 v_{02}(3) v_{02}(4)\right] \\ &+\lambda_{1} p_{12}\left[v_{11}(0) v_{11}(1) 0 v_{11}(3) 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{11}(0) v_{11}(1) 0 v_{11}(3) v_{11}(3)\right] \\ &+\lambda_{1} p_{12}\left[v_{12}(0) v_{12}(1) 0 v_{12}(3) 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{12}(0) v_{12}(1) 0 v_{12}(3) v_{12}(3)\right] \\ &+\mu_{1}\left[v_{22}(1) v_{22}(2) 0 v_{22}(3) v_{22}(4)\right] \\ &+\theta_{1}\left[v_{22}(0) v_{22}(2) 0 v_{22}(3) v_{22}(4)\right], \end{aligned}$$

(19)
[TeX:] $$\begin{aligned} (\lambda+\hat{\mu}+\hat{\theta}) \mathrm{v}_{\mathbf{2 2}} &=\left[\pi_{22} \pi_{22} \pi_{22} \pi_{22} \pi_{22}\right] \\ &+\lambda_{1} p_{11}\left[v_{12}(0) v_{12}(1) 0 v_{12}(3) v_{12}(4)\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{12}(0) v_{12}(1) v_{12}(1) v_{12}(3) v_{12}(4)\right] \\ &+\lambda_{1} p_{11}\left[v_{22}(0) v_{22}(1) 0 v_{22}(3) v_{22}(4)\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 1}\left[v_{22}(0) v_{22}(1) v_{22}(1) v_{22}(3) v_{22}(4)\right] \\ &+\lambda_{1} p_{12}\left[v_{21}(0) v_{21}(1) v_{21}(2) v_{21}(3) 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{21}(0) v_{21}(1) v_{21}(2) v_{21}(3) v_{21}(3)\right] \\ &+\lambda_{1} p_{12}\left[v_{22}(0) v_{22}(1) v_{22}(2) v_{22}(3) 0\right] \\ &+\sum_{k>1} \lambda_{k} p_{k 2}\left[v_{22}(0) v_{22}(1) v_{22}(2) v_{22}(3) v_{22}(3)\right]. \end{aligned}$$

From the above expressions, if we remove the irrelevant variables, we obtain a system of 27 equations.

Proposition 4: The average AoI of source 1 in the aforementioned system is given by [TeX:] $$\sum_{q \in \mathcal{Q}} v_{q}(0), \text { where } v_{q}(0)$$ is the solution of (11)–(19).

B.3 Age Comparison

We compare the average AoI of the models presented in this section. We focus on the following three systems. First, we consider an M/M/1/3* queue with arrival rate of source 1 equal to [TeX:] $$\lambda_{1} / 2$$ and that of the rest of the sources [TeX:] $$(1 / 2) \sum_{k>1} \lambda_{k},$$ loss rate [TeX:] $$\theta / 2$$ and service rate [TeX:] $$\mu$$ (see Fig. 10(a)). The average AoI of this model is represented with a solid line. The second system we consider is an M/M/1/3* queue with arrival rate of source 1 equal to [TeX:] $$\lambda_{1}$$ and that of the rest of the sources [TeX:] $$\sum_{k>1} \lambda_{k},$$ loss rate [TeX:] $$\theta$$ and service rate [TeX:] $$2 \mu$$ (see Fig. 10(b)). The average AoI of this model is represented with a dotted line. We also consider a system with two parallel M/M/1/2* queues with arrival rate of source 1 equal to [TeX:] $$\lambda_{1}$$ and that of the rest of the sources [TeX:] $$\sum_{k>1} \lambda_{k}.$$ Each of the servers satisfies that [TeX:] $$p_{k 1}=p_{k 2}=1 / 2 \text { for all } k=1, \cdots, n,$$ has a loss rate equal to [TeX:] $$\theta / 2$$ and a service rate [TeX:] $$\mu$$ (see Fig. 10(c)). The average AoI of the latter model is represented with a dashed line. We aim to investigate which system has the smallest average AoI when [TeX:] $$\lambda_{1}$$ varies. Thus, we have solved numerically the systems of equations in (7)–(10) and in (11)–(19). We set [TeX:] $$\mu=1$$ in these simulations. When we study the system with multiple sources, we consider that [TeX:] $$\sum_{k>1} \lambda_{k}=10$$ and in the case of packet losses, we set [TeX:] $$\theta=10.$$

We first focus on the average AoI for a single source and when there are no packet losses. The evolution of the AoI of source 1 with respect to [TeX:] $$\lambda_{1}$$ is represented in Fig. 11. We observe that for the M/M/1/3* queue with service rate [TeX:] $$2 \mu,$$ the average AoI coincides with that of the two parallel M/M/1/2* queues when [TeX:] $$\lambda_{1}$$ is either very small or very large. Another interesting property obtained from these simulations is that the average AoI of the M/M/1/3* queue is not monotone in [TeX:] $$\lambda_{1}.$$ This phenomenon is due to the FCFS discipline and the presence of the buffer of size 2 and can be interpreted as follows. For small arrival rates, all the packets will be delivered directly without staying too much time in the buffer and the system will behave like an M/M/1/1 queue. Therefore, when the arrival rate increases and on average there is only one packet (or less) in the server, the AoI will keep decreasing since more fresh packets improves the AoI. We can observe in this figure that the minimum AoI is achieved when [TeX:] $$\lambda_{1}=\mu$$ (which means that on average we have one packet in the server as explained above). Then, when the arrival rate keeps increasing, there will be always packets in the buffer (in addition to the packet in the server) and the arrived packets will be delayed, which will increase the AoI. When the arrival rate grows very large, the packet in the second place in the buffer will be constantly replaced by the new arrived packet. However, there will be always a delay due the fact that the packet in the first place of the buffer should wait until the packet in the server is delivered. In other words, the AoI will converge to an asymptotic value. 
However, this asymptotic value is greater than the AoI when [TeX:] $$\lambda=\mu$$ since in that case there is on average one packet in the system (i.e., the packet is directly served by the server) and hence the packets are not delayed by the buffer. In addition to these results, we provide in Appendix B some results obtained by simulations for a single M/M/1/3* queue. As expected, the results confirm the accuracy of the SHS method used to evaluate the average AoI.

Fig. 10.
Representation of the models under comparison in Section III.B.3 for two sources: (a) M/M/1/3* queue with arrival rate [TeX:] $$\lambda / 2,$$ (b) M/M/1/3* queue with service rate [TeX:] $$2 \mu,$$ and (c) Two parallel M/M/1/2* queues.
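A simulation of this kind can be reproduced with a short event-driven sketch of the single-source M/M/1/3* queue without losses (FCFS, a buffer of two, and a new arrival replacing the last waiting update when the buffer is full). This is our own minimal sketch, not the authors' simulator; the horizon and seed are illustrative:

```python
import random

def simulate_aoi(lam, mu, T=1e5, seed=1):
    """Event-driven simulation of a single-source M/M/1/3* queue without
    losses. Returns the time-average AoI over the horizon T."""
    rng = random.Random(seed)
    t = 0.0
    next_arr = rng.expovariate(lam)
    next_dep = float('inf')
    in_service = None        # generation time of the update in service
    buf = []                 # generation times of the waiting updates (<= 2)
    g = 0.0                  # generation time of the last delivered update
    area = 0.0               # integral of the instantaneous age
    while t < T:
        t_next = min(next_arr, next_dep)
        # the age grows linearly from t - g to t_next - g on [t, t_next]
        area += (t_next - t) * ((t - g) + (t_next - g)) / 2.0
        t = t_next
        if t == next_dep:    # service completion: the monitor is updated
            g = max(g, in_service)
            if buf:
                in_service = buf.pop(0)
                next_dep = t + rng.expovariate(mu)
            else:
                in_service, next_dep = None, float('inf')
        else:                # arrival of a fresh update, generated at time t
            if in_service is None:
                in_service = t
                next_dep = t + rng.expovariate(mu)
            elif len(buf) < 2:
                buf.append(t)
            else:
                buf[-1] = t  # buffer full: replace the last waiting update
            next_arr = t + rng.expovariate(lam)
    return area / t
```

Consistent with the behavior described above, on such a run the estimate at a low arrival rate (e.g., `simulate_aoi(0.1, 1.0)`) exceeds the one near [TeX:] $$\lambda_{1}=\mu$$ (e.g., `simulate_aoi(1.0, 1.0)`).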

In Fig. 12, we study the average AoI for a system with multiple sources and without losses. We see that the AoI of the two parallel M/M/1/2* queues coincides with that of the M/M/1/3* queue with half traffic rate when [TeX:] $$\lambda$$ is small, whereas it coincides with that of the M/M/1/3* queue with double service rate when [TeX:] $$\lambda$$ is large. It is worth mentioning that we consider here that the arrival rate of the rest of the sources is [TeX:] $$\sum_{k>1} \lambda_{k}=10,$$ which implies that there will be always packets in the server and in the buffer and the system cannot behave as an M/M/1/1 by changing [TeX:] $$\lambda_{1}$$ of the first source. This explains why the average AoI decreases with [TeX:] $$\lambda_{1}$$ until reaching a limiting value and the average AoI does not have the same shape as in Fig. 11.

We also study the average AoI with a single source and losses in Fig. 13. For this case, the AoI of the system with two parallel M/M/1/2* queues and of an M/M/1/3* queue with double service rate coincide when [TeX:] $$\lambda_{1}$$ is either small or large. In this case, we can see that the average AoI decreases with the arrival rate [TeX:] $$\lambda_{1}.$$ This can be explained by the fact that, since the packets can get lost, it is better from AoI perspective to have more arrived packets (even if these packets are delayed in the buffer).

Finally, in Fig. 14, we show the average AoI for different values of [TeX:] $$\lambda_{1}$$ when there are multiple sources and losses. This illustration shows that, depending on the value of [TeX:] $$\lambda_{1},$$ the AoI approaches either that of an M/M/1/3* queue with half arrival rate and half loss rate or that of an M/M/1/3* queue with double service rate.

The main conclusion of these illustrations is that, from an AoI perspective, the M/M/1/3* queue with double service rate is the optimal one among the systems under consideration. Besides, we characterize the instances where the AoI of the two parallel queues coincides with the optimal AoI.

C. Parallel Queues with Buffer Size [TeX:] $$N>1$$

We now focus on the study of the average AoI in a system with K parallel queues with buffer size [TeX:] $$N>1.$$ We notice that using the SHS method leads to the analysis of a Markov chain with a number of states equal to [TeX:] $$(N+2)^{K}.$$ This implies that the number of SHS transitions increases at a very high rate with the number of queues and with the buffer size. As a result, according to (1), the number of equations to be solved so as to obtain the AoI suffers from the curse of dimensionality. Thus, providing an analytical expression of the AoI of source 1 in a system with an arbitrary routing system (with an arbitrary number of queues and an arbitrary buffer size) seems to be intractable using the considered method. However, as we will see in the next section, it is possible to provide an upper bound on the AoI.

Fig. 11.
Average AoI comparison when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with a single source and without losses [TeX:] $$\left(\lambda_{k}=0 \text { for all } k>1 \text { and } \theta=0\right) . \mu=1.$$
Fig. 12.
Average AoI comparison when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with multiple sources and without losses [TeX:] $$\left(\sum_{k>1} \lambda_{k}=10 \text { and } \theta=0\right) . \mu=1.$$
Fig. 13.
Average AoI comparison when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with a single source and losses [TeX:] $$\left(\lambda_{k}=0 \text { for all } k>1 \text { and } \theta=10\right) . \mu=1.$$

Fig. 14.
Average AoI comparison when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with multiple sources and losses [TeX:] $$\left(\sum_{k>1} \lambda_{k}=10 \text { and } \theta=10\right) . \mu=1.$$

IV. UPPER-BOUND ON THE AVERAGE AOI FOR AN ARBITRARY ROUTING SYSTEM

We study the average AoI of a system with [TeX:] $$K>2$$ parallel queues, each with buffer size [TeX:] $$N>1.$$ In this section, we provide an upper bound on the age of information, obtained using the SHS method, for systems with a single source and with multiple sources.

We now explain the system under study. We consider a system with n sources where, for all i, the updates of source i arrive to the system with rate [TeX:] $$\lambda_{i}.$$ We denote by [TeX:] $$p_{i j}$$ the probability that a job of source i is routed to server j. Hence, [TeX:] $$\lambda_{i}=\sum_{j=1}^{K} \lambda_{i} p_{i j}.$$ Besides, the total incoming traffic to the system is denoted by [TeX:] $$\lambda, \text { i.e., } \lambda=\sum_{i=1}^{n} \lambda_{i}.$$ We assume that the service time in queue j is exponentially distributed with rate [TeX:] $$\mu_{j}.$$ In the following result, we provide an upper bound on the average AoI of the system under study. Without loss of generality, the result is stated for source 1; however, it can be obtained for any source i. The proof of this result is reported in Appendix V.

Theorem 2: For the aforementioned system, the average AoI of source 1 is upper bounded by

(20)
[TeX:] $$\frac{1}{\sum_{j=1}^{K} \mu_{j}}\left(1+K N+\sum_{j=1}^{K} \frac{\sum_{k>1} \lambda_{k} p_{k j}+\mu_{j}}{\lambda_{1} p_{1 j}}\right).$$
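As a sketch, the bound (20) is straightforward to evaluate numerically. The helper below (names and example parameters are ours) reads the denominator as [TeX:] $$\lambda_{1} p_{1 j},$$ the arrival rate of source 1 to queue j:

```python
def aoi_upper_bound(lam, p, mu, N):
    """Upper bound (20) on the average AoI of source 1.
    lam[i] : arrival rate of source i (source 1 is lam[0])
    p[i][j]: probability that source i routes to queue j
    mu[j]  : service rate of queue j
    N      : buffer size of each queue
    The denominator of each summand is read as lam_1 * p_{1j}."""
    K = len(mu)
    s = 0.0
    for j in range(K):
        cross = sum(lam[i] * p[i][j] for i in range(1, len(lam)))
        s += (cross + mu[j]) / (lam[0] * p[0][j])
    return (1.0 + K * N + s) / sum(mu)
```

For instance, with two symmetric queues, uniform routing, [TeX:] $$\lambda_{1}=1, \sum_{k>1}\lambda_{k}=10, \mu=1, N=1,$$ the helper returns `aoi_upper_bound([1.0, 10.0], [[0.5, 0.5], [0.5, 0.5]], [1.0, 1.0], 1) = 13.5`, and the bound decreases as [TeX:] $$\lambda_{1}$$ grows.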

It is also important to note that, to obtain the above result, we consider that when an update completes service, we create a fake update to keep the system full of packets. This is the reason why, unlike in the previous section, we are not able to study the influence of the packet losses on the upper bound of the average AoI provided in Theorem 2. In fact, let us consider that N = 0 as an example. In this case, when a packet in service is lost, we put a fake update with the same age as the lost one in service. This fake update modifies the age of the monitor when it is served and delivered to the monitor. As a result, the system behaves like a system with no packet losses (but with a different service rate). This explains why the above result cannot capture the impact of packet losses.

Fig. 15.
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for two parallel M/M/1/2* queues with a single source [TeX:] $$\left(\lambda_{2}=0\right).$$
Fig. 16.
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for two parallel M/M/1/2* queues with multiple sources [TeX:] $$\left(\lambda_{2}=10\right). \mu=1.$$
A. Tightness of the Upper Bound

We now aim to explore whether the upper bound on the average AoI is tight for the systems we have studied in Section III. We consider [TeX:] $$\mu=1$$ and, when we analyze the AoI for multiple sources, we fix the arrival rate of the rest of the sources to 10, that is, [TeX:] $$\sum_{k>1} \lambda_{k}=10.$$

We first study in Figs. 15 and 16, a system formed by 2 parallel queues with equal arrival rate and service rate, i.e., [TeX:] $$K=2, N=1, p_{11}=p_{21}=p_{12}=p_{22}=1 / 2, \text { and } \mu_{1}=\mu_{2}=\mu.$$ For this case, we get from Theorem 2 that the upper bound for source 1 is

[TeX:] $$\frac{1}{2 \mu}\left(3+\frac{2\left(\sum_{k>1} \lambda_{k}+2 \mu\right)}{\lambda_{1}}\right).$$

As we observe in Fig. 15, the upper bound is tight when the arrival rate of source 1 is large enough, whereas in Fig. 16, we show that it is always very close to the real age.

We now focus on the influence of [TeX:] $$\mu$$ on the tightness of the upper bound of the average AoI in a system with two parallel M/M/1/2* queues and a single source. First, we consider [TeX:] $$\mu=0.1$$ in Fig. 17 and show that, when [TeX:] $$\lambda_{1}$$ is larger than 2, the upper bound is very tight. Then, we consider [TeX:] $$\mu=10$$ in Fig. 18 and show that, when [TeX:] $$\lambda_{1}$$ is larger than 10, the upper bound is accurate. Finally, we consider [TeX:] $$\mu=100$$ in Fig. 19 and observe that the upper bound is very close to the real age when [TeX:] $$\lambda_{1}$$ is larger than 1000.

We also investigate a system formed by one M/M/1/3* queue

Fig. 17.
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for two parallel M/M/1/2* queues with a single source [TeX:] $$\left(\lambda_{2}=0\right). \mu=0.1.$$
Fig. 18.
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for two parallel M/M/1/2* queues with a single source [TeX:] $$\left(\lambda_{2}=0\right). \mu=10.$$
Fig. 19.
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for two parallel M/M/1/2* queues with a single source [TeX:] $$\left(\lambda_{2}=0\right). \mu=100.$$

with arrival rate [TeX:] $$\lambda,$$ service rate [TeX:] $$\mu,$$ and multiple sources in Fig. 20. For this case, we have K = 1 and N = 2 and, therefore, from Theorem 2, the upper bound on the average AoI of source 1 is

[TeX:] $$\frac{1}{\mu}\left(3+\frac{\sum_{k>1} \lambda_{k}+\mu}{\lambda_{1}}\right).$$

As Fig. 20 shows, the upper bound is always very close to the real age for any value of [TeX:] $$\lambda_{1}.$$ Thus, this plot confirms that the upper bound on the average AoI we provide in this paper is very tight, in particular when the arrival rate is large.

Fig. 20.
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for one M/M/1/3* queue with multiple sources [TeX:] $$\left(\lambda_{2}=10\right). \mu=1.$$
B. AoI Comparison with a Single M/M/1/1 Queue

We now consider a system with a single source which is formed by K homogeneous queues without buffer (N = 0), i.e., [TeX:] $$\mu_{j}=\mu$$ for all [TeX:] $$j=1, \cdots, K.$$ Let [TeX:] $$\lambda_{j}=\lambda p_{1 j}$$ denote the arrival rate to queue j. According to the result of Theorem 2, the average AoI is upper bounded by

(21)
[TeX:] $$\frac{1}{K \mu}\left(1+\mu \sum_{j=1}^{K} \frac{1}{\lambda_{j}}\right).$$

We now aim to compare the above expression with the average AoI of a single M/M/1/1 queue with preemption of jobs in service, arrival rate [TeX:] $$\lambda / K$$ and service rate [TeX:] $$\mu,$$ which according to Theorem 2(a) in [27] is given by

[TeX:] $$\frac{K}{\lambda}+\frac{1}{\mu}.$$

In the following result, we compare the above expressions.

Proposition 5: Let [TeX:] $$K>1$$ and [TeX:] $$\lambda_{j}=\lambda / K$$ for all j. Then,

[TeX:] $$\frac{K}{\lambda}+\frac{1}{\mu}>\frac{1}{K \mu}\left(1+\mu \sum_{j=1}^{K} \frac{1}{\lambda_{j}}\right).$$

Proof: First, we note that, when [TeX:] $$\lambda_{j}=\lambda / K$$ for all j, we have that

[TeX:] $$\frac{1}{K \mu}\left(1+\mu \sum_{j=1}^{K} \frac{1}{\lambda_{j}}\right)=\frac{1}{K \mu}\left(1+\frac{\mu K^{2}}{\lambda}\right)=\frac{1}{K \mu}+\frac{K}{\lambda}.$$

Therefore, it suffices to show that

[TeX:] $$\frac{1}{K \mu}+\frac{K}{\lambda}<\frac{1}{\mu}+\frac{K}{\lambda} \Longleftrightarrow K>1,$$

and the desired result follows since [TeX:] $$K>1$$ by assumption.

An interesting consequence follows from the above proposition. When we consider a system formed by K parallel queues, each receiving the same arrival rate [TeX:] $$\lambda / K,$$ since the expression (21) provides an upper bound on the average AoI of the parallel system, the proposition implies that the average AoI of a single M/M/1/1 queue with arrival rate [TeX:] $$\lambda / K$$ is larger than that of the considered system.
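The comparison in Proposition 5 can be checked numerically with a short sketch (our own illustration, assuming the equal split [TeX:] $$\lambda_{j}=\lambda / K$$ used in the proof; function names are ours):

```python
def single_queue_aoi(lam, mu, K):
    """Average AoI of one M/M/1/1 queue with preemption and
    arrival rate lam / K (Theorem 2(a) in [27])."""
    return K / lam + 1.0 / mu

def parallel_bound(lam, mu, K):
    """Upper bound (21) for K homogeneous bufferless queues with
    equal per-queue arrival rates lam_j = lam / K."""
    lam_j = lam / K
    return (1.0 + mu * K * (1.0 / lam_j)) / (K * mu)
```

For any K > 1 the single-queue AoI strictly exceeds the parallel-system bound, in line with the proposition.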

C. AoI Comparison with [21]

In Theorem 2 in [21], the author provides the following expression of the average AoI of a system with homogeneous parallel queues where the incoming jobs are always sent to the server with the oldest job:

(22)
[TeX:] $$\frac{1}{\mu}\left(\frac{1}{K} \prod_{i=1}^{K-1} \frac{\rho}{i+\rho}+\frac{1}{\rho}+\frac{1}{\rho} \sum_{l=1}^{K-1} \prod_{i=1}^{l} \frac{\rho}{i+\rho}\right).$$

We note that our model does not use knowledge of which queue holds the oldest job. Therefore, one might expect the average AoI to always be smaller in the model of [21]. In the following result, we consider the regime where [TeX:] $$\lambda$$ tends to infinity and compare both models.

Proposition 6: When [TeX:] $$\lambda \rightarrow \infty,$$ both (22) and (20) (with N = 0 and homogeneous service rates [TeX:] $$\mu_{j}=\mu$$) tend to [TeX:] $$1 /(K \mu).$$

Proof: The proof is straightforward from (22) and (20).

From this result, we conclude that, when [TeX:] $$\lambda \rightarrow \infty,$$ the improvement of the average AoI brought by the knowledge of the state of the queues is negligible.
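As a numerical sanity check of Proposition 6, the following sketch (our own code; the function name is ours) evaluates (22) and confirms that it approaches [TeX:] $$1 /(K \mu)$$ for large [TeX:] $$\rho$$:

```python
def aoi_oldest_first(rho, mu, K):
    """Average AoI expression (22) from [21]: incoming jobs always
    join the server holding the oldest job."""
    def prod(l):
        # product of rho / (i + rho) for i = 1, ..., l
        r = 1.0
        for i in range(1, l + 1):
            r *= rho / (i + rho)
        return r
    tail = sum(prod(l) for l in range(1, K))
    return (prod(K - 1) / K + 1.0 / rho + tail / rho) / mu
```

As [TeX:] $$\rho \rightarrow \infty$$ the products tend to 1 and the [TeX:] $$1 / \rho$$ terms vanish, leaving [TeX:] $$1 /(K \mu).$$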

D. Optimization of the Upper Bound

In this section, we provide a framework to minimize the upper bound on the average age of each source i. The objective is to find the routing probabilities [TeX:] $$p_{i j}$$ such that the age upper bound of each source i is minimized. This problem can be cast as a game. More formally, it can be formulated as

(23)
[TeX:] $$\min _{\mathbf{p}_{\mathbf{i}}} \frac{1}{\sum_{j=1}^{K} \mu_{j}}\left(1+K N+\sum_{j=1}^{K} \frac{\sum_{l \neq i}^{n} \lambda_{l} p_{l j}+\mu_{j}}{\lambda_{i} p_{i j}}\right), \forall i$$

(24)
[TeX:] $$\text { s.t. } \sum_{j=1}^{K} p_{i j}=1, \forall i,$$

where [TeX:] $$\mathrm{p}_{\mathrm{i}}=\left[p_{i 1}, \cdots, p_{i K}\right].$$ In the sequel, we characterize the Nash equilibrium (NE) of the above problem and provide a solution that achieves it. Before defining the NE, we first introduce the so-called best response set-valued function [TeX:] $$\left(B R_{i}\right)$$ of each source (or player) i, which is given as follows:

[TeX:] $$\begin{gathered} \left(B R_{i}\right): \text { Given } \mathbf{p}_{-\mathbf{i}} \triangleq\left(\mathbf{p}_{1}, \cdots, \mathbf{p}_{\mathbf{i}-\mathbf{1}}, \mathbf{p}_{\mathbf{i}+\mathbf{1}}, \cdots, \mathbf{p}_{\mathbf{n}}\right), \\ \mathbf{p}_{\mathbf{i}} \in \arg \min _{\mathbf{p}_{\mathbf{i}}^{\prime}} U_{i}\left(\mathbf{p}_{\mathbf{i}}^{\prime}, \mathbf{p}_{-\mathbf{i}}\right), \end{gathered}$$

where [TeX:] $$\mathrm{p}_{\mathrm{i}}^{\prime}$$ satisfies [TeX:] $$\sum_{j=1}^{K} p_{i j}^{\prime}=1$$ and [TeX:] $$U_{i}\left(\mathbf{p}_{\mathbf{i}}^{\prime}, \mathbf{p}_{-\mathbf{i}}\right)=\frac{1}{\sum_{j=1}^{K} \mu_{j}}\left(1+K N+\sum_{j=1}^{K} \frac{\sum_{l \neq i}^{n} \lambda_{l} p_{l j}+\mu_{j}}{\lambda_{i} p_{i j}^{\prime}}\right).$$ In order to show explicitly the dependence of [TeX:] $$B R_{i}$$ on [TeX:] $$\mathbf{p}_{-\mathbf{i}},$$ we use the notation [TeX:] $$B R_{i}\left(\mathbf{p}_{-\mathbf{i}}\right)$$ for the best response set-valued function of source i. In other words, the best response consists of optimizing the utility of each source with respect only to its own action vector (i.e., its routing probability vector [TeX:] $$\mathbf{p}_{\mathbf{i}}).$$

Furthermore, we can also define the sources' joint best-response function as

[TeX:] $$B R(\mathbf{p})=\left(B R_{1}\left(\mathbf{p}_{-1}\right), \cdots, B R_{n}\left(\mathbf{p}_{-n}\right)\right).$$

We now provide the definition of NE and the relation with the sources’ best response.

Definition 1: A strategy profile [TeX:] $$\mathrm{p}=\left(\mathrm{p}_{1}, \cdots, \mathrm{p}_{\mathrm{N}}\right)$$ is a pure NE iff [TeX:] $$\forall i,$$

[TeX:] $$\forall \mathrm{p}_{\mathrm{i}}^{\prime}, \mathrm{U}_{\mathrm{i}}\left(\mathrm{p}_{\mathrm{i}}^{\prime}, \mathrm{p}_{-\mathrm{i}}\right) \geq \mathrm{U}_{\mathrm{i}}\left(\mathrm{p}_{\mathrm{i}}, \mathrm{p}_{-\mathrm{i}}\right).$$

Definition 2: A strategy profile is a NE iff

[TeX:] $$\mathrm{p} \in \mathrm{BR}(\mathrm{p}).$$

In words, a NE is a fixed point of the BR dynamic. In the sequel, we will therefore show that a fixed point of BR exists and it is unique.

It is straightforward to see that [TeX:] $$U_{i}\left(\mathbf{p}_{\mathbf{i}}^{\prime}, \mathbf{p}_{-\mathbf{i}}\right)$$ is convex with respect to [TeX:] $$\mathbf{p}_{\mathbf{i}}^{\prime}.$$ The best response problem of each source i can therefore be solved using the standard Lagrangian technique.

The Lagrangian for each source can be written as follows,

[TeX:] $$L_{i}\left(\mathrm{p}_{\mathrm{i}}, \delta\right)=\mathrm{U}_{\mathrm{i}}\left(\mathrm{p}_{\mathrm{i}}, \mathrm{p}_{-\mathbf{i}}\right)+\delta\left(\sum_{\mathrm{j}=1}^{\mathrm{K}} \mathrm{p}_{\mathrm{ij}}-1\right).$$

The optimal solution of the above optimization problem for each source i, i.e., [TeX:] $$\left(\mathrm{p}_{\mathrm{i}}^{*}, \delta^{*}\right),$$ can then be obtained by KKT and complementarity conditions, i.e., by using [TeX:] $$\frac{\partial L_{i}}{\partial p_{i j}}=0, \forall j$$ and [TeX:] $$\delta^{*}\left(\sum_{j=1}^{K} p_{i j}^{*}-1\right)=0.$$ After some algebraic manipulations, this leads to the following expression of [TeX:] $$\mathrm{p}_{\mathrm{i}}^{*}$$

[TeX:] $$p_{i j}^{*}=\frac{\omega_{i j}}{\sum_{j^{\prime}=1}^{K} \omega_{i j^{\prime}}},$$

where [TeX:] $$\omega_{i j}=\sqrt{\sum_{l \neq i}^{n} \lambda_{l} p_{l j}+\mu_{j}} .$$
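The closed-form best response above can be sketched in a few lines (a minimal illustration; the function name and data layout are ours):

```python
from math import sqrt

def best_response(i, lam, p, mu):
    """Best-response routing vector of source i from the KKT solution:
    p*_ij is proportional to sqrt(sum_{l != i} lam_l p_lj + mu_j)."""
    K = len(mu)
    w = [sqrt(sum(lam[l] * p[l][j] for l in range(len(lam)) if l != i) + mu[j])
         for j in range(K)]
    total = sum(w)
    return [wj / total for wj in w]
```

Note that the result is automatically a probability vector, and that queues with larger service rates (or less cross traffic in relative terms) attract more routing weight.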

Consequently, a NE is simply the solution of the following system of equations

(25)
[TeX:] $$p_{i j}^{*}=\frac{\sqrt{\sum_{l \neq i}^{n} \lambda_{l} p_{l j}^{*}+\mu_{j}}}{\sum_{j^{\prime}=1}^{K} \sqrt{\sum_{l \neq i}^{n} \lambda_{l} p_{l j^{\prime}}^{*}+\mu_{j^{\prime}}}}, \forall i, j.$$

We will show numerically later on that the aforementioned system of equations has a solution. Before that, we provide an analysis for the case of a large number of sources, by using mean field analysis, and develop a simple iterative algorithm allowing each source to find its probabilistic routing vector [TeX:] $$\mathrm{p}_{\mathrm{i}}=\left[p_{i 1}, \cdots, p_{i K}\right]^{\mathrm{T}}.$$

D.1 Mean Field Analysis

We provide here an analysis for the case of a large number of sources and a distributed iterative algorithm that converges to the solution of the system of equations in (25). In order to use mean field tools, indistinguishable sources should be considered; we therefore assume that [TeX:] $$\lambda_{l}=\bar{\lambda}$$ for all l. We first define the following mean field term for each queue j:

[TeX:] $$m_{i j}^{n}=\frac{1}{n} \sum_{l \neq i}^{n} p_{l j}.$$

One can notice that [TeX:] $$m_{i j}^{n} \rightarrow m_{j} \text { when } n \rightarrow \infty, \forall i, \text { where } m_{j}= \lim _{n \rightarrow \infty}(1 / n) \sum_{i=1}^{n} p_{i j}.$$

The problem in (23) can be written as

[TeX:] $$\min _{\mathbf{p}_{\mathbf{i}}} \frac{1}{\sum_{j=1}^{K} \frac{\mu_{j}}{n}}\left(\frac{1+K N}{n}+\sum_{j=1}^{K} \frac{\bar{\lambda} m_{i j}^{n}+\frac{\mu_{j}}{n}}{\bar{\lambda} p_{i j}}\right), \forall i.$$

One can notice also that when [TeX:] $$n \rightarrow \infty, \mu_{j} / n \rightarrow \bar{\mu}_{j}, \text { where } \bar{\mu}_{j}$$ represents the scaled asymptotic service rate per source. In fact, when the number of sources tends to infinity, the service rate of each queue must be high enough to serve all sources.

When n is very large, the system of equations in (25) tends to:

(26)
[TeX:] $$p_{i j}^{*}=\frac{\sqrt{m_{j}+\frac{\bar{\mu}_{j}}{\bar{\lambda}}}}{\sum_{j^{\prime}=1}^{K} \sqrt{m_{j^{\prime}}+\frac{\bar{\mu}_{j^{\prime}}}{\bar{\lambda}}}}, \forall i, j.$$

By summing the previous equations over all source indices i, dividing by n, and taking the limit when [TeX:] $$n \rightarrow \infty,$$ we get

(27)
[TeX:] $$m_{j}=\frac{\sqrt{m_{j}+\frac{\bar{\mu}_{j}}{\bar{\lambda}}}}{\sum_{j^{\prime}=1}^{K} \sqrt{m_{j^{\prime}}+\frac{\bar{\mu}_{j^{\prime}}}{\bar{\lambda}}}}, \forall j.$$

By making the variable change [TeX:] $$y_{j}=\sqrt{m_{j}+\left(\bar{\mu}_{j}\right) /(\bar{\lambda})},$$ the previous equations can be written as:

(28)
[TeX:] $$y_{j}=\frac{1}{\sum_{j^{\prime}=1}^{K} y_{j^{\prime}}}+\frac{\bar{\mu}_{j}}{\bar{\lambda} y_{j}}, \forall j.$$
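For completeness, the step from (27) to (28) is the following short manipulation (using [TeX:] $$m_{j}=y_{j}^{2}-\bar{\mu}_{j} / \bar{\lambda}$$ and [TeX:] $$y_{j}>0$$):

```latex
% (27) reads m_j = y_j / \sum_{j'} y_{j'}; substituting m_j = y_j^2 - \bar{\mu}_j/\bar{\lambda}:
y_j^2 - \frac{\bar{\mu}_j}{\bar{\lambda}} = \frac{y_j}{\sum_{j'=1}^{K} y_{j'}}
\quad\Longleftrightarrow\quad
y_j = \frac{1}{\sum_{j'=1}^{K} y_{j'}} + \frac{\bar{\mu}_j}{\bar{\lambda}\, y_j},
```

where the equivalence follows by dividing both sides by [TeX:] $$y_{j}$$ and rearranging.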

Proposition 7: If the system of equations in (28) has a solution, then this solution is unique.

Proof: Assume that the system of equations has two different solutions [TeX:] $$\mathbf{y}^{1}=\left[y_{1}^{(1)}, \cdots, y_{K}^{(1)}\right]^{\mathrm{T}}$$ and [TeX:] $$\mathbf{y}^{2}=\left[y_{1}^{(2)}, \cdots, y_{K}^{(2)}\right]^{\mathrm{T}}.$$ Relabeling the solutions if necessary, there exists [TeX:] $$\nu>1$$ such that [TeX:] $$\mathbf{y}^{1} \leq \nu \mathbf{y}^{2}$$ componentwise and [TeX:] $$y_{k}^{(2)}=\nu y_{k}^{(1)}$$ for at least one queue k; taking [TeX:] $$\nu=\max _{j} \max \left\{y_{j}^{(1)} / y_{j}^{(2)}, y_{j}^{(2)} / y_{j}^{(1)}\right\}$$ satisfies both conditions, and [TeX:] $$\nu>1$$ because the two solutions are different. Recall that for queue k

[TeX:] $$y_{k}^{(2)}=\frac{1}{\sum_{j^{\prime}=1}^{K} y_{j^{\prime}}^{(2)}}+\frac{\bar{\mu}_{k}}{\bar{\lambda} y_{k}^{(2)}}.$$

By using [TeX:] $$\mathbf{y}^{1} \leq \nu \mathbf{y}^{2}$$ (which gives [TeX:] $$\sum_{j^{\prime}=1}^{K} y_{j^{\prime}}^{(2)} \geq \frac{1}{\nu} \sum_{j^{\prime}=1}^{K} y_{j^{\prime}}^{(1)}$$), the equality [TeX:] $$y_{k}^{(2)}=\nu y_{k}^{(1)},$$ and [TeX:] $$\nu>1,$$ we get

[TeX:] $$y_{k}^{(2)}<\frac{\nu}{\sum_{j^{\prime}=1}^{K} y_{j^{\prime}}^{(1)}}+\frac{\nu \bar{\mu}_{k}}{\bar{\lambda} y_{k}^{(1)}}=\nu y_{k}^{(1)}.$$

Therefore, we obtain [TeX:] $$y_{k}^{(2)}<\nu y_{k}^{(1)},$$ which contradicts the equality [TeX:] $$y_{k}^{(2)}=\nu y_{k}^{(1)}.$$ Consequently, it is not possible to have two different solutions to the system of equations in (28).

In order to solve the system of nonlinear equations in (28), we provide an efficient iterative learning algorithm and prove its convergence to the solution of (28). The proposed algorithm has a reduced complexity and can be implemented in a distributed manner. We first consider the following iterative algorithm:

(29)
[TeX:] $$y_{j}(t+1)=(1-\alpha) y_{j}(t)+\alpha \frac{1}{\sum_{j^{\prime}=1}^{K} y_{j^{\prime}}(t)}+\alpha \frac{\bar{\mu}_{j}}{\bar{\lambda} y_{j}(t)}, \forall j,$$

where [TeX:] $$\alpha$$ is a sufficiently small step size. This algorithm is a simple instance of the Ishikawa algorithm (a Mann-like iteration with constant step size [TeX:] $$\alpha$$; see [29] for details).

By the definition of y, one can notice that [TeX:] $$y_{j} \in \mathcal{S}_{j}=\left[\sqrt{\frac{\bar{\mu}_{j}}{\bar{\lambda}}}, \sqrt{1+\frac{\bar{\mu}_{j}}{\bar{\lambda}}}\right]$$ for all j, since [TeX:] $$m_{j} \in[0,1].$$ Let [TeX:] $$\mathcal{S}=\mathcal{S}_{1} \times \cdots \times \mathcal{S}_{K}$$ be the set of feasible values of y. In order to ensure that the iterates lie in the feasible set [TeX:] $$\mathcal{S},$$ we consider the projection [TeX:] $$\Pi_{\mathcal{S}}: \hat{\mathbf{y}}=\Pi_{\mathcal{S}}(\mathbf{y})$$ defined as,

[TeX:] $$\forall j, \hat{y}_{j}=\sqrt{\frac{\bar{\mu}_{j}}{\bar{\lambda}}} \text { if } y_{j}<\sqrt{\frac{\bar{\mu}_{j}}{\bar{\lambda}}} ; \hat{y}_{j}=\sqrt{1+\frac{\bar{\mu}_{j}}{\bar{\lambda}}} \text { if } y_{j}>\sqrt{1+\frac{\bar{\mu}_{j}}{\bar{\lambda}}} ; \text { and } \hat{y}_{j}=y_{j}$$ otherwise. Using this projection, the iterative algorithm becomes

(30)
[TeX:] $$\hat{y}_{j}(t+1)=\Pi_{\mathcal{S}}\left((1-\alpha) \hat{y}_{j}(t)+\alpha \frac{1}{\sum_{j^{\prime}=1}^{K} \hat{y}_{j^{\prime}}(t)}+\alpha \frac{\bar{\mu}_{j}}{\bar{\lambda} \hat{y}_{j}(t)}\right), \forall j.$$

The following result proves that the algorithm above converges to the solution of the system of equations in (28), whenever it exists. Notice that once the algorithm converges, one can obtain m from y via the relation [TeX:] $$y_{j}=\sqrt{m_{j}+\left(\bar{\mu}_{j}\right) /(\bar{\lambda})}, \forall j.$$ The routing probabilities of each source i, i.e., [TeX:] $$\mathrm{p}_{\mathrm{i}},$$ can then be obtained from (26). Finally, since the algorithm depends only on the average arrival and service rates, without requiring any information about the instantaneous status of the network, it can be implemented separately by each source.

Proposition 8: The algorithm in (30) converges to the solution of the system of equations in (28).

Proof: We denote [TeX:] $$f_{j}(\mathbf{y}(t))=\frac{1}{\sum_{j^{\prime}=1}^{K} y_{j^{\prime}}(t)}+\frac{\bar{\mu}_{j}}{\bar{\lambda} y_{j}(t)}.$$ One can see that [TeX:] $$f_{j}(\mathbf{y}(t))$$ can be written as [TeX:] $$f_{j}(\mathbf{y}(t))=\tilde{g}(\mathbf{y}(t))+g_{j}\left(y_{j}(t)\right),$$ where [TeX:] $$\tilde{g}(\mathbf{y}(t))=\frac{1}{\sum_{j^{\prime}=1}^{K} y_{j^{\prime}}(t)}$$ and [TeX:] $$g_{j}\left(y_{j}(t)\right)=\frac{\bar{\mu}_{j}}{\bar{\lambda} y_{j}(t)}.$$ We also denote by [TeX:] $$\mathbf{y}^{*}=\left[y_{1}^{*}, \cdots, y_{K}^{*}\right] \in \mathcal{S}$$ the solution of (28). Recall that, by using the algorithm in (30), we have [TeX:] $$y_{j}(t+1)=(1-\alpha) \hat{y}_{j}(t)+\frac{\alpha}{\sum_{j^{\prime}=1}^{K} \hat{y}_{j^{\prime}}(t)}+\frac{\alpha \bar{\mu}_{j}}{\bar{\lambda} \hat{y}_{j}(t)}$$ and [TeX:] $$\hat{y}_{j}(t+1)=\Pi_{\mathcal{S}}\left(y_{j}(t+1)\right).$$ We can also write the following

(31)
[TeX:] $$\begin{gathered} y_{j}(t+1)-y_{j}^{*}=(1-\alpha)\left(\hat{y}_{j}(t)-y_{j}^{*}\right)+ \\ \alpha\left(\tilde{g}(\hat{\mathbf{y}}(t))-\tilde{g}\left(\mathbf{y}^{*}\right)\right)+\alpha\left(g_{j}\left(\hat{y}_{j}(t)\right)-g_{j}\left(y_{j}^{*}\right)\right). \end{gathered}$$

By using the mean value theorem, [TeX:] $$\exists \overline{\mathbf{y}}(\mathbf{t}) \in\left[\hat{\mathbf{y}}(\mathbf{t}), \mathrm{y}^{*}\right]$$ such that

[TeX:] $$\tilde{g}(\hat{\mathbf{y}}(t))-\tilde{g}\left(\mathbf{y}^{*}\right)=\left.\nabla \tilde{\mathbf{g}}\right|_{\overline{\mathbf{y}}(\mathbf{t})} ^{\mathbf{T}}\left(\hat{\mathbf{y}}(\mathbf{t})-\mathbf{y}^{*}\right).$$

Similarly, [TeX:] $$\exists \tilde{\mathbf{y}}(\mathbf{t})=\left[\tilde{\mathbf{y}}_{\mathbf{1}}(\mathbf{t}), \cdots, \tilde{\mathbf{y}}_{\mathbf{K}}(\mathbf{t})\right]^{\mathbf{T}} \in\left[\hat{\mathbf{y}}(\mathbf{t}), \mathbf{y}^{*}\right]$$ such that

[TeX:] $$g_{j}\left(\hat{y}_{j}(t)\right)-g_{j}\left(y_{j}^{*}\right)=\left.\frac{d g_{j}}{d y_{j}}\right|_{\tilde{y}_{j}(t)}\left(\hat{y}_{j}(t)-y_{j}^{*}\right), \forall j.$$

From all the above, we can therefore write y(t + 1) − y∗ as follows:

(32)
[TeX:] $$\mathrm{y}(\mathrm{t}+1)-\mathrm{y}^{*}=(1-\alpha)\left(\hat{\mathrm{y}}(\mathrm{t})-\mathrm{y}^{*}\right)+\alpha \mathbf{J}_{\mathrm{t}}\left(\hat{\mathrm{y}}(\mathrm{t})-\mathrm{y}^{*}\right),$$

where [TeX:] $$\mathbf{J}_{t}$$ is a [TeX:] $$K \times K$$ matrix whose diagonal elements are [TeX:] $$-\left(\frac{1}{\sum_{l=1}^{K} \bar{y}_{l}(t)}\right)^{2}-\frac{\bar{\mu}_{j}}{\bar{\lambda} \tilde{y}_{j}^{2}(t)}$$ and whose off-diagonal elements are [TeX:] $$-\left(\frac{1}{\sum_{l=1}^{K} \bar{y}_{l}(t)}\right)^{2}.$$ We can show easily that [TeX:] $$\mathbf{a}^{\mathrm{T}} \mathbf{J}_{t} \mathbf{a} \leq 0$$ for all [TeX:] $$\mathbf{a} \in \mathbb{R}^{K},$$ i.e., [TeX:] $$\mathbf{J}_{t}$$ is negative semi-definite. Then, by using [TeX:] $$\left\|\mathbf{y}(t+1)-\mathbf{y}^{*}\right\|^{2}=\left(\mathbf{y}(t+1)-\mathbf{y}^{*}\right)^{\mathrm{T}}\left(\mathbf{y}(t+1)-\mathbf{y}^{*}\right),$$ we get

(33)
[TeX:] $$\begin{aligned} &\left\|\mathbf{y}(t+1)-\mathbf{y}^{*}\right\|^{2} \\ &=(1-\alpha)^{2}\left\|\hat{\mathbf{y}}(t)-\mathbf{y}^{*}\right\|^{2}+\alpha^{2}\left\|\mathbf{J}_{t}\left(\hat{\mathbf{y}}(t)-\mathbf{y}^{*}\right)\right\|^{2} \\ &\quad+2(1-\alpha) \alpha\left(\hat{\mathbf{y}}(t)-\mathbf{y}^{*}\right)^{\mathrm{T}} \mathbf{J}_{t}\left(\hat{\mathbf{y}}(t)-\mathbf{y}^{*}\right). \end{aligned}$$

Since [TeX:] $$\mathbf{J}_{t}$$ is negative semi-definite and [TeX:] $$\left\|\mathbf{J}_{t}\left(\hat{\mathbf{y}}(t)-\mathbf{y}^{*}\right)\right\|^{2} \leq\left\|\mathbf{J}_{t}\right\|^{2}\left\|\hat{\mathbf{y}}(t)-\mathbf{y}^{*}\right\|^{2},$$ we get

(34)
[TeX:] $$\left\|\mathrm{y}(\mathrm{t}+1)-\mathrm{y}^{*}\right\|^{2} \leq\left((1-\alpha)^{2}+\alpha^{2}\left\|\mathrm{~J}_{\mathrm{t}}\right\|^{2}\right)\left\|\hat{\mathrm{y}}(\mathrm{t})-\mathrm{y}^{*}\right\|^{2}.$$

Recall that [TeX:] $$\hat{y} \in \mathcal{S} $$ and therefore [TeX:] $$\hat{\mathbf{y}}(t) \neq 0,$$ which implies that [TeX:] $$\overline{\mathbf{y}} \neq 0 \text { and } \tilde{\mathbf{y}} \neq 0$$ and hence [TeX:] $$\left\|\mathbf{J}_{\mathrm{t}}\right\|^{2}$$ is bounded. Therefore, [TeX:] $$\exists$$ a sufficiently small [TeX:] $$\alpha$$ such that [TeX:] $$\forall t \alpha^{2}\left\|\mathbf{J}_{\mathbf{t}}\right\|^{2}+\alpha^{2}-2 \alpha<-\epsilon<0,$$ where [TeX:] $$0<\epsilon<1.$$ Consequently,

[TeX:] $$\left\|\mathrm{y}(\mathrm{t}+1)-\mathrm{y}^{*}\right\|^{2} \leq(1-\epsilon)\left\|\hat{\mathrm{y}}(\mathrm{t})-\mathrm{y}^{*}\right\|^{2}.$$

Then, by using the inequality [TeX:] $$\left\|\hat{\mathbf{y}}(t+1)-\mathbf{y}^{*}\right\|^{2} \leq \left\|\mathrm{y}(\mathrm{t}+1)-\mathrm{y}^{*}\right\|^{2}\left(\text { since } \mathrm{y}^{*} \in \mathcal{S}\right),$$ we get

(35)
[TeX:] $$\begin{aligned} \left\|\hat{\mathbf{y}}(t+1)-\mathrm{y}^{*}\right\|^{2} & \leq(1-\epsilon)\left\|\hat{\mathrm{y}}(t)-\mathrm{y}^{*}\right\|^{2} \\ & \vdots \\ & \leq(1-\epsilon)^{t+1}\left\|\hat{\mathrm{y}}(0)-\mathrm{y}^{*}\right\|^{2}, \end{aligned}$$

and [TeX:] $$\left\|\hat{\mathbf{y}}(t)-\mathbf{y}^{*}\right\|^{2} \rightarrow 0 \text { when } t \rightarrow \infty.$$ This concludes the proof.

From the proof above, one can see that if [TeX:] $$\alpha$$ is such that [TeX:] $$\alpha^{2}\left\|\mathbf{J}_{t}\right\|^{2}+\alpha^{2}-2 \alpha<0,$$ then the algorithm in (30) converges to the solution of (28). Since [TeX:] $$\hat{y}_{j} \geq \sqrt{\frac{\bar{\mu}_{j}}{\bar{\lambda}}}$$ for each j and [TeX:] $$\left\|\mathbf{J}_{t}\right\|^{2} \leq \sum_{i=1}^{K} \sum_{j=1}^{K} \mathbf{J}_{t}(i, j)^{2},$$ we have [TeX:] $$\left\|\mathbf{J}_{t}\right\|^{2} \leq K^{2}\left(\frac{1}{\sum_{l=1}^{K} \sqrt{\frac{\bar{\mu}_{l}}{\bar{\lambda}}}}\right)^{2}+K.$$ Consequently, it is sufficient to take [TeX:] $$\alpha<2 /\left\{K^{2}\left(\frac{1}{\sum_{l=1}^{K} \sqrt{\frac{\bar{\mu}_{l}}{\bar{\lambda}}}}\right)^{2}+K+1\right\}.$$
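Under the assumptions above, the projected iteration (30) can be sketched as follows (variable names and the default step size are ours; the step size should satisfy the sufficient condition just derived):

```python
from math import sqrt

def solve_mean_field(mu_bar, lam_bar, alpha=0.05, iters=5000):
    """Projected fixed-point iteration (30) for the mean-field system (28).
    Returns y, with y_j = sqrt(m_j + mu_bar_j / lam_bar)."""
    K = len(mu_bar)
    lo = [sqrt(m / lam_bar) for m in mu_bar]        # y_j >= sqrt(mu_bar_j / lam_bar)
    hi = [sqrt(1.0 + m / lam_bar) for m in mu_bar]  # since m_j <= 1
    y = list(hi)  # any feasible starting point works
    for _ in range(iters):
        s = sum(y)
        # one damped step of (29), clipped to the feasible box S
        y = [min(max((1 - alpha) * y[j] + alpha / s
                     + alpha * mu_bar[j] / (lam_bar * y[j]), lo[j]), hi[j])
             for j in range(K)]
    return y
```

Once y is obtained, each mean field term follows from [TeX:] $$m_{j}=y_{j}^{2}-\bar{\mu}_{j} / \bar{\lambda},$$ and the routing probabilities from (26). In the symmetric case (equal [TeX:] $$\bar{\mu}_{j}$$) the solution has [TeX:] $$m_{j}=1 / K$$ for all j.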

D.2 Numerical Results

We provide here an example to show numerically that the system of equations in (25) has a solution. While in the previous subsection, a detailed mathematical analysis is provided to optimize the upper bound in the case of large number of sources, we show here numerical results for finite number of sources. In order to solve (25), we consider the following iterative algorithm:

[TeX:] $$p_{i j}(t+1)=(1-\alpha) p_{i j}(t)+\alpha \frac{\sqrt{\sum_{l \neq i}^{N} \lambda_{l} p_{l j}(t)+\mu_{j}}}{\sum_{j^{\prime}=1}^{K} \sqrt{\sum_{l \neq i}^{N} \lambda_{l} p_{l j^{\prime}}(t)+\mu_{j^{\prime}}}}, \forall i, j.$$

One can see that actually this iterative algorithm is similar to (30), or more precisely (30) can be obtained from the above algorithm by using mean field analysis (using the derivations

Fig. 21.
Convergence of the fixed point algorithm defined from (25).

in the previous subsection). In Fig. 21, we show an illustrative example where we consider a system with 6 sources and 10 servers, with the following parameters for the sources [TeX:] $$\lambda_{1}=100, \lambda_{2}=20, \lambda_{3}=50, \lambda_{4}=\lambda_{5}=10, \text { and } \lambda_{6}=1000$$ and for the servers [TeX:] $$\mu_{1}=1, \mu_{2}=2, \mu_{3}=3, \mu_{4}=5, \mu_{5}=10, \mu_{6}=20, \mu_{7}=50, \mu_{8}=100, \mu_{9}=200, \text { and } \mu_{10}=1000 \text {. }$$ The plots show that (25) has a solution, that [TeX:] $$p_{1,2}, p_{1,6}, p_{1,7}, p_{1,8}, p_{1,9}, \text { and } p_{1,10}$$ converge to the solution of (25), and that in this example the algorithm converges quickly. This shows that the proposed algorithm can also converge for a finite number of sources with different arrival rates. Providing a formal proof of convergence in this case would be an interesting topic and is left for future work.
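The damped iteration on (25) used in this example can be sketched as follows (a minimal illustration; the function name, step size, and iteration count are ours):

```python
from math import sqrt

def routing_fixed_point(lam, mu, alpha=0.1, iters=2000):
    """Damped fixed-point iteration on the NE conditions (25):
    p_ij <- (1 - alpha) p_ij + alpha * w_ij / sum_j' w_ij',
    with w_ij = sqrt(sum_{l != i} lam_l p_lj + mu_j)."""
    n, K = len(lam), len(mu)
    p = [[1.0 / K] * K for _ in range(n)]  # start from uniform routing
    for _ in range(iters):
        new = []
        for i in range(n):
            w = [sqrt(sum(lam[l] * p[l][j] for l in range(n) if l != i) + mu[j])
                 for j in range(K)]
            tot = sum(w)
            new.append([(1 - alpha) * p[i][j] + alpha * w[j] / tot
                        for j in range(K)])
        p = new
    return p
```

Each update is a convex combination of probability vectors, so every row of p remains a valid routing distribution throughout. The function can be run with the 6-source, 10-server parameters above.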

V. CONCLUSION

In this paper, we studied the average AoI of a system of multiple sources and parallel queues using the SHS method. We considered that the queues do not communicate with each other and that the sources send their updates to the queues according to a predefined probabilistic routing scheme. First, we computed the average AoI for the following systems: i) two parallel M/M/1/1 queues, ii) one M/M/1/1 queue with half the arrival and loss rates, and iii) one M/M/1/1 queue with double the service rate. Then, we computed the average AoI for two parallel M/M/1/2* queues, one M/M/1/3* queue with half the arrival and loss rates, and one M/M/1/3* queue with double the service rate. We concluded that the average AoI of the system composed of parallel queues is always smaller than that of one queue with half the arrival and loss rates, and can be as small as that of one queue with double the service rate. We also studied the average AoI of a system with an arbitrary number of heterogeneous M/M/1/(N+1)* queues and provided an upper bound on the AoI that is tight when there are multiple sources. We then provided a framework allowing each source to determine its routing decision, by using game theory and the best response method. In the context of a large number of sources, we simplified the game framework by using mean field games, provided a simple distributed algorithm, and proved its convergence to the desired fixed point.

APPENDIX A PROOF OF THEOREM 2

We consider a system with K parallel M/M/1/(N+1)* queues. Without loss of generality, we compute the average AoI of source 1; the result holds for any source i. The arrival rate to queue j from source 1 is [TeX:] $$\lambda_{1} p_{1 j}$$ and that from the rest of the sources is [TeX:] $$\sum_{k>1} \lambda_{k} p_{k j} \text { for all } j=1, \cdots, K.$$ Let M = N + 1. The continuous state is given by a vector of size [TeX:] $$1+K M$$

[TeX:] $$\mathbf{x}(t)=\left[x_{0}(t)\; x_{11}(t) \cdots x_{1 M}(t)\; x_{21}(t) \cdots x_{2 M}(t) \cdots x_{K 1}(t) \cdots x_{K M}(t)\right],$$

where [TeX:] $$x_{0}$$ represents the current age, [TeX:] $$x_{j 1}$$ the age if the update in service in the queue j is delivered and [TeX:] $$x_{j l}$$ the age if the update in the position l − 1 of the queue j is delivered. The discrete state is a Markov chain with a single state. We note that, when an event (arrival or departure) occurs in queue j, the age of the updates in the rest of the queues is not modified. This allows us to focus on a queue j to illustrate the Markov Chain and the SHS transitions, which are presented respectively in Fig. 22 and Table 6.

We now explain each transition l:

l = 0: An update of source 1 arrives to queue j. In this case, the incoming update replaces the update in the last position of the queue and the age of the incoming update is set to zero, i.e., [TeX:] $$x_{j M}^{\prime}=0.$$

l = 1: There is an update of another source that arrives to queue j. For this case, the incoming update replaces the update in the last position of the queue and the age of the

Table 6.
Table of SHS transitions of Fig. 22.
Fig. 22.
The SHS Markov Chain for a system formed by K parallel M/M/1/(N+1)* queues with multiple sources.

incoming update is set to the same value as the age of the penultimate update, i.e., [TeX:] $$x_{j M}^{\prime}=x_{j N}.$$

Remark 3: If N = 0, the last update in the queue is an update that is in service. Therefore, when an update of other sources arrives to the system, the incoming update replaces the update in service and [TeX:] $$x_{j 1}^{\prime}=x_{0}.$$

l = 2: The update in service in queue j is delivered and therefore the age of the monitor changes to [TeX:] $$x_{j 1}.$$ Besides, all the elements in the queue move one position ahead, so that [TeX:] $$x_{j 1}^{\prime}=x_{j 2}, x_{j 2}^{\prime}=x_{j 3}, \cdots,$$ and [TeX:] $$x_{j N}^{\prime}=x_{j M}.$$ Finally, in the last position of the queue, we put a fake update whose age is set to that of the penultimate element in the queue, i.e., [TeX:] $$x_{j M}^{\prime}=x_{j M}.$$

As we have just mentioned, when an update of queue j is delivered to the monitor, a fake update is put in the last position of the queue. We now explain why these additional fake updates lead to a larger sojourn time of the incoming updates, which implies that the overall average AoI is larger; as a result, the value we obtain provides an upper bound on the real average AoI of the system (i.e., without fake updates). Consider that all the updates of queue j are delivered before the arrival of a new update to that queue. In the system described above, the queue is full of fake updates and, therefore, when a new update arrives, it is enqueued in the last position of the queue. In a system without fake updates, however, a new update would find the queue empty and would start service upon arrival. As a result, the sojourn time of new updates in the system with fake updates is clearly larger than that without fake updates.

Since the Markov chain is formed by a single state, the stationary distribution is trivial. We define the vector [TeX:] $$\mathbf{v}=\left[v_{0}\; v_{11} \cdots v_{1 M}\; v_{21} \cdots v_{K 1} \cdots v_{K M}\right]$$ and b as the vector of size [TeX:] $$1+K M$$ with all ones. From the result of Theorem 4 in [27] and the above reasoning, we know that an upper bound on the AoI is given by [TeX:] $$v_{0},$$ that is, the first coordinate of the vector v.

In the remainder of the proof, we present the system of equations that v satisfies and solve it. We first present the equation of the first coordinate of v:

[TeX:] $$\begin{aligned} \sum_{j=1}^{K}\left(\lambda_{1} p_{1 j}\right.&\left.+\sum_{k>1} \lambda_{k} p_{k j}+\mu_{j}\right) v_{0} \\ &=1+\sum_{j=1}^{K}\left(\left(\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}\right) v_{0}+\mu_{j} v_{j 1}\right), \end{aligned}$$

which can be alternatively written as

(36)
[TeX:] $$v_{0} \sum_{j=1}^{K} \mu_{j}=1+\sum_{j=1}^{K} \mu_{j} v_{j 1}.$$

Let [TeX:] $$l^{\prime}=l+1.$$ We now show that, for all [TeX:] $$j=1, \cdots, K$$ and all [TeX:] $$l=1, \cdots, N,$$ the following equation is satisfied:

(37)
[TeX:] $$\begin{aligned} &\sum_{m=1}^{K}\left(\lambda_{1} p_{1 m}+\sum_{k>1} \lambda_{k} p_{k m}+\mu_{m}\right) v_{m l} \\ &=1+\sum_{m \neq j}\left(\lambda_{1} p_{1 m}+\sum_{k>1} \lambda_{k} p_{k m}+\mu_{m}\right) v_{m l} \\ &+\left(\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}\right) v_{j l}+\mu_{j} v_{j l^{\prime}} \\ &\Longleftrightarrow \mu_{j} v_{j l}=1+\mu_{j} v_{j l^{\prime}} . \end{aligned}$$

Using the last expression recursively for l = 1 to N − 1, we get that

(38)
[TeX:] $$\mu_{j} v_{j 1}=N-1+\mu_{j} v_{j N}.$$
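This telescoping step can be checked numerically. The sketch below (our illustration, with arbitrary test values for μ_j, v_jN and N) unrolls the one-step relation from l = N − 1 down to l = 1 and verifies (38):

```python
# Check (with arbitrary test values) that unrolling
# mu_j v_jl = 1 + mu_j v_j(l+1) for l = 1, ..., N-1 yields
# mu_j v_j1 = (N - 1) + mu_j v_jN, i.e., equation (38).
mu_j, v_jN, N = 2.0, 1.7, 5
v = {N: v_jN}
for l in range(N - 1, 0, -1):
    v[l] = (1.0 + mu_j * v[l + 1]) / mu_j  # one-step relation solved for v_jl
assert abs(mu_j * v[1] - ((N - 1) + mu_j * v_jN)) < 1e-9
```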

We now focus on the last position of queue j and the equation that it must satisfy is the following:

[TeX:] $$\begin{aligned} &\sum_{m=1}^{K}\left(\lambda_{1} p_{1 m}+\sum_{k>1} \lambda_{k} p_{k m}+\mu_{m}\right) v_{m M} \\ &=1+\sum_{m \neq j}\left(\lambda_{1} p_{1 m}+\sum_{k>1} \lambda_{k} p_{k m}+\mu_{m}\right) v_{m M} \\ &+\sum_{k>1} \lambda_{k} p_{k j} v_{j N}+\mu_{j} v_{j M} \\ &\Longleftrightarrow\left(\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}\right) v_{j M}=1+\sum_{k>1} \lambda_{k} p_{k j} v_{j N}. \end{aligned}$$

Besides, from (37), for l = N, we have that

[TeX:] $$\mu_{j} v_{j N}=1+\mu_{j} v_{j M}.$$

Using the last two expressions, we get that

[TeX:] $$\begin{aligned} \mu_{j} v_{j N} &=1+\mu_{j} v_{j M} \\ &=1+\mu_{j} \frac{1+\sum_{k>1} \lambda_{k} p_{k j} v_{j N}}{\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}}. \end{aligned}$$

Fig. 23. Simulation results of the M/M/1/1 queue with different values of arrival rates; x-axis in logarithmic scale.
Fig. 24. Simulation results of the M/M/1/3* queue with different values of arrival rates; x-axis in logarithmic scale.

The last expression is equivalent to the following one:

[TeX:] $$\begin{gathered} \mu_{j} v_{j N}\left(1-\frac{\sum_{k>1} \lambda_{k} p_{k j}}{\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}}\right)=1+\frac{\mu_{j}}{\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}} \\ \mu_{j} v_{j N} \frac{\lambda_{1} p_{1 j}}{\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}}=\frac{\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}+\mu_{j}}{\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}} \\ \mu_{j} v_{j N} \lambda_{1} p_{1 j}=\lambda_{1} p_{1 j}+\sum_{k>1} \lambda_{k} p_{k j}+\mu_{j} \Longleftrightarrow \mu_{j} v_{j N}=1+\frac{\sum_{k>1} \lambda_{k} p_{k j}+\mu_{j}}{\lambda_{1} p_{1 j}} . \end{gathered}$$

Using the last expression with (38) and (36), the desired result follows for i = 1.
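For completeness, the pieces of the proof can be assembled numerically. The sketch below is our own illustration (the function name and argument layout are ours, not from the paper); it combines (36), (38) and the last expression to evaluate the resulting upper bound on the average AoI of source 1:

```python
# Illustrative sketch (not code from the paper): evaluate the upper bound
# on the average AoI of source 1 obtained by combining (36), (38) and the
# closed form for mu_j * v_jN derived in the proof.

def aoi_upper_bound(lam, P, mu, N):
    """lam[k]: arrival rate of source k+1 (lam[0] is source 1);
    P[k][j]: routing probability of source k+1 to queue j;
    mu[j]: service rate of queue j; N: queue length (M = N + 1)."""
    K = len(mu)
    # cross traffic into queue j from the sources k > 1
    cross = [sum(lam[k] * P[k][j] for k in range(1, len(lam)))
             for j in range(K)]
    # mu_j v_jN = 1 + (cross_j + mu_j) / (lambda_1 p_1j)
    mu_vjN = [1 + (cross[j] + mu[j]) / (lam[0] * P[0][j]) for j in range(K)]
    # (38): mu_j v_j1 = N - 1 + mu_j v_jN
    mu_vj1 = [N - 1 + mu_vjN[j] for j in range(K)]
    # (36): v_0 * sum_j mu_j = 1 + sum_j mu_j v_j1
    return (1 + sum(mu_vj1)) / sum(mu)
```

For instance, a single M/M/1/2* queue with a single source, λ_1 = 1 and μ = 2 gives `aoi_upper_bound([1.0], [[1.0]], [2.0], 1) = 2.0`.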

APPENDIX B SIMULATIONS FOR M/M/1/1 AND M/M/1/3* QUEUES

We provide in this section a comparison between the results obtained by SHS and those obtained by simulation to show the accuracy of the SHS method. We consider two different examples with either M/M/1/1 or M/M/1/3* queues. We consider a system with a single source and without packet losses. We set [TeX:] $$\mu=2.$$ As expected, the curves presented in Figs. 23 and 24 show that, in all the considered cases, the results obtained by simulation coincide with those of SHS.
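The kind of simulation used here can be sketched in a few lines. The discrete-event sketch below is our illustration, not the authors' simulator; it estimates the time-average AoI of a single-source M/M/1/1 queue with preemption in service and no losses, for which the average AoI is known from the SHS literature to equal 1/λ + 1/μ:

```python
import random

# Illustrative discrete-event sketch (not the authors' simulator):
# single-source M/M/1/1 queue with preemption in service, no losses.
def simulate_aoi(lam, mu, n_events=400_000, seed=1):
    rng = random.Random(seed)
    t = area = age = 0.0
    gen = None                      # generation time of the update in service
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")
    for _ in range(n_events):
        t_next = min(next_arrival, next_departure)
        dt = t_next - t
        area += age * dt + dt * dt / 2.0   # the age grows at unit rate
        age += dt
        t = t_next
        if t == next_arrival:              # fresh update preempts the old one
            gen = t
            next_arrival = t + rng.expovariate(lam)
            next_departure = t + rng.expovariate(mu)
        else:                              # delivery: age drops to t - gen
            age = t - gen
            next_departure = float("inf")
    return area / t                        # time-average AoI

print(simulate_aoi(1.0, 2.0))  # statistically close to 1/1 + 1/2 = 1.5
```

With λ = 1 and μ = 2 (the value of μ used in this appendix), long runs of this sketch concentrate around the theoretical value 1.5.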

APPENDIX C TABLE OF TRANSITIONS OF FIG. 9

Table 7.
Table of transitions of Fig. 9.

Biography

Mohamad Assaad

Mohamad Assaad (Senior Member, IEEE) received the M.Sc. and Ph.D. degrees in telecommunications from the Telecom ParisTech, Paris, France, in 2002 and 2006, respectively. Since 2006, he has been with the Telecommunications Department, CentraleSupelec, where he is currently a Professor and holds the TCL Chair on 5G systems. He has coauthored one book and more than 100 publications in journals and conference proceedings, and serves regularly as a TPC member and a TPC co-chair for several top-tier international conferences. His research interests include mathematical modeling of communication networks, MIMO systems, age of information, resource optimization and machine learning for wireless networks. He has given in the past successful tutorials on 5G systems at various conferences including the IEEE ISWCS’15 and the IEEE WCNC’16 conferences. He is an Editor for the IEEE Wireless Communications Letters and Journal of Communications and Information Networks.

Biography

Josu Doncel

Josu Doncel obtained from the University of the Basque Country (UPV/EHU) the Industrial Engineering degree in 2007, the Mathematics degree in 2010 and, in 2011, the Master degree in Applied Mathematics and Statistics. He received in 2015 the Ph.D. degree from Université de Toulouse (France). He is currently Assistant Professor in the Applied Mathematics, Statistics and Operations Research department of the UPV/EHU. He has previously held research positions at LAAS-CNRS (France), INRIA Grenoble (France) and BCAM-Basque Center for Applied Mathematics (Spain), teaching positions at ENSIMAG (France), INSA Toulouse (France) and IUT-Blagnac (France) and invited researcher positions at the Laboratory of Signals and Systems of CentraleSupelec (France), at Inria Paris (France) and at Laboratory David (France).

APPENDIX A PROOF OF THEOREM 2

We consider a system with K parallel M/M/1/(N+1)* queues. Without loss of generality, we compute the average AoI of source 1; the result is of course true for any source i. The arrival rate to queue j from source 1 is [TeX:] $$\lambda_{1} p_{1 j}$$ and that from the rest of the sources is [TeX:] $$\sum_{k>1} \lambda_{k} p_{k j}$$ for all j = 1,⋯,K. Let M = N+1. The continuous state is given by a vector of size 1+K∗M, [TeX:] $$x(t)=\left[x_{0}(t)\ x_{11}(t) \cdots x_{1 M}(t)\ x_{21}(t) \cdots x_{K 1}(t) \cdots x_{K M}(t)\right],$$ where [TeX:] $$x_{0}$$ represents the current age, [TeX:] $$x_{j 1}$$ the age if the update in service in queue j is delivered and [TeX:] $$x_{j l}$$ the age if the update in position l−1 of queue j is delivered. The discrete state is a Markov chain with a single state. We note that, when an event (arrival or departure) occurs in queue j, the ages of the updates in the rest of the queues are not modified. This allows us to focus on a single queue j to illustrate the Markov chain and the SHS transitions, which are presented respectively in Fig. 22 and Table 6. We now explain each transition l:

l = 0: An update of source 1 arrives to queue j. The incoming update replaces the update in the last position of the queue and the age of the incoming update is set to zero, i.e., [TeX:] $$x_{j M}^{\prime}=0.$$

l = 1: An update of another source arrives to queue j. The incoming update replaces the update in the last position of the queue and, since an update of another source does not reduce the age of source 1, the age of the last position is set to that of the preceding element, i.e., [TeX:] $$x_{j M}^{\prime}=x_{j N}.$$

References

  • 1 S. Kaul, M. Gruteser, V. Rai, and J. Kenney, "Minimizing age of information in vehicular networks," in Proc. IEEE SECON, 2011.
  • 2 S. Kaul, R. Yates, and M. Gruteser, "Real-time status: How often should one update?," in Proc. IEEE INFOCOM, 2012.
  • 3 C. Kam, S. Kompella, and A. Ephremides, "Effect of message transmission diversity on status age," in Proc. IEEE ISIT, 2014.
  • 4 C. Kam, S. Kompella, G. D. Nguyen, and A. Ephremides, "Effect of message transmission path diversity on status age," IEEE Trans. Inf. Theory, vol. 62, no. 3, pp. 1360-1374, Mar. 2015.
  • 5 S. K. Kaul and R. D. Yates, "Timely updates by multiple sources: The M/M/1 queue revisited," in Proc. IEEE CISS, 2020.
  • 6 M. Moltafet, M. Leinonen, and M. Codreanu, "On the age of information in multi-source queueing models," IEEE Trans. Commun., early access, 2020.
  • 7 A. Kosta, N. Pappas, and V. Angelakis, "Age of information: A new concept, metric, and tool," Foundations and Trends in Networking, vol. 12, no. 3, pp. 162-259, 2017.
  • 8 A. Maatouk, M. Assaad, and A. Ephremides, "The age of updates in a simple relay network," in Proc. IEEE ITW, 2018.
  • 9 Y. Sun, E. Uysal-Biyikoglu, and S. Kompella, "Age-optimal updates of multiple information flows," arXiv preprint arXiv:1801.02394, 2018.
  • 10 R. Talak, S. Karaman, and E. Modiano, "Can determinacy minimize age of information?," arXiv preprint arXiv:1810.04371, 2018.
  • 11 Y. Sun, E. Uysal-Biyikoglu, R. D. Yates, C. E. Koksal, and N. B. Shroff, "Update or wait: How to keep your data fresh," IEEE Trans. Inf. Theory, vol. 63, no. 11, pp. 7492-7508, Nov. 2017.
  • 12 A. Maatouk, S. Kriouile, M. Assaad, and A. Ephremides, "The age of incorrect information: A new performance metric for status updates," to appear in IEEE/ACM Trans. Netw.
  • 13 Y.-P. Hsu, E. Modiano, and L. Duan, "Scheduling algorithms for minimizing age of information in wireless broadcast networks with random arrivals: The no-buffer case," arXiv preprint arXiv:1712.07419, 2017.
  • 14 A. M. Bedewy, Y. Sun, S. Kompella, and N. B. Shroff, "Age-optimal sampling and transmission scheduling in multi-source systems," arXiv preprint arXiv:1812.09463, 2018.
  • 15 I. Kadota, A. Sinha, E. Uysal-Biyikoglu, R. Singh, and E. Modiano, "Scheduling policies for minimizing age of information in broadcast wireless networks," IEEE/ACM Trans. Netw., vol. 26, no. 6, pp. 2637-2650, Dec. 2018.
  • 16 A. Maatouk, M. Assaad, and A. Ephremides, "Minimizing the age of information: NOMA or OMA?," in Proc. IEEE INFOCOM AoI Workshop, 2019.
  • 17 A. Maatouk, M. Assaad, and A. Ephremides, "Minimizing the age of information in a CSMA environment," in Proc. IEEE WiOpt, 2019.
  • 18 A. Maatouk, M. Assaad, and A. Ephremides, "On the age of information in a CSMA environment," to appear in IEEE Trans. Netw., 2020.
  • 19 A. M. Bedewy, Y. Sun, and N. B. Shroff, "Optimizing data freshness, throughput, and delay in multi-server information-update systems," in Proc. IEEE ISIT, 2016.
  • 20 Y. Sun, E. Uysal-Biyikoglu, and S. Kompella, "Age-optimal updates of multiple information flows," in Proc. IEEE INFOCOM, 2018.
  • 21 R. D. Yates, "Status updates through networks of parallel servers," in Proc. IEEE ISIT, 2018, pp. 2281-2285.
  • 22 A. M. Bedewy, Y. Sun, and N. B. Shroff, "The age of information in multihop networks," IEEE/ACM Trans. Netw., vol. 27, no. 3, pp. 1248-1257, June 2019.
  • 23 A. M. Bedewy, Y. Sun, and N. B. Shroff, "Age-optimal information updates in multihop networks," in Proc. IEEE ISIT, 2017.
  • 24 S. Farazi, A. G. Klein, and D. R. Brown, "Fundamental bounds on the age of information in general multi-hop interference networks," in Proc. IEEE INFOCOM Workshops, 2019.
  • 25 B. Buyukates, A. Soysal, and S. Ulukus, "Age of information in multihop multicast networks," J. Commun. Netw., vol. 21, no. 3, pp. 256-267, 2019.
  • 26 R. D. Yates et al., "Age of information: An introduction and survey," arXiv preprint arXiv:2007.08564, 2020.
  • 27 R. D. Yates and S. K. Kaul, "The age of information: Real-time status updating by multiple sources," IEEE Trans. Inf. Theory, vol. 65, no. 3, pp. 1807-1827, Mar. 2019.
  • 28 J. P. Hespanha, "Modelling and analysis of stochastic hybrid systems," IEE Proc. Control Theory Appl., vol. 153, no. 5, pp. 520-535, Sept. 2006.
  • 29 S. Ishikawa, "Fixed points by a new iteration method," Proc. Amer. Math. Soc., vol. 44, pp. 147-150, 1974.

Table 1.

Summary of average AoI comparison of the systems without buffer (see Section III.A).
Without losses With losses
Single source (SERVER-ROUTING) and (SERVER-DOUBLE) have equal age for λ small and large. See Fig. 4. (SERVER-ROUTING) and (SERVER-DOUBLE) have equal age always. See Fig. 6.
Multiple sources (SERVER-ROUTING) and (SERVER-DOUBLE) have almost equal age. See Fig. 5. (SERVER-ROUTING) and (SERVER-DOUBLE) have equal age. See Fig. 7.

Table 2.

Summary of average AoI comparison of the systems with buffer (see Section III.B).
Without losses With losses
Single source QUEUE-ROUTING and QUEUE-DOUBLE have equal age for λ small and large. See Fig. 11. QUEUE-ROUTING and QUEUE-DOUBLE have equal age. See Fig. 13.
Multiple sources QUEUE-ROUTING and QUEUE-HALF have equal age for λ small. QUEUE-ROUTING and QUEUE-DOUBLE have equal age for λ large. See Fig. 12. QUEUE-ROUTING and QUEUE-DOUBLE have equal age always. See Fig. 14.

Table 3.

Table of transitions of Fig. 1.
l [TeX:] $$q_{l} \rightarrow q_{l^{\prime}}$$ [TeX:] $$\lambda^{l}$$ [TeX:] $$x^{\prime}=x A_{l}$$ [TeX:] $$\bar{v}_{q_{l}} A_{l}$$
0 | 0→1 | λ_1 | [x_0 0] | [v_0(0) 0]
1 | 0→1 | Σ_{k>1} λ_k | [x_0 x_0] | [v_0(0) v_0(0)]
2 | 1→0 | μ | [x_1 0] | [v_1(1) 0]
3 | 1→0 | θ | [x_0 0] | [v_1(0) 0]
4 | 1→1 | λ_1 | [x_0 0] | [v_1(0) 0]
5 | 1→1 | Σ_{k>1} λ_k | [x_0 x_0] | [v_1(0) v_1(0)]
Table 4.

Table of transitions of Fig. 2.
l [TeX:] $$q_{l} \rightarrow q_{l^{\prime}}$$ [TeX:] $$\lambda^{l}$$ [TeX:] $$x^{\prime}=x A_{l}$$ [TeX:] $$\bar{v}_{q_{l}} A_{l}$$
0 | 00→01 | λ_1 p_12 | [x_0 0 0] | [v_00(0) 0 0]
1 | 00→01 | Σ_{k>1} λ_k p_k2 | [x_0 0 x_0] | [v_00(0) 0 v_00(0)]
2 | 01→00 | μ_2 | [x_2 0 0] | [v_01(2) 0 0]
3 | 01→00 | θ_2 | [x_0 0 0] | [v_01(0) 0 0]
4 | 01→01 | λ_1 p_12 | [x_0 0 0] | [v_01(0) 0 0]
5 | 01→01 | Σ_{k>1} λ_k p_k2 | [x_0 0 x_0] | [v_01(0) 0 v_01(0)]
6 | 00→10 | λ_1 p_11 | [x_0 0 0] | [v_00(0) 0 0]
7 | 00→10 | Σ_{k>1} λ_k p_k1 | [x_0 x_0 0] | [v_00(0) v_00(0) 0]
8 | 10→00 | μ_1 | [x_1 0 0] | [v_10(1) 0 0]
9 | 10→00 | θ_1 | [x_0 0 0] | [v_10(0) 0 0]
10 | 10→10 | λ_1 p_11 | [x_0 0 0] | [v_10(0) 0 0]
11 | 10→10 | Σ_{k>1} λ_k p_k1 | [x_0 x_0 0] | [v_10(0) v_10(0) 0]
12 | 01→11 | λ_1 p_11 | [x_0 0 x_2] | [v_01(0) 0 v_01(2)]
13 | 01→11 | Σ_{k>1} λ_k p_k1 | [x_0 x_0 x_2] | [v_01(0) v_01(0) v_01(2)]
14 | 11→01 | μ_1 | [x_1 0 x_2] | [v_11(1) 0 v_11(2)]
15 | 11→01 | θ_1 | [x_0 0 x_2] | [v_11(0) 0 v_11(2)]
16 | 11→11 | λ_1 p_11 | [x_0 0 x_2] | [v_11(0) 0 v_11(2)]
17 | 11→11 | Σ_{k>1} λ_k p_k1 | [x_0 x_0 x_2] | [v_11(0) v_11(0) v_11(2)]
18 | 10→11 | λ_1 p_12 | [x_0 x_1 0] | [v_10(0) v_10(1) 0]
19 | 10→11 | Σ_{k>1} λ_k p_k2 | [x_0 x_1 x_0] | [v_10(0) v_10(1) v_10(0)]
20 | 11→10 | μ_2 | [x_2 x_1 0] | [v_11(2) v_11(1) 0]
21 | 11→10 | θ_2 | [x_0 x_1 0] | [v_11(0) v_11(1) 0]
22 | 11→11 | λ_1 p_12 | [x_0 x_1 0] | [v_11(0) v_11(1) 0]
23 | 11→11 | Σ_{k>1} λ_k p_k2 | [x_0 x_1 x_0] | [v_11(0) v_11(1) v_11(0)]

Table 5.

Table of transitions of Fig. 8.
l [TeX:] $$q_{l} \rightarrow q_{l^{\prime}}$$ [TeX:] $$\lambda^{l}$$ [TeX:] $$x^{\prime}=x A_{l}$$ [TeX:] $$\bar{v}_{q_{l}} A_{l}$$
0 | 0→1 | λ_1 | [x_0 0 0 0] | [v_0(0) 0 0 0]
1 | 0→1 | Σ_{k>1} λ_k | [x_0 x_0 0 0] | [v_0(0) v_0(0) 0 0]
2 | 1→0 | μ | [x_1 0 0 0] | [v_1(1) 0 0 0]
3 | 1→0 | θ | [x_0 0 0 0] | [v_1(0) 0 0 0]
4 | 1→2 | λ_1 | [x_0 x_1 0 0] | [v_1(0) v_1(1) 0 0]
5 | 1→2 | Σ_{k>1} λ_k | [x_0 x_1 x_1 0] | [v_1(0) v_1(1) v_1(1) 0]
6 | 2→1 | μ | [x_1 x_2 0 0] | [v_2(1) v_2(2) 0 0]
7 | 2→1 | θ | [x_0 x_2 0 0] | [v_2(0) v_2(2) 0 0]
8 | 2→3 | λ_1 | [x_0 x_1 x_2 0] | [v_2(0) v_2(1) v_2(2) 0]
9 | 2→3 | Σ_{k>1} λ_k | [x_0 x_1 x_2 x_2] | [v_2(0) v_2(1) v_2(2) v_2(2)]
10 | 3→2 | μ | [x_1 x_2 x_3 0] | [v_3(1) v_3(2) v_3(3) 0]
11 | 3→2 | θ | [x_0 x_2 x_3 0] | [v_3(0) v_3(2) v_3(3) 0]
12 | 3→3 | λ_1 | [x_0 x_1 x_2 0] | [v_3(0) v_3(1) v_3(2) 0]
13 | 3→3 | Σ_{k>1} λ_k | [x_0 x_1 x_2 x_2] | [v_3(0) v_3(1) v_3(2) v_3(2)]

Table 6.

Table of SHS transitions of Fig. 22.
l [TeX:] $$\lambda^{l}$$ | x | [TeX:] $$x^{\prime}=x A_{l}$$ | [TeX:] $$\bar{v}_{q_{l}} A_{l}$$
0 | λ_1 p_1j | [x_0 ⋯ x_j1 ⋯ x_jN x_jM ⋯] | [x_0 ⋯ x_j1 ⋯ x_jN 0 ⋯] | [v_0 ⋯ v_j1 ⋯ v_jN 0 ⋯]
1 | Σ_{k>1} λ_k p_kj | [x_0 ⋯ x_j1 ⋯ x_jN x_jM ⋯] | [x_0 ⋯ x_j1 ⋯ x_jN x_jN ⋯] | [v_0 ⋯ v_j1 ⋯ v_jN v_jN ⋯]
2 | μ_j | [x_0 ⋯ x_j1 ⋯ x_jN x_jM ⋯] | [x_j1 ⋯ x_j2 ⋯ x_jM x_jM ⋯] | [v_j1 ⋯ v_j2 ⋯ v_jM v_jM ⋯]

Table 7.

Table of transitions of Fig. 9.
l [TeX:] $$q_{l} \rightarrow q_{l^{\prime}}$$ [TeX:] $$\lambda^{l}$$ [TeX:] $$x^{\prime}=x A_{l}$$ [TeX:] $$\bar{v}_{q l} A_{l}$$
0 [TeX:] $$00 \rightarrow 01$$

[TeX:] $$\lambda_{1} p_{12}$$

[TeX:] $$\sum_{k>1} \lambda_{k} p_{k 2}$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & x_{0} & 0 \end{array}\right]$$

[TeX:] $$\left[v_{00}(0) 0000\right.$$

[TeX:] $$\left[v_{00}(0) 00 v_{00}(0) 0\right]$$

1 [TeX:] $$01 \rightarrow 00$$

[TeX:] $$\mu_{2}$$

[TeX:] $$\theta_{2}$$

[TeX:] $$\left[\begin{array}{lllll} x_{21} & 0 & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[v_{01}(3) 00000\right]$$

[TeX:] $$\left[v_{01}(0) 00000\right]$$

2 [TeX:] $$01 \rightarrow 02$$

[TeX:] $$\lambda_{1} p_{12}$$

[TeX:] $$\sum_{k>1} \lambda_{k} p_{k 2}$$

[TeX:] $$\left[x_{0} 00 x_{21} 0\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & x_{21} & x_{21} \end{array}\right]$$

[TeX:] $$\left[v_{01}(0) 0000\right]$$

[TeX:] $$v_{01}(0) 00 v_{00}(0) 0$$

3 [TeX:] $$02 \rightarrow 01$$

[TeX:] $$\mu_{2}$$

[TeX:] $$\theta_{2}$$

[TeX:] $$\left[\begin{array}{llll} x_{21} & 0 & 0 & x_{22} & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & x_{22} & 0 \end{array}\right]$$

[TeX:] $$\left[v_{02}(3) 00 v_{02}(4) 0\right]$$

[TeX:] $$\left[v_{02}(0) 00 v_{02}(4) 0\right]$$

4 [TeX:] $$02 \rightarrow 02$$

[TeX:] $$\lambda_{1} p_{12}$$

[TeX:] $$\sum_{k>1} \lambda_{k} p_{k 2}$$

[TeX:] $$\left[x_{0} 00 x_{21} 0\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & x_{21} & x_{21} \end{array}\right]$$

[TeX:] $$\left[v_{02}(0) 00 v_{02}(3) 0\right]$$

[TeX:] $$\left[v_{02}(0) 00 v_{02}(3) v_{02}(3)\right]$$

5 [TeX:] $$00 \rightarrow 10$$

[TeX:] $$\lambda_{1} p_{11}$$

[TeX:] $$\sum_{k \geq 1} \lambda_{k} p_{k 1}$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & x_{0} & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[v_{00}(0) 00000\right]$$

[TeX:] $$\left[v_{00}(0) v_{00}(0) 0000\right]$$

6 [TeX:] $$10 \rightarrow 00$$

[TeX:] $$\mu_{1}$$

[TeX:] $$\theta_{1}$$

[TeX:] $$\left|x_{12} 0000\right|$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[v_{01}(1) 0000\right]$$

[TeX:] $$\left[v_{01}(0) 0000\right]$$

7 [TeX:] $$01 \rightarrow 11$$

[TeX:] $$\lambda_{1} p_{11}$$

[TeX:] $$\sum_{k \geq 1} \lambda_{k} p_{k 1}$$

[TeX:] $$\left[x_{0} 00 x_{21} 0\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & x_{0} & 0 & x_{21} & 0 \end{array}\right]$$

[TeX:] $$\left[v_{01}(0) 00 v_{01}(3) 0\right]$$

[TeX:] $$\left[v_{01}(0) v_{01}(0) 00 v_{01}(3) 0\right]$$

8 [TeX:] $$11 \rightarrow 01$$

[TeX:] $$\mu_{1}$$

[TeX:] $$\theta_{1}$$

[TeX:] $$\left[\begin{array}{lllll} x_{11} & 0 & 0 & x_{21} & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{11} & 0 & 0 & x_{21} & 0 \end{array}\right]$$

[TeX:] $$\left[v_{11}(1) 00 v_{11}(3) 0\right]$$

[TeX:] $$\left[v_{11}(1) 00 v_{11}(3) 0\right]$$

9 [TeX:] $$02 \rightarrow 12$$

[TeX:] $$\lambda_{1} p_{11}$$

[TeX:] $$\sum_{k \geq 1} \lambda_{k} p_{k 1}$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & x_{21} & x_{22} \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & x_{0} & 0 & x_{21} & x_{22} \end{array}\right]$$

[TeX:] $$\left[v_{02}(0) 00 v_{02}(3) v_{02}(4)\right]$$

[TeX:] $$\left[v_{02}(0) v_{02}(0) 00 v_{02}(3) v_{02}(4)\right]$$

10 [TeX:] $$12 \rightarrow 02$$

[TeX:] $$\mu_{1}$$

[TeX:] $$\theta_{1}$$

[TeX:] $$\left[\begin{array}{llll} x_{11} & 0 & 0 & x_{21} & x_{22} \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & x_{21} & x_{22} \end{array}\right]$$

[TeX:] $$\left[v_{12}(1) 00 v_{12}(3) v_{12}(4)\right]$$

[TeX:] $$\left[v_{12}(0) 00 v_{12}(3) v_{12}(4)\right]$$

11 [TeX:] $$10 \rightarrow 11$$

[TeX:] $$\lambda_{1} p_{12}$$

[TeX:] $$\sum_{k \geq 1} \lambda_{k} p_{k 2}$$

[TeX:] $$\left[\begin{array}{llll} x_{0} & x_{11} & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & x_{11} & 0 & x_{0} & 0 \end{array}\right]$$

[TeX:] $$\left[v_{10}(0) v_{10}(1) 000\right]$$

[TeX:] $$\left[v_{10}(0) v_{10}(1) 0 v_{10}(0) 0\right]$$

12 [TeX:] $$11 \rightarrow 10$$

[TeX:] $$\mu_{2}$$

[TeX:] $$\theta_{2}$$

[TeX:] $$\left[\begin{array}{llll} x_{21} & x_{11} & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & x_{11} & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[v_{11}(3) v_{11}(1) 000\right]$$

[TeX:] $$\left[v_{11}(0) v_{11}(1) 000\right.$$

13 [TeX:] $$11 \rightarrow 12$$

[TeX:] $$\lambda_{1} p_{12}$$

[TeX:] $$\sum_{k>1} \lambda_{k} p_{k 2}$$

[TeX:] $$\sum_{k>1} \lambda_{k} p_{k 2}$$

[TeX:] $$\left[\begin{array}{llll} x_{0} & x_{11} & 0 & x_{21} & x_{21} \end{array}\right]$$

[TeX:] $$\left[v_{11}(0) v_{11}(1) 0 v_{11}(3) 0\right]$$

[TeX:] $$\left[v_{11}(0) v_{11}(1) 0 v_{11}(3) v_{11}(3)\right]$$

14 [TeX:] $$12 \rightarrow 11$$

[TeX:] $$\mu_{2}$$

[TeX:] $$\theta_{2}$$

[TeX:] $$\left[\begin{array}{lllll} x_{21} & 0 & 0 & x_{22} & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & 0 & 0 & x_{22} & 0 \end{array}\right]$$

[TeX:] $$\left[v_{12}(3) 00 v_{12}(4) 0\right]$$

[TeX:] $$\left[v_{12}(0) 00 v_{12}(4) 0\right]$$

15 [TeX:] $$12 \rightarrow 12$$

[TeX:] $$\lambda_{1} p_{12}$$

[TeX:] $$\sum_{k>1} \lambda_{k} p_{k 2}$$

[TeX:] $$\left[\begin{array}{llll} x_{0} & x_{11} & 0 & x_{21} & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{llll} x_{0} & x_{11} & 0 & x_{21} & x_{21} \end{array}\right]$$

[TeX:] $$\left[v_{12}(0) v_{12}(1) 0 v_{12}(3) 0\right]$$

[TeX:] $$\left[v_{12}(0) v_{12}(1) 0 v_{12}(3) v_{12}(3)\right]$$

16 [TeX:] $$10 \rightarrow 20$$

[TeX:] $$\lambda_{1} p_{11}$$

[TeX:] $$\sum_{k \geq 1} \lambda_{k} p_{k 1}$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & x_{11} & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & x_{11} & x_{11} & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[v_{10}(0) x_{10}(1) 000\right]$$

[TeX:] $$\left[v_{10}(0) v_{10}(1) v_{10}(1) 000\right]$$

17 [TeX:] $$20 \rightarrow 10$$

[TeX:] $$\mu_{1}$$

[TeX:] $$\theta_{1}$$

[TeX:] $$\left[\begin{array}{llll} x_{12} & x_{22} & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & x_{22} & 0 & 0 & 0 \end{array}\right]$$

[TeX:] $$\left[v_{20}(1) v_{20}(2) 000\right]$$

[TeX:] $$\left[v_{20}(0) x_{20}(2) 000\right]$$

18 [TeX:] $$11 \rightarrow 21$$

[TeX:] $$\lambda_{1} p_{11}$$

[TeX:] $$\sum_{k>1} \lambda_{k} p_{k 1}$$

[TeX:] $$\left[\begin{array}{llll} x_{0} & x_{11} & 0 & x_{21} & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{lllll} x_{0} & x_{11} & x_{11} & x_{21} & 0 \end{array}\right]$$

[TeX:] $$\left[v_{11}(0) v_{11}(1) 0 v_{11}(3) 0\right]$$

[TeX:] $$\left[v_{11}(0) v_{11}(1) v_{11}(1) v_{11}(3) 0\right]$$

19 [TeX:] $$21 \rightarrow 11$$

[TeX:] $$\mu_{1}$$

[TeX:] $$\theta_{1}$$

[TeX:] $$\left[\begin{array}{llll} x_{11} & x_{12} & 0 & x_{21} & 0 \end{array}\right]$$

[TeX:] $$\left[\begin{array}{llll} x_{0} & x_{12} & 0 & x_{21} & 0 \end{array}\right]$$

[TeX:] $$\left[v_{21}(1) v_{21}(2) 0 v_{21}(3) 0\right]$$

[TeX:] $$\left[v_{21}(0) v_{21}(2) 0 v_{11}(3) 0\right]$$

20 [TeX:] $$12 \rightarrow 22$$

[TeX:] $$\lambda_{1} p_{11}$$

[TeX:] $$\sum_{k \geq 1} \lambda_{k} p_{k 1}$$

(completing the preceding arrival transition into state 22) for a source-1 arrival, $xA_l = [x_0\ x_{11}\ 0\ x_{21}\ x_{22}]$ and $vA_l = [v_{12}(0)\ v_{12}(1)\ 0\ v_{12}(3)\ v_{12}(4)]$; for an arrival from another source, $xA_l = [x_0\ x_{11}\ x_{11}\ x_{21}\ x_{22}]$ and $vA_l = [v_{12}(0)\ v_{12}(1)\ v_{12}(1)\ v_{12}(3)\ v_{12}(4)]$.

For transitions 21 to 29 below, the first sub-row of each transition gives the rate and reset maps for an event involving a source-1 packet (a source-1 arrival, or a delivery at rate $\mu_i$), and the second sub-row the complementary event (an arrival from another source, or a loss at rate $\theta_i$). States are labeled $q = n_1 n_2$ by the number of packets in each queue and the age vector is $x = [x_0\ x_{11}\ x_{12}\ x_{21}\ x_{22}]$.

| $l$ | $q_l \to q'_l$ | $\lambda^{(l)}$ | $x A_l$ | $v_{q_l} A_l$ |
|---|---|---|---|---|
| 21 | $22 \to 12$ | $\mu_1$ | $[x_{11}\ x_{12}\ 0\ x_{21}\ x_{22}]$ | $[v_{22}(1)\ v_{22}(2)\ 0\ v_{22}(3)\ v_{22}(4)]$ |
| | | $\theta_1$ | $[x_0\ x_{12}\ 0\ x_{21}\ x_{22}]$ | $[v_{22}(0)\ v_{22}(2)\ 0\ v_{22}(3)\ v_{22}(4)]$ |
| 22 | $20 \to 20$ | $\lambda_1 p_{11}$ | $[x_0\ x_{11}\ 0\ 0\ 0]$ | $[v_{20}(0)\ v_{20}(1)\ 0\ 0\ 0]$ |
| | | $\sum_{k>1}\lambda_k p_{k1}$ | $[x_0\ x_{11}\ x_{11}\ 0\ 0]$ | $[v_{20}(0)\ v_{20}(1)\ v_{20}(1)\ 0\ 0]$ |
| 23 | $20 \to 21$ | $\lambda_1 p_{12}$ | $[x_0\ x_{11}\ x_{12}\ 0\ 0]$ | $[v_{20}(0)\ v_{20}(1)\ v_{20}(2)\ 0\ 0]$ |
| | | $\sum_{k>1}\lambda_k p_{k2}$ | $[x_0\ x_{11}\ x_{12}\ x_0\ 0]$ | $[v_{20}(0)\ v_{20}(1)\ v_{20}(2)\ v_{20}(0)\ 0]$ |
| 24 | $21 \to 20$ | $\mu_2$ | $[x_{21}\ x_{11}\ x_{12}\ 0\ 0]$ | $[v_{21}(3)\ v_{21}(1)\ v_{21}(2)\ 0\ 0]$ |
| | | $\theta_2$ | $[x_0\ x_{11}\ x_{12}\ 0\ 0]$ | $[v_{21}(0)\ v_{21}(1)\ v_{21}(2)\ 0\ 0]$ |
| 25 | $21 \to 21$ | $\lambda_1 p_{11}$ | $[x_0\ x_{11}\ 0\ x_{21}\ 0]$ | $[v_{21}(0)\ v_{21}(1)\ 0\ v_{21}(3)\ 0]$ |
| | | $\sum_{k>1}\lambda_k p_{k1}$ | $[x_0\ x_{11}\ x_{11}\ x_{21}\ 0]$ | $[v_{21}(0)\ v_{21}(1)\ v_{21}(1)\ v_{21}(3)\ 0]$ |
| 26 | $21 \to 22$ | $\lambda_1 p_{12}$ | $[x_0\ x_{11}\ x_{12}\ x_{21}\ 0]$ | $[v_{21}(0)\ v_{21}(1)\ v_{21}(2)\ v_{21}(3)\ 0]$ |
| | | $\sum_{k>1}\lambda_k p_{k2}$ | $[x_0\ x_{11}\ x_{12}\ x_{21}\ x_{21}]$ | $[v_{21}(0)\ v_{21}(1)\ v_{21}(2)\ v_{21}(3)\ v_{21}(3)]$ |
| 27 | $22 \to 21$ | $\mu_2$ | $[x_{21}\ x_{11}\ x_{12}\ x_{22}\ 0]$ | $[v_{22}(3)\ v_{22}(1)\ v_{22}(2)\ v_{22}(4)\ 0]$ |
| | | $\theta_2$ | $[x_0\ x_{11}\ x_{12}\ x_{22}\ 0]$ | $[v_{22}(0)\ v_{22}(1)\ v_{22}(2)\ v_{22}(4)\ 0]$ |
| 28 | $22 \to 22$ | $\lambda_1 p_{11}$ | $[x_0\ x_{11}\ 0\ x_{21}\ x_{22}]$ | $[v_{22}(0)\ v_{22}(1)\ 0\ v_{22}(3)\ v_{22}(4)]$ |
| | | $\sum_{k>1}\lambda_k p_{k1}$ | $[x_0\ x_{11}\ x_{11}\ x_{21}\ x_{22}]$ | $[v_{22}(0)\ v_{22}(1)\ v_{22}(1)\ v_{22}(3)\ v_{22}(4)]$ |
| 29 | $22 \to 22$ | $\lambda_1 p_{12}$ | $[x_0\ x_{11}\ x_{12}\ x_{21}\ 0]$ | $[v_{22}(0)\ v_{22}(1)\ v_{22}(2)\ v_{22}(3)\ 0]$ |
| | | $\sum_{k>1}\lambda_k p_{k2}$ | $[x_0\ x_{11}\ x_{12}\ x_{21}\ x_{21}]$ | $[v_{22}(0)\ v_{22}(1)\ v_{22}(2)\ v_{22}(3)\ v_{22}(3)]$ |
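A transition table of this form feeds directly into the SHS machinery: one first computes the stationary distribution $\pi$ of the discrete chain, then solves a linear system for the correlation vectors $\bar{v}_q$ driven by the reset maps $xA_l$, and finally reads off the average AoI as $\sum_q \bar{v}_q(0)$. As a rough illustration of that pipeline (not the paper's code; the function name `average_aoi` and the generic encoding of transitions are ours), the sketch below solves the two linear systems for an arbitrary transition list and checks itself on the single-source preemptive M/M/1/1 queue, whose average AoI has the known closed form $1/\lambda + 1/\mu$:

```python
import numpy as np

def average_aoi(n_states, n_ages, transitions, b=None):
    """Average AoI via the SHS method: transitions is a list of
    (q, q_next, rate, A) with reset map x' = x @ A (A is n_ages x n_ages)."""
    if b is None:
        b = np.ones((n_states, n_ages))   # age growth rates per state
    # 1) Stationary distribution pi of the discrete Markov chain: pi Q = 0, sum(pi) = 1.
    Q = np.zeros((n_states, n_states))
    for q, qn, rate, A in transitions:
        Q[q, qn] += rate
        Q[q, q] -= rate
    M = np.vstack([Q.T, np.ones(n_states)])
    rhs = np.zeros(n_states + 1)
    rhs[-1] = 1.0
    pi = np.linalg.lstsq(M, rhs, rcond=None)[0]
    # 2) Linear system for the correlation vectors v_q:
    #    v_q * (total rate out of q) = b_q * pi_q + sum over incoming l of rate_l * v_{q_l} @ A_l
    out_rate = np.zeros(n_states)
    for q, qn, rate, A in transitions:
        out_rate[q] += rate
    n = n_states * n_ages
    L = np.zeros((n, n))
    r = np.zeros(n)
    blk = lambda q: slice(q * n_ages, (q + 1) * n_ages)
    for q in range(n_states):
        L[blk(q), blk(q)] += out_rate[q] * np.eye(n_ages)
        r[blk(q)] = b[q] * pi[q]
    for q, qn, rate, A in transitions:
        L[blk(qn), blk(q)] -= rate * A.T   # row-vector product v_q @ A, written as A.T @ v_q
    v = np.linalg.solve(L, r)
    # 3) The average AoI is the sum over states of the monitor component v_q(0).
    return sum(v[q * n_ages] for q in range(n_states))

# Sanity check: single-source preemptive M/M/1/1 (states 0 = idle, 1 = busy,
# age vector x = [x0, x1]); the known closed form is 1/lam + 1/mu.
lam, mu = 2.0, 1.0
new_pkt = np.array([[1., 0.], [0., 0.]])   # arrival / preemption: x' = [x0, 0]
deliver = np.array([[0., 0.], [1., 0.]])   # delivery: x' = [x1, 0]
trans = [(0, 1, lam, new_pkt), (1, 1, lam, new_pkt), (1, 0, mu, deliver)]
aoi = average_aoi(2, 2, trans, b=np.array([[1., 0.], [1., 1.]]))
```

The same solver accepts larger tables such as the one above; only the state count, age-vector length, and the list of reset matrices change.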

The SHS Markov chain for the single M/M/1/1 queue system with multiple sources and losses of packets in service.
The SHS Markov chain for a system with two parallel M/M/1/1 queues with multiple sources and losses of packets in service.
Representation of the models under comparison in Section III.A.3 for two sources: (a) M/M/1/1 queue with arrival rate [TeX:] $$\lambda / 2,$$ (b) M/M/1/1 queue with service rate [TeX:] $$2 \mu,$$ and (c) Two parallel M/M/1/1 queues.
Average AoI of source 1 when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with a single source and without losses [TeX:] $$\left(\lambda_{k}=0 \text { for all } k>1 \text { and } \theta=0\right) . \mu=1.$$
Average AoI of source 1 when [TeX:] $$\lambda_{1}$$ varies from [TeX:] $$0.1 \text { to } 10^{3}$$ with multiple sources and without losses [TeX:] $$\left(\sum_{k>1} \lambda_{k}=10 \text { and } \theta=0\right) . \mu=1.$$
Average AoI of source 1 when [TeX:] $$\lambda_{1}$$ varies from [TeX:] $$0.1 \text { to } 10^{3}$$ with a single source and losses [TeX:] $$\left(\lambda_{k}=0 \text { for all } k>1 \text { and } \theta=10\right) . \mu=1.$$
Average AoI of source 1 when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with multiple sources and losses [TeX:] $$\left(\sum_{k>1} \lambda_{k}=10 \text { and } \theta=10\right). \mu=1.$$
The SHS Markov chain for the M/M/1/3* queue with multiple sources and losses of packets in service.
The SHS Markov chain for system with two parallel M/M/1/2* queues with multiple sources and losses of packets in service.
Representation of the models under comparison in Section III.B.3 for two sources: (a) M/M/1/2 queue with arrival rate [TeX:] $$\lambda / 2,$$ (b) M/M/1/2 queue with service rate [TeX:] $$2 \mu,$$ and (c) Two parallel M/M/1/2 queues.
Average AoI comparison when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with a single source and without losses [TeX:] $$\left(\lambda_{k}=0 \text { for all } k>1 \text { and } \theta=0\right). \mu=1.$$
Average AoI comparison when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with multiple sources and without losses [TeX:] $$\left(\sum_{k>1} \lambda_{k}=10 \text { and } \theta=0\right) . \mu=1.$$
Average AoI comparison when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with a single source and losses [TeX:] $$\left(\lambda_{k}=0 \text { for all } k>1 \text { and } \theta=10\right) . \mu=1.$$
Average AoI comparison when [TeX:] $$\lambda_{1}$$ varies from 0.1 to [TeX:] $$10^{3}$$ with multiple sources and losses [TeX:] $$\left(\sum_{k>1} \lambda_{k}=10 \text { and } \theta=10\right) . \mu=1.$$
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for two parallel M/M/1/2* queues with a single source [TeX:] $$\left(\lambda_{2}=0\right).$$
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for two parallel M/M/1/2* queues with multiple sources [TeX:] $$\left(\lambda_{2}=10\right). \mu=1.$$
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for two parallel M/M/1/2* queues with a single source [TeX:] $$\left(\lambda_{2}=0\right). \mu=0.1.$$
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for two parallel M/M/1/2* queues with a single source [TeX:] $$\left(\lambda_{2}=0\right). \mu=10.$$
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for two parallel M/M/1/2* queues with a single source [TeX:] $$\left(\lambda_{2}=0\right). \mu=100.$$
Upper bound and real average AoI comparison when [TeX:] $$\lambda$$ varies from 0.1 to [TeX:] $$10^{3}$$ for one M/M/1/3* queue with multiple sources [TeX:] $$\left(\lambda_{2}=10\right). \mu=1.$$
Convergence of the fixed point algorithm defined from (25).
The SHS Markov chain for a system formed by K parallel M/M/1/(N+1)* queues with multiple sources.
Simulation results of the M/M/1/1 queue with different values of arrival rates; x-axis in logarithmic scale.
Simulation results of the M/M/1/3* queue with different values of arrival rates; x-axis in logarithmic scale.
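Simulation curves like those referenced above can be reproduced in spirit with a short discrete-event simulation. The sketch below (the function name `sim_aoi` is ours; it models a single source, preemption in service, and losses at rate $\theta$, so a service completion is a delivery with probability $\mu/(\mu+\theta)$) tracks the instantaneous age and averages it over the horizon; for $\theta=0$ the estimate can be checked against the closed form $1/\lambda + 1/\mu$ of the preemptive M/M/1/1 queue:

```python
import math
import random

def sim_aoi(lam, mu, theta, horizon, seed=1):
    """Time-average AoI of a single-source preemptive M/M/1/1 queue where a
    packet finishing service is delivered (rate mu) or lost (rate theta)."""
    rng = random.Random(seed)
    t, age, area = 0.0, 0.0, 0.0
    gen = None                    # generation time of the packet in service
    t_arr = rng.expovariate(lam)  # next arrival epoch
    t_done = math.inf             # next service-completion epoch
    delivered = False             # completion outcome, drawn when service starts
    while t < horizon:
        t_next = min(t_arr, t_done, horizon)
        dt = t_next - t
        area += age * dt + 0.5 * dt * dt   # age grows at unit rate between events
        age += dt
        t = t_next
        if t >= horizon:
            break
        if t == t_done:                    # service completion
            if delivered:
                age = t - gen              # a successful delivery resets the age
            gen, t_done = None, math.inf
        else:                              # arrival, preempting any packet in service
            gen = t
            t_done = t + rng.expovariate(mu + theta)
            delivered = rng.random() < mu / (mu + theta)
            t_arr = t + rng.expovariate(lam)
    return area / horizon

aoi_hat = sim_aoi(lam=1.0, mu=1.0, theta=0.0, horizon=2e5)
```

Sweeping `lam` over a logarithmic grid, as in the figures, then only requires calling `sim_aoi` in a loop and plotting with a log-scaled x-axis.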