

Feng Yuan, Zeyu Hu, and Zhiyuan Jiang

Analyzing Age of Information in Multiaccess Networks with Time-Varying Channels: A Fluid Limits Approach

Abstract: In this paper, we adopt the fluid limits approach to analyze the age of information (AoI) in a wireless multiaccess network where users share the channel and transmissions are unreliable. We prove the convergence of the AoI occupancy measure to the fluid limit, represented by a partial differential equation (PDE). Furthermore, we demonstrate global convergence to the equilibrium of the PDE, i.e., the stationary AoI distribution. Within this framework, we first consider the case of i.i.d. channel conditions and generate-at-will statuses for users. We demonstrate that a previously established AoI lower bound in the literature is asymptotically accurate, and that a straightforward threshold-based access policy can be asymptotically optimal. Next, we consider the case where the channel states are time-varying, i.e., the Gilbert-Elliott channel model. We assume partial channel state information (CSI) is available through channel probing signals. Theoretical analysis reveals that only a fraction of the CSI is required to approach the optimal performance. Additionally, we numerically evaluate the performance of the proposed policy and the existing Whittle's index policy under time-varying channels. Simulation results demonstrate that the proposed policy outperforms the Whittle's index policy, since the latter cannot adapt to time-varying channels.

Keywords: Age of information, fluid limits, time-varying channels, wireless multiaccess networks

I. INTRODUCTION

AGE of information (AoI) [2] is defined as a timeliness metric to quantify the time elapsed since the generation of the freshest information received at a destination. In a time-critical networked system, it is desirable to maintain a low level of AoI at a node that depends heavily on the specific information [3], [4]. Therefore, the minimization of AoI is a systematic task, encompassing lowering the communication delay, sensing delay, and data processing delay, and considering their interplay. In this paper, we focus on the part of AoI that is attributed to wireless network scheduling, especially scenarios with a large number of users and time-varying channels.

In wireless networks, two fundamental challenges exist for AoI optimization: Collisions and channel fading. To circumvent collisions among user transmissions, a carefully designed scheduling algorithm is necessary. Assuming only one user can occupy the channel at any time instant, the scheduler needs to arrange the order and frequency of user transmissions, which inevitably introduces aging of the information maintained by each user. On the other hand, channel fading results in a time-varying transmission environment, wherein not only can user packets be lost due to erroneous transmissions, but the error probability also changes over time. Accordingly, many works have been dedicated to analyzing and optimizing AoI in wireless networks, which are discussed below.

A. Related Work

Considering active sources in a wireless multiaccess network, i.e., the transmitted information is always fresh, the AoI is in fact identical to the time-since-last-service [5]. With users having heterogeneous i.i.d. transmission success probabilities, denoted by [TeX:] $$q_n, \forall n \in[1, \cdots, N],$$ and one user scheduled at each time, Kadota et al. [6] found that the time-average AoI is lower bounded by

(1)
[TeX:] $$\bar{\Delta}_\pi \geq \frac{1}{2 N}\left(\sum_{n=1}^N \frac{1}{\sqrt{q_n}}\right)^2$$

The authors subsequently propose several optimization algorithms. The age-greedy policy is shown to be optimal in homogeneous networks, i.e., when [TeX:] $$q_n=q, \forall n.$$ The stationary randomized policy with optimal randomization is shown to be within a factor of two of the optimum. Neither the greedy nor the stationary policy performs as well as the Whittle's index policy and the Lyapunov-based max-weight policy, which are essentially identical in their scheduling criterion; they achieve very close-to-optimal performance in simulations, but can only be bounded loosely in theory due to their non-renewal nature. Several other works also derive Whittle's index policies for different scenarios that show similarly good performance, and again, without theoretical optimality guarantees [7], [8]. A recent work by Maatouk et al. [9] proved that the Whittle's index policy is asymptotically optimal when the number of scheduled users scales with the total number of users and both go to infinity. Talak et al. showed that the stationary randomized policy is peak-AoI optimal even in a general network topology, wherein peak-AoI denotes the AoI right before delivery [10].
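As a quick numerical illustration (our own sketch, not from the paper's simulations; the helper name is hypothetical), the lower bound in (1) can be evaluated directly from the per-user success probabilities:

```python
import math

def aoi_lower_bound(q):
    """Evaluate the lower bound (1) on the time-average AoI:
    (1 / 2N) * (sum_n 1 / sqrt(q_n))^2."""
    n = len(q)
    return (sum(1.0 / math.sqrt(qn) for qn in q) ** 2) / (2.0 * n)

# Homogeneous network (q_n = q): the bound reduces to N / (2q).
print(round(aoi_lower_bound([0.5] * 10), 6))   # -> 10.0
# Heterogeneous success probabilities.
print(round(aoi_lower_bound([0.25, 1.0]), 6))  # -> 2.25
```

Note that poorer channels (small [TeX:] $$q_n$$) enter through [TeX:] $$1 / \sqrt{q_n},$$ so they dominate the bound more strongly than a simple average of service rates would suggest.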

The majority of existing AoI research focuses on the case with complete channel state information. However, in real-world communication systems, this assumption is challenging to meet. Refs. [11] and [12] consider the AoI minimization problem for a network with general interference constraints and time-varying channels, with perfect CSI and unknown channel state, respectively. The case with perfect CSI demonstrates a significant age improvement compared to the case with unknown channel state. Ref. [13] proposes a scheduling policy for heterogeneous and unreliable multi-channel systems where the channel states are unknown when scheduling decisions are made. Ref. [14] studies the minimum-age scheduling problem, wherein AoI is not directly observable to the scheduler, and proposes a policy that approximates the greedy policy. Ref. [15] analyzes a class of information freshness metrics in the setting where a large number of terminals access the channel via a slotted ALOHA (SA) policy and each wireless user-sink link is modelled as a Gilbert-Elliott channel. Closely related works are presented in [16] and [17]. The work in [16] derives the Whittle index for AoI minimization in a system with i.i.d./Markovian channels and multiple information settings (without CSI, with CSI, and with delayed CSI). Ref. [17] analyzes three scenarios with an i.i.d. ON-OFF channel, namely systems without CSI, systems with CSI, and systems with partial CSI, and derives the corresponding Whittle index-based scheduling policies. Nevertheless, neither of these two papers considers the AoI minimization problem in the scenario with Markov channels and partial CSI, and it has not been demonstrated theoretically that the given policies can achieve optimal performance.

Summarizing existing works, four key problems still remain to be solved. P1: Is the lower bound in (1) tight, or at least asymptotically tight? What policy can be proved to achieve the optimum? P2: Most works focus on AoI scheduling policy design, but analytical results are less available. How can the AoI performance of scheduling policies be analyzed? P3: Considering practical signaling concerns, how can the scheduler operation be decentralized? P4: In a time-varying channel environment, how should a scheduling policy be designed, and what fraction of CSI is required to achieve the optimum?

B. Our Contributions

In this paper, we address P1–P4 by adopting the fluid limits approach. Our contributions include:

· We demonstrate that the AoI occupancy measure of a broad range of threshold-based policies with a large number of users converges to a fluid limit. This limit is expressed as a partial differential equation (PDE). We also prove that the PDE has a globally stable equilibrium point, which can be used to describe the stationary distribution of AoI.

· We demonstrate that the lower bound in (1) is asymptotically accurate, and a straightforward threshold-based policy can achieve the optimal result, with the thresholds explicitly determined from the fluid limits. We also derive the AoI probability distribution function under the threshold-based policy, which allows us to quickly calculate AoI statistics such as moments and tail distribution.

· We further investigate the age performance of wireless multiaccess networks with time-varying channels. By utilizing partial CSI available at the start of each time slot, we derive the optimal thresholds and long-term average AoI. Our theoretical and experimental results demonstrate that optimal AoI performance can be achieved even with only a limited amount of channel state information.

C. Organization

The rest of the paper is organized as follows. In Section II, an illustrative example is given to highlight the idea of fluid limits. Section III introduces the system model and modeling language. Section IV lays down the mathematical tools that are used. Sections V and VI use these tools to analyze time-invariant and time-varying channels, respectively. Simulation results are shown in Section VII. Finally, in Section VIII, conclusions are drawn and future work is discussed.

II. ILLUSTRATING EXAMPLE OF AOI FLUID LIMITS

Consider a network of N users where each user is scheduled at a constant rate μ, with scheduling events happening according to a Poisson process in continuous time. Transmissions are reliable. Fig. 1 shows an exemplary occupancy measure of AoI at a certain time t, which is assumed to admit a probability density function. Assume that the system is in equilibrium and consider the balance equation of age flows, the left-hand side of which represents the expected in-flow to a specific AoI h during a time duration of dh, and the right-hand side the expected out-flow from h:

(2)
[TeX:] $$f(h-\mathrm{d} h)=f(h)+\mu f(h) \mathrm{d} h .$$

Fig. 1. An exemplary AoI flow diagram assuming the fluid limit exists.

The in-flow denotes the aging from h−dh, and the first term of the out-flow is due to aging from h. The second term of the out-flow represents the expected amount of agents that are scheduled and return to an AoI of zero. This formulation resembles fluid flow in continuous time, and the dynamics are characterized by an ordinary differential equation (ODE). When dh is sufficiently small, we obtain

(3)
[TeX:] $$f^{\prime}=-\mu f, f(0)=\mu .$$

The initial condition is obtained by considering the flow at h = 0. This yields a solution which reads

(4)
[TeX:] $$f=\mu e^{-\mu h}, h\gt 0 .$$

Alternatively, one can solve an embedded Markov chain describing the evolution of AoI and obtain the exact same stationary AoI distribution, which is omitted here for brevity. The rationale of this fluid approach is that, with a growing number of agents, one expects the law of large numbers to take effect, such that the expected flow in (2), which is an expectation in the probability sense, coincides with the actual fraction of agents that are scheduled. This is called the fluid limit. Consequently, the calculation of AoI statistics follows immediately.
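This claim can be sanity-checked with a small Monte Carlo experiment (our own sketch, not from the paper): simulate N agents served by the superposition of their rate-μ Poisson processes and compare the empirical mean AoI with the fluid prediction [TeX:] $$\mathbb{E}[h]=1 / \mu.$$ The function name and parameters below are our own.

```python
import random

def simulate_aoi(n_agents=2000, mu=1.0, horizon=20.0, seed=0):
    """Simulate N agents, each scheduled at Poisson rate mu with
    reliable transmissions; return every agent's AoI (time since
    last service) at the end of the horizon."""
    rng = random.Random(seed)
    last_service = [0.0] * n_agents      # all agents start fresh
    t = 0.0
    while True:
        # Superposition of N independent rate-mu Poisson processes.
        t += rng.expovariate(mu * n_agents)
        if t > horizon:
            break
        last_service[rng.randrange(n_agents)] = t
    return [horizon - s for s in last_service]

ages = simulate_aoi()
mean_age = sum(ages) / len(ages)
# The fluid limit f(h) = mu * exp(-mu h) predicts E[h] = 1/mu = 1.
print(round(mean_age, 2))
```

With 2000 agents, the empirical mean lands close to 1, and a histogram of `ages` closely traces the exponential density in (4).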

Three key questions exist in applying this idea. Q1: What are the conditions allowing the limiting ODE or PDE (partial differential equation), as well as its equilibrium, to exist? Q2: In this example, the expected number of scheduled agents per time unit is [TeX:] $$\mu N,$$ which also goes to infinity. In practice, it is more reasonable to consider a finite number of scheduled agents. Does the fluid limit also exist in that case? Q3: Can we use this framework to solve particular problems in AoI? In the following sections, we endeavor to answer these questions.

III. FLUID LIMIT MODELING LANGUAGE FOR AOI

In this section, fluid limit modeling specifications for the AoI evolution of many users (hereinafter referred to as agents) are introduced. They are described in a general setting so that they can be adopted in as many scenarios as possible. We first describe a continuous time Markov chain (CTMC)-based formulation, also known as a Markov process of pure jump type. However, this formulation is problematic when considering the convergence of the AoI occupancy measure. Therefore, it is transformed to a rescaled time domain. Connections to the widely-used discrete time Markov chain (DTMC)-based AoI model [6], [18], to which the fluid limits are inconvenient to apply, are also discussed.
A. Multi-Class CTMC-Based AoI Model

We assume that agents in the system belong to a finite number of classes, which are prescribed based on their features, e.g., channel conditions, packet arrival patterns, and performance metrics. Agents within each class are considered exactly the same and hence interchangeable, i.e., they have identical state evolution dynamics. In practice, this assumption can be relaxed to approximately the same. The total number of agents and classes are denoted by N and C, respectively, with [TeX:] $$N_c$$ denoting the number of agents in class-c; hence [TeX:] $$\sum_{c=1}^C N_c=N .$$ The evolution of the system is captured by a CTMC, described as follows.

Definition 1: A CTMC model for AoI is a tuple [TeX:] $$\mathcal{H}=\left(\boldsymbol{H}, \mathcal{T}, \boldsymbol{d}_0\right),$$ where:

1) [TeX:] $$\boldsymbol{H}=\left(\boldsymbol{h}_1, \cdots, \boldsymbol{h}_C\right)$$ denotes the vector of AoI of N terminals categorized into C classes, wherein [TeX:] $$\boldsymbol{h}_c=\left(h_{c, 1}, \cdots, h_{c, N_c}\right).$$ Each [TeX:] $$h_{c, i}$$ takes value in [TeX:] $$\mathbb{R}_{\geq 0},$$ and [TeX:] $$\boldsymbol{d}_0 \in \mathbb{R}_{\geq 0}^N$$ is the initial state of the model.

2) [TeX:] $$\mathcal{T}=\left\{\tau_1, \cdots, \tau_M\right\}$$ denotes the set of transitions of the form [TeX:] $$\tau_j=(a, \phi(\boldsymbol{H}), \boldsymbol{v}(\boldsymbol{H}), r(\boldsymbol{H})),$$ where:

a) The label of the transition is denoted by a.

b) [TeX:] $$\phi(\boldsymbol{H})$$ denotes the set of inequalities that must be satisfied for the transition, indicating the conditions under which the transition happens.

c) The update vector is denoted by [TeX:] $$\boldsymbol{v}(\boldsymbol{H}) \in \mathbb{R}^N,$$ representing the net change in the state caused by the transition. It is required that [TeX:] $$\boldsymbol{H}+\boldsymbol{v}(\boldsymbol{H}) \in \mathbb{R}_{\geq 0}^N$$ whenever [TeX:] $$\phi(\boldsymbol{H})$$ is true.

d) The transition rate function is denoted by [TeX:] $$r(\boldsymbol{H}): \mathbb{R}_{\geq 0}^N \rightarrow \mathbb{R}_{\geq 0},$$ which specifies the transition rate as a function of the current state and is required to be Lipschitz continuous and bounded in general. The time interval between successive transitions is assumed to be exponentially distributed with rate [TeX:] $$r(\boldsymbol{H}).$$

This definition is described in a quite general way. To give a concrete example, let us consider an AoI scheduling problem based on the Whittle’s index [7], [8], wherein the index for agent-n at time-t is [TeX:] $$w_n(t)$$ and the agent with the highest index is scheduled at every time slot. The transmission is assumed to be successful with probability [TeX:] $$p_{\mathrm{s}} \in(0,1]$$ due to channel error, and in the case of success, the AoI decreases to zero. The mean time interval between successive scheduling events is set to [TeX:] $$1 / r_0.$$ The corresponding transitions can be described as (using agent-1 as a representative):

1) [TeX:] $$\tau_1$$ = (Scheduled and successful update, [TeX:] $$w_1(t)\gt \left.w_i(t)(\forall i \neq 1),\left(-h_{1,1}(t), \Delta(t), \cdots, \Delta(t)\right), p_{\mathrm{s}} r_0\right);$$

2) [TeX:] $$\tau_2$$ = (Scheduled and unsuccessful update, [TeX:] $$w_1(t) \gt \left.w_i(t)(\forall i \neq 1),(\Delta(t), \cdots, \Delta(t)),\left(1-p_{\mathrm{s}}\right) r_0\right) \text {, }$$

where [TeX:] $$\Delta(t)$$ denotes the time elapsed since the last scheduling event.

Define the occupancy measure of the AoI [TeX:] $$\boldsymbol{H}(t)$$ among all agents in class-c by

(5)
[TeX:] $$X_{c, t}^N(h)=\frac{1}{N} \sum_{i=1}^{N_c} \delta_{h_{c, i}(t)}(h),$$

where [TeX:] $$\delta_y(x)$$ denotes the Dirac delta distribution, i.e., the linear mapping [TeX:] $$D(\mathbb{R}) \rightarrow \mathbb{R}$$ that satisfies [TeX:] $$\int \delta_y(x) f(x) \mathrm{d} x=f(y), \text { and } D(\mathbb{R})$$ denotes the set of test functions. Denote the fraction of agents that are in class c with AoI lower than h as

(6)
[TeX:] $$F_{c, t}^N(h)=\int_0^h X_{c, t}^N(x) \mathrm{d} x .$$
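For concreteness, the empirical occupancy CDF in (6) can be computed directly from the agents' ages; this snippet is an illustrative sketch with hypothetical sample values (the helper name is our own).

```python
def empirical_cdf(ages, n_total, h):
    """F_{c,t}^N(h) of (6): the fraction of all N agents that belong
    to this class and have AoI no larger than h."""
    return sum(1 for a in ages if a <= h) / n_total

# Four class-c agents in a system of N = 4 agents in total.
class_ages = [0.2, 0.5, 1.1, 3.0]
print(empirical_cdf(class_ages, 4, 1.0))  # -> 0.5 (two of four agents at or below h = 1)
```

Note the normalization by the total number of agents N, not by the class size [TeX:] $$N_c;$$ summed over all classes, the CDFs therefore approach one as h grows.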

Remark 1: One critical issue of this formulation, making it inapplicable for the fluid limits, is that when the system size N grows large, the resultant occupancy measure of the agents' AoI is not tight (the definition of tightness of a measure is given in (27)); in fact, its support approaches infinity, and it is therefore not convergent to any probability measure. This shall be made evident in the proof of the main theorem. To address this, a time-rescaled CTMC formulation is introduced as follows.
B. Transformation to Rescaled CTMC
Definition 2: A time-rescaled CTMC model for AoI is a tuple [TeX:] $$\hat{\mathcal{H}}=\left(\hat{\boldsymbol{H}}, \hat{\mathcal{T}}, \hat{\boldsymbol{d}}_0\right),$$ which is defined identically to Definition 1, except that [TeX:] $$\hat{r}(\boldsymbol{H})=N r(\boldsymbol{H}), \forall \boldsymbol{H} .$$ In essence, [TeX:] $$\hat{\mathcal{H}}$$ can be viewed as an accelerated version of [TeX:] $$\mathcal{H},$$ in the sense that time intervals between scheduling events are scaled down by the number of agents. By a stochastic coupling argument, the following lemma clearly holds, because every event simply happens N times sooner.

Lemma 1: The occupancy measure of [TeX:] $$\hat{\mathcal{H}}$$ satisfies

(7)
[TeX:] $$\hat{F}_{c, t}^N(h)=F_{c, t}^N(N h),$$

and consequently, if [TeX:] $$\hat{X}_{c, t}^N(h)$$ converges weakly to a probability measure [TeX:] $$\hat{x}_{c, t}(h)$$ with [TeX:] $$N \rightarrow \infty$$ (later formalized), the mean AoI of [TeX:] $$\hat{\mathcal{H}} \text { (i.e., } \mathbb{E}[\hat{h}] \text {) and } \mathcal{H} \text { (i.e., } \mathbb{E}[h] \text {) }$$ satisfy

(8)
[TeX:] $$\mathbb{E}[\hat{h}]=\sum_{c=1}^C \int_0^{\infty} h \hat{x}_{c, t}(h) \mathrm{d} h=\frac{\mathbb{E}[h]}{N}.$$

Fortunately, under this definition, we can prove that in many practical cases the occupancy measure [TeX:] $$\hat{X}_{c, t}^N(h)$$ is tight in the fluid limit regime and hence converges weakly to a continuous probability measure, which facilitates our subsequent analysis.

On the flip side, this time rescaling means that we cannot analyze the AoI exactly. As we shall see in the next section, by adopting the fluid limit approximation, the approximation error is [TeX:] $$\mathcal{O}(1 / N).$$ Combined with (8), it is clear that we can only solve for the linear scaling factor in N, i.e., asymptotic optimality; this is sufficient in many cases because AoI indeed scales linearly with N, and as N grows large, the relative error becomes infinitesimal.

C. Connection with DTMC Formulation

In most previous works on AoI scheduling, e.g., [6], [8], [9], [18], the AoI evolution is modeled by a DTMC. In this paper, we adopt a CTMC, but it is desirable to draw the connection between the two. In fact, the DTMC can be viewed as the embedded DTMC of the CTMC. More specifically, denote the infinitesimal generator matrix [TeX:] $$Q \text { of } \mathcal{H}$$ as

(9)
[TeX:] $$q_{\boldsymbol{H}, \boldsymbol{H}^{\prime}}=\sum\left\{r(\boldsymbol{H}) \mid \tau \in \mathcal{T}, \phi(\boldsymbol{H}) \text { is true, } \boldsymbol{H}^{\prime}=\boldsymbol{H}+\boldsymbol{v}(\boldsymbol{H})\right\},$$

and set the transition rate out of each state to one, i.e., the mean scheduling interval is 1, that is

(10)
[TeX:] $$r_{-\boldsymbol{H}}=\sum_{\boldsymbol{H}^{\prime} \neq \boldsymbol{H}} q_{\boldsymbol{H}, \boldsymbol{H}^{\prime}}=1.$$

Leveraging the rescaling technique, we obtain [TeX:] $$\hat{\mathcal{H}}.$$ The total number of update attempts (i.e., scheduling events) with the rescaled CTMC is hence [TeX:] $$\mathcal{N}(t N),$$ where [TeX:] $$\mathcal{N}$$ is a Poisson process with mean rate [TeX:] $$r_{-\boldsymbol{H}},$$ whereas the number is [TeX:] $$\lfloor t N\rfloor$$ for the DTMC. We obtain

(11)
[TeX:] $$\lim _{N \rightarrow \infty} \frac{\mathcal{N}(t N)}{\lfloor t N\rfloor}=\lim _{N \rightarrow \infty} \frac{\mathcal{N}(t N) / t N}{\lfloor t N\rfloor / t N}=1,$$

which is attributed to the law of large numbers. In other words, this equation shows that by adjusting the rate functions of the CTMC, as N grows large, the same number of scheduling events have happened in both the CTMC and the DTMC by any time t, making them statistically equivalent. Using the uniformization technique [19] also leads to this conclusion.
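The limit in (11) is easy to check numerically (our own sketch): sample a Poisson count with mean tN and compare it with ⌊tN⌋ as N grows. The sampler below builds the count from unit-rate exponential inter-event times.

```python
import random

def poisson_count(mean, rng):
    """Sample a Poisson(mean) count by accumulating unit-rate
    exponential inter-event times until they exceed `mean`."""
    t, k = 0.0, 0
    while True:
        t += rng.expovariate(1.0)
        if t > mean:
            return k
        k += 1

rng = random.Random(1)
t = 1.0
for n in (100, 10_000, 1_000_000):
    ratio = poisson_count(t * n, rng) / int(t * n)
    print(n, round(ratio, 3))  # the ratio approaches 1 as N grows
```

The relative fluctuation of the ratio shrinks like [TeX:] $$1 / \sqrt{t N},$$ which is exactly the law-of-large-numbers effect invoked above.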

IV. DETERMINISTIC FLUID LIMIT OF AOI AND STATIONARY REGIME

In this section, we develop our main result on the fluid limit of AoI based on the time-rescaled CTMC model. We consider a specific but widely-adopted type of scheduling policies for AoI, which is threshold-type policies. In particular, we assume

1) At every scheduling event, one agent is randomly chosen, with equal probability, from the agents with AoI greater than their (possibly different) thresholds. In the case that no agent has AoI above its threshold, the system stays idle. (Generalization to scheduling a finite number of agents is straightforward; however, an infinite number of scheduled agents [9], [20] requires a different formulation.)

2) Agents in each class have the same threshold [TeX:] $$H_c.$$

Note that the threshold-type policy is adopted by many papers [9], [21]–[23]. It is both intuitive, in the sense that AoI higher than a threshold should be prioritized, and proven optimal in many scenarios. However, the exact thresholds usually cannot be solved for explicitly. In fact, we will show that even in scenarios wherein threshold-type policies are not considered optimal, and index-based policies [6]–[8] have superior performance, an optimized threshold-type policy can achieve asymptotically optimal AoI with proven, closed-form performance expressions.

A. Fluid Limit

When N gets large, the evolution of the time-rescaled CTMC becomes close to a deterministic fluid limit, characterized by differential equations.

Theorem 1: Assume that the initial occupancy measure [TeX:] $$\hat{X}_{c, 0}^N$$ converges weakly to a deterministic distribution [TeX:] $$\hat{\boldsymbol{x}}_0$$ which admits a density [TeX:] $$\left\{\hat{f}_{c, 0} \mid c=1, \cdots, C\right\} .$$ Then, as N approaches infinity, [TeX:] $$\hat{X}_{c, t}^N$$ of each class converges in distribution to a deterministic fluid limit process [TeX:] $$\left\{\hat{x}_{c, t} \mid t \in \mathbb{R}_{\geq 0}\right\},$$ which admits a density for all t, denoted by [TeX:] $$\hat{f}_c(t, h)$$ with CDF [TeX:] $$\hat{F}_c(t, h),$$ and is the unique solution of the following PDE:

(12)
[TeX:] $$\begin{aligned} & \frac{\partial \hat{f}_c(t, h)}{\partial t}= \\ & \begin{cases}-\frac{\partial \hat{f}_c(t, h)}{\partial h}-\frac{p_{\mathrm{s}, c} \hat{f}_c(t, h)}{1-\sum_{j=1}^C \hat{F}_j\left(t, \hat{H}_j\right)}, & \text { if } h>\hat{H}_c; \\ -\frac{\partial \hat{f}_c(t, h)}{\partial h}, & \text { if } 0 \leq h \leq \hat{H}_c,\end{cases} \end{aligned}$$

(13)
[TeX:] $$\forall h \geq 0, \forall c, \hat{f}_c(0, h)=\hat{f}_{c, 0},$$

(14)
[TeX:] $$\forall t \geq 0, \forall c, \hat{f}_c(t, 0)=\frac{\eta_c-\hat{F}_c\left(t, \hat{H}_c\right)}{1-\sum_{j=1}^C \hat{F}_j\left(t, \hat{H}_j\right)} p_{\mathrm{s}, c},$$

where [TeX:] $$\eta_c=N_c / N.$$
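To make the fluid limit concrete, consider the single-class case (C = 1, [TeX:] $$\eta_1=1$$) in equilibrium: setting the time derivative in (12) to zero gives a density that is constant below the threshold and decays exponentially above it, while the boundary condition (14) yields [TeX:] $$\hat{f}(t, 0)=p_{\mathrm{s}}.$$ The sketch below (our own, under the assumption [TeX:] $$p_{\mathrm{s}} H\lt 1$$) constructs this stationary density and checks that it integrates to one.

```python
import math

def stationary_density(p_s, H):
    """Stationary solution of (12)-(14) for a single class:
    f(h) = p_s                          for 0 <= h <= H,
    f(h) = p_s * exp(-alpha * (h - H))  for h > H,
    with alpha = p_s / (1 - p_s * H); requires p_s * H < 1."""
    assert p_s * H < 1.0, "threshold too large for a valid density"
    alpha = p_s / (1.0 - p_s * H)
    return lambda h: p_s if h <= H else p_s * math.exp(-alpha * (h - H))

p_s, H = 0.8, 0.5
f = stationary_density(p_s, H)
# Riemann sum over [0, 40): total mass p_s*H + p_s/alpha = 0.4 + 0.6 = 1.
mass = sum(1e-3 * f(i * 1e-3) for i in range(40_000))
print(round(mass, 2))  # ~1.0
```

Here [TeX:] $$\alpha$$ follows because, in equilibrium, [TeX:] $$\hat{F}(\hat{H})=p_{\mathrm{s}} \hat{H}$$ (the density is flat below the threshold), so the decay rate in (12) becomes [TeX:] $$p_{\mathrm{s}} /\left(1-p_{\mathrm{s}} \hat{H}\right).$$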

Proof: The proof is based on previous works on fluid limits and mean-field approximations [24]–[27]. First, define the characteristic function, i.e., [TeX:] $$\varphi_{\hat{H}, c}^N(\omega, t): \mathbb{R} \times \mathbb{R}_{\geq 0} \rightarrow \mathbb{C},$$ of the occupancy measure as

(15)
[TeX:] $$\varphi_{\hat{H}, c}^N(\omega, t) \triangleq \int_0^{\infty} e^{j \omega h} \mathrm{~d} \hat{F}_{c, t}^N(h)=\int_0^{\infty} e^{j \omega h} \hat{X}_{c, t}^N(h) \mathrm{d} h.$$

For ease of notation, we use [TeX:] $$\varphi_c^N(t) \text { for } \varphi_{\hat{H}, c}^N(\omega, t).$$ Note that by definition and (5)–(7), [TeX:] $$\varphi_c^N(t)=\frac{1}{N} \sum_{i=1}^{N_c} e^{j \omega h_{c, i}(t)},$$ which is the sample average. Consider the generator of [TeX:] $$\varphi_c^N(t)$$:

(16)
[TeX:] $$\mathcal{G}\left(\varphi_c^N(t)\right) \triangleq \lim _{\mathrm{d} t \rightarrow 0} \frac{\mathbb{E}\left[\varphi_c^N(t+\mathrm{d} t)-\varphi_c^N(t) \mid \mathcal{F}_t\right]}{\mathrm{d} t},$$

where [TeX:] $$\mathcal{F}_t$$ is the natural filtration of [TeX:] $$\hat{X}_{c, t}^N(h) .$$

Lemma 2: Define

(17)
[TeX:] $$M_{t, c}^N \triangleq \varphi_c^N(t)-\varphi_c^N(0)-\int_0^t \mathcal{G}\left(\varphi_c^N(s)\right) \mathrm{d} s,$$

then [TeX:] $$M_{t, c}^N$$ is a zero-mean [TeX:] $$\mathcal{F}_t$$-martingale.

Proof: For any t > s,

(18)
[TeX:] $$\begin{aligned} & \mathbb{E}\left[M_{t, c}^N \mid \mathcal{F}_s\right] \\ = & M_{s, c}^N+\mathbb{E}\left[\varphi_c^N(t)-\varphi_c^N(s)-\int_s^t \mathcal{G}\left(\varphi_c^N(u)\right) \mathrm{d} u \mid \mathcal{F}_s\right] \\ = & M_{s, c}^N+\mathbb{E}\left[\varphi_c^N(t)-\varphi_c^N(s) \mid \mathcal{F}_s\right]-\left(\mathbb{E}\left[\varphi_c^N(t) \mid \mathcal{F}_s\right]-\varphi_c^N(s)\right) \\ = & M_{s, c}^N . \end{aligned}$$

The zero-mean property follows directly from the definition.

Then we calculate [TeX:] $$\mathcal{G}\left(\varphi_c^N(t)\right)$$ specifically as:

(19)
[TeX:] $$\mathcal{G}\left(\varphi_c^N(t)\right)=G_{c, 1}+G_{c, 2},$$

(20)
[TeX:] $$\begin{aligned} G_{c, 1}= & \lim _{\mathrm{d} t \rightarrow 0} \frac{1}{\mathrm{~d} t}\left[\int_0^{\infty} e^{j \omega h} \frac{1}{N} \sum_{i=1}^{N_c} \delta_{h_{c, i}(t)+\mathrm{d} t} \mathrm{~d} h\right. \\ & \left.-\int_0^{\infty} e^{j \omega h} \frac{1}{N} \sum_{i=1}^{N_c} \delta_{h_{c, i}(t)} \mathrm{d} h\right], \\ = & \lim _{\mathrm{d} t \rightarrow 0} \frac{1}{\mathrm{~d} t}\left[\frac{1}{N} \sum_{i=1}^{N_c} e^{j \omega\left(h_{c, i}(t)+\mathrm{d} t\right)}-\frac{1}{N} \sum_{i=1}^{N_c} e^{j \omega\left(h_{c, i}(t)\right)}\right], \\ = & j \omega \frac{1}{N} \sum_{i=1}^{N_c} e^{j \omega h_{c, i}(t)}, \\ G_{c, 2}= & \lim _{\mathrm{d} t \rightarrow 0} \frac{p_{\mathrm{s}, c}}{\beta_t N \mathrm{~d} t} \mathbb{E}[\Lambda(N, \mathrm{~d} t)] \\ & \cdot \sum_{\substack{j=1, h_{c, j}(t)>H_c}}^{N_c}\left[\int _ { 0 } ^ { \infty } e ^ { j \omega h } \frac { 1 } { N } \left(\sum_{i=1}^{N_c} \delta_{h_{c, i}(t)}+\delta_0\right.\right. \\ & \left.\left.-\delta_{h_{c, j}(t)}\right) \mathrm{d} h-\int_0^{\infty} e^{j \omega h} \frac{1}{N} \sum_{i=1}^{N_c} \delta_{h_{c, i}(t)} \mathrm{d} h\right], \\ = & \frac{p_{\mathrm{s}, c}}{\beta_t} \int_0^{\infty} e^{j \omega h} \sum_{\substack{j=1, h_{c, j}(t)>H_c}}^{N_c} \frac{1}{N}\left(\delta_0-\delta_{h_{c, j}(t)}\right) \mathrm{d} h, \end{aligned}$$

(21)
[TeX:] $$\beta_t=\int_0^{\infty} \sum_{\left\{c, i \mid h_{c, i}(t)>H_c\right\}} \delta_{h_{c, i}(t)} \mathrm{d} h=\sum_{c=1}^C \frac{N_{h_t>H_c}}{N},$$

wherein [TeX:] $$\Lambda(N, \mathrm{~d} t)$$ denotes a Poisson distributed random variable with rate N over a time of dt, and [TeX:] $$N_{h_t>H_c}$$ denotes the number of agents in class c that have AoI larger than the threshold. The last equality in (20) holds because when [TeX:] $$\mathrm{d} t \rightarrow 0,$$ the Poisson distributed [TeX:] $$\Lambda(N, \mathrm{~d} t)$$ is closely approximated by a Bernoulli random variable with parameter [TeX:] $$N \mathrm{~d} t.$$ The calculation above derives the expected infinitesimal change in [TeX:] $$\varphi_c^N(t).$$ Specifically, with probability [TeX:] $$p_{\mathrm{s}, c} /\left(\beta_t N\right),$$ representing that an agent is chosen uniformly at random from all eligible agents with AoI larger than their thresholds, a given agent is scheduled and updates successfully with its AoI returning to zero; such scheduling events happen with a mean rate of N in the time-rescaled CTMC.

Now let us consider the quadratic variation of [TeX:] $$M_{t, c}^N.$$ The generator of [TeX:] $$\left\|\varphi_c^N(t)\right\|^2$$ is

(22)
[TeX:] $$\mathcal{G}\left(\left\|\varphi_c^N(t)\right\|^2\right)=G_{c, 1}^{\prime}+G_{c, 2}^{\prime},$$

(23)
[TeX:] $$\begin{aligned} G_{c, 1}^{\prime}= & \lim _{\mathrm{d} t \rightarrow 0} \frac{1}{\mathrm{~d} t}\left[\left(\int_0^{\infty} e^{j \omega h} \frac{1}{N} \sum_{i=1}^{N_c} \delta_{h_{c, i}(t)+\mathrm{d} t} \mathrm{~d} h\right)^2\right. \\ & \left.-\left(\int_0^{\infty} e^{j \omega h} \frac{1}{N} \sum_{i=1}^{N_c} \delta_{h_{c, i}(t)} \mathrm{d} h\right)^2\right], \\ = & 2 j \omega\left(\frac{1}{N} \sum_{i=1}^{N_c} e^{j \omega h_{c, i}(t)}\right)^2 \\ G_{c, 2}^{\prime}= & \lim _{\mathrm{d} t \rightarrow 0} \frac{p_{\mathrm{s}, c}}{\beta_t N \mathrm{~d} t} \mathbb{E}[\Lambda(N, \mathrm{~d} t)] \\ & \cdot \sum_{\substack{j=1, h_{c, j}(t)>H_c}}^{N_c}\left[\left(\int _ { 0 } ^ { \infty } e ^ { j \omega h } \frac { 1 } { N } \left(\sum_{i=1}^{N_c} \delta_{h_{c, i}(t)}+\delta_0\right.\right.\right. \\ & \left.\left.\left.-\delta_{h_{c, j}(t)}\right) \mathrm{d} h\right)^2-\left(\int_0^{\infty} e^{j \omega h} \frac{1}{N} \sum_{i=1}^{N_c} \delta_{h_{c, i}(t)} \mathrm{d} h\right)^2\right], \\ = & \frac{2 p_{\mathrm{s}, c}}{\beta_t} \frac{1}{N} \sum_{\substack{j=1, h_{c, j}(t)>H_c}}^{N_c}\left(1-e^{j \omega h_{c, j}(t)}\right) \frac{1}{N} \sum_{i=1}^{N_c} e^{j \omega h_{c, i}(t)} \\ & +\frac{p_{\mathrm{s}, c}}{\beta_t} \frac{1}{N^2} \sum_{\substack{j=1, h_{c, j}(t)>H_c}}^{N_c}\left(1-e^{j \omega h_{c, j}(t)}\right)^2 . \end{aligned}$$

Note that the second term in (23) is bounded as

(24)
[TeX:] $$\frac{p_{\mathrm{s}, c}}{\beta_t} \frac{1}{N^2} \sum_{\substack{j=1, h_{c, j}(t)>H_c}}^{N_c}\left(1-e^{j \omega h_{c, j}(t)}\right)^2 \leq \frac{4 p_{\mathrm{s}, c} N_{h_t>H_c}}{\beta_t N^2} \leq \frac{4 p_{\mathrm{s}, c}}{N},$$

where the last inequality follows from [TeX:] $$\beta_t \geq N_{h_t>H_c} / N, \forall c .$$ This term vanishes as N grows large. Next, define

(25)
[TeX:] $$\bar{G}_{c, 2}^{\prime}=\frac{2 p_{\mathrm{s}, c}}{\beta_t} \frac{1}{N} \sum_{\substack{j=1, h_{c, j}(t)>H_c}}^{N_c}\left(1-e^{j \omega h_{c, j}(t)}\right) \frac{1}{N} \sum_{i=1}^{N_c} e^{j \omega h_{c, i}(t)}.$$

It follows that

(26)
[TeX:] $$M_{t, c}^{\prime N} \triangleq\left\|\varphi_c^N(t)\right\|^2-\left\|\varphi_c^N(0)\right\|^2-\int_0^t \mathcal{G}\left(\left\|\varphi_c^N(s)\right\|^2\right) \mathrm{d} s$$

is also an [TeX:] $$\mathcal{F}_t$$-martingale. The following lemma establishes the limiting behavior of the occupancy measure [TeX:] $$X_{c, t}^N(h).$$

Lemma 3: When N goes to infinity, any sequence [TeX:] $$X_{c, t}^N(h)$$ has a convergent subsequence, whose limit is denoted by [TeX:] $$\hat{f}_{c, t}(h), \text { and } \hat{f}_{c, t}(h)$$ has a continuous sample path.

Proof: Let us first prove the tightness of [TeX:] $$X_{c, t}^N(h).$$ From [27, Lemma 4.6], it is sufficient to show the tightness of the AoI measure of agent-1, i.e., [TeX:] $$\hat{h}_{c, 1}(t)=h_{c, 1}(t) / N.$$ Based on [28, Corollary 4.3], the following must hold: For every [TeX:] $$\epsilon, T\gt 0,$$ there exists H > 0 such that

(27)
[TeX:] $$\inf _N \operatorname{Pr}\left(\hat{h}_{c, 1}(t) \leq H, \forall t \leq T\right)\gt1-\epsilon .$$

In the threshold-based algorithm, agent-1 is scheduled and its update is successful with a rate no less than [TeX:] $$p_{\mathrm{s}, c} / N$$ when its AoI is larger than the threshold, i.e., [TeX:] $$\hat{h}_{c, 1}(t)\gt H_c .$$ Denote by [TeX:] $$T_0(t)$$ the last time agent-1 is scheduled with a successful transmission. Then [TeX:] $$\hat{h}_{c, 1}(t)=t-T_0(t)$$ is stochastically dominated by a random variable [TeX:] $$\hat{h}^{\prime}=H_c+\hat{z}$$ where [TeX:] $$\hat{z} \sim \exp \left(p_{\mathrm{s}, c}\right)$$ (in the rescaled time domain), because the scheduling events happen with a rate of N. Therefore,

(28)
[TeX:] $$\operatorname{Pr}\left(\hat{h}_{c, 1}(t) \leq H\right) \geq \operatorname{Pr}\left(\hat{h}^{\prime} \leq H\right)=1-e^{-p_{\mathrm{s}, c}\left(H-H_c\right)}.$$

Letting the right-hand side be larger than [TeX:] $$1-\epsilon$$ gives us the conclusion that [TeX:] $$X_{c, t}^N(h)$$ is tight. Then, based on Prokhorov’s theorem, [TeX:] $$X_{c, t}^N(h)$$ is relatively compact, which means that any sequence [TeX:] $$X_{c, t}^N(h)$$ has a convergent subsequence. Let us denote such a subsequence by [TeX:] $$X_{c, t}^{N_k}(h) .$$ Consider its characteristic function,

(29)
[TeX:] $$\varphi_c(t) \triangleq \int_0^{\infty} e^{j \omega h} X_{c, t}^{N_k}(h) \mathrm{d} h .$$

When the time changes from t to t + dt, the change in characteristic function is

(30)
[TeX:] $$\begin{aligned} \left|\varphi_c(t+\mathrm{d} t)-\varphi_c(t)\right| & =\left|\int_0^{\infty} e^{j \omega h}\left(X_{c, t+\mathrm{d} t}^{N_k}(h)-X_{c, t}^{N_k}(h)\right) \mathrm{d} h\right| \\ & \leq \int_0^{\infty}\left|X_{c, t+\mathrm{d} t}^{N_k}(h)-X_{c, t}^{N_k}(h)\right| \mathrm{d} h . \end{aligned}$$

Choose an arbitrary [TeX:] $$\varepsilon \gt 0.$$ Based on (27), choose H so large that [TeX:] $$\operatorname{Pr}\left(\hat{h}_{c, 1}(t)\gt H\right)\lt\frac{\varepsilon}{4},$$ and [TeX:] $$N_k\gt \frac{4 H}{\varepsilon},$$ then

(31)
[TeX:] $$\begin{aligned} \left|\varphi_c(t+\mathrm{d} t)-\varphi_c(t)\right| & \leq \int_0^H\left|X_{c, t+\mathrm{d} t}^{N_k}(h)-X_{c, t}^{N_k}(h)\right| \mathrm{d} h+\frac{\varepsilon}{2} \\ & \leq \frac{2 H}{N_k}+\frac{\varepsilon}{2}=\varepsilon. \end{aligned}$$

The last inequality is based on the fact that the jump of [TeX:] $$X_{c, t}^{N_k}(h)$$ in a small time interval is at most [TeX:] $$\frac{\delta_x}{N_k}.$$ This concludes the proof.
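To make the choice of H in (27)–(28) concrete: given [TeX:] $$\epsilon,$$ setting [TeX:] $$H=H_c+\ln (1 / \epsilon) / p_{\mathrm{s}, c}$$ makes the right-hand side of (28) exactly [TeX:] $$1-\epsilon.$$ A tiny numerical sketch, with purely illustrative parameter values:

```python
import math

# Pick H so that the dominating bound (28) yields Pr(AoI <= H) >= 1 - eps.
def tightness_threshold(p_sc, H_c, eps):
    return H_c + math.log(1.0 / eps) / p_sc

p_sc, H_c, eps = 0.8, 0.5, 1e-3     # illustrative parameters
H = tightness_threshold(p_sc, H_c, eps)
bound = 1.0 - math.exp(-p_sc * (H - H_c))   # right-hand side of (28)
```

Since the bound holds uniformly in N, a single such H certifies the tightness condition (27).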

Now we know that any sequence [TeX:] $$X_{c, t}^N(h)$$ has a convergent subsequence with continuous sample path. Let us see if such a limit, i.e., [TeX:] $$\hat{f}_{c, t}(h),$$ is uniquely determined. It follows from (26) that when N is large,

(32)
[TeX:] $$\left\|\varphi_c^{\infty}(t)\right\|^2=\left\|\varphi_c^{\infty}(0)\right\|^2+\int_0^t\left(G_{c, 1}^{\prime}+\bar{G}_{c, 2}^{\prime}\right) \mathrm{d} s+M_{t, c}^{\prime \infty} .$$

Applying Itô’s formula to [TeX:] $$f\left(\varphi_c^{\infty}(t)\right)$$ wherein [TeX:] $$f(x)=x^2$$ is twice continuously differentiable and [TeX:] $$\varphi_c^{\infty}(t)$$ is a continuous semimartingale, we obtain

(33)
[TeX:] $$\begin{aligned} & f\left(\varphi_c^{\infty}(t)\right)=\left\|\varphi_c^{\infty}(t)\right\|^2 \\ = & f\left(\varphi_c^{\infty}(0)\right)+\int_0^t f^{\prime}\left(\varphi_c^{\infty}(s)\right) \mathrm{d} \varphi_c^{\infty}(s) \\ & +\frac{1}{2} \int_0^t f^{\prime \prime}\left(\varphi_c^{\infty}(s)\right) \mathrm{d}\left\langle M_{s, c}^{\infty}\right\rangle \\ = & \left\|\varphi_c^{\infty}(0)\right\|^2+\int_0^t \varphi_c^{\infty}(s) \mathrm{d} \varphi_c^{\infty}(s)+\left\langle M_{s, c}^{\infty}\right\rangle \\ = & \left\|\varphi_c^{\infty}(0)\right\|^2+2 \int_0^t \varphi_c^{\infty}(s) \mathcal{G}\left(\varphi_c^{\infty}(s)\right) \mathrm{d} s+\left\langle M_{s, c}^{\infty}\right\rangle \\ & +2 \int_0^t \varphi_c^{\infty}(s) \mathrm{d} M_{s, c}^{\infty} \\ = & \left\|\varphi_c^{\infty}(0)\right\|^2+2 \int_0^t \varphi_c^{\infty}(s)\left(G_{c, 1}+G_{c, 2}\right) \mathrm{d} s+\left\langle M_{s, c}^{\infty}\right\rangle \\ & +2 \int_0^t \varphi_c^{\infty}(s) \mathrm{d} M_{s, c}^{\infty} \\ = & \left\|\varphi_c^{\infty}(0)\right\|^2+\int_0^t\left(G_{c, 1}^{\prime}+\bar{G}_{c, 2}^{\prime}\right) \mathrm{d} s+\left\langle M_{s, c}^{\infty}\right\rangle \\ & +2 \int_0^t \varphi_c^{\infty}(s) \mathrm{d} M_{s, c}^{\infty}, \end{aligned}$$

wherein [TeX:] $$\langle X(t)\rangle \triangleq \lim _{\|P\| \rightarrow 0} \sum_{k=1}^n\left(X_{t_k}-X_{t_{k-1}}\right)^2$$ denotes the quadratic variation of X(t) and [TeX:] $$\|P\| \triangleq \sup _k\left|t_k-t_{k-1}\right| .$$ Comparing (32) and (33), and by the uniqueness of the Doob–Meyer decomposition, we know that

(34)
[TeX:] $$\left\langle M_{s, c}^{\infty}\right\rangle=0, \forall t.$$

Based on the definition,

(35)
[TeX:] $$\begin{aligned} 0=\mathbb{E}\left[\left\langle M_{s, c}^{\infty}\right\rangle\right] & =\lim _{\|P\| \rightarrow 0} \sum_{k=1}^n \mathbb{E}\left\|M_{t_k, c}^{\infty}-M_{t_{k-1}, c}^{\infty}\right\|^2 \\ & =\lim _{\|P\| \rightarrow 0} \sum_{k=1}^n\left(\mathbb{E}\left\|M_{t_k, c}^{\infty}\right\|^2-\mathbb{E}\left\|M_{t_{k-1}, c}^{\infty}\right\|^2\right) \\ & =\mathbb{E}\left\|M_{t, c}^{\infty}\right\|^2 . \end{aligned}$$

Therefore [TeX:] $$M_{s, c}^{\infty}$$ is a zero-mean martingale with zero variance, hence it is zero almost surely. Thus we obtain

(36)
[TeX:] $$\varphi_c^{\infty}(t)=\varphi_c^{\infty}(0)+\int_0^t\left(G_{c, 1}+G_{c, 2}\right) \mathrm{d} s.$$

Taking the inverse characteristic-function transform and differentiating with respect to t on both sides, we obtain exactly the PDE in (12). Based on the Cauchy-Lipschitz theorem, (12) has a unique solution. Because any solution of (36) must also satisfy (12), [TeX:] $$\hat{f}_{c, t}(h)$$ is uniquely determined. Based on the following corollary of Prokhorov’s theorem, we have concluded the proof.

Lemma 4: If [TeX:] $$\mu_n$$ is a tight sequence of probability measures such that every weakly convergent subsequence [TeX:] $$\mu_{n_k}$$ has the same limit μ, then the sequence [TeX:] $$\mu_n$$ converges weakly to μ.

The intuition behind (12) is clear. As time passes, the AoI grows, which is captured by the first term on the right-hand side of the equation. When the AoI is below the threshold of its class, the agents are never scheduled, which yields the bottom case; otherwise, the agents with AoI above the threshold are scheduled and flow back to zero AoI at a rate proportional to the ratio between the number of agents with this AoI and the total number of agents with AoI beyond their individual thresholds.
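This transport picture can be checked numerically. Below is a minimal first-order upwind sketch for a single class (C = 1), assuming (12) has the form recoverable from the later equations (61) with [TeX:] $$\hat{e}_c=0$$ and boundary condition (64); all parameter values are illustrative. With η = 1, [TeX:] $$p_{\mathrm{s}}=0.8$$ and threshold [TeX:] $$\hat{H}=0.5,$$ (41) gives β = 0.6 and κ = 0.8, and the density drifts toward the equilibrium of Theorem 2:

```python
import math

# Upwind finite-difference sketch of the assumed single-class fluid PDE
# (see lead-in); eta, p_s, H_thr are illustrative, and beta = 0.6 solves (41).
eta, p_s, H_thr = 1.0, 0.8, 0.5
L, dh = 6.0, 0.01                   # truncated AoI domain [0, L)
n = int(L / dh)
h = [i * dh for i in range(n)]
f = [1.0 if x < 1.0 else 0.0 for x in h]    # arbitrary initial density, mass 1

dt = 0.005                          # CFL: dt/dh = 0.5 < 1
for _ in range(int(15.0 / dt)):
    F_thr = dh * sum(f[i] for i in range(n) if h[i] <= H_thr + 1e-9)
    beta_t = 1.0 - F_thr            # fraction of agents above the threshold
    new = f[:]
    new[0] = p_s * (eta - F_thr) / beta_t           # boundary condition (64)
    for i in range(1, n):
        sched = p_s * f[i] / beta_t if h[i] > H_thr + 1e-9 else 0.0
        new[i] = f[i] + dt * (-(f[i] - f[i - 1]) / dh - sched)
    f = new

# Closed-form equilibrium of Theorem 2 for these parameters.
beta_eq, kappa = 0.6, 0.8
d_eq = [kappa if x <= H_thr + 1e-9
        else kappa * math.exp(-p_s * (x - H_thr) / beta_eq) for x in h]
err = max(abs(a - b) for a, b in zip(f, d_eq))
```

After the transient is flushed out, `err` settles to a small value determined by the first-order discretization, illustrating the global convergence claimed by Theorem 2.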
B. Stationary Regime

In (12), the AoI distribution and its evolution are characterized by a PDE. Let us first search for its solution. Define [TeX:] $$\hat{g}_c(x, y) \triangleq \hat{f}_c(x+y, x),$$ wherein we use the change of variables x = h and y = t − h, so that

(37)
[TeX:] $$\frac{\partial \hat{g}_c(x, y)}{\partial x}=\frac{\partial \hat{f}_c(t, h)}{\partial t}+\frac{\partial \hat{f}_c(t, h)}{\partial h}$$

Therefore the PDE in (12) is rewritten as

(38)
[TeX:] $$\begin{aligned} & \frac{\partial \hat{g}_c(x, y)}{\partial x}= \\ & \begin{cases}-\frac{p_{\mathrm{s}, c} \hat{g}_c(x, y)}{1-\sum_{j=1}^C \hat{G}_j\left(\hat{H}_j, y\right)}, & \text { if } x>\hat{H}_c ; \\ 0, & \text { if } 0 \leq x \leq \hat{H}_c,\end{cases} \end{aligned}$$

(39)
[TeX:] $$\forall y \leq 0, \forall c, \hat{g}_c(-y, y)=\hat{f}_{c, 0}(-y),$$

(40)
[TeX:] $$\forall y\gt 0, \forall c, \hat{g}_c(0, y)=\frac{\eta_c-\hat{G}_c\left(\hat{H}_c, y\right)}{1-\sum_{j=1}^C \hat{G}_j\left(\hat{H}_j, y\right)} p_{\mathrm{s}, c},$$

wherein [TeX:] $$\hat{G}_c(x, y) \triangleq \int_0^x \hat{g}_c(s, y) \mathrm{d} s .$$ The derivative of [TeX:] $$\hat{g}_c(x, y)$$ is hence shown to depend only on x, while y only affects the initial conditions. This helps us derive the solution of (12). The following theorem describes the limiting behavior of the AoI.

Theorem 2: With probability 1, when [TeX:] $$t \rightarrow \infty,$$ the solution of (12) converges to a globally stable equilibrium that is independent of t and [TeX:] $$\hat{f}_{c, 0},$$ which reads

[TeX:] $$\hat{d}_c(h)= \begin{cases}\eta_c \frac{1-\frac{\hat{H}_c p_{\mathrm{s}, c}}{\beta+\hat{H}_c p_{\mathrm{s}, c}}}{\beta} p_{\mathrm{s}, c} e^{-\frac{p_{\mathrm{s}, c}\left(h-\hat{H}_c\right)}{\beta}}, & \text { if } h\gt\hat{H}_c ; \\ \eta_c \frac{1-\frac{\hat{H}_c p_{\mathrm{s}, c}}{\beta+\hat{H}_c p_{\mathrm{s}, c}}}{\beta} p_{\mathrm{s}, c}, & \text { if } 0 \leq h \leq \hat{H}_c,\end{cases}$$

wherein β is the unique positive solution of

(41)
[TeX:] $$\nu(\beta) \triangleq \beta+\sum_{c=1}^C \frac{\eta_c \hat{H}_c p_{\mathrm{s}, c}}{\beta+\hat{H}_c p_{\mathrm{s}, c}}-1=0,$$

with [TeX:] $$\sum_{c=1}^C \frac{\eta_c}{\hat{H}_c p_{\mathrm{s}, c}}\gt1.$$

Proof: First, we will show that the system eventually enters the stage wherein y = t−h > 0 with probability one. Denote [TeX:] $$\hat{H}_{\mathrm{m}}=\max _c\left\{\hat{H}_c\right\}$$ and [TeX:] $$p_{\mathrm{m}}=\min _c\left\{p_{\mathrm{s}, c}\right\}.$$ Then for [TeX:] $$t\gt \hat{H}_{\mathrm{m}},$$ every agent successfully transmits an update with a rate no less than [TeX:] $$p_{\mathrm{m}}.$$ Denote the first time agent-i transmits an update successfully as [TeX:] $$\hat{t}_{i, 0};$$ then [TeX:] $$\forall T\gt 0,$$

(42)
[TeX:] $$\operatorname{Pr}\left(\hat{t}_{i, 0} \leq T-\hat{H}_{\mathrm{m}}\right) \geq 1-e^{-p_{\mathrm{m}}\left(T-\hat{H}_{\mathrm{m}}\right)},$$

which approaches 1 with large T. After [TeX:] $$\hat{t}_{i, 0},$$ the AoI returns to zero, and we have t > h for every [TeX:] $$t\gt \hat{t}_{i, 0}$$ afterwards.

Therefore, with probability one, the solution of (38) converges to one given by the initial condition (40) with y > 0. Solving that gives

(43)
[TeX:] $$\hat{g}_c(x, y)= \begin{cases}\kappa_c e^{-\frac{p_{\mathrm{s}, c}\left(x-\hat{H}_c\right)}{1-\sum_{j=1}^C \hat{G}_j\left(\hat{H}_j, y\right)}}, & \text { if } x\gt \hat{H}_c, \\ \kappa_c, & \text { if } 0 \leq x \leq \hat{H}_c,\end{cases}$$

where [TeX:] $$\kappa_c=\frac{\eta_c-\hat{G}_c\left(\hat{H}_c, y\right)}{1-\sum_{j=1}^C \hat{G}_j\left(\hat{H}_j, y\right)} p_{\mathrm{s}, c}, \forall c .$$ Note that

(44)
[TeX:] $$\hat{G}_c\left(\hat{H}_c, y\right)=\kappa_c \hat{H}_c, \forall c \text {, and } \sum_{c=1}^C \frac{\kappa_c}{p_{\mathrm{s}, c}}=1 \text {. }$$

Denote [TeX:] $$\beta=1-\sum_{j=1}^C \hat{G}_j\left(\hat{H}_j, y\right)\gt 0$$ as the total fraction of agents with AoI larger than thresholds, then

(45)
[TeX:] $$\hat{G}_c\left(\hat{H}_c, y\right)=\frac{\eta_c \hat{H}_c p_{\mathrm{s}, c}}{\beta+\hat{H}_c p_{\mathrm{s}, c}}, \forall c \text {, and (41). }$$

Observing (41), it follows that [TeX:] $$\nu(0)=0, \nu(\infty)=\infty,$$ and

(46)
[TeX:] $$\nu^{\prime}(\beta) \triangleq 1-\sum_{c=1}^C \frac{\eta_c \hat{H}_c p_{\mathrm{s}, c}}{\left(\beta+\hat{H}_c p_{\mathrm{s}, c}\right)^2}, \nu^{\prime \prime}(\beta)\gt 0 .$$

Hence as long as [TeX:] $$\nu^{\prime}(0)=1-\sum_{c=1}^C \frac{\eta_c}{\hat{H}_c p_{\mathrm{s}, c}}$$ is strictly negative, there is a unique positive solution of (41); otherwise, there is no positive solution. This concludes the proof.
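The fixed point β of (41) is easy to compute in practice: since [TeX:] $$\nu(0)=0,$$ ν is convex, and [TeX:] $$\nu^{\prime}(0)\lt 0$$ under the stated condition, the unique positive root can be bracketed and bisected. A sketch with illustrative two-class parameters:

```python
# Bisection for the unique positive root of nu in (41); parameters illustrative.
def nu(beta, eta, H, p):
    return beta + sum(e * Hc * pc / (beta + Hc * pc)
                      for e, Hc, pc in zip(eta, H, p)) - 1.0

def solve_beta(eta, H, p, tol=1e-12):
    lo, hi = 1e-9, 1.0
    while nu(hi, eta, H, p) < 0.0:      # grow the bracket until the sign changes
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if nu(mid, eta, H, p) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Two classes with sum_c eta_c / (H_c p_c) = 3.25 > 1, so a positive root exists.
eta, H, p = [0.5, 0.5], [0.5, 0.5], [0.8, 0.5]
beta = solve_beta(eta, H, p)
```

With the root in hand, the per-class densities of Theorem 2 follow from [TeX:] $$\kappa_c=\eta_c p_{\mathrm{s}, c} /\left(\beta+\hat{H}_c p_{\mathrm{s}, c}\right).$$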

The equilibrium also satisfies the stationarity condition

(47)
[TeX:] $$\frac{\partial \hat{f}_c(t, h)}{\partial t}=0,$$

which gives the exact same solution as in (41). It is not difficult to check the local stability of the solution.

Proposition 1: For every [TeX:] $$\epsilon \gt 0,$$ there exists [TeX:] $$\delta \gt 0,$$ such that, if [TeX:] $$\left\|\hat{f}_{c, 0}(h)-\hat{d}_c(h)\right\|\lt \delta,$$ then for every [TeX:] $$t \geq 0, \| \hat{f}_{c, t}(h)-\hat{d}_c(h) \|\lt\epsilon,$$ i.e., the equilibrium [TeX:] $$\hat{d}_c(h)$$ is asymptotically stable in the local sense.

Proof: Let us check the Jacobian matrix [TeX:] $$\boldsymbol{J}\left(\hat{d}_c(h)\right) \in \mathbb{H} \times \mathbb{H}$$, where [TeX:] $$\mathbb{H}$$ is a Hilbert space with inner product defined as [TeX:] $$\langle f, g\rangle=\int_{\hat{S}} f(x) g(x) \mathrm{d} x.$$ In this case, [TeX:] $$\boldsymbol{J}\left(\hat{d}_c(h)\right)$$ is the derivative of the right-hand side of (12) with respect to [TeX:] $$\hat{f}_c(t, h),$$ evaluated at [TeX:] $$\hat{d}_c(h)$$:

(48)
[TeX:] $$\boldsymbol{J}\left(\hat{d}_c(h)\right)= \begin{cases}-\frac{p_{\mathrm{s}, c}}{1-\sum_{j=1}^C \hat{F}_j\left(t, \hat{H}_j\right)}, & \text { if } 0 \leq \hat{d}_c(h)<\kappa_c; \\ 0, & \text { if } \hat{d}_c(h) \geq \kappa_c,\end{cases}$$

with the only eigenvalue being [TeX:] $$-\frac{p_{\mathrm{s}, c}}{1-\sum_{j=1}^C \hat{F}_j\left(t, \hat{H}_j\right)} \kappa_c,$$ which is strictly negative. Therefore, we can conclude that the equilibrium is locally asymptotically stable based on the Hartman–Grobman theorem [29].

V. AOI IN TIME-INVARIANT CHANNELS

Next, to compare with the time-varying channels scenario, we first apply the fluid limit tool to analyze the AoI in time-invariant channels and solve a well-known but open AoI scheduling problem.

A. System Model

Consider a one-hop wireless network wherein a central node communicates with N distributed agents. The agents share the wireless channel based on a scheduling policy denoted by π. We first consider a time-slotted status update system; later, we analyze it via fluid limits, which live in the continuous time domain. The status packet generation is assumed to be generate-at-will, i.e., a fresh status for agent-n is generated whenever it is scheduled. We are interested in the average AoI. Concretely, the long time-average AoI of the system is defined by

(49)
[TeX:] $$\bar{\Delta}_\pi \triangleq \limsup _{T \rightarrow \infty} \frac{1}{T N} \sum_{t=1}^T \sum_{c=1}^C \sum_{n=1}^{N_c} \mathbb{E}\left[h_{n, \boldsymbol{\pi}}(t)\right],$$

where T is the time horizon length, and [TeX:] $$h_{n, \pi}(t)$$ denotes the AoI of terminal-n at the t-th time slot under policy π. The c-th class of agents are those who transmit with a channel reliability of [TeX:] $$p_{\mathrm{s}, c} \in(0,1], c=1, \cdots, C,$$ which denotes the successful delivery probability. In each time slot, only one agent can transmit, and the AoI of every agent increases by one per time slot.

Such a system model has been studied extensively in the literature [6], [8]–[10], [23]. In [6, Theorem 6], it is shown that there exists a lower bound on the average AoI, i.e.,

(50)
[TeX:] $$\bar{\Delta}_\pi \geq \frac{N}{2 C^2}\left(\sum_{c=1}^C \frac{1}{\sqrt{p_{\mathrm{s}, c}}}\right)^2 \triangleq \bar{\Delta}_{\mathrm{LB}}.^2$$

2 A slight difference with [6] is that we assume, aligned with the system model in this paper, there are a finite number of C classes of agents and, without loss of generality, each class has N/C agents. In contrast, each agent can be different in [6], i.e., C = N.

However, as far as we know, it is still an open problem whether this lower bound is (asymptotically) tight. On the other hand, several Whittle’s index based solutions, as well as Lyapunov-drift based ones which give essentially similar algorithms and performance, have been proposed as heuristics. These solutions exhibit near-optimal AoI in simulations, but only very loose performance bounds can be proved, e.g., in [6, Theorem 17], due to the non-renewal nature of the scheduler. Note that Ref. [9] considers a different scenario wherein the number of scheduled agents (i.e., M) scales with the total number of agents (i.e., N) and goes to infinity with M/N = α fixed. The authors proved asymptotic optimality without specific performance expressions.

B. Fluid Limits Results

Powered by the results in Section IV, we now solve this problem: we will show that the lower bound in (50) is indeed asymptotically tight, i.e.,

(51)
[TeX:] $$\lim _{N \rightarrow \infty} \min _\pi \frac{\bar{\Delta}_\pi}{\bar{\Delta}_{\mathrm{LB}}}=1.$$

In the meantime, we will show that a threshold-based scheduling policy derived from our analysis achieves the lower bound.

Considering the scheduling policy in Algorithm 1, the following theorem holds.

Theorem 3: Algorithm 1 achieves a long time-average AoI of [TeX:] $$\bar{\Delta}_{\text {Alg. } 1} \text {, }$$ and

(52)
[TeX:] $$\bar{\Delta}_{\mathrm{Alg} .1}=\frac{N}{2 C^2}\left(\sum_{c=1}^C \frac{1}{\sqrt{p_{\mathrm{s}, c}}}\right)^2+\mathcal{O}(1).$$

Algorithm 1
Threshold-based scheduler 1
pseudo1.png

Proof: We apply Theorem 1 to this application. Because we are interested in the time-average AoI, it is sufficient to consider the stationary regime. With (41) and noting [TeX:] $$\int_0^{\hat{H}_c} \hat{d}_c(h) \mathrm{d} h=\hat{H}_c \kappa_c\lt \eta_c,$$ the average AoI of class-c is hence

(53)
[TeX:] $$\begin{aligned} \overline{\hat{h}_c} & =\int_0^{\infty} \hat{d}_c(h) h \mathrm{~d} h \\ & =\kappa_c \int_0^{\hat{H}_c} h \mathrm{~d} h+\int_{\hat{H}_c}^{\infty} \kappa_c e^{-\frac{p_{s, c}\left(h-\hat{H}_c\right)}{\beta}} h \mathrm{~d} h \\ & =\frac{1}{2} \kappa_c \hat{H}_c^2-\eta_c \hat{H}_c+\frac{\eta_c^2}{\kappa_c}>\frac{\eta_c^2}{2 \kappa_c}, \end{aligned}$$

where the right-hand side of the inequality can be approached with [TeX:] $$\hat{H}_c^* \rightarrow \frac{\eta_c}{\kappa_c}.$$ In other words, for any [TeX:] $$\varepsilon\gt 0,$$ let [TeX:] $$\hat{H}_c^*=\frac{\eta_c-\varepsilon}{\kappa_c},$$ then [TeX:] $$\overline{\hat{h}_c}=\frac{\eta_c^2+\varepsilon^2}{2 \kappa_c}.$$ Note that with (44), the average AoI across all classes satisfies

(54)
[TeX:] $$\begin{aligned} \bar{h} & \geq \sum_{c=1}^C \frac{\eta_c^2+\varepsilon^2}{2 \kappa_c}=\sum_{c=1}^C \frac{\eta_c^2+\varepsilon^2}{2 \kappa_c} \sum_{c=1}^C \frac{\kappa_c}{p_{\mathrm{s}, c}} \\ & \geq \frac{1}{2}\left(\sum_{c=1}^C \sqrt{\frac{\eta_c^2+\varepsilon^2}{p_{\mathrm{s}, c}}}\right)^2 . \end{aligned}$$

The last inequality is based on the Cauchy-Schwarz Inequality, and the equality holds when [TeX:] $$\frac{\kappa_c^2}{\left(\eta_c^2+\varepsilon^2\right) p_{\mathrm{s}, c}}=C_0, \forall c.$$ Together with [TeX:] $$\hat{H}_c^*=\frac{\eta_c-\varepsilon}{\kappa_c},$$ we solve for the following problem:

(55)
[TeX:] $$\hat{\boldsymbol{H}}^*=\underset{\hat{\boldsymbol{H}}_c}{\arg \min } \overline{\hat{h}},$$

with the solution of

(56)
[TeX:] $$\hat{\boldsymbol{H}}^*=\left(\frac{\eta_c-\varepsilon}{\sqrt{\left(\eta_c^2+\varepsilon^2\right) p_{\mathrm{s}, c}}} \sum_{j=1}^C \sqrt{\frac{\eta_j^2+\varepsilon^2}{p_{\mathrm{s}, j}}}\right)_{c=1, \cdots, C} .$$

Now let us check the legitimacy of the solution against the requirement [TeX:] $$\sum_{c=1}^C \frac{\eta_c}{\hat{H}_c p_{\mathrm{s}, c}}\gt 1$$ in Theorem 2. Indeed,

(57)
[TeX:] $$\sum_{c=1}^C \frac{\eta_c}{\hat{H}_c p_{\mathrm{s}, c}}=\frac{\sum_{c=1}^C \frac{\eta_c}{\eta_c-\varepsilon} \sqrt{\frac{\eta_c^2+\varepsilon^2}{p_{\mathrm{s}, c}}}}{\sum_{j=1}^C \sqrt{\frac{\eta_j^2+\varepsilon^2}{p_{\mathrm{s}, j}}}}>1 .$$

Because [TeX:] $$\varepsilon$$ can be arbitrarily small3 and the fluid limits bring in an approximation error of [TeX:] $$\mathcal{O}\left(\frac{1}{N}\right),$$ the AoI of Algorithm 1 is

3 In the sequel, we omit [TeX:] $$\varepsilon$$ for brevity since it does not affect the result.

(58)
[TeX:] $$\bar{\Delta}_{\mathrm{Alg} .1}^*=\frac{N}{2}\left(\sum_{c=1}^C \frac{\eta_c}{\sqrt{p_{\mathrm{s}, c}}}\right)^2+\mathcal{O}(1).$$

With [TeX:] $$\eta_c=1 / C,$$ we obtain the thresholds in Algorithm 1 and its average AoI, which concludes the proof.
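The closed-form mean in (53) can be cross-checked against direct numerical integration of the stationary density of Theorem 2. A sketch with illustrative single-class parameters (β = 0.6 solves (41) for the values chosen here):

```python
import math

# Stationary density of Theorem 2 for one class; parameters illustrative,
# with beta = 0.6 solving (41) for eta = 1, p_s = 0.8, H_thr = 0.5.
eta, p_s, H_thr, beta = 1.0, 0.8, 0.5, 0.6
kappa = eta * p_s / (beta + H_thr * p_s)        # = 0.8

def d_hat(h):
    if h <= H_thr:
        return kappa
    return kappa * math.exp(-p_s * (h - H_thr) / beta)

# Trapezoidal quadrature of the mean AoI over a long truncated domain.
dh, L = 1e-3, 60.0
n = int(L / dh)
vals = [i * dh * d_hat(i * dh) for i in range(n + 1)]
num_mean = sum(0.5 * dh * (vals[i] + vals[i + 1]) for i in range(n))

# Closed form from (53): kappa*H^2/2 - eta*H + eta^2/kappa.
closed_form = 0.5 * kappa * H_thr ** 2 - eta * H_thr + eta ** 2 / kappa
```

Both routes give the same mean, confirming the algebra behind (53) for these parameters.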

Corollary 1: Algorithm 1 is asymptotically optimal.

Proof: Taking the limit [TeX:] $$N \rightarrow \infty$$ with (52) and (50), the conclusion follows immediately.

Remark 2: We would like to draw a connection between Algorithm 1 and the index policies in [6]. The Whittle’s index policy and the Lyapunov drift-based max-weight policy share the same structure. They both schedule the agent with the maximum value of [TeX:] $$p_{\mathrm{s}, c} h_{c, i}^2(t),$$ with a slight difference in the linear term of [TeX:] $$h_{c, i}(t)$$ which is insignificant when the AoI is large. Essentially, since only the relative value matters, this is equivalent to an index of approximately [TeX:] $$\sqrt{p_{\mathrm{s}, c}} h_{c, i}(t).$$ Furthermore, this is analogous to giving agent-i a clock that is scaled by [TeX:] $$\sqrt{p_{\mathrm{s}, c}} .$$ In comparison, the AoI threshold of agent-i in Algorithm 1 is [TeX:] $$\frac{1}{C \sqrt{p_{\mathbf{s}, c}}} \sum_{j=1}^C \frac{1}{\sqrt{p_{\mathbf{s}, j}}},$$ which in fact scales the AoI by the same coefficient. After all, only agents with AoI larger than the threshold are eligible for being scheduled. Hence, the connection between Algorithm 1 and the index policies is very strong.

Remark 3 (Decentralized implementation): One additional merit of Algorithm 1, compared with index policies, is that it is convenient for decentralized protocols. Index policies need to compare the indices of all agents, which is difficult to implement in a decentralized system setting. Fortunately, in Algorithm 1, the central node only needs to broadcast a set of thresholds which do not change over time, and agents can access the channel based upon the thresholds and a contention-based multiaccess protocol such as standard carrier-sensing multiple-access (CSMA).
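Since Algorithm 1's pseudocode appears only as a figure, here is a hedged slotted-time sketch of the threshold rule as described in the text: each slot, one agent is scheduled uniformly at random among those whose AoI exceeds their class threshold, and the empirical AoI is compared with the lower bound (50). The thresholds are shrunk by a small margin (playing the role of [TeX:] $$\varepsilon$$) to keep the eligible set nonempty; all parameter values are illustrative, not from the paper:

```python
import math, random

random.seed(1)
C, per_class = 2, 25
p_s = [1.0, 0.5]                    # per-class success probabilities (illustrative)
N = C * per_class
s = sum(1.0 / math.sqrt(p) for p in p_s)
# Slot-domain thresholds N * H_c with H_c from Theorem 3, shrunk by 0.9.
thr = [0.9 * N * s / (C * math.sqrt(p)) for p in p_s]

cls = [i // per_class for i in range(N)]
aoi = [0] * N
total, slots = 0, 50000
for _ in range(slots):
    eligible = [i for i in range(N) if aoi[i] > thr[cls[i]]]
    if eligible:
        i = random.choice(eligible)             # uniform among eligible agents
        if random.random() < p_s[cls[i]]:
            aoi[i] = 0                          # generate-at-will: fresh update
    for i in range(N):
        aoi[i] += 1                             # everyone ages by one slot
    total += sum(aoi)

sim_aoi = total / (slots * N)
lb = N / (2 * C ** 2) * s ** 2                  # lower bound (50)
ratio = sim_aoi / lb
```

Even at moderate N, the empirical ratio to the lower bound stays close to one, consistent with the asymptotic tightness claimed by (51) and (52).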

VI. AOI IN TIME-VARYING CHANNELS

In this section, we will analyze the scenario wherein the wireless channels are time-varying, which is more practical.

A. System Model

Consider a one-hop wireless network similar to that in Section V, except for the following differences. The wireless channel is modelled as a Gilbert-Elliott channel, also called the burst-noise channel. At the end of each time slot, the wireless channel is either in a good or a bad state with different transmission success probabilities, i.e., the state of the wireless channel is time-varying. Agents with a good channel state can successfully deliver a status packet with probability [TeX:] $$p_{\mathrm{s}, \mathrm{G}} \in(0,1],$$ whereas transmissions in the bad channel state are always erased, i.e., [TeX:] $$p_{\mathrm{s}, \mathrm{B}}=0 .$$ In more general terms, the channel state transition process can be described by a two-state Markov chain with states Good and Bad. Therefore, the agents can transfer from one class to another over time. The transition probability between different channel states is denoted by [TeX:] $$\rho_{c, c^{\prime}} ;$$ in the continuous-time domain, it represents the transition rate per unit time. Moreover, as N approaches infinity, the transition process between the classes reaches stationarity, i.e.,

(59)
[TeX:] $$\eta_c \sum_{c^{\prime} \neq c} \rho_{c, c^{\prime}}=\sum_{c^{\prime} \neq c} \eta_{c^{\prime}} \rho_{c^{\prime}, c}.$$

In the rescaled time domain, the rescaled transition rate is

(60)
[TeX:] $$\hat{\rho}_{c, c^{\prime}}=N \rho_{c, c^{\prime}}.$$

The transition rate also reflects how rapidly the wireless channel varies. For example, in a high-speed mobile communication scenario, the transition rate is relatively large, while in an almost static communication scenario such as an indoor environment, it can be regarded as close to 0. When [TeX:] $$\hat{\rho}_{c, c^{\prime}}=0, \forall c \neq c^{\prime},$$ and all channel states are known, we recover the scenario considered in Section V. Therefore, the system model in Section V can be regarded as a special case of the system model in this section.
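For the two-state Gilbert-Elliott chain, the balance condition (59) reduces to [TeX:] $$\eta_{\mathrm{G}} \rho_{\mathrm{G}, \mathrm{B}}=\eta_{\mathrm{B}} \rho_{\mathrm{B}, \mathrm{G}},$$ so the stationary class fractions are [TeX:] $$\eta_{\mathrm{G}}=\rho_{\mathrm{B}, \mathrm{G}} /\left(\rho_{\mathrm{G}, \mathrm{B}}+\rho_{\mathrm{B}, \mathrm{G}}\right)$$ and [TeX:] $$\eta_{\mathrm{B}}=\rho_{\mathrm{G}, \mathrm{B}} /\left(\rho_{\mathrm{G}, \mathrm{B}}+\rho_{\mathrm{B}, \mathrm{G}}\right).$$ A quick check with illustrative rates:

```python
# Stationary split of the two-state (Good/Bad) channel chain; rates illustrative.
rho_gb, rho_bg = 0.3, 0.7          # Good->Bad and Bad->Good transition rates
eta_g = rho_bg / (rho_gb + rho_bg) # stationary fraction in the Good state
eta_b = rho_gb / (rho_gb + rho_bg) # stationary fraction in the Bad state

# Balance condition (59): outflow from each class equals inflow into it.
balance_gap = abs(eta_g * rho_gb - eta_b * rho_bg)
```

These fractions are the [TeX:] $$\eta_c$$ that enter the fluid equations below.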

To analyze the effect of knowing only part of the CSI on the overall performance, assume partial CSI is available at the beginning of each time slot. Under the above conditions, the agents fall into two classes: those with a good channel state and those with a bad channel state. In more detail, each class is further divided into two parts: agents with known channel states and agents with unknown channel states.

B. Fluid Limits Results

Taking into account the above time-varying channel conditions, the PDE (12) can be rewritten as:

(61)
[TeX:] $$\begin{aligned} & \frac{\partial \hat{f}_c(t, h)}{\partial t}= \\ & \begin{cases}-\frac{\partial \hat{f}_c(t, h)}{\partial h}+\hat{e}_c, & \text { if } 0 \leq h \leq \hat{H}_c, \\ -\frac{\partial \hat{f}_c(t, h)}{\partial h}+\hat{e}_c-\frac{p_{\mathrm{s}, c} \hat{f}_c(t, h)}{1-\sum_{j=1}^C \hat{F}_j\left(t, \hat{H}_j\right)}, & \text { if } h\gt \hat{H}_c,\end{cases} \end{aligned}$$

(62)
[TeX:] $$\hat{e}_c=\sum_{c^{\prime} \neq c} \hat{\rho}_{c^{\prime}, c} \hat{f}_{c^{\prime}}(t, h)-\sum_{c^{\prime} \neq c} \hat{\rho}_{c, c^{\prime}} \hat{f}_c(t, h),$$

(63)
[TeX:] $$\forall h \geq 0, \forall c, \hat{f}_c(0, h)=\hat{f}_{c, 0},$$

(64)
[TeX:] $$\forall t \geq 0, \forall c, \hat{f}_c(t, 0)=\frac{\eta_c-\hat{F}_c\left(t, \hat{H}_c\right)}{1-\sum_{j=1}^C \hat{F}_j\left(t, \hat{H}_j\right)} p_{\mathrm{s}, c} .$$

The initial conditions are the same as (14). Compared with (12), there is an additional term [TeX:] $$\hat{e}_c$$ which greatly increases the complexity of the PDE. In (62), the first term accounts for the agents transferring into the c-th class from other classes, and the second term accounts for the agents transferring out of the c-th class.

Powered by the results in Section IV-B, as time goes to infinity, the above PDE also admits a stationary regime, i.e., an equilibrium point of the PDE which is locally stable and at which [TeX:] $$\hat{f}_c(t, h)$$ satisfies (47). In this regime, the PDE simplifies to an ODE. In the following, we discuss the detailed equations in the general scenario; extreme scenarios, such as all channel states unknown or all channel states known, follow from the general scenario as simplified special cases.

1) Partially known channel states:

In the general scenario, i.e., partial channel states are known, the agents are divided into three classes: good, bad, and unknown, indexed by [TeX:] $$c=1,2,3.$$ It is worth noting that, in this case, all packets transmitted by agents in the bad channel state are erased. Therefore, for the class of agents with a bad channel state, the optimal threshold should be [TeX:] $$\hat{H}_2=\infty .$$ The ratio of the number of agents with known channel states to the total number of agents is denoted by [TeX:] $$\gamma \in(0,1).$$ Note that agents with unknown channel states are still physically in either a good or a bad channel state; therefore, the actual ODE is as follows:

(65)
[TeX:] $$\begin{aligned} & \left\{\begin{array}{l} \frac{\mathrm{d} \hat{f}_1(h)}{\mathrm{d} h}=\hat{\rho}_{2,1} \hat{f}_2(h)-\hat{\rho}_{1,2} \hat{f}_1(h), \\ \frac{\mathrm{d} \hat{f}_2(h)}{\mathrm{d} h}=\hat{\rho}_{1,2} \hat{f}_1(h)-\hat{\rho}_{2,1} \hat{f}_2(h), \end{array} \quad \text { if } 0 \leq h \leq \hat{H}_1,\right. \\ & \left\{\begin{aligned} \frac{\mathrm{d} \hat{f}_1(h)}{\mathrm{d} h}= & \hat{\rho}_{2,1} \hat{f}_2(h)-\hat{\rho}_{1,2} \hat{f}_1(h)-\gamma \frac{p_{\mathrm{s}, \mathrm{G}} \hat{f}_1(h)}{1-\sum_{j=1}^4 M_j}, \\ \frac{\mathrm{d} \hat{f}_2(h)}{\mathrm{d} h}= & \hat{\rho}_{1,2} \hat{f}_1(h)-\hat{\rho}_{2,1} \hat{f}_2(h), \end{aligned} \quad \text { if } \hat{H}_1\lt h \leq \hat{H}_3,\right. \\ & \left\{\begin{aligned} \frac{\mathrm{d} \hat{f}_1(h)}{\mathrm{d} h}= & \hat{\rho}_{2,1} \hat{f}_2(h)-\hat{\rho}_{1,2} \hat{f}_1(h)-\frac{p_{\mathrm{s}, \mathrm{G}} \hat{f}_1(h)}{1-\sum_{j=1}^4 M_j}, \\ \frac{\mathrm{d} \hat{f}_2(h)}{\mathrm{d} h}= & \hat{\rho}_{1,2} \hat{f}_1(h)-\hat{\rho}_{2,1} \hat{f}_2(h)-\frac{(1-\gamma) p_{\mathrm{s}, \mathrm{B}} \hat{f}_2(h)}{1-\sum_{j=1}^4 M_j}, \end{aligned} \quad \text { if } h>\hat{H}_3,\right. \\ & \left\{\begin{array}{l} \hat{f}_1(0)=\frac{\eta_1-M_1-M_2}{1-\sum_{j=1}^4 M_j} p_{\mathrm{s}, \mathrm{G}}, \\ \hat{f}_2(0)=\frac{\eta_2-M_3-M_4}{1-\sum_{j=1}^4 M_j} p_{\mathrm{s}, \mathrm{B}}, \end{array}\right. \end{aligned}$$

where [TeX:] $$M_1=\gamma \hat{F}_1\left(\hat{H}_1\right), M_2=(1-\gamma) \hat{F}_1\left(\hat{H}_3\right), M_3=(1-\gamma) \hat{F}_2\left(\hat{H}_3\right),$$ and [TeX:] $$M_4=\gamma \hat{F}_2\left(\hat{H}_2\right)=\gamma \eta_2.$$

Solving this ODE, we obtain the solution of (65) at [TeX:] $$0 \leq h \leq \hat{H}_1$$

(66)
[TeX:] $$\left\{\begin{array}{l} \hat{f}_1(h)=\frac{\hat{\rho}_{2,1}\left(\kappa_1+\kappa_2\right)+e^{-h \hat{\rho}^{\prime}}\left(\hat{\rho}_{1,2} \kappa_1-\hat{\rho}_{2,1} \kappa_2\right)}{\hat{\rho}^{\prime}}, \\ \hat{f}_2(h)=\frac{\hat{\rho}_{1,2}\left(\kappa_1+\kappa_2\right)+e^{-h \hat{\rho}^{\prime}}\left(\hat{\rho}_{2,1} \kappa_2-\hat{\rho}_{1,2} \kappa_1\right)}{\hat{\rho}^{\prime}}, \end{array}\right.$$

where [TeX:] $$\hat{\rho}^{\prime}=\left(\hat{\rho}_{1,2}+\hat{\rho}_{2,1}\right),$$ and

(67)
[TeX:] $$\left\{\begin{array}{l} \kappa_1=\frac{\eta_1-M_1-M_2}{1-\sum_{j=1}^4 M_j} p_{\mathrm{s}, \mathrm{G}}, \\ \kappa_2=\frac{\eta_2-M_3-M_4}{1-\sum_{j=1}^4 M_j} p_{\mathrm{s}, \mathrm{B}} . \end{array}\right.$$

Note that with (59) and (60), as [TeX:] $$N \rightarrow \infty,$$ we have [TeX:] $$\hat{\rho}_{c, c^{\prime}} \rightarrow \infty$$ and [TeX:] $$\frac{\hat{\rho}_{2,1}}{\hat{\rho}_{1,2}+\hat{\rho}_{2,1}}=\eta_1, \frac{\hat{\rho}_{1,2}}{\hat{\rho}_{1,2}+\hat{\rho}_{2,1}}=\eta_2 \text {; }$$ then for (66), we have

(68)
[TeX:] $$\left\{\begin{array}{l} \hat{f}_1(h)=\frac{\hat{\rho}_{2,1}\left(\kappa_1+\kappa_2\right)}{\hat{\rho}_{1,2}+\hat{\rho}_{2,1}}=\eta_1\left(\kappa_1+\kappa_2\right), \\ \hat{f}_2(h)=\frac{\hat{\rho}_{1,2}\left(\kappa_1+\kappa_2\right)}{\hat{\rho}_{1,2}+\hat{\rho}_{2,1}}=\eta_2\left(\kappa_1+\kappa_2\right) . \end{array}\right.$$
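The solution (66) can be verified against the first regime of (65) by finite differences. A small sketch with illustrative rates, treating [TeX:] $$\kappa_1, \kappa_2$$ as given constants:

```python
import math

# Check that (66) satisfies f1' = rho21*f2 - rho12*f1 (and f1(0) = kappa1,
# f2(0) = kappa2) via central differences.  All parameters are illustrative.
rho12, rho21 = 0.4, 0.6
kappa1, kappa2 = 0.3, 0.2
rho = rho12 + rho21

def f1(h):
    return (rho21 * (kappa1 + kappa2)
            + math.exp(-h * rho) * (rho12 * kappa1 - rho21 * kappa2)) / rho

def f2(h):
    return (rho12 * (kappa1 + kappa2)
            + math.exp(-h * rho) * (rho21 * kappa2 - rho12 * kappa1)) / rho

eps = 1e-6
res = max(abs((f1(h + eps) - f1(h - eps)) / (2 * eps)
              - (rho21 * f2(h) - rho12 * f1(h)))
          for h in [0.1, 0.5, 1.0, 2.0])
```

The residual `res` is at the level of finite-difference error, confirming (66) on [TeX:] $$[0, \hat{H}_1].$$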

Next, we obtain the solution of (65) at [TeX:] $$\hat{H}_1\lt h \leq \hat{H}_3$$

(69)
[TeX:] $$\left\{\begin{aligned} \hat{f}_1(h)= & \frac{e^{-\frac{\Upsilon_1}{2}\left(h-\hat{H}_1\right)}\left(-\hat{f}_1\left(\hat{H}_1\right)\left(\Upsilon_1-2 \hat{\rho}_{2,1}\right)+2 \hat{f}_2\left(\hat{H}_1\right) \hat{\rho}_{2,1}\right)}{2 Q} \\ & +\frac{e^{-\frac{\Upsilon_2}{2}\left(h-\hat{H}_1\right)}\left(\hat{f}_1\left(\hat{H}_1\right)\left(\Upsilon_2-2 \hat{\rho}_{2,1}\right)-2 \hat{f}_2\left(\hat{H}_1\right) \hat{\rho}_{2,1}\right)}{2 Q}, \\ \hat{f}_2(h)= & \frac{e^{-\frac{\Upsilon_1}{2}\left(h-\hat{H}_1\right)}\left(\hat{f}_2\left(\hat{H}_1\right)\left(\Upsilon_2-2 \hat{\rho}_{2,1}\right)+2 \hat{f}_1\left(\hat{H}_1\right) \hat{\rho}_{1,2}\right)}{2 Q} \\ & +\frac{e^{-\frac{\Upsilon_2}{2}\left(h-\hat{H}_1\right)}\left(-\hat{f}_2\left(\hat{H}_1\right)\left(\Upsilon_1-2 \hat{\rho}_{2,1}\right)-2 \hat{f}_1\left(\hat{H}_1\right) \hat{\rho}_{1,2}\right)}{2 Q}, \end{aligned}\right.$$

where

(70)
[TeX:] $$\begin{aligned} & v=\frac{\gamma p_{\mathrm{s}, \mathrm{G}}}{1-\sum_{j=1}^4 M_j}, \\ & Q=\sqrt{v\left(v+2 \hat{\rho}_{1,2}-2 \hat{\rho}_{2,1}\right)+\left(\hat{\rho}_{1,2}+\hat{\rho}_{2,1}\right)^2}, \\ & \Upsilon_1=v+\hat{\rho}_{1,2}+\hat{\rho}_{2,1}-Q, \\ & \Upsilon_2=v+\hat{\rho}_{1,2}+\hat{\rho}_{2,1}+Q, \\ & \hat{f}_1\left(\hat{H}_1\right)=\eta_1\left(\kappa_1+\kappa_2\right), \\ & \hat{f}_2\left(\hat{H}_1\right)=\eta_2\left(\kappa_1+\kappa_2\right) . \end{aligned}$$

Given γ, as [TeX:] $$N \rightarrow \infty,$$ we have [TeX:] $$\Upsilon_2 \rightarrow \infty,$$ and [TeX:] $$\lim _{N \rightarrow \infty} Q=\hat{\rho}_{1,2}+\hat{\rho}_{2,1},$$ then for (69), we have

(71)
[TeX:] $$\left\{\begin{array}{l} \hat{f}_1(h)=\eta_1\left(\hat{f}_1\left(\hat{H}_1\right)+\hat{f}_2\left(\hat{H}_1\right)\right) e^{-\frac{\gamma p_{\mathrm{s}, \mathrm{G}}}{2\left(1-\sum_{j=1}^4 M_j\right)}\left(h-\hat{H}_1\right)}, \\ \hat{f}_2(h)=\eta_2\left(\hat{f}_1\left(\hat{H}_1\right)+\hat{f}_2\left(\hat{H}_1\right)\right) e^{-\frac{\gamma p_{\mathrm{s}, \mathrm{G}}}{2\left(1-\sum_{j=1}^4 M_j\right)}\left(h-\hat{H}_1\right)} . \end{array}\right.$$

Similarly, as [TeX:] $$N \rightarrow \infty,$$ we obtain the solution of (65) at [TeX:] $$h>\hat{H}_3$$

(72)
[TeX:] $$\left\{\begin{array}{l} \hat{f}_1(h)=\eta_1\left(\hat{f}_1\left(\hat{H}_3\right)+\hat{f}_2\left(\hat{H}_3\right)\right) e^{-\frac{p_{\mathrm{s}, \mathrm{G}}}{2\left(1-\sum_{j=1}^4 M_j\right)}\left(h-\hat{H}_3\right)}, \\ \hat{f}_2(h)=\eta_2\left(\hat{f}_1\left(\hat{H}_3\right)+\hat{f}_2\left(\hat{H}_3\right)\right) e^{-\frac{p_{\mathrm{s}, \mathrm{G}}}{2\left(1-\sum_{j=1}^4 M_j\right)}\left(h-\hat{H}_3\right)} . \end{array}\right.$$
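The decay exponents [TeX:] $$\Upsilon_1 / 2$$ and [TeX:] $$\Upsilon_2 / 2$$ in (69)–(70) are (minus) the eigenvalues of the 2×2 linear system governing the regime [TeX:] $$\hat{H}_1\lt h \leq \hat{H}_3,$$ whose coefficient matrix has trace [TeX:] $$-\left(v+\hat{\rho}_{1,2}+\hat{\rho}_{2,1}\right)$$ and determinant [TeX:] $$v \hat{\rho}_{2,1}.$$ A quick check with illustrative numbers:

```python
import math

# Linear system on (H1, H3]: f1' = rho21*f2 - (rho12 + v)*f1,
#                            f2' = rho12*f1 - rho21*f2.
rho12, rho21, v = 0.4, 0.6, 0.5    # illustrative rates; v = gamma*p_sG/(1-sum M_j)

Q = math.sqrt(v * (v + 2 * rho12 - 2 * rho21) + (rho12 + rho21) ** 2)  # as in (70)
ups1 = v + rho12 + rho21 - Q
ups2 = v + rho12 + rho21 + Q

# Eigenvalues via the trace and determinant of the coefficient matrix.
tr = -(v + rho12 + rho21)
det = v * rho21
disc = math.sqrt(tr * tr - 4 * det)
lam_hi = (tr + disc) / 2           # larger (less negative) eigenvalue
lam_lo = (tr - disc) / 2
```

Indeed [TeX:] $$\lambda_{\mathrm{hi}}=-\Upsilon_1 / 2$$ and [TeX:] $$\lambda_{\mathrm{lo}}=-\Upsilon_2 / 2,$$ since the discriminant of the characteristic polynomial equals [TeX:] $$Q^2.$$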

Considering the aforementioned conditions and the scheduling strategy in Algorithm 2, the following theorem holds.

Theorem 4: In a partially observable time-varying channel environment, i.e., [TeX:] $$\gamma \in(0,1),$$ Algorithm 2 achieves a long time-average AoI of [TeX:] $$\bar{\Delta}_{\mathrm{Alg} .2},$$ and

(73)
[TeX:] $$\bar{\Delta}_{\mathrm{Alg} .2}=\frac{N}{2 p_{\mathrm{s}, \mathrm{G}}}+\mathcal{O}(1) .$$

Algorithm 2
Threshold-based scheduler 1
pseudo2.png

Proof: According to

(74)
[TeX:] $$\left\{\begin{array}{l} \int_0^{\hat{H}_1} \hat{f}_1(h) \mathrm{d} h=\eta_1\left(\kappa_1+\kappa_2\right) \hat{H}_1\lt \eta_1 \\ \int_0^{\hat{H}_1} \hat{f}_2(h) \mathrm{d} h=\eta_2\left(\kappa_1+\kappa_2\right) \hat{H}_1\lt \eta_2 \end{array}\right.$$

it follows that [TeX:] $$\hat{H}_1\lt \frac{1}{\kappa_1+\kappa_2}.$$ For an arbitrarily small [TeX:] $$\varepsilon\gt 0,$$ let [TeX:] $$\hat{H}_1^*=\frac{1-\varepsilon}{\kappa_1+\kappa_2};$$ then [TeX:] $$\left(1-\sum_{j=1}^4 M_j\right) \rightarrow 0,$$ and thus [TeX:] $$\hat{f}_1(h) \rightarrow 0,$$ [TeX:] $$\hat{f}_2(h) \rightarrow 0 \text { for } h\gt \hat{H}_1 \text {. }$$ Therefore, the average AoI of each class is

(75)
[TeX:] $$\left\{\begin{array}{l} \overline{\hat{h}}_1=\int_0^{\hat{H}_1^*} \eta_1\left(\kappa_1+\kappa_2\right) h \mathrm{~d} h=\frac{\eta_1(1-\varepsilon)^2}{2\left(\kappa_1+\kappa_2\right)}, \\ \overline{\hat{h}}_2=\int_0^{\hat{H}_1^*} \eta_2\left(\kappa_1+\kappa_2\right) h \mathrm{~d} h=\frac{\eta_2(1-\varepsilon)^2}{2\left(\kappa_1+\kappa_2\right)}, \end{array}\right.$$

and the average AoI across all classes satisfies

(76)
[TeX:] $$\overline{\hat{h}} \geq \frac{\eta_1(1-\varepsilon)^2}{2\left(\kappa_1+\kappa_2\right)}+\frac{\eta_2(1-\varepsilon)^2}{2\left(\kappa_1+\kappa_2\right)}=\frac{(1-\varepsilon)^2}{2\left(\kappa_1+\kappa_2\right)}.$$

With [TeX:] $$\frac{\kappa_1}{p_{\mathrm{s}, \mathrm{G}}}+\frac{\kappa_2}{p_{\mathrm{s}, \mathrm{B}}}=1, \kappa_1 \geq 0, \kappa_2 \geq 0,$$ then

(77)
[TeX:] $$\begin{aligned} \overline{\hat{h}} \geq \frac{(1-\varepsilon)^2}{2\left(\kappa_1+\kappa_2\right)} & =\frac{(1-\varepsilon)^2}{2\left[\left(1-\frac{p_{\mathrm{s}, \mathrm{B}}}{p_{\mathrm{s}, \mathrm{G}}}\right) \kappa_1+p_{\mathrm{s}, \mathrm{B}}\right]} \\ & =\frac{(1-\varepsilon)^2}{2\left[\left(1-\frac{p_{\mathrm{s}, \mathrm{G}}}{p_{\mathrm{s}, \mathrm{B}}}\right) \kappa_2+p_{\mathrm{s}, \mathrm{G}}\right]}. \end{aligned}$$

It is obvious that [TeX:] $$\overline{\hat{h}} \geq 0 \text {. }$$ Since [TeX:] $$p_{\mathrm{s}, \mathrm{B}}=0 \text {, }$$ minimizing [TeX:] $$\overline{\hat{h}}$$ requires maximizing [TeX:] $$\kappa_1$$ or, equivalently, minimizing [TeX:] $$\kappa_2,$$ i.e.,

(78)
[TeX:] $$\left\{\begin{array}{l} \kappa_1=p_{\mathrm{s}, \mathrm{G}}, \\ \kappa_2=0 . \end{array}\right.$$

According to (67), the above equation holds when [TeX:] $$\hat{H}_3=\infty;$$ then

(79)
[TeX:] $$\hat{H}_1^*=\frac{1-\varepsilon}{p_{\mathrm{s}, \mathrm{G}}} .$$

Because [TeX:] $$\varepsilon$$ can be arbitrarily small and the fluid limits introduce an approximation error of [TeX:] $$\mathcal{O}(1 / N),$$ the AoI of Algorithm 2 is

(80)
[TeX:] $$\bar{\Delta}_{\mathrm{Alg} .2}^*=\frac{N}{2 p_{\mathrm{s}, \mathrm{G}}}+\mathcal{O}(1).$$
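The integral evaluations in (75)-(76) and the two rewritings of [TeX:] $$\kappa_1+\kappa_2$$ in (77) admit a quick numerical sanity check. The sketch below (Python) uses illustrative values of η, [TeX:] $$p_{\mathrm{s}},$$ and ε that are not taken from the paper:

```python
import numpy as np

# Illustrative parameters (not from the paper): class fractions and success rates
eta1, eta2 = 0.6, 0.4
ps_G, ps_B = 0.9, 0.3
kappa1 = 0.5 * ps_G                      # any kappa1 in [0, ps_G]
kappa2 = ps_B * (1 - kappa1 / ps_G)      # enforce kappa1/ps_G + kappa2/ps_B = 1
eps = 0.01
H1 = (1 - eps) / (kappa1 + kappa2)       # threshold H_1^* from the proof

# Midpoint-rule evaluation of the integral in (75) over [0, H1]
n = 1_000_000
dh = H1 / n
h = (np.arange(n) + 0.5) * dh            # midpoints; exact for linear integrands
avg1 = np.sum(eta1 * (kappa1 + kappa2) * h) * dh
closed1 = eta1 * (1 - eps) ** 2 / (2 * (kappa1 + kappa2))
assert abs(avg1 - closed1) < 1e-8        # matches the closed form in (75)

# The two rewritings of kappa1 + kappa2 used in (77)
s = kappa1 + kappa2
assert abs(s - ((1 - ps_B / ps_G) * kappa1 + ps_B)) < 1e-12
assert abs(s - ((1 - ps_G / ps_B) * kappa2 + ps_G)) < 1e-12
```

The midpoint rule integrates linear functions exactly, so the agreement with the closed form is limited only by floating-point rounding.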

2) Known all channel states:

In the case where all channel states are known, i.e., γ = 1, there are only two classes of agents: Good and Bad, and the ODE is

(81)
[TeX:] $$\begin{aligned} & \left\{\begin{array}{l} \frac{\mathrm{d} \hat{f}_1(h)}{\mathrm{d} h}=\hat{\rho}_{2,1} \hat{f}_2(h)-\hat{\rho}_{1,2} \hat{f}_1(h), \\ \frac{\mathrm{d} \hat{f}_2(h)}{\mathrm{d} h}=\hat{\rho}_{1,2} \hat{f}_1(h)-\hat{\rho}_{2,1} \hat{f}_2(h), \end{array} \quad \text { if } 0 \leq h \leq \hat{H}_1,\right. \\ & \left\{\begin{array}{l} \frac{\mathrm{d} \hat{f}_1(h)}{\mathrm{d} h}=\hat{\rho}_{2,1} \hat{f}_2(h)-\hat{\rho}_{1,2} \hat{f}_1(h)-\frac{p_{\mathrm{s}, \mathrm{G}} \hat{f}_1(h)}{1-\sum_{j=1}^2 M_j}, \\ \frac{\mathrm{d} \hat{f}_2(h)}{\mathrm{d} h}=\hat{\rho}_{1,2} \hat{f}_1(h)-\hat{\rho}_{2,1} \hat{f}_2(h), \end{array} \quad \text { if } h>\hat{H}_1,\right. \\ & \left\{\begin{array}{l} \hat{f}_1(0)=\frac{\eta_1-M_1}{1-\sum_{j=1}^2 M_j} p_{\mathrm{s}, \mathrm{G}}, \\ \hat{f}_2(0)=\frac{\eta_2-M_2}{1-\sum_{j=1}^2 M_j} p_{\mathrm{s}, \mathrm{B}}, \end{array}\right. \end{aligned}$$

where [TeX:] $$M_1=\hat{F}_1\left(\hat{H}_1\right) \text {, and } M_2=\hat{F}_2\left(\hat{H}_2\right)=\eta_2 \text {. }$$
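Since the coefficients of (81) are piecewise-constant in h, the system can be integrated numerically by forward Euler. Below is a minimal sketch (Python) with illustrative transition rates and threshold, and with the mass [TeX:] $$M_1$$ treated as a given stand-in rather than solved self-consistently; it only illustrates the qualitative shape of the densities, namely mixing below [TeX:] $$\hat{H}_1$$ and joint decay above it:

```python
import numpy as np

# Illustrative parameters (not from the paper)
rho12, rho21 = 1.0, 1.0          # rescaled channel transition rates
ps_G, ps_B = 1.0, 0.0            # success probabilities (gamma = 1 case)
eta1, eta2 = 0.5, 0.5            # class fractions
H1 = 0.5                         # scheduling threshold
M1, M2 = 0.2, eta2               # masses below threshold; M1 is a stand-in guess
c = ps_G / (1 - M1 - M2)         # service-induced decay rate above H1

dh = 1e-4
hs = np.arange(0.0, 10.0, dh)
f1 = np.empty_like(hs)
f2 = np.empty_like(hs)
# Boundary conditions of (81)
f1[0] = (eta1 - M1) / (1 - M1 - M2) * ps_G
f2[0] = (eta2 - M2) / (1 - M1 - M2) * ps_B

for k in range(len(hs) - 1):
    mix1 = rho21 * f2[k] - rho12 * f1[k]
    mix2 = rho12 * f1[k] - rho21 * f2[k]
    drain = c * f1[k] if hs[k] > H1 else 0.0   # scheduling term acts only above H1
    f1[k + 1] = f1[k] + dh * (mix1 - drain)
    f2[k + 1] = f2[k] + dh * mix2

assert np.all(f1 >= 0) and np.all(f2 >= 0)     # densities stay nonnegative
assert f1[-1] < 1e-2 * f1[0]                   # joint decay far above the threshold
```

In a full solution, [TeX:] $$M_1=\hat{F}_1(\hat{H}_1)$$ would be determined by a fixed-point iteration over this integration; the sketch omits that step for brevity.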

Considering the aforementioned conditions and the scheduling strategy in Algorithm 3, the following theorem is true.

Theorem 5: In the fully observable time-varying channel environment, i.e., γ = 1, Algorithm 3 achieves a long time-average AoI of [TeX:] $$\bar{\Delta}_{\text {Alg. } 3},$$ and

(82)
[TeX:] $$\bar{\Delta}_{\mathrm{Alg} .3}=\frac{N}{2 p_{\mathrm{s}, \mathrm{G}}}+\mathcal{O}(1) .$$

Algorithm 3
Threshold-based scheduler 1
pseudo3.png

Proof: The proof is similar to the case where partial channel states are known, and is simpler since [TeX:] $$\kappa_2=0.$$ The details are omitted for brevity.

3) Unknown all channel states:

In the case where all channel states are unknown, i.e., γ = 0, there is only one class of agents, Unknown, and the ODE is

(83)
[TeX:] $$\begin{aligned} & \left\{\begin{array}{l} \frac{\mathrm{d} \hat{f}_1(h)}{\mathrm{d} h}=\hat{\rho}_{2,1} \hat{f}_2(h)-\hat{\rho}_{1,2} \hat{f}_1(h), \\ \frac{\mathrm{d} \hat{f}_2(h)}{\mathrm{d} h}=\hat{\rho}_{1,2} \hat{f}_1(h)-\hat{\rho}_{2,1} \hat{f}_2(h), \end{array} \quad \text { if } 0 \leq h \leq \hat{H};\right. \\ & \left\{\begin{array}{l} \frac{\mathrm{d} \hat{f}_1(h)}{\mathrm{d} h}=\hat{\rho}_{2,1} \hat{f}_2(h)-\hat{\rho}_{1,2} \hat{f}_1(h)-\frac{p_{\mathrm{s}, \mathrm{G}} \hat{f}_1(h)}{1-\sum_{j=1}^2 M_j}, \\ \frac{\mathrm{d} \hat{f}_2(h)}{\mathrm{d} h}=\hat{\rho}_{1,2} \hat{f}_1(h)-\hat{\rho}_{2,1} \hat{f}_2(h)-\frac{p_{\mathrm{s}, \mathrm{B}} \hat{f}_2(h)}{1-\sum_{j=1}^2 M_j}, \end{array} \quad \text { if } h>\hat{H};\right. \\ & \left\{\begin{array}{l} \hat{f}_1(0)=\frac{\eta_1-M_1}{1-\sum_{j=1}^2 M_j} p_{\mathrm{s}, \mathrm{G}}, \\ \hat{f}_2(0)=\frac{\eta_2-M_2}{1-\sum_{j=1}^2 M_j} p_{\mathrm{s}, \mathrm{B}}, \end{array}\right. \end{aligned}$$

where [TeX:] $$M_1=\hat{F}_1(\hat{H}) \text {, and } M_2=\hat{F}_2(\hat{H}).$$

Considering the aforementioned conditions and the scheduling strategy in Algorithm 4, the following theorem is true.

Theorem 6: In the blind time-varying channel environment, i.e., γ = 0, Algorithm 4 achieves a long time-average AoI of [TeX:] $$\bar{\Delta}_{\text {Alg. } 4},$$ and

(84)
[TeX:] $$\bar{\Delta}_{\mathrm{Alg} .4}=\frac{N}{2 \eta_1 p_{\mathrm{s}, \mathrm{G}}}+\mathcal{O}(1).$$

Algorithm 4
Threshold-based scheduler 1
pseudo4.png
Proof: In this case, the proof largely parallels the case where partial channel states are known, but (77) differs; the specific steps are as follows. According to (74),

(85)
[TeX:] $$\left\{\begin{aligned} \frac{\kappa_1}{p_{\mathrm{s}, \mathrm{G}}} & =\frac{\eta_1-M_1}{1-\sum_{j=1}^2 M_j}=\frac{\eta_1-\hat{F}_1(\hat{H})}{1-\sum_{j=1}^2 M_j} \\ & =\frac{\eta_1-\eta_1\left(\kappa_1+\kappa_2\right) \hat{H}}{1-\sum_{j=1}^2 M_j}=\frac{\eta_1\left[1-\left(\kappa_1+\kappa_2\right) \hat{H}\right]}{1-\sum_{j=1}^2 M_j}, \\ \frac{\kappa_2}{p_{\mathrm{s}, \mathrm{B}}} & =\frac{\eta_2-M_2}{1-\sum_{j=1}^2 M_j}=\frac{\eta_2-\hat{F}_2(\hat{H})}{1-\sum_{j=1}^2 M_j} \\ & =\frac{\eta_2-\eta_2\left(\kappa_1+\kappa_2\right) \hat{H}}{1-\sum_{j=1}^2 M_j}=\frac{\eta_2\left[1-\left(\kappa_1+\kappa_2\right) \hat{H}\right]}{1-\sum_{j=1}^2 M_j} . \end{aligned}\right.$$

Since

(86)
[TeX:] $$\frac{\frac{\kappa_1}{p_{\mathrm{s}, \mathrm{G}}}}{\frac{\kappa_2}{p_{\mathrm{s}, \mathrm{B}}}}=\frac{\eta_1}{\eta_2},$$

and [TeX:] $$\frac{\kappa_1}{p_{\mathrm{s}, \mathrm{G}}}+\frac{\kappa_2}{p_{\mathrm{s}, \mathrm{B}}}=1, \kappa_1 \geq 0, \kappa_2 \geq 0,$$ then

(87)
[TeX:] $$\left\{\begin{aligned} \kappa_1 & =\frac{\eta_1}{\eta_1+\eta_2} p_{\mathrm{s}, \mathrm{G}}=\eta_1 p_{\mathrm{s}, \mathrm{G}}, \\ \kappa_2 & =\frac{\eta_2}{\eta_1+\eta_2} p_{\mathrm{s}, \mathrm{B}}=\eta_2 p_{\mathrm{s}, \mathrm{B}} . \end{aligned}\right.$$

Therefore, (77) can be rewritten as

(88)
[TeX:] $$\overline{\hat{h}} \geq \frac{(1-\varepsilon)^2}{2\left(\kappa_1+\kappa_2\right)}=\frac{(1-\varepsilon)^2}{2\left(\eta_1 p_{\mathrm{s}, \mathrm{G}}+\eta_2 p_{\mathrm{s}, \mathrm{B}}\right)}.$$

With [TeX:] $$p_{\mathrm{s}, \mathrm{B}}=0 \text {, }$$ the optimal threshold is

(89)
[TeX:] $$\hat{H}^*=\frac{1-\varepsilon}{\kappa_1+\kappa_2}=\frac{1-\varepsilon}{\eta_1 p_{\mathrm{s}, \mathrm{G}}+\eta_2 p_{\mathrm{s}, \mathrm{B}}}=\frac{1-\varepsilon}{\eta_1 p_{\mathrm{s}, \mathrm{G}}} .$$

Because [TeX:] $$\varepsilon$$ can be arbitrarily small and the fluid limits introduce an approximation error of [TeX:] $$\mathcal{O}\left(\frac{1}{N}\right),$$ the AoI of Algorithm 4 is

(90)
[TeX:] $$\bar{\Delta}_{\mathrm{Alg} .4}=\frac{N}{2 \eta_1 p_{\mathrm{s}, \mathrm{G}}}+\mathcal{O}(1).$$
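The closed forms (87)-(89) can be cross-checked by solving the two linear constraints numerically. The sketch below (Python) uses illustrative class fractions and success probabilities that are not from the paper, with [TeX:] $$\eta_1+\eta_2=1$$ as in the model:

```python
# Solve kappa1/ps_G : kappa2/ps_B = eta1 : eta2  (from (86))
# together with kappa1/ps_G + kappa2/ps_B = 1.
eta1, eta2 = 0.7, 0.3            # illustrative class fractions, eta1 + eta2 = 1
ps_G, ps_B = 0.9, 0.2            # illustrative success probabilities
eps = 1e-3

# The two constraints give kappa_i / ps_i = eta_i / (eta1 + eta2) = eta_i
kappa1 = eta1 / (eta1 + eta2) * ps_G
kappa2 = eta2 / (eta1 + eta2) * ps_B
assert abs(kappa1 - eta1 * ps_G) < 1e-12     # matches (87)
assert abs(kappa2 - eta2 * ps_B) < 1e-12

# Optimal threshold, cf. the middle equality of (89); the final form of (89)
# additionally sets ps_B = 0, which is not imposed here
H_star = (1 - eps) / (kappa1 + kappa2)
assert abs(H_star - (1 - eps) / (eta1 * ps_G + eta2 * ps_B)) < 1e-12
```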

Combining the preceding analysis, the following remark holds.

Remark 4: In a partially observable time-varying channel scenario, applying Algorithm 2 yields optimal AoI performance that is almost the same as when all channel states are known, provided that some fraction of the channel states is known. That is, for γ > 0, the optimal threshold and the optimal long-time-average AoI are independent of γ. This is intuitive: we only schedule agents with good channel states, and for agents with bad or unknown channel states it is better to wait until the channel is known to be good.

VII. SIMULATIONS

In this section, we provide simulation results corresponding to the analysis in Sections V and VI. The results are discussed in two parts: AoI in time-invariant channels and AoI in time-varying channels.

A. AoI in Time-Invariant Channels

For AoI in time-invariant channels, computer simulations are conducted. Two classes of agents (with an equal number of agents in each class) are tested, with [TeX:] $$p_{\mathrm{s}, \mathrm{G}}=0.9 \text { and } p_{\mathrm{s}, \mathrm{B}}=0.2$$ respectively. Fig. 2 shows the convergence of the empirical CDF of the rescaled AoI under Algorithm 1 to the limiting equilibrium of (41). The initial AoI distribution is generated from a Gaussian distribution with mean N/2 and variance N. The simulation runs in the slotted time domain for [TeX:] $$10^6$$ time slots, where each slot corresponds to a scheduling event in the CTMC. As N grows, the match between the empirical CDF and the theoretical CDF given by the fluid limits becomes evident. Note that convergence takes longer for larger N because the rescaled AoI requires time to be accelerated by a factor of N, which can be seen by comparing the (2, 2) and (3, 2) panels. In Fig. 3, we also compare the time-average AoI performance of Algorithm 1, the Whittle’s index approach [6], and the theoretical fluid limits (without the [TeX:] $$\mathcal{O}(1)$$ term, and hence also the AoI lower bound in (50)). The three are very close to each other. Note that Theorem 3 gives the performance of Algorithm 1 within an error term of [TeX:] $$\mathcal{O}(N)$$; the error can still depend on N with an order lower than one, e.g., [TeX:] $$\mathcal{O}(\sqrt{N})$$, which explains the widened gap as N grows.
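A slotted simulation of this kind can be reproduced in a few lines. The sketch below (Python) is a simplified homogeneous stand-in, not the exact two-class configuration of Fig. 2: it uses a single success probability p and max-age scheduling, and compares against the leading-order N/(2p) scaling suggested by the fluid-limit analysis (an assumption for this homogeneous setting, not a result quoted from the paper):

```python
import random

def avg_aoi_max_age(N, p, T, seed=0):
    """Simulate N users, one scheduled per slot (largest AoI first),
    each transmission succeeding with probability p; return time-average AoI."""
    rng = random.Random(seed)
    aoi = [1] * N
    total, samples = 0, 0
    for t in range(T):
        i = max(range(N), key=lambda j: aoi[j])   # max-age scheduling
        success = rng.random() < p
        aoi = [1 if (j == i and success) else a + 1 for j, a in enumerate(aoi)]
        if t > T // 10:                            # discard initial transient
            total += sum(aoi)
            samples += N
    return total / samples

N, p = 50, 0.8
sim = avg_aoi_max_age(N, p, T=100_000)
theory = N / (2 * p)                               # leading-order prediction
assert 0.5 * theory < sim < 2.0 * theory           # loose agreement up to O(1) terms
```

The loose factor-of-two check reflects that the fluid-limit prediction holds only up to lower-order terms at finite N.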

Fig. 2.

Empirical and theoretical CDF of the rescaled AoI with number of agents of 10 (top row), 100 (middle row) and 1000 (bottom row). From left to right, the four columns represent CDFs at time 100, 1000, 10000 and 50000 (not rescaled).
2.png

Fig. 3.

Average AoI comparisons obtained by Algorithm 1, Whittle’s index [6] and fluid limits in Theorem 3.
3.png
B. AoI in Time-Varying Channels

For AoI in time-varying channels, computer simulations are conducted in two scenarios: partially known channel states and fully unknown channel states. To facilitate the analysis, in both cases we consider the same number of agents in the Good and Bad classes, and we show the theoretical PDF of the rescaled AoI together with a comparison of the empirical and theoretical CDFs of the rescaled AoI.

1) Known partial channel states:

In Fig. 4, to better illustrate the variation of AoI, we show the theoretical PDF of the rescaled AoI when N = 2000, [TeX:] $$\rho_{c, c^{\prime}}=0.1, \gamma=0.1, p_{\mathrm{s}, \mathrm{G}}=1, \hat{H}_1=0.5, \hat{H}_2=1.5.$$ The PDFs of the two classes of agents have different initial values at h = 0, but as h increases, a transfer process occurs between the classes owing to the time-varying channel, and the transfer rate tends to infinity in the rescaled time domain. Therefore, the PDFs of the two classes rapidly converge to a steady state. Starting from [TeX:] $$h=\hat{H}_1,$$ the PDFs of both classes begin to decrease almost simultaneously because agents with good channel states are scheduled. Starting from [TeX:] $$h=\hat{H}_2,$$ the PDFs decrease again because agents with unknown channel states are scheduled.

Fig. 4.

Theoretical PDF of the rescaled AoI when partial channel states are known.
4.png

In Fig. 5, we present the empirical and theoretical CDFs of the rescaled AoI at the optimal threshold when N = 2000, [TeX:] $$\rho_{c, c^{\prime}}=0.1, \gamma=0.1, p_{\mathrm{s}, \mathrm{G}}=0.5.$$ As shown in the figure, the empirical CDF of the rescaled AoI gradually converges to the equilibrium and is in excellent agreement with the theoretical CDF.

Fig. 5.

Empirical and theoretical CDF of the rescaled AoI when partial channel states are known, with the top and bottom rows denoting the Good and Bad classes, respectively. From left to right, the four columns represent CDFs at times 100, 1000, 10000, and 50000 (not rescaled).
5.png

2) Unknown all channel states:

In Fig. 6, we present the empirical and theoretical CDFs of the rescaled AoI at the optimal threshold when N = 2000, [TeX:] $$\rho_{c, c^{\prime}}=0.1, \gamma=0, p_{\mathrm{s}, \mathrm{G}}=1.$$ As shown in the figure, the empirical CDF of the rescaled AoI gradually converges to the equilibrium and is in excellent agreement with the theoretical CDF.

Fig. 6.

Empirical and theoretical CDF of the rescaled AoI when all channel states are unknown, with the top and bottom rows denoting the Good and Bad classes, respectively. From left to right, the four columns represent CDFs at times 100, 1000, 10000, and 50000 (not rescaled).
6.png

C. Average AoI Comparison

For AoI in channels with different degrees of time variation and observability, i.e., different γ, computer simulations are conducted. Two classes of agents (with an equal number of agents in each class) are tested, with [TeX:] $$p_{\mathrm{s}, \mathrm{G}}=1 \text{ and } p_{\mathrm{s}, \mathrm{B}}=0,$$ respectively. Since there have been no previous studies on the Whittle’s index policy under partially observable time-varying channels, we assume the success probability of an unknown channel is [TeX:] $$p_{\mathrm{s}, \text { Unknown }}=\frac{\eta_1}{\eta_1+\eta_2} p_{\mathrm{s}, \mathrm{G}}+\frac{\eta_2}{\eta_1+\eta_2} p_{\mathrm{s}, \mathrm{B}}.$$ Accordingly, we show the AoI performance of [16, Theorem 5] with γ = 1 and of [6, Theorem 17] with γ = 0.1 and γ = 0. In Fig. 7, we compare the time-average AoI performance of the fluid-limit policy and the Whittle’s index approach [6], [16], each with γ = 1, γ = 0.1, and γ = 0, together with the theoretical fluid limit with [TeX:] $$\rho_{c, c^{\prime}}=1, \eta_1=1$$ (without the [TeX:] $$\mathcal{O}(1)$$ term, and hence also the AoI lower bound in (50)). The fluid-limit policy with [TeX:] $$\gamma \neq 0$$ is very close to the lower bound in which all channel states are good and outperforms the Whittle’s index policy for every γ, which is consistent with our theoretical analysis.

Fig. 7.

Average AoI comparisons obtained by Algorithm 1, Whittle’s index [6] and Lower Bound with different γ.
7.png

VIII. CONCLUSIONS AND OUTLOOK

In this paper, the asymptotic behavior of the AoI occupancy measure in wireless multiaccess networks with a large number of agents, i.e., the fluid limit, is developed. It is shown that, under a simple threshold-based policy, the system behavior converges to the fluid limit within an error inversely proportional to the number of agents. Moreover, the fluid limit has a provably asymptotically stable equilibrium that can be used to solve for the stationary AoI distribution.

Leveraging this framework, we solve an AoI scheduling problem with heterogeneous i.i.d. user channel conditions. A well-known existing lower bound, whose tightness was previously unknown, is shown to be asymptotically tight. Achievability is proved by optimizing the thresholds in our proposed threshold-based policy. The resulting scheduling policy is asymptotically optimal with explicitly derived thresholds, and is much easier to decentralize than the index-based policy: each user only needs to know its own fixed threshold, whereas under the index policy users must compare their indices at every scheduling decision.

Furthermore, we show that AoI with partially observable time-varying channels can be analyzed within the fluid-limit framework. We characterize the optimal long-time-average AoI under different proportions of observable channel states and show that the optimal AoI performance can be achieved as long as only a small fraction of the channel states is known. Simulations reveal that our analysis matches the empirical behavior closely, and that convergence to the fluid limit is evident with a moderate number of users.

Essentially, we believe a large class of AoI scheduling problems can be solved by leveraging this powerful analysis tool. The crux is the convergence of the AoI occupancy measure to the fluid limit; as long as this holds, which is often the case in AoI scheduling, the rest follows conveniently. Generalization to other applications is our ongoing work.

Biography

Feng Yuan

Feng Yuan is currently working toward the M.S. degree with the School of Communication and Information Engineering, Shanghai University (SHU), Shanghai, China. His current research interests include age of information (AoI) and beyond-5G systems.

Biography

Zeyu Hu

Zeyu Hu received the Ph.D. degree in information and telecommunication engineering from the Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 2023. He is currently a Postdoctoral Researcher with the School of Communication and Information Engineering, Shanghai University (SHU), Shanghai, China. His current research interests include space-air-ground integrated networks, age of information (AoI), and beyond-5G systems.

Biography

Zhiyuan Jiang

Zhiyuan Jiang [S’12-M’15] received the B.S. and Ph.D. degrees from the Electronic Engineering Department, Tsinghua University, China, in 2010 and 2015, respectively. He is currently a Professor at the School of Communication and Information Engineering, Shanghai University, Shanghai, China. He visited the WiDeS Group, University of Southern California, Los Angeles, CA, USA, from 2013 to 2014. He worked as an experienced researcher at Ericsson from 2015 to 2016. He visited ARNG at the University of Southern California, Los Angeles, CA, USA, from 2017 to 2018. He worked as a Wireless Signal Processing Scientist at Intel Labs, Hillsboro, OR, USA, in 2018. His current research interests include URLLC in wireless networked control systems and signal processing in MIMO systems. He serves as a TPC member for IEEE INFOCOM, ICC, GLOBECOM, and WCNC. He received the ITC Rising Scholar Award in 2020, the Best Paper Award at IEEE ICC 2020, the Best In-Session Presentation Award of IEEE INFOCOM 2019, and the Exemplary Reviewer Award of IEEE WCL in 2019. He serves as an Associate Editor for the IEEE/KICS Journal of Communications and Networks, and a Guest Editor for the IEEE IoT Journal.

References

  • 1 Z. Jiang, "Analyzing age of information in multiaccess networks by fluid limits," in Proc. IEEE INFOCOM, 2021.doi:[[[10.1109/INFOCOM42981.2021.9488712]]]
  • 2 S. Kaul, R. Yates, and M. Gruteser, "Real-time status: How often should one update?," in Proc. IEEE INFOCOM, 2012.doi:[[[10.1109/INFCOM.2012.6195689]]]
  • 3 Y. Sun and B. Cyr, "Sampling for data freshness optimization: Non-linear age functions," J. Commun. Netw., vol. 21, no. 3, pp. 204-219, 2019.doi:[[[10.48550/arXiv.1812.07241]]]
  • 4 A. Kosta, N. Pappas, A. Ephremides, and V. Angelakis, "Age of information performance of multiaccess strategies with packet management," J. Commun. Netw., vol. 21, no. 3, pp. 244-255, 2019.doi:[[[10.1109/JCN.2019.000039]]]
  • 5 B. Li, R. Li, and A. Eryilmaz, "Throughput-optimal scheduling design with regular service guarantees in wireless networks," IEEE/ACM Trans. Netw., vol. 23, no. 5, pp. 1542-1552, 2015.doi:[[[10.1109/TNET.2014.2333008]]]
  • 6 I. Kadota, A. Sinha, E. Uysal-Biyikoglu, R. Singh, and E. Modiano, "Scheduling policies for minimizing age of information in broadcast wireless networks," IEEE/ACM Trans. Netw., vol. 26, pp. 2637-2650, Dec. 2018.doi:[[[10.1109/TNET.2018.2873606]]]
  • 7 Z. Jiang, B. Krishnamachari, S. Zhou, and Z. Niu, "Can decentralized status update achieve universally near-optimal age-of-information in wireless multiaccess channels?," in Proc. ITC, 2018.doi:[[[10.1109/ITC30.2018.00031]]]
  • 8 Y. Hsu, "Age of information: Whittle index for scheduling stochastic arrivals," in Proc. IEEE ISIT, 2018.doi:[[[10.1109/ISIT.2018.8437712]]]
  • 9 A. Maatouk, S. Kriouile, M. Assaad, and A. Ephremides, "On the optimality of the Whittle’s index policy for minimizing the age of information," arXiv preprint arXiv:2001.03096, 2020.doi:[[[10.1109/TWC.2020.3032237]]]
  • 10 R. Talak, S. Karaman, and E. Modiano, "Optimizing information freshness in wireless networks under general interference constraints," in Proc. ACM MobiHoc, 2018.doi:[[[10.1109/TNET.2019.2946481]]]
  • 11 R. Talak, S. Karaman, and E. Modiano, "Improving age of information in wireless networks with perfect channel state information," IEEE/ACM Trans. Netw., vol. 28, no. 4, pp. 1765-1778, 2020.doi:[[[10.1109/TNET.2020.2996237]]]
  • 12 R. Talak, I. Kadota, S. Karaman, and E. Modiano, "Scheduling policies for age minimization in wireless networks with unknown channel state," in Proc. IEEE ISIT, 2018.doi:[[[10.1109/ISIT.2018.8437688]]]
  • 13 Y. Zou, K. T. Kim, X. Lin, and M. Chiang, "Minimizing age-of-information in heterogeneous multi-channel systems: A new partial-index approach," in Proc. ACM MobiHoc, 2021.doi:[[[10.1145/3466772.3467030]]]
  • 14 Y. Shao, Q. Cao, S. Liew, and H. Chen, "Partially observable minimum-age scheduling: The greedy policy," IEEE Trans. Commun., vol. 70, no. 1, pp. 404-418, 2021.doi:[[[10.1109/TCOMM.2021.3123362]]]
  • 15 A. Munari and G. Liva, "Information freshness analysis of slotted ALOHA in Gilbert-Elliott channels," IEEE Commun. Lett., vol. 25, no. 9, pp. 2869-2873, 2021.doi:[[[10.1109/LCOMM.2021.3092193]]]
  • 16 B. Sombabu, A. Mate, D. Manjunath, and S. Moharir, "Whittle index for AoI-aware scheduling," in Proc. COMSNETS, 2020, pp. 630-633.doi:[[[10.1109/COMSNETS48256.2020.9027444]]]
  • 17 B. Dedhia and S. Moharir, "You snooze, you lose: Minimizing channel-aware age of information," in Proc. IEEE WiOPT, 2020.custom:[[[https://ieeexplore.ieee.org/document/9155332]]]
  • 18 Z. Jiang, B. Krishnamachari, X. Zheng, S. Zhou, and Z. Niu, "Timely status update in wireless uplinks: Analytical solutions with asymptotic optimality," IEEE Internet Things J., vol. 6, pp. 3885-3898, 2019.doi:[[[10.1109/JIOT.2019.2893319]]]
  • 19 R. G. Gallager, Discrete stochastic processes, vol. 321. Springer Science & Business Media, 2012.custom:[[[https://link.springer.com/book/10.1007/978-1-4615-2329-1]]]
  • 20 W. Whitt, "Minimizing delays in the GI/G/1 queue," Operations Research, vol. 32, no. 1, pp. 41-51, 1984.custom:[[[https://www.columbia.edu/~ww2040/minimizingdelays84.pdf]]]
  • 21 E. T. Ceran, D. Gündüz, and A. György, "Average age of information with hybrid ARQ under a resource constraint," IEEE Trans. Wireless Commun., vol. 18, pp. 1900-1913, 2019.doi:[[[10.1109/TWC.2019.2899303]]]
  • 22 H. Tang, J. Wang, L. Song, and J. Song, "Minimizing age of information with power constraints: Multi-user opportunistic scheduling in multi-state time-varying channels," IEEE J. Select. Areas Commun., vol. 38, no. 5, pp. 854-868, 2020.doi:[[[10.1109/JSAC.2020.2980911]]]
  • 23 J. Sun, Z. Jiang, B. Krishnamachari, S. Zhou, and Z. Niu, "Closed-form Whittle’s index-enabled random access for timely status update," IEEE Trans. Commun., vol. 68, no. 3, pp. 1538-1551, 2020.doi:[[[10.1109/TCOMM.2019.2960346]]]
  • 24 S. N. Ethier and T. G. Kurtz, Markov processes: Characterization and convergence, vol. 282. John Wiley & Sons, 2009.custom:[[[https://www.wiley.com/en-kr/Markov+Processes%3A+Characterization+and+Convergence-p-9780470317327]]]
  • 25 L. Bortolussi, J. Hillston, D. Latella, and M. Massink, "Continuous approximation of collective system behaviour: A tutorial," Performance Evaluation, vol. 70, no. 5, pp. 317-349, 2013.doi:[[[10.1016/j.peva.2013.01.001]]]
  • 26 A. Kolesnichenko, V . Senni, A. Pourranjabar, and A. Remke, "Applying mean-field approximation to continuous time Markov chains," Int. Autumn School Rigorous Dependability Analysis Using Model Checking Tech. Stochastic Syst., pp. 242-280, 2012.custom:[[[https://link.springer.com/chapter/10.1007/978-3-662-45489-3_7]]]
  • 27 A. Chaintreau, J.-Y. Le Boudec, and N. Ristanovic, "The age of gossip: Spatial mean field regime," ACM SIGMETRICS Performance Evaluation Review, vol. 37, no. 1, pp. 109-120, 2009.doi:[[[10.1145/2492101.1555363]]]
  • 28 T. G. Kurtz, Approximation of population processes. SIAM, 1981.custom:[[[-]]]
  • 29 E. A. Coayla-Teran et al., "Hartman-Grobman theorems along hyperbolic stationary trajectories," Discrete and Continuous Dynamical Systems, vol. 17, no. 2, p. 281, 2007.doi:[[[10.3934/dcds.2007.17.281]]]