State-aware Real-time Tracking and Remote Reconstruction of a Markov Source

Mehrdad Salimnejad, Marios Kountouris, and Nikolaos Pappas

Abstract

The problem of real-time remote tracking and reconstruction of a two-state Markov process is considered here. A transmitter sends samples from an observed information source to a remote monitor over an unreliable wireless channel. The receiver, in turn, performs an action according to the state of the reconstructed source. We propose a state-aware randomized stationary sampling and transmission policy which accounts for the importance of the different states of the information source and their impact on the goal of the communication process. We then analyze the performance of the proposed policy and compare it with existing goal-oriented joint sampling and transmission policies with respect to a set of performance metrics. Specifically, we study the real-time reconstruction error, the cost of actuation error, the consecutive error, and a new metric, coined importance-aware consecutive error. In addition, we formulate and solve a constrained optimization problem that aims to obtain the optimal sampling probabilities that minimize the average cost of actuation error. Our results show that in the scenario of constrained sample generation, the optimal state-aware randomized stationary policy outperforms all other sampling policies for fast-evolving sources and, under certain conditions, for slowly varying sources. Otherwise, a semantics-aware policy performs better only when the source is slowly varying.

I. INTRODUCTION

TODAY’S communication networks are in a transitional phase to supporting cyber-physical and interactive critical systems, which are key enablers for a plethora of new services and applications, such as autonomous transportation, industrial robotics, telehealth, and environmental monitoring. Emerging real-time autonomous systems, empowered with networked agents with advanced processing and learning capabilities, are expected to take advantage of network and sensing data and transform both human and digital decision making. Nevertheless, the realization of this euphoric vision hinges upon networks’ ability to gather, analyze, and transport vast new sources of data in a timely and effective manner. As a step in that direction, a radically new approach, which accounts for the semantics of information, defined as the importance and the goal-oriented utility of data exchanged in a network, has emerged. Reconsidering the entire communication process under the prism of semantics of information is instrumental in transforming the way we generate, transmit, and reconstruct data in time-sensitive and data-intensive communication systems. Anthony Ephremides is among the very first who proposed and advocated for the concept of semantics of information, laying the foundation stones of goal-oriented semantic communications. A highly relevant yet challenging problem in this context is to design joint source sampling, transmission, and reconstruction techniques, which consider the dynamics of the information source and enable real-time remote tracking with the objective of actuation.

Most prior work on remote tracking to date has mainly focused on proposing sampling or scheduling policies that aim to minimize the estimation error or the mean square error, leaving aside the significance and usefulness of the generated and transmitted information with respect to the application-driven goal and context. In contrast to these works, in this paper we propose a new state-aware sampling and transmission policy and introduce a new importance-aware error metric, unearthing the prominent role of having different action probabilities for different states.

A. Related Work

The problem of scheduling in event-triggered estimation has been considered in [1]–[6], where a sensor observes the state of a process and transmits it to the receiver only when certain events occur. Optimal sampling and transmission policies for noiseless communication channels are proposed in [7]–[9]. The study in [7] considers sequential estimation with limited information, where an observer sequentially observes a stochastic process and sends the resulting sample to a receiver over a noiseless communication channel. The authors in [8] study a remote estimation problem in a noiseless communication system in the presence of an energy harvesting sensor and a remote estimator. [9] presents an optimal threshold transmission policy for a noiseless communication system where a sensor observes a first-order Markov process and transmits the sample to the receiver.

The work [10] proposes an optimal transmission strategy in two sensor-assisted Gauss-Markov systems, extended to multiple sensors and processes in [11]. The work [12] analyzes the optimal estimation and transmission policies for remote estimation over time-varying packet drop channels. This study considers scenarios in which the information source is represented by finite-state Markov chains and first-order autoregressive processes. The authors in [13] and [14] study the fundamental limits and trade-offs of remote estimation of Markov processes under communication constraints.

Optimal sampling and remote estimation for monitoring real-time stochastic processes is studied in [15]–[18]. The problem of estimating the current state of a dynamic process using previous measurements and a linear time-invariant discrete-time (LTI) model of the process is investigated in [19]–[21]. The main objective of the aforementioned studies is to present sampling and transmission strategies that minimize estimation errors, disregarding the importance of information with respect to its utilization. Metrics that capture the semantics and effectiveness of information, leveraging synergies between data processing, information transmission, and signal reconstruction have recently been introduced in [22]–[35].

B. Contributions

In this work, we consider the problem of real-time remote tracking of an information source in a time slotted communication system. A sampler performs sampling of a two-state Markov process, and then the transmitter sends the sample in the form of packets to a remote receiver over an unreliable wireless channel. Then, the real-time reconstruction of the information source is performed at the receiver based on the successfully received samples. The system is considered to be in a synced state if the source state matches the state of the reconstructed source; otherwise, the system is in an erroneous state. Furthermore, the receiver performs a specific action according to the estimated state of the information source. This paper extends the results of [23], [34], in which the problem of real-time tracking and reconstruction of an information source with the purpose of actuation is studied. These papers proposed semantics-empowered policies that achieve a significant reduction in both the real-time reconstruction error and the cost of actuation error. In this work, we introduce a new state-aware sampling and transmission policy, and we evaluate its performance in terms of a set of semantics-aware metrics that capture the significance of information and various characteristics of the system’s performance. Our key contributions are summarized as follows:

1) We propose a state-aware randomized stationary sampling and transmission policy, in which we consider different sampling and success probabilities for different states of the information source. This is relevant to scenarios where the states encode commands for actuation or other potential tasks and different actions have different importance; it is thus important to allow for different sampling frequencies.

2) We analyze the performance of the proposed strategy in terms of the time-averaged reconstruction error, cost of actuation error, and consecutive error metrics, and we compare it with previously proposed joint sampling and transmission policies [23], [33], [34].

3) We define a new timing-aware error metric, namely importance-aware consecutive error metric, which jointly captures both timing- and importance-related aspects of errors. Specifically, this metric measures the impact on the performance when the system remains in a specific erroneous state for several consecutive time slots.

4) We solve the optimization problem of minimizing the average cost of actuation error subject to a time-averaged sampling cost constraint, as a means to reveal when and under which conditions the proposed state-aware randomized stationary policy outperforms state-of-the-art alternatives.

Fig. 1.
Real-time remote tracking of an information source over a wireless channel.
Fig. 2.
DTMC describing the evolution of the information source X(t).

II. SYSTEM MODEL

We consider a time slotted communication system in which a sampler performs sampling of an information source X(t) at time slot t, after which the transmitter sends the sample to the receiver over a wireless channel, as shown in Fig. 1. The remote receiver operates as an actuator and performs actions based on the reconstructed state of the information source. We model the information source as a two-state discrete time Markov chain (DTMC) [TeX:] $$\{X(t), t \in \mathbb{N}\},$$ depicted in Fig. 2. Therein, the self-transition probability and the probability of transition to another state at time slot t + 1 are defined as follows

(1)
[TeX:] $$\operatorname{Pr}[X(t+1)=i \mid X(t)=j]= \begin{cases}1-p, & i=0, j=0 \\ q, & i=0, j=1 \\ p, & i=1, j=0 \\ 1-q, & i=1, j=1.\end{cases}$$

In this paper, we consider different sampling and transmission actions for the states of the information source. We denote the action of sampling at time slot t when the information source is at state i (i = 0, 1) by [TeX:] $$\alpha_i^{\mathrm{s}}(t),$$ where [TeX:] $$\alpha_i^{\mathrm{s}}(t)=1$$ if the source at state i is sampled and [TeX:] $$\alpha_i^{\mathrm{s}}(t)=0$$ otherwise. Furthermore, when [TeX:] $$\alpha_i^{\mathrm{s}}(t)=1,$$ the action of transmitting the sample is denoted by [TeX:] $$\alpha_i^{\mathrm{tx}}(t),$$ where [TeX:] $$\alpha_i^{\mathrm{tx}}(t)=1$$ if the sample is transmitted, otherwise the transmitter remains idle, [TeX:] $$\alpha_i^{\mathrm{tx}}(t)=0.$$ At time slot t, the receiver constructs an estimate of the process X(t), denoted by [TeX:] $$\hat{X}(t),$$ based on the successfully received samples. The channel state [TeX:] $$h_i(t)$$ is equal to 1 if the information source at state i is sampled and successfully decoded by the receiver, and 0 otherwise. We define the success probability when the information source at state i is sampled and transmitted as [TeX:] $$p_{\mathrm{s}_i}=\operatorname{Pr}\left[h_i(t)=1\right].$$ Note that allowing for different success probabilities can have interesting connections with performing simple state-aware power control. Successful/failed transmissions are declared to the transmitter using acknowledgment (ACK)/negative-ACK packets, which are assumed to be delivered instantaneously and error free to the transmitter¹. Therefore, the transmitter has perfect knowledge of the reconstructed source state at time slot t, i.e., [TeX:] $$\hat{X}(t)$$². We also assume that a sample is discarded when its transmission fails.

¹ Actually, only the semantics-aware policy requires an ACK/NACK feedback channel.

² In this paper, since we do not consider any decoder/estimation policy, the current state at the transmitter is the latest received update.
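To make the timing of source transitions, sampling, transmission, and reconstruction concrete, the following minimal Python sketch (our own illustration, not code from the paper) simulates one sample path of the model with per-state sampling probabilities, anticipating the randomized policy of Section III; the function name, parameter values, and horizon are assumptions made for the example.

```python
import random

def simulate(p, q, p_alpha, p_s, T=200_000, seed=1):
    """Monte Carlo sketch of the system model: a two-state DTMC source,
    per-state joint sampling/transmission probabilities p_alpha[i], and
    per-state channel success probabilities p_s[i]."""
    rng = random.Random(seed)
    X, X_hat = 0, 0                      # source and reconstructed states
    errors = 0
    for _ in range(T):
        # source transition per (1)
        if X == 0:
            X = 1 if rng.random() < p else 0
        else:
            X = 0 if rng.random() < q else 1
        # probabilistic per-state sampling and transmission; a successfully
        # decoded sample updates the reconstruction within the same slot
        if rng.random() < p_alpha[X] and rng.random() < p_s[X]:
            X_hat = X
        errors += (X != X_hat)
    return errors / T                    # empirical fraction of erroneous slots

print(simulate(p=0.3, q=0.2, p_alpha=(0.5, 0.8), p_s=(0.6, 0.7)))
```

For long horizons, the returned fraction of erroneous slots should approach the closed-form time-averaged reconstruction error derived in Section IV.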

III. SAMPLING AND TRANSMISSION POLICIES

We propose a new sampling and transmission policy, coined state-aware randomized stationary, in which the generation of a sample is triggered in a probabilistic manner at each time slot. Specifically, we introduce a scheme that allows assigning different sampling probabilities to different states, adjusting the sampling frequency depending on the importance of the state. Consider, for example, the scenario where each state is a command for a remote agent that needs to be executed, and different commands are of different importance or criticality. We assume that [TeX:] $$p_{\alpha_i^{\mathrm{s}}}$$ is the probability of joint sampling and transmission actions when the source is at state i. Therefore, we define [TeX:] $$p_{\alpha_i^{\mathrm{s}}}$$ as follows

(2)
[TeX:] $$\operatorname{Pr}\left[\alpha_i^{\mathrm{s}}(t+1)=1, \alpha_i^{\mathrm{tx}}(t+1)=1\right]=p_{\alpha_i^{\mathrm{s}}} .$$

The probability that the source at state i is not sampled at time slot t+1 is [TeX:] $$\operatorname{Pr}\left[\alpha_i^{\mathrm{s}}(t+1)=0\right]=1-p_{\alpha_i^{\mathrm{s}}} .$$ In addition, for comparison, we adopt three relevant policies proposed in [23], [34]. Below, we provide a short description of each.

1) Uniform: Sampling is conducted periodically every d time slots, independently of the evolution of the source X(t). Therefore, the sampling instants are [TeX:] $$\left\{t_k=k d, k \geqslant 1\right\}.$$ While this policy is simple and easy to implement, several state transitions can be missed during the interval between two consecutive samples.

2) Change-aware: A new sample is generated when a change in the state of the source X(t) is observed between two consecutive time slots without considering whether the system is in sync or not.

3) Semantics-aware: When the system is in a sync state, i.e., [TeX:] $$X(t)=\hat{X}(t)$$, a sample is generated if a change in the source state is observed at the next time slot, i.e., [TeX:] $$X(t+1) \neq X(t)$$. However, when the system is in an erroneous state, i.e., [TeX:] $$X(t) \neq \hat{X}(t)$$, a sample is generated if the source state at the next time slot is not equal to the state of the reconstructed source at time slot t, i.e., [TeX:] $$X(t+1) \neq \hat{X}(t)$$.
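For concreteness, the decision rules of these three policies, together with the proposed one, can be sketched in Python as follows; the function signatures and argument names (X_next for X(t+1), X_prev for X(t), X_hat for the current estimate) are our own conventions, not notation from the paper.

```python
import random

def uniform(t, X_next, X_prev, X_hat, d=5):
    # periodic sampling every d slots, independent of the source evolution
    return (t + 1) % d == 0

def change_aware(t, X_next, X_prev, X_hat):
    # sample whenever the source changes state between consecutive slots
    return X_next != X_prev

def semantics_aware(t, X_next, X_prev, X_hat):
    # in sync (X_prev == X_hat): sample on a source change; out of sync:
    # sample while the new source state still differs from the estimate.
    # For the two-state source both cases reduce to the single test below.
    return X_next != X_hat

def state_aware_rs(t, X_next, X_prev, X_hat, p_alpha=(0.5, 0.8), rng=random):
    # proposed policy: probabilistic sampling keyed to the current state
    return rng.random() < p_alpha[X_next]
```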

IV. PRELIMINARY PERFORMANCE ANALYSIS

In this section, we analyze the performance of the state-aware randomized stationary policy in terms of the time-averaged reconstruction error and the average cost of actuation error.

A. Real-time Reconstruction Error

The real-time reconstruction error captures the discrepancy between the original source X(t) and the reconstructed source [TeX:] $$\hat{X}(t)$$, at time slot t, i.e.,

(3)
[TeX:] $$E(t)=|X(t)-\hat{X}(t)|,$$

where E(t) = 0 denotes that the system is in the sync state at time slot t, while [TeX:] $$E(t) \neq 0$$ denotes that the system is in an erroneous state. The time-averaged reconstruction error, or the probability that the system is in an erroneous state, [TeX:] $$P_E,$$ for an observation interval [1, T] with T being a large positive number, is defined as [23], [34]

(4)
[TeX:] $$P_E=\lim _{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^T \mathbb{1}(E(t) \neq 0)=\lim _{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^T \mathbb{1}(X(t) \neq \hat{X}(t)),$$

where [TeX:] $$\mathbb{1}(\cdot)$$ is the indicator function.

For a two-state DTMC information source, [TeX:] $$P_E$$ in (4) is given by

(5)
[TeX:] $$\begin{aligned} P_E & =\operatorname{Pr}[X(t)=0, \hat{X}(t)=1]+\operatorname{Pr}[X(t)=1, \hat{X}(t)=0] \\ & =\pi_{0,1}+\pi_{1,0} \end{aligned}$$

Note that [TeX:] $$\pi_{0,1} \text { and } \pi_{1,0}$$ are the probabilities obtained from the stationary distribution of the two-dimensional DTMC describing the joint status of the original and reconstructed sources, i.e., [TeX:] $$(X(t), \hat{X}(t)).$$ To derive [TeX:] $$\pi_{i, j},(i, j) \in\{0,1\},$$ we assume that when the sampler performs sampling, the transmitter sends the sample in the form of packets during the same time slot.

Lemma 1. For a two-state DTMC information source, the stationary distribution [TeX:] $$\pi_{i, j}$$ for the state-aware randomized stationary policy is given by³

³ This work can be extended to DTMC information sources with more than two states. In Appendix B, we provide an example for a three-state DTMC information source.

(6a)
[TeX:] $$\pi_{0,0}=\frac{q p_{\alpha_0^s} p_{s_0}\left[q+(1-q) p_{\alpha_1^s} p_{s_1}\right]}{(p+q) \Phi\left(p_{\alpha_0^s}, p_{\alpha_1^s}\right)},$$

(6b)
[TeX:] $$\pi_{0,1}=\frac{p q p_{\alpha_1^s} p_{s_1}\left(1-p_{\alpha_0^s} p_{s_0}\right)}{(p+q) \Phi\left(p_{\alpha_0^s}, p_{\alpha_1^s}\right)},$$

(6c)
[TeX:] $$\pi_{1,0}=\frac{p q p_{\alpha_0^s} p_{s_0}\left(1-p_{\alpha_1^s} p_{s_1}\right)}{(p+q) \Phi\left(p_{\alpha_0^s}, p_{\alpha_1^s}\right)},$$

(6d)
[TeX:] $$\pi_{1,1}=\frac{p p_{\alpha_1^s} p_{s_1}\left[p+(1-p) p_{\alpha_0^s} p_{s_0}\right]}{(p+q) \Phi\left(p_{\alpha_0^s}, p_{\alpha_1^s}\right)},$$

where

(7)
[TeX:] $$\Phi\left(p_{\alpha_0^s}, p_{\alpha_1^s}\right)=p p_{\alpha_1^s} p_{s_1}\left(1-p_{\alpha_0^s} p_{s_0}\right)+p_{\alpha_0^s} p_{s_0}\left(q+(1-q) p_{\alpha_1^s} p_{s_1}\right).$$

Proof. See Appendix A.

Using (6b) and (6c), the time-averaged reconstruction error in (4) can be calculated as

(8)
[TeX:] $$P_E=\pi_{0,1}+\pi_{1,0}=\frac{p q\left[p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}+p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\left(1-2 p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)\right]}{(p+q) \Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)},$$

where [TeX:] $$\Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)$$ is given in (7).
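As a numerical sanity check, Lemma 1 and (8) can be evaluated directly. The helper below is a minimal Python sketch with our own naming; the parameter values are illustrative.

```python
def stationary_pis(p, q, p_alpha, p_s):
    """Stationary distribution of (X, X_hat) per Lemma 1, (6a)-(6d)."""
    a0, a1 = p_alpha[0] * p_s[0], p_alpha[1] * p_s[1]  # effective update probabilities
    Phi = p * a1 * (1 - a0) + a0 * (q + (1 - q) * a1)  # (7)
    Z = (p + q) * Phi
    return (q * a0 * (q + (1 - q) * a1) / Z,   # pi_00, (6a)
            p * q * a1 * (1 - a0) / Z,         # pi_01, (6b)
            p * q * a0 * (1 - a1) / Z,         # pi_10, (6c)
            p * a1 * (p + (1 - p) * a0) / Z)   # pi_11, (6d)

pi00, pi01, pi10, pi11 = stationary_pis(0.3, 0.2, p_alpha=(0.5, 0.8), p_s=(0.6, 0.7))
P_E = pi01 + pi10   # time-averaged reconstruction error, (8)
```

The resulting P_E should match the Monte Carlo estimate produced by the simulation sketch in Section II.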

B. Cost of Actuation Error

This metric captures the significance of the error at the receiver side and considers different costs or penalties for different erroneous actions. To study the cost of actuation error, we define [TeX:] $$C_{i, j}$$ as the cost of error when the current state of the source is i and the reconstructed source is in state [TeX:] $$j \neq i .$$ It is assumed that [TeX:] $$C_{i, j}$$ does not change over time. Now, using [TeX:] $$C_{i, j},$$ the average cost of actuation error for a two-state DTMC can be calculated as follows

(9)
[TeX:] $$P_E^C=C_{0,1} \pi_{0,1}+C_{1,0} \pi_{1,0},$$

where using (6b) and (6c), we can write (9) as

(10)
[TeX:] $$\begin{aligned} & P_E^C= \\ & \quad \frac{p q\left[C_{0,1} p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)+C_{1,0} p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)\right]}{(p+q)\left[p p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)+p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\left(q+(1-q) p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)\right]} . \end{aligned}$$
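Reusing stationary_pis from the sketch above, (9) evaluates in one line; the cost values below are illustrative and mirror those used later in Section VII.

```python
C01, C10 = 1.0, 2.0                  # illustrative actuation-error costs
_, pi01, pi10, _ = stationary_pis(0.3, 0.2, p_alpha=(0.5, 0.8), p_s=(0.6, 0.7))
P_E_C = C01 * pi01 + C10 * pi10      # average cost of actuation error, (9)
```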

Remark 1. Using (39) and (41), we can prove that when [TeX:] $$\max \left\{0, T_1\right\} \leqslant p_{\alpha_1^s} \leqslant 1,$$ the state-aware randomized stationary policy has lower average cost of actuation error as compared to the semantics-aware policy for [TeX:] $$\max \left\{0, T_2\right\} \leqslant p_{\alpha_0^s} \leqslant 1,$$ where [TeX:] $$T_1 \text { and } T_2$$ are given by

(11)
[TeX:] $$\begin{aligned} T_1 & =\frac{p C_{1,0}+C_{1,0} p_{s_0}-p C_{1,0} p_{s_0}-q C_{0,1}\left(1-p_{s_0}\right)}{C_{1,0}(1-p) p_{s_0}+p C_{1,0} p_{s_1}+C_{0,1}(1-q) p_{s_1}+q C_{0,1} p_{s_0}} \\ T_2 & =p_{\alpha_1^s}\left[C_{0,1}\left(q+(1-q) p_{s_1}\right)-p C_{1,0}\left(1-p_{s_1}\right)\right] \\ & \times\left[p_{\alpha_1^s}\left(C_{1,0} p_{s_0}(1-p)+p C_{1,0} p_{s_1}+C_{0,1} p_{s_1}(1-q)+q C_{0,1} p_{s_0}\right)\right. \\ & \left.-p C_{1,0}-C_{1,0} p_{s_0}+p C_{1,0} p_{s_0}+q C_{0,1}\left(1-p_{s_0}\right)\right]^{-1} . \end{aligned}$$

Also, when [TeX:] $$0 \leqslant p_{\alpha_1^s} \leqslant \min \left\{0, T_1\right\}$$ and [TeX:] $$0 \leqslant p_{\alpha_0^s} \leqslant \min \left\{0, T_2\right\} \text {, }$$ the state-aware randomized stationary policy has lower average cost of actuation error in comparison with the semantics-aware policy.

Remark 2. We can analytically prove that for [TeX:] $$\frac{q^2 p_{\alpha_0^s} p_{s_0}}{p_{s_1}\left[1-p_{\alpha_0^s} p_{s_0}(1+q(1-q))\right]} \leqslant p_{\alpha_1^s} \leqslant 1,$$ the time-averaged reconstruction error and the average cost of actuation error are decreasing with p, when [TeX:] $$\sqrt{\frac{q p_{\alpha_0^s} p_{s_0}\left(q+(1-q) p_{\alpha_1^s} p_{s_1}\right)}{p_{\alpha_1^s} p_{s_1}\left(1-p_{\alpha_0^s} p_{s_0}\right)}}\lt p \leqslant 1.$$ Furthermore, when [TeX:] $$\frac{p^2 p_{\alpha_1^s} p_{s_1}}{p_{s_0}\left[1-p_{\alpha_1^s} p_{s_1}(1+p(1-p))\right]} \leqslant p_{\alpha_0^s} \leqslant 1 \text {, }$$ the time-averaged reconstruction error and the average cost of actuation error are decreasing with q, for [TeX:] $$\sqrt{\frac{p p_{\alpha_0^s} p_{s_0} p_{\alpha_1^s} p_{s_1}+p^2 p_{\alpha_1^s} p_{s_1}\left(1-p_{\alpha_0^s} p_{s_0}\right)}{p_{\alpha_0^s} p_{s_0}\left(1-p_{\alpha_1^s} p_{s_1}\right)}}\lt q \leqslant 1 .$$

V. JOINT TIMING AND IMPORTANCE ERROR METRICS

In this section, we consider the impact of the timing and importance of errors, and we propose an extension of the consecutive error metric, termed importance-aware consecutive error, which jointly takes into account both the timing and the importance aspects of errors.

A. Consecutive Error Metric

The consecutive error metric, first introduced in [34], quantifies the number of consecutive time slots during which the system is in an erroneous state⁴. This metric can be described by a DTMC as depicted in Fig. 3. At time slot t, [TeX:] $$C_E(t)=0$$ denotes the synced state, whereas [TeX:] $$C_E(t)=i, 1 \leqslant i \leqslant n-1,$$ denotes the number of consecutive time slots for which the system has been in an erroneous state. Furthermore, the transition probability [TeX:] $$P_{i, i+1}$$ is defined as [TeX:] $$P_{i, i+1}=\operatorname{Pr}\left[C_E(t+1)=\right.\left.i+1 \mid C_E(t)=i\right].$$ For the state-aware randomized stationary policy, this transition probability is given by

⁴ A similar metric was defined first in [36] and then in [37].

(12)
[TeX:] $$\begin{aligned} P_{i, i+1} & =\operatorname{Pr}\left[C_E(t+1)=i+1 \mid C_E(t)=i\right] \\ & =\frac{\operatorname{Pr}\left[C_E(t)=i+1\right]}{\operatorname{Pr}\left[C_E(t)=i\right]}, \quad \forall i \geqslant 0, \end{aligned}$$

Fig. 3.
DTMC describing the state of the consecutive error.

where [TeX:] $$\operatorname{Pr}\left[C_E(t)=i\right] \text { for } i=0$$ is equal to [TeX:] $$\operatorname{Pr}\left[C_E(t)=0\right]=\pi_{0,0}+\pi_{1,1} \text {, and for } i \geqslant 1 \text {, }$$ it is calculated as (see Appendix C)

(13)
[TeX:] $$\begin{aligned} & \operatorname{Pr}\left[C_E(t)=i\right] \\ & =p(1-q)^{i-1}\left(1-p_{\alpha_1^s} p_{\mathrm{s}_1}\right)^i \pi_{0,0}+q(1-p)^{i-1}\left(1-p_{\alpha_0^s} p_{\mathrm{s}_0}\right)^i \pi_{1,1}, \end{aligned}$$

where [TeX:] $$\pi_{i, j}, \forall i, j \in\{0,1\}$$ was given in Lemma 1. Now, using [TeX:] $$\operatorname{Pr}\left[C_E(t)=0\right]=\pi_{0,0}+\pi_{1,1}$$ and (13), the transition probability given in (12) can be written as

(14a)
[TeX:] $$P_{0,1}=\frac{p\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right) \pi_{0,0}+q\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right) \pi_{1,1}}{\pi_{0,0}+\pi_{1,1}},$$

(14b)
[TeX:] $$\begin{aligned} & P_{i, i+1} \\ & =\frac{p(1-q)^i\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)^{i+1} \pi_{0,0}+q(1-p)^i\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)^{i+1} \pi_{1,1}}{p(1-q)^{i-1}\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)^i \pi_{0,0}+q(1-p)^{i-1}\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)^i \pi_{1,1}}. \end{aligned}$$

Using (13), we can calculate the average consecutive error [TeX:] $$\bar{C}_E$$ as

(15)
[TeX:] $$\begin{aligned} \bar{C}_E & =\sum_{x=1}^{\infty} x \operatorname{Pr}\left[C_E(t)=x\right] \\ & =\frac{p\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right) \pi_{0,0}}{\left(q+(1-q) p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)^2}+\frac{q\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right) \pi_{1,1}}{\left(p+(1-p) p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)^2}. \end{aligned}$$

Note that the convergence conditions for the previous expression are [TeX:] $$\left|(1-p)\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)\right|\lt 1$$ and [TeX:] $$\left|(1-q)\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)\right|\lt1.$$

In the following, we define the consecutive error violation probability metric as the percentage of time during which the system remains in an erroneous state for more than n consecutive time slots. Therefore, using the expression given in (13) and Lemma 1, we can write

(16)
[TeX:] $$\begin{aligned} \operatorname{Pr}\left[C_E(t)>n\right]= & \sum_{x=n+1}^{\infty} \operatorname{Pr}\left[C_E(t)=x\right] \\ = & \frac{p q p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\left[(1-q)\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)\right]^{n+1}}{(1-q)(p+q) \Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)} \\ & +\frac{p q p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\left[(1-p)\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)\right]^{n+1}}{(1-p)(p+q) \Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)}, \end{aligned}$$

where n in (16) is finite.
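Both the average consecutive error (15) and the violation probability (16) are straightforward to evaluate numerically. The sketch below reuses stationary_pis from Section IV; the function name and the default n are our own choices, and p, q < 1 is assumed so that the geometric sums converge.

```python
def consecutive_error_stats(p, q, p_alpha, p_s, n=5):
    """Average consecutive error (15) and violation probability (16)."""
    a0, a1 = p_alpha[0] * p_s[0], p_alpha[1] * p_s[1]
    # convergence conditions stated below (15)
    assert (1 - p) * (1 - a0) < 1 and (1 - q) * (1 - a1) < 1
    pi00, pi01, pi10, pi11 = stationary_pis(p, q, p_alpha, p_s)
    C_E_bar = (p * (1 - a1) * pi00 / (q + (1 - q) * a1) ** 2
               + q * (1 - a0) * pi11 / (p + (1 - p) * a0) ** 2)      # (15)
    Phi = p * a1 * (1 - a0) + a0 * (q + (1 - q) * a1)                # (7)
    viol = (p * q * a0 * ((1 - q) * (1 - a1)) ** (n + 1) / ((1 - q) * (p + q) * Phi)
            + p * q * a1 * ((1 - p) * (1 - a0)) ** (n + 1) / ((1 - p) * (p + q) * Phi))  # (16)
    return C_E_bar, viol
```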

B. Importance-Aware Consecutive Errors

In this section, we introduce a new timing-aware error metric as a means to capture the significance of a particular erroneous action at the receiver side. To this end, we propose the importance-aware consecutive error metric, which is defined as the number of consecutive time slots that the system spends in a particular erroneous state. Here, we assume that the system performs an erroneous action when the state of the source is [TeX:] $$1(X(t)=1)$$ and the reconstructed source is in state [TeX:] $$0(\hat{X}(t)=0)$$. Now, let [TeX:] $$S(t) \neq 0$$ denote that the system is in this erroneous state at time slot t, while the synced state of the system is denoted by S(t) = 0. We also define [TeX:] $$C_S(t)$$ as the consecutive error at time slot t when the system is in this particular erroneous state. Now, we can define the state evolution of the consecutive error as follows

(17)
[TeX:] $$C_S(t+1)= \begin{cases}C_S(t)+1, & X(t+1)=1, \hat{X}(t+1)=0 \\ 0, & \text { otherwise. }\end{cases}$$

Using (17), we define the transition probability of [TeX:] $$C_S(t)$$ as

(18)
[TeX:] $$\begin{aligned} P_{i, i+1}^S & =\operatorname{Pr}\left[C_S(t+1)=i+1 \mid C_S(t)=i\right] \\ & =\frac{\operatorname{Pr}\left[C_S(t+1)=i+1\right]}{\operatorname{Pr}\left[C_S(t+1)=i\right]} . \end{aligned}$$

Now, using a procedure similar to that presented in Section V-A, one can obtain [TeX:] $$\operatorname{Pr}\left[C_S(t)=i\right]$$ as follows

(19)
[TeX:] $$\operatorname{Pr}\left[C_S(t)=i\right]=\left\{\begin{array}{l} 1-\pi_{1,0}, \quad i=0 \\ p(1-q)^{i-1}\left(1-p_{\alpha_1^s} p_{\mathrm{s}_1}\right)^i \pi_{0,0}, \quad i \geqslant 1. \end{array}\right.$$

where [TeX:] $$\pi_{0,0} \text { and } \pi_{1,0}$$ are given in Lemma 1. Now, using (19) we can write (18) as

(20)
[TeX:] $$P_{i, i+1}^S= \begin{cases}\frac{p\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right) \pi_{0,0}}{1-\pi_{1,0}}, & i=0 \\ (1-q)\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right), & i \geqslant 1 .\end{cases}$$

Remark 3. The transition probability [TeX:] $$P_{i, 0}^S(i \geqslant 0),$$ is defined as [TeX:] $$P_{i, 0}^S=1-P_{i, i+1}^S .$$

Now, using (19) and Lemma 1, the average importance-aware consecutive error, [TeX:] $$\bar{C}_S,$$ can be obtained as

(21)
[TeX:] $$\begin{aligned} \bar{C}_S & =\sum_{x=1}^{\infty} x \operatorname{Pr}\left[C_S(t)=x\right] \\ & =\frac{p q p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)}{(p+q)\left(q+(1-q) p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right) \Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)}, p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}} \neq 0, \end{aligned}$$

where [TeX:] $$\Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)$$ is given in (7).
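The closed form (21) admits a similarly direct evaluation; the helper below is our own naming and, as required by (21), assumes [TeX:] $$p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}} \neq 0.$$

```python
def avg_importance_consecutive_error(p, q, p_alpha, p_s):
    """Average importance-aware consecutive error bar{C}_S per (21),
    for the erroneous state (X, X_hat) = (1, 0)."""
    a0, a1 = p_alpha[0] * p_s[0], p_alpha[1] * p_s[1]
    Phi = p * a1 * (1 - a0) + a0 * (q + (1 - q) * a1)   # (7)
    return p * q * a0 * (1 - a1) / ((p + q) * (q + (1 - q) * a1) * Phi)
```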

Remark 4. We would like to emphasize that since the metric of importance-aware consecutive errors considers exclusively the scenario of a specific error, this metric needs to be studied in combination with another error metric, for instance, considering a constrained optimization problem or combining the metrics to form error vectors. This will become prominent in the numerical results section.

VI. OPTIMIZATION PROBLEM

In this section, our objective is to find an optimal state-aware randomized stationary sampling policy, which minimizes the average cost of actuation error subject to a time-averaged sampling cost constraint. Here, we assume that each attempted sampling has a sampling cost δ, and that the time-averaged sampling cost cannot exceed a certain threshold [TeX:] $$\delta_{\max }.$$ Therefore, the optimization problem is formulated as

(22a)
[TeX:] $$\underset{p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}}{\operatorname{minimize}} \quad P_E^C$$

(22b)
[TeX:] $$\text { subject to } \lim _{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^T \delta \mathbb{1}\left\{\alpha_t^{\mathrm{s}}=1\right\} \leqslant \delta_{\max },$$

where the constraint given in (22b) can be written as

(23)
[TeX:] $$\begin{aligned} \lim _{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^T \delta \mathbb{1}\left\{\alpha_t^{\mathrm{s}}=1\right\} & =\delta \operatorname{Pr}[X(t)=0] p_{\alpha_0^{\mathrm{s}}}+\delta \operatorname{Pr}[X(t)=1] p_{\alpha_1^{\mathrm{s}}} \\ & =\delta \frac{q p_{\alpha_0^{\mathrm{s}}}}{p+q}+\delta \frac{p p_{\alpha_1^{\mathrm{s}}}}{p+q}. \end{aligned}$$

Now, using (9) and (23), the optimization problem can be simplified as

(24a)
[TeX:] $$\operatorname{minimize}_{p_{\alpha_0^s}, p_{\alpha_1^s}} \quad \frac{p q \Psi\left(p_{\alpha_0^s}, p_{\alpha_1^{\mathrm{s}}}\right)}{(p+q) \Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)}$$

(24b)
[TeX:] $$\text { subject to } q p_{\alpha_0^{\mathrm{s}}}+p p_{\alpha_1^{\mathrm{s}}} \leqslant \eta(p+q) \text {, }$$

where [TeX:] $$\eta=\delta_{\max } / \delta, \Psi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)=C_{0,1} p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)+C_{1,0} p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right),$$ and [TeX:] $$\Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)$$ is given by (7).

To solve this optimization problem, we first note that the objective function in (24a) is decreasing with [TeX:] $$p_{\alpha_0^{\mathrm{s}}},$$ i.e., [TeX:] $$\frac{\partial P_E^C}{\partial p_{\alpha_0^{\mathrm{s}}}}\lt 0,$$ when

(25)
[TeX:] $$p_{\alpha_1^{\mathrm{s}}} \geqslant \frac{p C_{1,0}-q C_{0,1}}{p_{\mathrm{s}_1}\left(p C_{1,0}+(1-q) C_{0,1}\right)}.$$

Also, the objective function is decreasing with [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ when

(26)
[TeX:] $$p_{\alpha_0^{\mathrm{s}}} \geqslant \frac{q C_{0,1}-p C_{1,0}}{p_{\mathrm{s}_0}\left(q C_{0,1}+(1-p) C_{1,0}\right)}.$$

Based on (25) and (26), we consider two cases: one with [TeX:] $$p C_{1,0} \geqslant q C_{0,1}$$ and the other with [TeX:] $$p C_{1,0}\lt q C_{0,1}$$.

1) When [TeX:] $$p C_{1,0} \geqslant q C_{0,1}$$: In this case, we can always find a probability [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \in[0,1]$$ that satisfies the condition given in (26). Therefore, since [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \geqslant 0,$$ using (26), the objective function has its minimum value when [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ is maximized. Now, using the constraint given in (24b), the maximum value of [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ is

(27)
[TeX:] $$p_{\alpha_1^{\mathrm{s}}}=\frac{\eta(p+q)-q p_{\alpha_0^{\mathrm{s}}}}{p} .$$

Using (27), the optimization problem can be written as

(28a)
[TeX:] $$\underset{p_{\alpha_0^{\mathrm{s}}}}{\operatorname{minimize}} \frac{F}{G}$$

(28b)
[TeX:] $$\text { subject to } p_{\alpha_0^{\mathrm{s}}}^{\mathrm{LB}} \leqslant p_{\alpha_0^{\mathrm{s}}} \leqslant p_{\alpha_0^{\mathrm{s}}}^{\mathrm{UB}} \text {, }$$

where [TeX:] $$F, G, p_{\alpha_0^{\mathrm{s}}}^{\mathrm{LB}} \text {, and } p_{\alpha_0^{\mathrm{s}}}^{\mathrm{UB}}$$ are given by

(29)
[TeX:] $$\begin{aligned} F & =A_1 p_{\alpha_0^{\mathrm{s}}}^2+A_2 p_{\alpha_0^{\mathrm{s}}}+A_3, G=B_1 p_{\alpha_0^{\mathrm{s}}}^2+B_2 p_{\alpha_0^{\mathrm{s}}}+B_3 . \\ p_{\alpha_0^{\mathrm{s}}}^{\mathrm{LB}} & =\max \left\{0, \frac{\eta(p+q)-p}{q}\right\}, p_{\alpha_0^{\mathrm{s}}}^{\mathrm{UB}}=\min \left\{1, \frac{\eta(p+q)}{q}\right\}, \end{aligned}$$

where [TeX:] $$A_i \text { and } B_i, \forall i \in\{1,2,3\},$$ are given by

(30)
[TeX:] $$\begin{aligned} & A_1=p q^2 p_{\mathrm{s}_0} p_{\mathrm{s}_1}\left(C_{0,1}+C_{1,0}\right) \\ & A_2=p q\left[C_{1,0} p p_{\mathrm{s}_0}\left(1-\eta p_{\mathrm{s}_1}\right)-q C_{0,1} p_{\mathrm{s}_1}-q \eta C_{1,0} p_{\mathrm{s}_0} p_{\mathrm{s}_1}\right. \\ & \left.-\eta(p+q) C_{0,1} p_{\mathrm{s}_0} p_{\mathrm{s}_1}\right] \\ & A_3=p q(p+q) \eta C_{0,1} p_{\mathrm{s}_1} \\ & B_1=(p+q)\left[p q p_{\mathrm{s}_0} p_{\mathrm{s}_1}-q(1-q) p_{\mathrm{s}_0} p_{\mathrm{s}_1}\right] \\ & B_2=(p+q)\left[p q p_{\mathrm{s}_0}-p q p_{\mathrm{s}_1}-\eta p(p+q) p_{\mathrm{s}_0} p_{\mathrm{s}_1}\right. \\ & \left.+\eta(1-q)(p+q) p_{\mathrm{s}_0} p_{\mathrm{s}_1}\right] \\ & B_3=\eta p p_{\mathrm{s}_1}(p+q)^2 . \\ & \end{aligned}$$

To determine the value of [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ that minimizes the objective function in (28a), we need to calculate the critical points of the objective function within the interval [TeX:] $$\left[p_{\alpha_0^{\mathrm{s}}}^{\mathrm{LB}}, p_{\alpha_0^{\mathrm{s}}}^{\mathrm{UB}}\right].$$ When [TeX:] $$\left(2 A_1 B_3-2 A_3 B_1\right)^2 \geqslant 4\left(A_1 B_2-\right.\left.A_2 B_1\right)\left(A_2 B_3-A_3 B_2\right),$$ one can obtain the critical points of the objective function by setting the first derivative [TeX:] $$\frac{\partial}{\partial p_{\alpha_0^{\mathrm{s}}}}\left(\frac{F}{G}\right)$$ to zero, which yields⁵

⁵ When Δ in (32) is negative, the optimal value of [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ that minimizes the objective function in (28a) is equal to [TeX:] $$p_{\alpha_0^{\mathrm{s}}}^{\mathrm{LB}}$$ if [TeX:] $$A_1 B_2\gt A_2 B_1$$ and [TeX:] $$p_{\alpha_0^{\mathrm{s}}}^{\mathrm{UB}}$$ if [TeX:] $$A_1 B_2\lt A_2 B_1.$$

(31a)
[TeX:] $$p_{\alpha_0^{\mathrm{s}}}=\frac{2\left(A_3 B_1-A_1 B_3\right) \pm \sqrt{\Delta}}{2\left(A_1 B_2-A_2 B_1\right)}, \quad A_1 B_2 \neq A_2 B_1,$$

(31b)
[TeX:] $$p_{\alpha_0^{\mathrm{s}}}=\frac{A_3 B_2-A_2 B_3}{2\left(A_1 B_3-A_3 B_1\right)}, A_1 B_2=A_2 B_1, A_1 B_3 \neq A_3 B_1,$$

where [TeX:] $$A_i, \text { and } B_i$$ are given in (30) and Δ can be written as

(32)
[TeX:] $$\Delta=\left(2 A_1 B_3-2 A_3 B_1\right)^2-4\left(A_1 B_2-A_2 B_1\right)\left(A_2 B_3-A_3 B_2\right) .$$

Note that we consider only the values of [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ in (31) that lie within the interval [TeX:] $$\left[p_{\alpha_0^{\mathrm{s}}}^{\mathrm{LB}}, p_{\alpha_0^{\mathrm{s}}}^{\mathrm{UB}}\right].$$ Now, we evaluate the objective function at the critical points, as well as at the endpoints [TeX:] $$p_{\alpha_0^{\mathrm{s}}}^{\mathrm{LB}} \text { and }p_{\alpha_0^{\mathrm{s}}}^{\mathrm{UB}}$$. The minimum of the objective function within the given interval corresponds to the smallest of these values. After determining the value of [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ that minimizes the objective function, we can calculate [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ using the expression given in (27). We note that the values of [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ and [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ obtained by solving the optimization problem in (28) are the optimal values of the sampling probabilities when [TeX:] $$p_{\alpha_1^{\mathrm{s}}} \geqslant \frac{p C_{1,0}-q C_{0,1}}{p_{\mathrm{s}_1}\left(p C_{1,0}+(1-q) C_{0,1}\right)}.$$ Otherwise, the optimal values of the sampling probabilities [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ are given by

(33)
[TeX:] $$p_{\alpha_0^s}^*=0, \quad p_{\alpha_1^s}^*=\min \left\{1, \frac{\eta(p+q)}{p}\right\} .$$

This is because using (25), as [TeX:] $$p_{\alpha_1^{\mathrm{s}}}\lt \frac{p C_{1,0}-q C_{0,1}}{p_{\mathrm{s}_1}\left(p C_{1,0}+(1-q) C_{0,1}\right)},$$ the objective function in (24a) is increasing with [TeX:] $$p_{\alpha_0^{\mathrm{s}}}.$$ Therefore, the optimal values of the sampling probabilities are given by (33).
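The candidate-evaluation procedure just described can be summarized in a short routine; it is a sketch under our own naming that takes the coefficients [TeX:] $$A_i \text{ and } B_i$$ from (30) and the bounds from (29) as inputs, and returns the minimizer of F/G according to (31) and (32).

```python
import math

def minimize_ratio(A, B, lb, ub):
    """Minimize F/G over [lb, ub], with F = A1 x^2 + A2 x + A3 and
    G = B1 x^2 + B2 x + B3; candidates are the critical points from (31)
    plus the interval endpoints."""
    F = lambda x: (A[0] * x + A[1]) * x + A[2]
    G = lambda x: (B[0] * x + B[1]) * x + B[2]
    a = A[0] * B[1] - A[1] * B[0]        # quadratic coefficients of F'G - FG'
    b = 2 * (A[0] * B[2] - A[2] * B[0])
    c = A[1] * B[2] - A[2] * B[1]
    cands = [lb, ub]
    if a != 0:
        disc = b * b - 4 * a * c         # this is Delta in (32)
        if disc >= 0:
            cands += [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
    elif b != 0:
        cands.append(-c / b)             # the degenerate case (31b)
    cands = [x for x in cands if lb <= x <= ub]
    return min(cands, key=lambda x: F(x) / G(x))
```

When Δ < 0, only the endpoints remain as candidates, consistent with footnote 5; the same routine also covers case 2 below once H and K in (36) are expanded into quadratic coefficients.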

Remark 5. When [TeX:] $$p C_{1,0} \geqslant q C_{0,1}$$ and [TeX:] $$p_{s_1}\lt \frac{p C_{1,0}-q C_{0,1}}{p C_{1,0}+(1-q) C_{0,1}},$$ we cannot find a probability [TeX:] $$p_{\alpha_1^s} \in[0,1]$$ that satisfies the condition given in (25). Therefore, in that case, the optimal values of [TeX:] $$p_{\alpha_0^s} \text { and }p_{\alpha_1^s}$$ that minimize the objective function in (24a) are given by [TeX:] $$p_{\alpha_0^s}^*=0 \text { and } p_{\alpha_1^s}^*=\min \{1, \eta(p+q) / p\} \text {. }$$

2) When [TeX:] $$p C_{1,0}\lt q C_{0,1}$$ : In this case, since [TeX:] $$p_{\alpha_1^s} \geqslant 0,$$ using (25), as [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ increases, the objective function in (24a) decreases. Using the constraint in (24b), the maximum value of [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ is given by

(34)
[TeX:] $$p_{\alpha_0^{\mathrm{s}}}=\frac{\eta(p+q)-p p_{\alpha_1^{\mathrm{s}}}}{q} .$$

Now, using (34), the optimization problem given in (24) is simplified as

(35a)
[TeX:] $$\underset{p_{\alpha_1^{\mathrm{s}}}}{\operatorname{minimize}} \frac{H}{K}$$

(35b)
[TeX:] $$\text { subject to } p_{\alpha_1^{\mathrm{s}}}^{\mathrm{LB}} \leqslant p_{\alpha_1^{\mathrm{s}}} \leqslant p_{\alpha_1^{\mathrm{s}}}^{\mathrm{UB}} \text {, }$$

where [TeX:] $$H, K, p_{\alpha_1^{\mathrm{s}}}^{\mathrm{LB}} \text {, and } p_{\alpha_1^{\mathrm{s}}}^{\mathrm{UB}}$$ are given by

(36)
[TeX:] $$\begin{aligned} H= & p q\left[C_{1,0} p_{\mathrm{s}_0}\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)\left(p p_{\alpha_1^{\mathrm{s}}}-\eta(p+q)\right)\right. \\ & \left.-C_{0,1} p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\left(q+p p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_0}-\eta p_{\mathrm{s}_0}(p+q)\right)\right] \\ K= & (p+q)\left[\eta p_{\mathrm{s}_0}(p+q)\left(p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}(p+q-1)-q\right)\right. \\ & \left.-p p_{\alpha_1^{\mathrm{s}}}\left(q\left(p_{\mathrm{s}_1}-p_{\mathrm{s}_0}\right)+p_{\mathrm{s}_0} p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}(p+q-1)\right)\right], \end{aligned}$$

(37)
[TeX:] $$p_{\alpha_1^{\mathrm{s}}}^{\mathrm{LB}}=\max \left\{0, \frac{\eta(p+q)-q}{p}\right\}, p_{\alpha_1^{\mathrm{s}}}^{\mathrm{UB}}=\min \left\{1, \frac{\eta(p+q)}{p}\right\}.$$

Similar to the case when [TeX:] $$p C_{1,0} \geqslant q C_{0,1},$$ we can obtain [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ that minimizes the objective function in (35a) by calculating its critical points within the interval [TeX:] $$\left[p_{\alpha_1^{\mathrm{s}}}^{\mathrm{LB}}, p_{\alpha_1^{\mathrm{s}}}^{\mathrm{UB}}\right] .$$ Then, we obtain [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ using (34). We can similarly prove that when [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \geqslant \frac{q C_{0,1}-p C_{1,0}}{p_{\mathrm{s}_0}\left(q C_{0,1}+(1-p) C_{1,0}\right)}, p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ derived by solving the optimization problem in (35), are the optimal values of the sampling probabilities. Otherwise, the optimal values of the sampling probabilities are equal to [TeX:] $$p_{\alpha_0^s}^*=\min \{1, \eta(p+q) / q\} \text { and } p_{\alpha_1^s}^*=0.$$
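As a cross-check on both cases, the constrained problem (24) can also be solved by brute force; the grid search below reuses stationary_pis from Section IV and is our own illustrative implementation, not the closed-form procedure above.

```python
def solve_rsc_grid(p, q, p_s, C01, C10, eta, steps=200):
    """Grid search for (24): minimize P_E^C subject to the sampling
    cost constraint (24b)."""
    best_cost, best_pa = float("inf"), None
    for i in range(steps + 1):
        for j in range(steps + 1):
            pa0, pa1 = i / steps, j / steps
            if pa0 == pa1 == 0:                    # Phi = 0: reconstruction never updates
                continue
            if q * pa0 + p * pa1 > eta * (p + q):  # constraint (24b)
                continue
            _, pi01, pi10, _ = stationary_pis(p, q, (pa0, pa1), p_s)
            cost = C01 * pi01 + C10 * pi10         # P_E^C from (9)
            if cost < best_cost:
                best_cost, best_pa = cost, (pa0, pa1)
    return best_cost, best_pa

# illustrative setup matching the tables in Section VII: eta = 0.5, C01 = 1, C10 = 2
print(solve_rsc_grid(p=0.3, q=0.2, p_s=(0.2, 0.3), C01=1.0, C10=2.0, eta=0.5))
```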

Remark 6. In what follows, RS and RSC policies are the abbreviations for the state-aware randomized stationary policy and the state-aware randomized stationary policy in the constrained optimization problem, respectively.

VII. SIMULATION RESULTS

In this section, we validate our analytical results and evaluate the performance of the sampling policies in terms of the time-averaged reconstruction error and the average cost of actuation error under various system parameters. In the uniform policy, a sample is acquired every 5 time slots. Simulation results are obtained by averaging over [TeX:] $$10^7$$ time slots.

In Tables I through VI, we illustrate the minimum average cost of actuation error when [TeX:] $$C_{0,1}=1, C_{1,0}=2$$ under a sampling cost constraint for [TeX:] $$\eta=0.5$$ and various values of p, q, [TeX:] $$p_{\mathrm{s}_0}, \text { and } p_{\mathrm{s}_1}$$. As seen in these tables, when [TeX:] $$p C_{1,0} \geqslant q C_{0,1},$$ the average cost of actuation error attains its minimum value when [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ is greater than [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$. Otherwise, the minimum average cost of actuation error occurs for [TeX:] $$p_{\alpha_0^{\mathrm{s}}}\gt p_{\alpha_1^{\mathrm{s}}}.$$ Note also that for lower success probabilities, the minimum average cost of actuation error occurs at small values of [TeX:] $$p_{\alpha_0^s}^* \text { and }p_{\alpha_1^s}^*,$$ when [TeX:] $$p C_{1,0} \geqslant q C_{0,1}$$ and [TeX:] $$q C_{0,1}\gt p C_{1,0},$$ respectively. Furthermore, we observe that the optimal RSC policy exhibits superior performance to the semantics-aware policy under the conditions given in Remark 1, for both slowly and fast changing information sources. Otherwise, when Remark 1 is not satisfied, the semantics-aware policy performs better only when the source is slowly evolving. Note that the optimal values in red color for the semantics-aware, change-aware, and RS policies are obtained for values of p, q, [TeX:] $$p_{\mathrm{s}_0}, \text { and } p_{\mathrm{s}_1}$$ that do not satisfy the constraint requirement. This means that in the unconstrained scenario, the performance of the optimal RS policy is either better than or the same as that of the semantics-aware policy. However, in this case, the optimal solution for the RS policy is to sample and transmit in most of the time slots, resulting in an excessive number of generated samples.

TABLE I
MINIMUM AVERAGE COST OF ACTUATION ERROR FOR RSC STATE-AWARE WITH [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.2, p_{\mathrm{s}_1}=0.3,$$ AND DIFFERENT VALUES OF p AND q.
TABLE II
MINIMUM AVERAGE COST OF ACTUATION ERROR FOR [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.2, p_{\mathrm{s}_1}=0.3,$$ AND DIFFERENT VALUES OF p AND q.

The optimal values for sampling probabilities in the unconstrained scenario are shown in Tables III and VI. As observed in these Tables, for all values of p and q, we have [TeX:] $$p C_{1,0} \geqslant q C_{0,1} .$$ Consequently, the optimal value of [TeX:] $$p_{\alpha_1^s}^*$$ is 1. Furthermore, for values of p and q where [TeX:] $$p_{\mathrm{S}_1}\lt \frac{p C_{1,0}-q C_{0,1}}{p C_{1,0}+(1-q) C_{0,1}},$$ the optimal value of [TeX:] $$p_{\alpha_0^s}^*$$ is 0; otherwise, [TeX:] $$p_{\alpha_0^s}^*=1.$$ This implies that when the success probability of a state is low, the optimal solution is to refrain from sampling for the state that causes the less important error in terms of actuation, while for a higher success probability, the optimal solution is to always perform sampling.

TABLE III
MINIMUM AVERAGE COST OF ACTUATION ERROR FOR RS STATE-AWARE WITH [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.2, p_{\mathrm{s}_1}=0.3,$$ AND DIFFERENT VALUES OF p AND q.
TABLE IV
MINIMUM AVERAGE COST OF ACTUATION ERROR FOR RSC STATE-AWARE WITH [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.6, p_{\mathrm{s}_1}=0.6,$$ AND DIFFERENT VALUES OF p AND q.
TABLE V
MINIMUM AVERAGE COST OF ACTUATION ERROR FOR [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.6, p_{\mathrm{s}_1}=0.6,$$ AND DIFFERENT VALUES OF p AND q.
TABLE VI
MINIMUM AVERAGE COST OF ACTUATION ERROR FOR RS STATE-AWARE WITH [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.6, p_{\mathrm{s}_1}=0.6,$$ AND DIFFERENT VALUES OF p AND q.

The performance of the optimal state-aware randomized stationary policy in terms of the time-averaged reconstruction error as a function of η for [TeX:] $$p_{\mathrm{s}_0}=0.5, p_{\mathrm{s}_1}=0.6,$$ and different values of p and q is shown in Tables VII–X. We observe that the time-averaged reconstruction error decreases as η increases. This is because η is the threshold of the total time-averaged sampling cost; thus, a higher value of η allows higher sampling probabilities, which in turn decreases the time-averaged reconstruction error. In Tables VIII and X, q > p and [TeX:] $$p_{\mathrm{S}_0}\gt \frac{q-p}{1+q-p}.$$ Hence, the minimum time-averaged reconstruction error in the unconstrained scenario is obtained with [TeX:] $$p_{\alpha_0^s}^*=1 \text { and } p_{\alpha_1^s}^*=1.$$

TABLE VII
MINIMUM TIME-AVERAGED RECONSTRUCTION ERROR AS A FUNCTION OF η FOR RSC STATE-AWARE WITH [TeX:] $$p_{\mathrm{s}_0}=0.5, p_{\mathrm{s}_1}=0.6,$$ p = 0.2, AND q = 0.4.
TABLE VIII
MINIMUM TIME-AVERAGED RECONSTRUCTION ERROR AS A FUNCTION OF η FOR [TeX:] $$p_{\mathrm{s}_0}=0.5, p_{\mathrm{s}_1}=0.6,$$ p = 0.2, AND q = 0.4.
TABLE IX
MINIMUM TIME-AVERAGED RECONSTRUCTION ERROR AS A FUNCTION OF η FOR RSC STATE-AWARE WITH [TeX:] $$p_{\mathrm{s}_0}=0.5, p_{\mathrm{s}_1}=0.6,$$ p = 0.6, AND q = 0.7.
TABLE X
MINIMUM TIME-AVERAGED RECONSTRUCTION ERROR AS A FUNCTION OF η FOR [TeX:] $$p_{\mathrm{s}_0}=0.5, p_{\mathrm{s}_1}=0.6,$$ AND q = 0.7.

Figs. 4 and 5 show contour plots of the average consecutive error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ and [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ for p > q, considering slow and rapid changes of the source, respectively. As illustrated in Fig. 4, when the source changes slowly, the minimum average consecutive error occurs at high values of [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ and [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$. In addition, as observed in Fig. 5, when the source changes rapidly and the success probabilities are low, the average consecutive error decreases with a high value of [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ and a low value of [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$. Furthermore, when the success probabilities are high, the average consecutive error attains its minimum value as [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ increase. Also, note that these figures can be used to obtain the optimal values of the sampling probabilities. Another noteworthy result is that as the success probabilities increase, we can achieve a comparable average consecutive error with smaller sampling probabilities than in situations where the success probabilities are lower. For example, when p = 0.3 and q = 0.2, with [TeX:] $$p_{\mathrm{s}_0}=0.2 \text { and } p_{\mathrm{s}_1}=0.3,$$ the minimum average consecutive error is approximately 0.65, which is achieved by setting [TeX:] $$p_{\alpha_0^{\mathrm{s}}}=1 \text { and } p_{\alpha_1^{\mathrm{s}}}=1.$$ However, for [TeX:] $$p_{\mathrm{s}_0}=0.7 \text { and } p_{\mathrm{s}_1}=0.8,$$ a similar average consecutive error can be obtained using [TeX:] $$p_{\alpha_0^{\mathrm{s}}}=0.2 \text { and } p_{\alpha_1^{\mathrm{s}}}=1 \text {. }$$

Fig. 4.
Average consecutive error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ for a slowly changing source with p = 0.3 and q = 0.2.
Fig. 5.
Average consecutive error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ for a rapidly changing source with p = 0.8 and q = 0.1.

The average importance-aware consecutive error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ for slowly and rapidly changing information sources, and selected values of the success probabilities, is presented in Figs. 6 and 7. In this analysis, we focus on the particular erroneous state where [TeX:] $$X(t)=1 \text { and } \hat{X}(t)=0.$$ As seen in these figures, the average importance-aware consecutive error is minimized when [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ is at its maximum and [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ is at its minimum. The reason is that under poor channel conditions, successful transmission of the less important state can have a negative impact: when the source transitions to the important state, the destination may miss that transition due to a potentially unsuccessful transmission. Thus, in that case, it may be preferable to avoid sampling in the less important state. To avoid such cases, this metric has to be considered and optimized in combination with other error metrics, such as the time-averaged reconstruction error. This is because a decrease in [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ may lead to an increase in the time-averaged reconstruction error, depending on the other system parameters. This increase can have significant consequences for the overall performance of such systems. Therefore, it is crucial to consider the interplay between the average importance-aware consecutive error and the time-averaged reconstruction error. Tables XI and XII illustrate the average importance-aware consecutive error and the time-averaged reconstruction error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}}$$ and [TeX:] $$p_{\alpha_1^{\mathrm{s}}}$$ for [TeX:] $$p_{\mathrm{s}_0}=0.4, p_{\mathrm{s}_1}=0.7,$$ p = 0.5, and q = 0.9. Here, we intend to find sampling probabilities [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text{ and } p_{\alpha_1^{\mathrm{s}}}$$ that keep the average importance-aware consecutive error and the time-averaged reconstruction error below predefined thresholds [TeX:] $$I_1 \text{ and } I_2,$$ respectively. For example, we assume that the thresholds are equal to [TeX:] $$I_1=0.1 \text{ and } I_2=0.3.$$ As seen in these tables, we can achieve our purpose by setting [TeX:] $$p_{\alpha_0^{\mathrm{s}}}=1 \text { and } p_{\alpha_1^{\mathrm{s}}}=0.9 \text{ or } 1.$$ However, when [TeX:] $$I_1=0.1 \text{ and } I_2=0.2,$$ we cannot find sampling probabilities [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ that fulfill our objective. If, instead, we only considered the average importance-aware consecutive error metric, we could achieve our purpose by setting [TeX:] $$p_{\alpha_0^{\mathrm{s}}}=0.1 \text { and } p_{\alpha_1^{\mathrm{s}}}=1 \text {. }$$

Fig. 6.
Average importance-aware consecutive error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ for a slowly changing source with p = 0.1 and q = 0.2.
Fig. 7.
Average importance-aware consecutive error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ for a rapidly changing source with p = 0.8 and q = 0.9.
TABLE XI
AVERAGE IMPORTANCE-AWARE CONSECUTIVE ERROR FOR [TeX:] $$p_{\mathrm{s}_0}=0.4, p_{\mathrm{s}_1}=0.7,$$ p = 0.5, AND q = 0.9 AND SELECTED VALUES OF [TeX:] $$p_{\alpha_0^{\mathrm{S}}} \text { AND } p_{\alpha_1^{\mathrm{S}}} \text {. }$$
TABLE XII
TIME-AVERAGED RECONSTRUCTION ERROR FOR [TeX:] $$p_{\mathrm{s}_0}=0.4, p_{\mathrm{s}_1}=0.7,$$ p = 0.5, AND q = 0.9 AND SELECTED VALUES OF [TeX:] $$p_{\alpha_0^{\mathrm{S}}} \text { AND } p_{\alpha_1^{\mathrm{S}}} \text {. }$$

VIII. CONCLUSIONS

In this work, we considered a time slotted communication system where sampling and transmission over a wireless erasure channel are performed in order to track a two-state Markov process. We proposed a state-aware randomized stationary policy, which considers different sampling and success probabilities for different states of the source. We then analyzed the system performance in terms of a set of metrics, namely the time-averaged reconstruction error, the average cost of actuation error, the consecutive error, and the importance-aware consecutive error. Furthermore, we cast and solved the optimization problem of minimizing the average cost of actuation error while keeping the time-averaged sampling cost below a given threshold. Our results illustrate that, under a sampling cost constraint, the optimal state-aware randomized stationary policy outperforms other state-of-the-art policies for fast-changing sources, and can also perform well for slowly changing sources under certain conditions.

APPENDIX A

PROOF OF LEMMA 1

To obtain [TeX:] $$\pi_{i, j},$$ we depict in Fig. 8 the two-dimensional DTMC describing the joint status of the original and reconstructed sources, i.e., [TeX:] $$(X(t), \hat{X}(t)),$$ where [TeX:] $$P_0, P_1, P_2 \text {, and } P_3$$ are given by

(38)
[TeX:] $$\begin{aligned} P_0 & =\operatorname{Pr}\left[X_{t+1}=1, \hat{X}_{t+1}=0 \mid X_t=0, \hat{X}_t=0\right] \\ & =p p_{\alpha_1^s}\left(1-p_{\mathrm{s}_1}\right)+p\left(1-p_{\alpha_1^s}\right) \\ P_1 & =\operatorname{Pr}\left[X_{t+1}=0, \hat{X}_{t+1}=1 \mid X_t=0, \hat{X}_t=1\right] \\ & =(1-p) p_{\alpha_0^s}\left(1-p_{\mathrm{s}_0}\right)+(1-p)\left(1-p_{\alpha_0^{\mathrm{s}}}\right) \\ P_2 & =\operatorname{Pr}\left[X_{t+1}=1, \hat{X}_{t+1}=0 \mid X_t=1, \hat{X}_t=0\right] \\ & =(1-q) p_{\alpha_1^s}\left(1-p_{\mathrm{s}_1}\right)+(1-q)\left(1-p_{\alpha_1^s}\right) \\ P_3 & =\operatorname{Pr}\left[X_{t+1}=0, \hat{X}_{t+1}=1 \mid X_t=1, \hat{X}_t=1\right] \\ & =q p_{\alpha_0^{\mathrm{s}}}\left(1-p_{\mathrm{s}_0}\right)+q\left(1-p_{\alpha_0^{\mathrm{s}}}\right) . \end{aligned}$$

Fig. 8.
Two-dimensional DTMC describing the joint status of the original and reconstructed sources for a two-state information source model.

Now, using Fig. 8 and (38), we can obtain the stationary distribution [TeX:] $$\pi_{i, j}, \forall i, j \in\{0,1\},$$ as follows

(39)
[TeX:] $$\begin{aligned} & \pi_{0,0}=\frac{q p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\left[q+(1-q) p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right]}{(p+q) \Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)} \\ & \pi_{0,1}=\frac{p q p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)}{(p+q) \Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)} \\ & \pi_{1,0}=\frac{p q p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)}{(p+q) \Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)} \\ & \pi_{1,1}=\frac{p p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\left[p+(1-p) p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right]}{(p+q) \Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)}, \end{aligned}$$

where [TeX:] $$\Phi\left(p_{\alpha_0^{\mathrm{s}}}, p_{\alpha_1^{\mathrm{s}}}\right)$$ is given in (7). For the change-aware policy, (39) can be written as

(40)
[TeX:] $$\begin{aligned} & \pi_{0,0}=\frac{q p_{\mathrm{s}_0}}{(p+q)\left(p_{\mathrm{s}_0}+p_{\mathrm{s}_1}-p_{\mathrm{s}_0} p_{\mathrm{s}_1}\right)} \\ & \pi_{0,1}=\frac{q p_{\mathrm{s}_1}\left(1-p_{\mathrm{s}_0}\right)}{(p+q)\left(p_{\mathrm{s}_0}+p_{\mathrm{s}_1}-p_{\mathrm{s}_0} p_{\mathrm{s}_1}\right)} \\ & \pi_{1,0}=\frac{p p_{\mathrm{s}_0}\left(1-p_{\mathrm{s}_1}\right)}{(p+q)\left(p_{\mathrm{s}_0}+p_{\mathrm{s}_1}-p_{\mathrm{s}_0} p_{\mathrm{s}_1}\right)} \\ & \pi_{1,1}=\frac{p p_{\mathrm{s}_1}}{(p+q)\left(p_{\mathrm{s}_0}+p_{\mathrm{s}_1}-p_{\mathrm{s}_0} p_{\mathrm{s}_1}\right)} . \end{aligned}$$

Furthermore, for the semantics-aware policy [TeX:] $$\pi_{i, j}$$ is given by

(41)
[TeX:] $$\begin{aligned} & \pi_{0,0}=\frac{q p_{\mathrm{s}_0}\left[q+(1-q) p_{\mathrm{s}_1}\right]}{(p+q)\left[q p_{\mathrm{s}_0}+(1-q) p_{\mathrm{s}_0} p_{\mathrm{s}_1}+p p_{\mathrm{s}_1}\left(1-p_{\mathrm{s}_0}\right)\right]} \\ & \pi_{0,1}=\frac{p q p_{\mathrm{s}_1}\left(1-p_{\mathrm{s}_0}\right)}{(p+q)\left[q p_{\mathrm{s}_0}+(1-q) p_{\mathrm{s}_0} p_{\mathrm{s}_1}+p p_{\mathrm{s}_1}\left(1-p_{\mathrm{s}_0}\right)\right]} \\ & \pi_{1,0}=\frac{p q p_{\mathrm{s}_0}\left(1-p_{\mathrm{s}_1}\right)}{(p+q)\left[q p_{\mathrm{s}_0}+(1-q) p_{\mathrm{s}_0} p_{\mathrm{s}_1}+p p_{\mathrm{s}_1}\left(1-p_{\mathrm{s}_0}\right)\right]} \\ & \pi_{1,1}=\frac{p p_{\mathrm{s}_1}\left[p+(1-p) p_{\mathrm{s}_0}\right]}{(p+q)\left[q p_{\mathrm{s}_0}+(1-q) p_{\mathrm{s}_0} p_{\mathrm{s}_1}+p p_{\mathrm{s}_1}\left(1-p_{\mathrm{s}_0}\right)\right]}. \end{aligned}$$

APPENDIX B

THREE-STATE DTMC INFORMATION SOURCE

For a three-state DTMC information source model depicted in Fig. 9, the stationary distribution [TeX:] $$\pi_{i, j}$$ for the state-aware randomized stationary policy is given by

(42)
[TeX:] $$\begin{aligned} \pi_{0,0}= & \frac{1}{Z_1}\left[q ^ { 2 } ( 3 p - 1 ) p _ { \alpha _ { 0 } } p _ { \mathrm { s } _ { 0 } } p _ { \alpha _ { 1 } } p _ { \mathrm { s } _ { 1 } } \left(p\left(p_{\alpha_2} p_{\mathrm{s}_2}-1\right)-2 q\right.\right. \\ & \left.\left.+(2 q-1) p_{\alpha_2} p_{\mathrm{s}_2}\right)\right]\left[p\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)\left((q-1) p_{\alpha_2} p_{\mathrm{s}_2}-q\right)\right. \\ & \left.+\left(p_{\alpha_1} p_{\mathrm{s}_1}(q-1)-q\right)\left((2 q-1) p_{\alpha_2} p_{\mathrm{s}_2}-2 q\right)\right], \end{aligned}$$

(43)
[TeX:] $$\begin{aligned} \pi_{0,1}= & \frac{1}{Z_1}\left[p q ^ { 2 } ( 3 p - 1 ) p _ { \alpha _ { 1 } } p _ { \mathrm { s } _ { 1 } } ( p _ { \alpha _ { 0 } } p _ { \mathrm { s } _ { 0 } } - 1 ) \left(p\left(p_{\alpha_2} p_{\mathrm{s}_2}-1\right)\right.\right. \\ & \left.\left.-2 q+(2 q-1) p_{\alpha_2} p_{\mathrm{s}_2}\right)\right]\left[2 p p_{\alpha_2} p_{\mathrm{s}_2}\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)\right. \\ & \left.-q p_{\alpha_2} p_{\mathrm{s}_2}+p_{\alpha_1} p_{\mathrm{s}_1}\left(4 q p_{\alpha_2} p_{\mathrm{s}_2}-2 p_{\alpha_2} p_{\mathrm{s}_2}-3 q\right)\right], \end{aligned}$$

(44)
[TeX:] $$\pi_{0,2}=\frac{1}{Z_2}\left[p q p_{\alpha_2} p_{\mathrm{s}_2}\left(p\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)-2 q+(2 q-1) p_{\alpha_1} p_{\mathrm{s}_1}\right)\right],$$

(45)
[TeX:] $$\begin{aligned} \pi_{1,0}= & \frac{1}{Z_1}\left[p q^2(3 p-1) p_{\alpha_0} p_{\mathrm{s}_0} p_{\alpha_1} p_{\mathrm{s}_1}\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)\left(p\left(p_{\alpha_2} p_{\mathrm{s}_2}-1\right)\right.\right. \\ & \left.\left.+(2 q-1) p_{\alpha_2} p_{\mathrm{s}_2}-2 q\right)\left((3 q-1) p_{\alpha_2} p_{\mathrm{s}_2}-3 q\right)\right], \end{aligned}$$

(46)
[TeX:] $$\begin{aligned} \pi_{1,1}= & \frac{1}{Z_1}\left[p q(3 p-1) p_{\alpha_1} p_{\mathrm{s}_1}\left(2 p p_{\alpha_2} p_{\mathrm{s}_2}\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)\right.\right. \\ & \left.\left.-q p_{\alpha_2} p_{\mathrm{s}_2}+p_{\alpha_1} p_{\mathrm{s}_1}\left(4 q p_{\alpha_2} p_{\mathrm{s}_2}-2 p_{\alpha_2} p_{\mathrm{s}_2}-3 q\right)\right)\right] \\ & \times\left[p\left(p_{\alpha_0} p_{\mathrm{s}_0}-1\right)\left(-3 q+(3 q-2) p_{\alpha_2} p_{\mathrm{s}_2}\right)\right. \\ & \left.+p_{\alpha_0} p_{\mathrm{s}_0}\left(2 q+p_{\alpha_2} p_{\mathrm{s}_2}(1-2 q)\right)\right], \end{aligned}$$

(47)
[TeX:] $$\pi_{1,2}=\frac{1}{Z_2}\left[p q p_{\alpha_2} p_{\mathrm{s}_2}(1-3 p)\left(1-p_{\alpha_1} p_{\mathrm{s}_1}\right)\right],$$

(48)
[TeX:] $$\begin{aligned} \pi_{2,0}= & \frac{1}{Z_1}\left[p q^2(3 p-1) p_{\alpha_0} p_{\mathrm{s}_0} p_{\alpha_1} p_{\mathrm{s}_1}\left(p_{\alpha_2} p_{\mathrm{s}_2}-1\right)\right. \\ & \times\left(2 p\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)+(q-1) p_{\alpha_1} p_{\mathrm{s}_1}-q\right) \\ & \left.\times\left(p\left(p_{\alpha_2} p_{\mathrm{s}_2}-1\right)-2 q+(2 q-1) p_{\alpha_2} p_{\mathrm{s}_2}\right)\right], \end{aligned}$$

(49)
[TeX:] $$\begin{aligned} \pi_{2,1}= & \frac{1}{Z_1}\left[q p^2(3 p-1) p_{\alpha_1} p_{\mathrm{s}_1}\left(p_{\alpha_2} p_{\mathrm{s}_2}-1\right)\left(2 p\left(p_{\alpha_0} p_{\mathrm{s}_0}-1\right)\right.\right. \\ & \left.\left.+(q-1) p_{\alpha_0} p_{\mathrm{s}_0}-q\right)\right]\left[2 p p_{\alpha_2} p_{\mathrm{s}_2}\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)-q p_{\alpha_2} p_{\mathrm{s}_2}\right. \\ & \left.+p_{\alpha_1} p_{\mathrm{s}_1}\left(4 q p_{\alpha_2} p_{\mathrm{s}_2}-2 p_{\alpha_2} p_{\mathrm{s}_2}-3 q\right)\right], \end{aligned}$$

(50)
[TeX:] $$\begin{aligned} \pi_{2,2}= & \frac{1}{Z_2}\left[p p_{\alpha_2} p_{\mathrm{s}_2}\left(p+2 p^2\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)+p(q-3) p_{\alpha_1} p_{\mathrm{s}_1}\right.\right. \\ & \left.\left.+q-p q+p_{\alpha_1} p_{\mathrm{s}_1}(1-q)\right)\right], \end{aligned}$$

where [TeX:] $$Z_1 \text { and } Z_2$$ in (42) to (50) are given by

(51)
[TeX:] $$\begin{aligned} Z_1= & (2 p+q)\left[2 p^3 p_{\alpha_2} p_{\mathrm{s}_2}\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)\right. \\ & +q p_{\alpha_1} p_{\mathrm{s}_1}\left(2 q+p_{\alpha_2} p_{\mathrm{s}_2}(1-2 q)\right)+p^2\left(-3 q p_{\alpha_1} p_{\mathrm{s}_1}\right. \\ & \left.+p_{\alpha_2} p_{\mathrm{s}_2}\left(1-5 q+p_{\alpha_1} p_{\mathrm{s}_1}(8 q-3)\right)\right)+p\left(-2 q(q-1) p_{\alpha_2} p_{\mathrm{s}_2}\right. \\ & \left.\left.+p_{\alpha_1} p_{\mathrm{s}_1}\left(q-6 q^2+p_{\alpha_2} p_{\mathrm{s}_2}\left(1-7 q+8 q^2\right)\right)\right)\right] \\ & \times\left[2 p^2 p_{\alpha_2} p_{\mathrm{s}_2}\left(p_{\alpha_0} p_{\mathrm{s}_0}-1\right)\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)\right. \\ & +p_{\alpha_0} p_{\mathrm{s}_0}\left(p_{\alpha_1} p_{\mathrm{s}_1}(q-1)-q\right)\left(-2 q+p_{\alpha_2} p_{\mathrm{s}_2}(2 q-1)\right) \\ & +p\left(q p_{\alpha_2} p_{\mathrm{s}_2}+p_{\alpha_1} p_{\mathrm{s}_1}\left(3 q+(2-4 q) p_{\alpha_2} p_{\mathrm{s}_2}\right)\right. \\ & \left.\left.+p_{\alpha_0} p_{\mathrm{s}_0}\left(q-4 q p_{\alpha_1} p_{\mathrm{s}_1} +p_{\alpha_2} p_{\mathrm{s}_2}\left(1-2 q+p_{\alpha_1} p_{\mathrm{s}_1}(5 q-3)\right)\right)\right)\right], \end{aligned}$$

(52)
[TeX:] $$\begin{aligned} Z_2= & 2 p^3 p_{\alpha_2} p_{\mathrm{s}_2}\left(p_{\alpha_1} p_{\mathrm{s}_1}-1\right)+q p_{\alpha_1} p_{\mathrm{s}_1}\left(2 q+p_{\alpha_2} p_{\mathrm{s}_2}(1-2 q)\right) \\ & +p^2\left(-3 q p_{\alpha_1} p_{\mathrm{s}_1}+p_{\alpha_2} p_{\mathrm{s}_2}\left(1-5 q+p_{\alpha_1} p_{\mathrm{s}_1}(8 q-3)\right)\right) \\ & +p\left(2 q(1-q) p_{\alpha_2} p_{\mathrm{s}_2}+p_{\alpha_1} p_{\mathrm{s}_1}\left(q-6 q^2+p_{\alpha_2} p_{\mathrm{s}_2}\left(1-7 q+8 q^2\right)\right)\right). \end{aligned}$$
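Because the closed forms (42)-(50) are lengthy, a Monte-Carlo cross-check is useful. The sketch below is ours, for illustration only: the transition matrix of the three-state DTMC in Fig. 9 is not reproduced in this appendix, so it is taken as an input, and the matrix in the usage example is a hypothetical placeholder. The slot ordering (source transition first, then sampling and transmission) is assumed consistent with the two-state derivation in Appendix C.

```python
# Illustrative sketch (not the authors' code): empirical estimate of the joint
# stationary distribution pi_{i,j} of (source state, reconstructed state),
# against which the closed forms (42)-(50) can be checked numerically.
import numpy as np

def empirical_pi(P, pa, ps, T=200_000, seed=0):
    """P: n-by-n source transition matrix (Fig. 9 for the three-state case);
    pa[k], ps[k]: sampling and channel-success probabilities in state k."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    counts = np.zeros((n, n))
    x = x_hat = 0
    for _ in range(T):
        x = rng.choice(n, p=P[x])             # source transition
        if rng.random() < pa[x] * ps[x]:      # sample generated and delivered
            x_hat = x
        counts[x, x_hat] += 1
    return counts / T                         # entry (i, j) estimates pi_{i,j}

if __name__ == "__main__":
    # Hypothetical transition matrix, for illustration only; the actual matrix
    # of Fig. 9 depends on p and q and is not reproduced here.
    P = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.2, 0.7]])
    print(empirical_pi(P, pa=[0.5, 0.6, 0.9], ps=[0.2, 0.3, 0.4]).round(3))
```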

APPENDIX C

PROOF OF (13)

[TeX:] $$C_E(t)=i \ (i \geqslant 1)$$ means that the system was in a synced state at time slot t - i and has remained in an erroneous state from time slot t - i + 1 through time slot t. Therefore, to obtain [TeX:] $$\operatorname{Pr}\left[C_E(t)=i\right],$$ we need to calculate

(53)
[TeX:] $$\begin{aligned} \operatorname{Pr} & {\left[C_E(t)=i\right] } \\ = & \operatorname{Pr}[E(t) \neq 0, \cdots, E(t-i+1) \neq 0, E(t-i)=0] \\ = & \operatorname{Pr}[E(t) \neq 0, \cdots, E(t-i+1) \neq 0 \mid X(t-i)=0, E(t-i)=0] \\ & \times \operatorname{Pr}[X(t-i)=0, E(t-i)=0] \\ & +\operatorname{Pr}[E(t) \neq 0, \cdots, E(t-i+1) \neq 0 \mid X(t-i)=1, E(t-i)=0] \\ & \times \operatorname{Pr}[X(t-i)=1, E(t-i)=0], \end{aligned}$$

where the first conditional probability in (53) can be written as

(54)
[TeX:] $$\begin{aligned} & \operatorname{Pr}[E(t) \neq 0, \cdots, E(t-i+1) \neq 0 \mid X(t-i)=0, E(t-i)=0] \\ & =\operatorname{Pr}[X(t-i+1)=1, \hat{X}(t-i+1)=0 \mid X(t-i)=0, \hat{X}(t-i)=0] \\ & \quad \times \prod_{j=1-i}^{-1} \operatorname{Pr}[X(t+j+1)=1, \hat{X}(t+j+1)=0 \mid X(t+j)=1, \hat{X}(t+j)=0] \\ & =p(1-q)^{i-1}\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)^i, \end{aligned}$$

where for i = 1 the product over j is empty and equal to one.

Similarly, one can obtain the second conditional probability in (53) as

(55)
[TeX:] $$\begin{aligned} & \operatorname{Pr}[E(t) \neq 0, \cdots, E(t-i+1) \neq 0 \mid X(t-i)=1, E(t-i)=0] \\ & \quad=q(1-p)^{i-1}\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)^i . \end{aligned}$$

Now, using Lemma 1, (54) and (55), we can write (53) as

(56)
[TeX:] $$\begin{aligned} & \operatorname{Pr}\left[C_E(t)=i\right] \\ & =p(1-q)^{i-1}\left(1-p_{\alpha_1^{\mathrm{s}}} p_{\mathrm{s}_1}\right)^i \pi_{0,0}+q(1-p)^{i-1}\left(1-p_{\alpha_0^{\mathrm{s}}} p_{\mathrm{s}_0}\right)^i \pi_{1,1}. \end{aligned}$$
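The geometric structure of (56) can be verified by simulation. The sketch below is ours, for illustration: it tracks the consecutive error [TeX:] $$C_E(t)$$ of the two-state system under the slot ordering implied by (54) (source transition first, then sampling and transmission, then error evaluation), and compares the empirical distribution with the closed form, with [TeX:] $$\pi_{0,0} \text { and } \pi_{1,1}$$ computed from (39).

```python
# Illustrative sketch (not the authors' code): Monte-Carlo check of (56) for
# the two-state source under the state-aware randomized stationary policy.
import numpy as np

def pi_diag(p, q, pa, ps):
    """pi_{0,0} and pi_{1,1} from (39), normalizing the four numerators."""
    x, y = pa[0] * ps[0], pa[1] * ps[1]
    w = (q * x * (q + (1 - q) * y), p * q * y * (1 - x),
         p * q * x * (1 - y), p * y * (p + (1 - p) * x))
    return w[0] / sum(w), w[3] / sum(w)

def empirical_ce(p, q, pa, ps, T=1_000_000, imax=5, seed=1):
    """Empirical distribution of the consecutive error C_E(t)."""
    rng = np.random.default_rng(seed)
    x = x_hat = c = 0
    hist = np.zeros(imax + 1)
    for _ in range(T):
        if rng.random() < (p if x == 0 else q):
            x = 1 - x                        # source transition
        if rng.random() < pa[x] * ps[x]:
            x_hat = x                        # sample generated and delivered
        c = 0 if x == x_hat else c + 1       # consecutive-error count
        if c <= imax:
            hist[c] += 1
    return hist / T

if __name__ == "__main__":
    p, q, pa, ps = 0.3, 0.1, (0.5, 0.8), (0.2, 0.3)
    pi00, pi11 = pi_diag(p, q, pa, ps)
    emp = empirical_ce(p, q, pa, ps)
    for i in range(1, 6):
        th = (p * (1 - q) ** (i - 1) * (1 - pa[1] * ps[1]) ** i * pi00
              + q * (1 - p) ** (i - 1) * (1 - pa[0] * ps[0]) ** i * pi11)
        print(i, round(th, 4), round(emp[i], 4))  # closed form vs. empirical
```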

Biography

Mehrdad Salimnejad

Mehrdad Salimnejad (Student Member, IEEE) received the B.Sc. degree in Electrical Engineering from Razi University, Kermanshah, Iran, in 2012, and the M.Sc. degree in Electrical Engineering from the University of Tehran, Tehran, Iran, in 2015. From 2015 to 2022, he was a Research Engineer at the Research Center of Sharif University of Technology, working on the design and development of fifth-generation (5G) wireless cellular networks. He is currently a Ph.D. student at the Department of Computer and Information Science, Linköping University, Sweden. His interests include semantic wireless communication, age of information, and communication networks.

Biography

Marios Kountouris

Marios Kountouris (S'04-M'08-SM'15-F'23) received the diploma degree in Electrical and Computer Engineering from the National Technical University of Athens (NTUA), Greece, in 2002, and the M.S. and Ph.D. degrees in Electrical Engineering from Télécom Paris, France, in 2004 and 2008, respectively. He is currently a Professor at the Communication Systems Department, EURECOM, Sophia-Antipolis, France. Prior to his current appointment, he held positions at CentraleSupélec, France, the University of Texas at Austin, USA, Huawei Paris Research Center, France, and Yonsei University, South Korea. He received a Consolidator Grant from the European Research Council (ERC) in 2020 on goal-oriented semantic communication. He has served as an Editor for the IEEE Transactions on Wireless Communications, the IEEE Transactions on Signal Processing, and the IEEE Wireless Communication Letters. He has received several awards and distinctions, including the 2022 Blondel Medal, the 2020 IEEE ComSoc Young Author Best Paper Award, the 2016 IEEE ComSoc CTTC Early Achievement Award, the 2013 IEEE ComSoc Outstanding Young Researcher Award for the EMEA Region, the 2012 IEEE SPS Signal Processing Magazine Award, the IEEE SPAWC 2013 Best Paper Award, and the IEEE Globecom 2009 Communication Theory Best Paper Award. He is an IEEE Fellow, an AAIA Fellow, and a chartered Professional Engineer of the Technical Chamber of Greece.

Biography

Nikolaos Pappas

Nikolaos Pappas (Senior Member, IEEE) received the B.Sc. degree in Computer Science, the B.Sc. degree in Mathematics, the M.Sc. degree in Computer Science, and the Ph.D. degree in Computer Science from the University of Crete, Greece, in 2005, 2012, 2007, and 2012, respectively. From 2005 to 2012, he was a Graduate Research Assistant with the Telecommunications and Networks Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, Heraklion, Greece, and a Visiting Scholar with the Institute for Systems Research, University of Maryland at College Park, College Park, MD, USA. From 2012 to 2014, he was a Postdoctoral Researcher with the Department of Telecommunications, CentraleSupélec, Gif-sur-Yvette, France. He is currently an Associate Professor at the Department of Computer and Information Science, Linköping University, Linköping, Sweden. His main research interests include wireless communication networks, with an emphasis on semantics-aware communications, energy harvesting networks, network-level cooperation, age of information, and stochastic geometry. Dr. Pappas served as a Symposium Co-Chair of the IEEE International Conference on Communications in 2022 and the IEEE Wireless Communications and Networking Conference in 2022. He is an Area Editor of the IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY and an Expert Editor for invited papers of the IEEE COMMUNICATIONS LETTERS. He is an Editor of the IEEE TRANSACTIONS ON MACHINE LEARNING IN COMMUNICATIONS AND NETWORKING and the IEEE/KICS JOURNAL OF COMMUNICATIONS AND NETWORKS, and a Guest Editor of the IEEE NETWORK special issue on "Tactile Internet for a cyber-physical continuum" and the IEEE IoT MAGAZINE special issue on "Task-Oriented Communications and Networking for the Internet of Things". He has served as an Editor of the IEEE COMMUNICATIONS LETTERS and the IEEE TRANSACTIONS ON COMMUNICATIONS, and was a Guest Editor of the IEEE INTERNET OF THINGS JOURNAL special issue on "Age of Information and Data Semantics for Sensing, Communication and Control Co-Design in IoT".


TABLE I

MINIMUM AVERAGE COST OF ACTUATION ERROR FOR RSC STATE-AWARE WITH [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.2, p_{\mathrm{s}_1}=0.3,$$ AND DIFFERENT VALUES OF p AND q.
p q [TeX:] $$p_{\alpha_0^\mathrm{S}}^*$$ [TeX:] $$p_{\alpha_1^\mathrm{S}}^*$$ Minimum average cost of actuation error
0.1 0.01 0.083 0.542 0.091
0.3 0.1 0 0.667 0.25
0.5 0.4 0 0.9 0.444
0.7 0.8 0 1 0.533
0.9 0.95 0 1 0.513

TABLE II

MINIMUM AVERAGE COST OF ACTUATION ERROR FOR [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.2, p_{\mathrm{s}_1}=0.3,$$ AND DIFFERENT VALUES OF p AND q.
p q Semantics-aware Change-aware Uniform RSC RS
0.1 0.01 0.055 0.628 0.131 0.091 0.055
0.3 0.1 0.267 0.613 0.417 0.25 0.25
0.5 0.4 0.489 0.596 0.638 0.444 0.444
0.7 0.8 0.571 0.588 0.683 0.533 0.533
0.9 0.95 0.587 0.589 0.677 0.513 0.513

TABLE III

MINIMUM AVERAGE COST OF ACTUATION ERROR FOR RS STATE-AWARE WITH [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.2, p_{\mathrm{s}_1}=0.3,$$ AND DIFFERENT VALUES OF p AND q.
p q [TeX:] $$p_{\alpha_0^\mathrm{S}}^*$$ [TeX:] $$p_{\alpha_1^\mathrm{S}}^*$$ Minimum average cost of actuation error
0.1 0.01 1 1 0.055
0.3 0.1 0 1 0.25
0.5 0.4 0 1 0.444
0.7 0.8 0 1 0.533
0.9 0.95 0 1 0.513

TABLE IV

MINIMUM AVERAGE COST OF ACTUATION ERROR FOR RSC STATE-AWARE WITH [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.6, p_{\mathrm{s}_1}=0.6,$$ AND DIFFERENT VALUES OF p AND q.
p q [TeX:] $$p_{\alpha_0^\mathrm{S}}^*$$ [TeX:] $$p_{\alpha_1^\mathrm{S}}^*$$ Minimum average cost of actuation error
0.1 0.01 0.730 0.477 0.049
0.3 0.1 0.155 0.615 0.241
0.5 0.4 0.171 0.763 0.422
0.7 0.8 0.200 0.842 0.501
0.9 0.95 0.127 0.893 0.503

TABLE V

MINIMUM AVERAGE COST OF ACTUATION ERROR FOR [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.6, p_{\mathrm{s}_1}=0.6,$$ AND DIFFERENT VALUES OF p AND q.
p q Semantics-aware Change-aware Uniform RSC RS
0.1 0.01 0.017 0.545 0.092 0.049 0.017
0.3 0.1 0.118 0.5 0.404 0.241 0.118
0.5 0.4 0.278 0.444 0.640 0.422 0.278
0.7 0.8 0.373 0.419 0.686 0.501 0.373
0.9 0.95 0.414 0.424 0.690 0.503 0.414

TABLE VI

MINIMUM AVERAGE COST OF ACTUATION ERROR FOR RS STATE-AWARE WITH [TeX:] $$\eta=0.5, C_{0,1}=1, C_{1,0}=2, p_{\mathrm{s}_0}=0.6, p_{\mathrm{s}_1}=0.6,$$ AND DIFFERENT VALUES OF p AND q.
p q [TeX:] $$p_{\alpha_0^\mathrm{S}}^*$$ [TeX:] $$p_{\alpha_1^\mathrm{S}}^*$$ Minimum average cost of actuation error
0.1 0.01 1 1 0.017
0.3 0.1 1 1 0.118
0.5 0.4 1 1 0.278
0.7 0.8 1 1 0.373
0.9 0.95 1 1 0.414

TABLE VII

MINIMUM TIME-AVERAGED RECONSTRUCTION ERROR AS A FUNCTION OF η FOR RSC STATE-AWARE WITH [TeX:] $$p_{\mathrm{s}_0}=0.5, p_{\mathrm{s}_1}=0.6,$$ p = 0.2, AND q = 0.4.
η [TeX:] $$p_{\alpha_0^\mathrm{S}}^*$$ [TeX:] $$p_{\alpha_1^\mathrm{S}}^*$$ Minimum time-averaged reconstruction error
0.1 0.15 0 0.333
0.3 0.394 0.112 0.325
0.5 0.556 0.387 0.277
0.7 0.722 0.655 0.224
0.9 0.889 0.922 0.174

TABLE VIII

MINIMUM TIME-AVERAGED RECONSTRUCTION ERROR AS A FUNCTION OF η FOR [TeX:] $$p_{\mathrm{s}_0}=0.5, p_{\mathrm{s}_1}=0.6,$$ p = 0.2, AND q = 0.4.
η Semantics-aware Change-aware Uniform RSC RS
0.1 0.151 0.333 0.374 0.333 0.151
0.3 0.151 0.333 0.374 0.325 0.151
0.5 0.151 0.333 0.374 0.277 0.151
0.7 0.151 0.333 0.374 0.224 0.151
0.9 0.151 0.333 0.374 0.174 0.151

TABLE IX

MINIMUM TIME-AVERAGED RECONSTRUCTION ERROR FOR RSC STATE-AWARE AS A FUNCTION OF η FOR [TeX:] $$p_{\mathrm{s}_0}=0.5, p_{\mathrm{s}_1}=0.6,$$ p = 0.6, AND q = 0.7.
η [TeX:] $$p_{\alpha_0^\mathrm{S}}^*$$ [TeX:] $$p_{\alpha_1^\mathrm{S}}^*$$ Minimum time-averaged reconstruction error
0.1 0.184 0.002 0.461
0.3 0.374 0.214 0.430
0.5 0.565 0.424 0.386
0.7 0.757 0.633 0.338
0.9 0.949 0.842 0.287

TABLE X

MINIMUM TIME-AVERAGED RECONSTRUCTION ERROR AS A FUNCTION OF η FOR [TeX:] $$p_{\mathrm{s}_0}=0.5, p_{\mathrm{s}_1}=0.6,$$ p = 0.6, AND q = 0.7.
η Semantics-aware Change-aware Uniform RSC RS
0.1 0.260 0.317 0.459 0.461 0.260
0.3 0.260 0.317 0.459 0.430 0.260
0.5 0.260 0.317 0.459 0.386 0.260
0.7 0.260 0.317 0.459 0.338 0.260
0.9 0.260 0.317 0.459 0.287 0.260

TABLE XI

AVERAGE IMPORTANCE-AWARE CONSECUTIVE ERROR FOR [TeX:] $$p_{\mathrm{s}_0}=0.4, p_{\mathrm{s}_1}=0.7,$$ p = 0.5, AND q = 0.9 AND SELECTED VALUES OF [TeX:] $$p_{\alpha_0^{\mathrm{S}}} \text { AND } p_{\alpha_1^{\mathrm{S}}} \text {. }$$
[TeX:] $$p_{\alpha_0^{\mathrm{s}}} \backslash p_{\alpha_1^{\mathrm{s}}}$$ 0.1 0.3 0.5 0.7 0.9 1
0.1 0.189 0.079 0.043 0.025 0.014 0.010
0.3 0.283 0.163 0.101 0.063 0.037 0.028
0.5 0.315 0.205 0.136 0.089 0.055 0.042
0.7 0.330 0.231 0.161 0.109 0.069 0.053
0.9 0.340 0.248 0.179 0.124 0.081 0.062
1 0.343 0.256 0.186 0.131 0.086 0.066

TABLE XII

TIME-AVERAGED RECONSTRUCTION ERROR FOR [TeX:] $$p_{\mathrm{s}_0}=0.4, p_{\mathrm{s}_1}=0.7,$$ p = 0.5, AND q = 0.9 AND SELECTED VALUES OF [TeX:] $$p_{\alpha_0^{\mathrm{S}}} \text { AND } p_{\alpha_1^{\mathrm{S}}} \text {. }$$
[TeX:] $$p_{\alpha_0^{\mathrm{s}}} \backslash p_{\alpha_1^{\mathrm{s}}}$$ 0.1 0.3 0.5 0.7 0.9 1
0.1 0.480 0.545 0.566 0.577 0.584 0.586
0.3 0.398 0.443 0.466 0.480 0.490 0.494
0.5 0.371 0.391 0.403 0.411 0.418 0.420
0.7 0.358 0.358 0.359 0.360 0.361 0.362
0.9 0.349 0.337 0.328 0.321 0.314 0.312
1 0.346 0.329 0.315 0.304 0.294 0.291
FIGURE CAPTIONS (IN ORDER OF APPEARANCE)

Real-time remote tracking of an information source over a wireless channel.
DTMC describing the evolution of the information source X(t).
DTMC describing the state of the consecutive error.
Average consecutive error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ for a slowly changing source with p = 0.3 and q = 0.2.
Average consecutive error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ for a rapidly changing source with p = 0.8 and q = 0.1.
Average importance-aware consecutive error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ for a slowly changing source with p = 0.1 and q = 0.2.
Average importance-aware consecutive error as a function of [TeX:] $$p_{\alpha_0^{\mathrm{s}}} \text { and } p_{\alpha_1^{\mathrm{s}}}$$ for a rapidly changing source with p = 0.8 and q = 0.9.
Two-dimensional DTMC describing the joint status of the system regarding the current state at the original source, using a two-state information source model.