I. INTRODUCTION
Smart vehicles have emerged as an important trend in transportation, thanks to the recent advances in autonomous driving, artificial intelligence (AI), and electric vehicles. While vehicles’ computing power is consistently growing, it is not fast enough to handle diversified and heavy tasks such as processing exploding sensory data, driver monitoring, platooning, infotainment, etc. Energy consumption in computation is another critical factor due to the slow evolution of the battery technology. Therefore, efficiently processing computationintensive vehicular tasks has become one of the most important problems [1], [2].
Computation offloading is a promising solution for computation-intensive vehicular applications. By offloading local tasks to resource-rich cloud servers via wireless channels, smart vehicles can process heavy tasks remotely while reducing their energy consumption. When the cloud servers are located in public clouds (e.g., Amazon AWS, Google GCP, Microsoft Azure), however, offloading suffers from high latency in accessing the servers. Hence, alleviating communication latency in offloading has been identified as a critical issue [3], [4].
Nowadays, 5G mobile edge computing (MEC) is expected to mitigate such latency effectively, by placing cloud resources at the mobile edge (thus closer to the vehicles). The cloud resources (henceforth called the MEC server) are usually installed at a base station (BS), where the BS can be either a macro cell BS (mBS) or a small cell BS (sBS). There is a tradeoff between the mBS’s MEC server (mMEC) and the sBS’s MEC server (sMEC): the mMEC has more computing power but can be accessed only through bandwidth-limited cellular bands, while the sMEC, possibly installed at roadside units (RSUs), has limited computing power (mainly due to its small form factor) but can be accessed via millimeter wave (mmWave) links with extensive bandwidth. As a result, 5G MEC based computation offloading introduces a unique challenge to smart vehicles: whether to offload or not, and where to offload (mMEC vs. sMEC).
Existing work on vehicular task offloading mostly considered utilizing sub-6 GHz mid-bands for the vehicle-to-sBS wireless link when offloading a task to the sMEC [5]–[8]. Moreover, offloading to the mMEC is assumed to be performed by transmitting the task to the sBS first and then forwarding it to the mMEC via an optical wired backhaul between the sBS and the mBS [5], [6], [9], [10]. Such backhaul-based forwarding, however, entails a high implementation cost to deploy a backhaul for each small cell [11]–[13], and incurs additional non-trivial delays in transmitting over the backhaul, including the conversion delay between electrical and optical signals, compared to direct wireless access to sBSs and mBSs [14], [15].
Unlike the aforementioned approaches, this paper proposes a multi-interface computation offloading mechanism for 5G vehicular networks, where each vehicle is equipped with two wireless interfaces, mid-band and mmWave, and has three choices for a given task: local computing, offloading to the mMEC (henceforth called ‘mBS offloading’), and offloading to the sMEC (henceforth called ‘sBS offloading’). By offloading tasks wirelessly to both the sMEC (via mmWave) and the mMEC (via mid-band), we avoid the implementation cost and delay incurred by the wired backhaul. In fact, leveraging mid-band and mmWave together has also been adopted in 5G NR to improve network coverage and data rate for mobile devices [16], [17]. Considering that nowadays 5G cellphones can support both the mid-band and the mmWave band with manageable chipset prices, we believe our proposal of a dual-interface vehicular architecture is realistic.
Among the three offloading choices, local computing has the weakest computing power but incurs no communication overhead. If a task is offloaded, sBS offloading incurs less communication overhead than mBS offloading thanks to its larger channel bandwidth, but offers inferior computing power. Therefore, each vehicle should carefully decide which offloading option to choose so as to jointly minimize the overall latency and energy consumption of the offloaded computation.
In particular, we formulate the problem as an ordinal potential game, to realize distributed offloading decisions by individual vehicles. Some existing work in the offloading literature has proposed centralized decision making methods considering multiple MEC server locations at small and macro cells [18], [19]. In the offloading context, however, centralized decision making is known not to scale well with the number of vehicles, leading to an NP-hard problem [20]. On the other hand, distributed decision making can ease the burden of centralized control, for which game theory is a powerful tool. Hence, we adopt a game theoretic approach to model the strategic interactions between the vehicles.
Most game theoretic approaches for computation offloading in vehicular networks assumed a quasi-static scenario, where the set of vehicles and their channel conditions remain unchanged during a game period [20]–[23]. Such approaches, however, are not suitable for high-speed vehicular environments where the channel’s coherence time reaches only about tens of milliseconds [24], [25]. Therefore, any game theoretic formulation of vehicular scenarios must ensure the quasi-static assumption by designing the game to converge within the coherence time. To do so, this paper proposes slicing the global game covering the whole road into smaller games (henceforth called ‘game slices’), each covering a non-overlapping road block, so as to run those games in parallel. In addition, we develop a method of determining the optimal number of game slices that minimizes the system-wide cost while satisfying the quasi-static condition.
The contributions of the paper are four-fold as follows.
This paper proposes a novel multi-interface offloading architecture with mmWave-based small cells and a cellular mid-band-based macro cell, where a vehicle has three computation venues: local, sMEC, and mMEC. To the best of the authors’ knowledge, this is the first attempt to consider dual MEC server locations with small cells operating in mmWave bands.
The paper presents how to model inter-vehicle competition for offloading resources with a game theoretic framework. Specifically, the paper casts the problem into a potential game and proves the existence of a Nash equilibrium (NE) by deriving the corresponding potential function. In addition, we derive the lower and upper bounds of the price of anarchy (PoA) for the game.
The paper proposes game-slicing based multi-interface computation offloading (GaMiCO), a practical algorithm with which vehicles can participate in the offloading game in a distributed way. This paper analytically shows how fast the proposed iterative algorithm can converge to an NE. In addition, the dynamics of the game are shown via extensive simulations, e.g., how fast an NE is reached in practice, how good the NE is compared to the optimum, and how effective the game slicing is for minimizing the system-wide cost considering various vehicle arrival rates. Then, we compare the performance of the proposed algorithm with state-of-the-art game-theoretic offloading schemes.
The paper evaluates the impact of game slicing on GaMiCO’s game convergence time, and also shows that our optimal game slicing technique can satisfy the quasi-static assumption in practical vehicular offloading scenarios. To the best of the authors’ knowledge, our work is the first paper that slices the global game into parallel ones to guarantee the quasi-static assumption in vehicular offloading environments.
The rest of the paper is organized as follows. Section II introduces related work, and then Section III describes our system model. Section IV formulates the multi-interface computation offloading potential game, and derives its potential function. Section V elaborates on the proposed offloading game algorithm, and analyzes the convergence of the algorithm. Section VI derives the lower and upper bounds of the PoA. Section VII evaluates the performance and dynamics of the proposed game, and the paper concludes with Section VIII.
II. RELATED WORK
A 5G network is built upon a multi-tier architecture composed of relays, small cells, and macro cells, designed to enhance spectral efficiency, coverage, and interference management [26]. Among the work dealing with MEC offloading in the 5G network [13], [15], [27]–[29], some studies treated small cells just as relays between mobile users and the macro cell’s MEC server [13], [15], [27]. While [13] assumed a wireless backhaul between sBS and mBS, [15], [27] considered a wired backhaul between them. Others assumed that only small cells are equipped with MEC servers [28], [29], e.g., [28] proposing small cell cloud computing (SCC) and [29] considering the offloading decision problem to minimize per-user energy consumption. As opposed to the aforementioned works, we assume that MEC servers are placed at both sBS and mBS.
Regarding the work on dual MEC locations at small and macro cells (also referred to as the three-tier/layer architecture), [18], [19] have dealt with centralized optimal decisions by an mBS for minimizing per-user cost or maximizing per-operator revenue. As mentioned in [20]–[22], however, decentralized offloading decisions have several advantages over centralized ones, and thus we formulate the multi-interface offloading problem with game-theoretic approaches. In particular, the potential game has been considered as a technique for decentralized computation offloading. In this vein, [20] formulated decentralized computation offloading in a multi-user environment as an ordinal potential game. [21] introduced a decentralized multi-user multi-channel offloading algorithm. [22] proposed a distributed multi-channel offloading algorithm that releases the channel resources of transmission-completed tasks for transmission-incomplete tasks. The aforementioned studies, however, considered the macro cell’s MEC server as the sole offloading destination.
Computation offloading has been treated as a powerful tool for vehicular networks. [9] analyzed the characteristics and the performance of vehicular Wi-Fi offloading scenarios. [10] proposed a theoretical model to characterize the performance of mmWave in a highway vehicular network with blockages. [30] formulated an mmWave beam design problem with position-prediction based beam switching for vehicle-to-infrastructure communications. The aforementioned works, however, treated Wi-Fi access points or mmWave BSs just as relays with no computing resources. Since vehicular applications usually have stringent latency requirements, it is imperative to consider small cells as an offloading target.
In terms of dual-MEC computation offloading in vehicular networks, [5], [6] jointly dealt with offloading decision and MEC resource allocation considering the MEC servers at RSUs and at the remote cloud. While they adopted different types of game theoretic approaches for multi-vehicle offloading decisions, both assumed that offloaded tasks to the remote cloud should go through the wired backhaul between an RSU and the remote cloud, which causes non-negligible delay as discussed earlier in Section I. On the contrary, our dual-MEC architecture has an advantage in the sense that the mMEC (as an equivalent to the remote cloud) can be directly accessed by a vehicle via its own wireless interface.
Only a few works [7], [8] have considered offloading to both the sMEC and the mMEC via wireless links. [7] considered densely-deployed distributed small cells, focusing on the impact of interference and the limited resources of sBSs, and proposed a hierarchical genetic algorithm and a particle swarm optimization method to minimize the total energy consumption of all mobile devices. On the other hand, [8] studied the channel and computation resource allocation problem from the content provider’s perspective, where the potential game is utilized to maximize the number of allocated mobile devices while minimizing the system cost.
To the best of the authors’ knowledge, this paper is the first game theoretic study that tries to solve the issue of game convergence within channel coherence time in the vehicular offloading environment, by splitting the global game into smaller game slices.
III. SYSTEM MODEL
We consider a set [TeX:] $$\mathcal{N}=\{1,2, \cdots, N\}$$ of smart vehicles with computation-intensive tasks, which are geographically covered by two types of BSs: a single mBS and multiple sBSs. Each BS is equipped with its own MEC server, where the mBS’s server is superior to the sBS’s server in computation. We also assume the mBS utilizes 5G cellular spectrum under 6 GHz, whereas each sBS is a 5G mmWave BS equipped with a set of phased arrays [16], [17], [31], [32]. In addition, each smart vehicle (which is a user equipment) is assumed to be equipped with two types of network interfaces (mid-band and mmWave), through which two simultaneous wireless connections with the mBS and an sBS are maintained. Our system model is depicted in Fig. 1.
Fig. 1. Multi-interface computation offloading in 5G vehicular networks.
The mmWave spectrum, ranging from 24 GHz to 300 GHz [33], suffers from higher attenuation loss than the mid-band spectrum. To compensate for the loss, beamforming using high-gain phased array antennas has been considered for mmWave transmission. Furthermore, the array of phased arrays (APA) has been proposed to further extend the field-of-view of an mmWave BS’s coverage [34]. Assuming the hybrid beamforming architecture (i.e., multiple RF chains exist, each connected to one phased array), we can also make an mmWave sBS maintain multiple beams simultaneously, each associated with a separate vehicle.
We assume periodic offloading where the mBS periodically announces the start of a new offloading epoch to all the vehicles within its coverage.1 Then, the vehicles participate in the announced offloading game concurrently. In addition, the offloading epoch consists of four sequential steps: offloading decision (local vs. sMEC vs. mMEC), (if offloaded) uplink transmission of the task to sMEC or mMEC, offloaded computation at the MEC, and downlink transmission of the computation result to the vehicle. Note that we assume the downlink transmission time is negligible since the computation result is usually quite small [20], [21].
1 Considering that various delay-sensitive tasks generate and collect information periodically [35], [36], we assume that the game also occurs periodically. In the meantime, we reserve it as our future work to determine the optimal game period by considering practical task-related factors such as queuing, task arrival rate, etc.
We assume [TeX:] $$\mathcal{N}$$ does not change during an offloading period (i.e., a quasi-static scenario), with little loss of generality as justified below. To guarantee the quasi-static scenario, two conditions must be met — the channel state should not change until the tasks are uplink transmitted, and handover should not occur until the computed results are transmitted through the downlink — which can be satisfied as follows. First, Section VII-C will show that with our game slicing, the tasks can be uplink-transmitted within the channel coherence time of both mBS and sBS. For instance, for a vehicle moving at 80 km/h, the channel coherence time is obtained as 34.38 ms for mBS [37] and 40.95 ms for sBS [25].2 In the meantime, our slicing-based game consumes at most 32.89 ms until the completion of uplink transmission, thus less than both coherence times, while the existing game-theoretic approaches may exceed the coherence times. Second, Section VII-F will also show that only up to 1.82% of the vehicles would cross a small or macro cell’s edge until the downlink transmission completes, suggesting that the quasi-static assumption becomes valid if such cell-edge located vehicles are guided not to participate in the game but to compute locally.
2 Note that the vehicle moving at 80 km/h (equivalently 22.22 m/s) would associate with its sBS via an allocated beam for 63 ms. Specifically, assuming the beamwidth is [TeX:] $$10^{\circ}$$ for the maximum beam gain [38] and the 8 m high sBS is right above the road, the length of the road section covered by the beam becomes [TeX:] $$8 \times \tan \left(5^{\circ}\right) \times 2=1.4 \mathrm{~m}$$, and thus [TeX:] $$1.4 / 22.22 \times 10^3=63 \mathrm{~ms}$$.
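The beam-dwell arithmetic in this footnote can be reproduced directly. Below is a minimal standalone sketch (the function name is ours); it assumes the same 10° beamwidth, 8 m sBS height, and 80 km/h speed as in the text.

```python
import math

def beam_dwell_ms(height_m, beamwidth_deg, speed_kmh):
    """Time a vehicle stays inside one downward-pointing mmWave beam.

    The road section covered by the beam has length
    2 * height * tan(beamwidth / 2), assuming the sBS is mounted
    right above the road (as in the footnote)."""
    covered_m = 2.0 * height_m * math.tan(math.radians(beamwidth_deg / 2.0))
    speed_ms = speed_kmh / 3.6           # km/h -> m/s
    return covered_m / speed_ms * 1e3    # seconds -> milliseconds
```

With the footnote’s parameters, `beam_dwell_ms(8, 10, 80)` reproduces the quoted 63 ms beam association time.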
We consider that vehicle n has a task to offload which runs with input data of [TeX:] $$b_n$$ bits and requires [TeX:] $$c_n$$ CPU cycles for computation. Then, we denote by [TeX:] $$a_n \in\{0,1,2\}$$ the offloading decision of vehicle [TeX:] $$n \in\mathcal{N}$$, where [TeX:] $$a_n=0$$ means local computing, [TeX:] $$a_n=1$$ means sBS offloading, and [TeX:] $$a_n=2$$ means mBS offloading. We also denote by [TeX:] $$\boldsymbol{a}=\left(a_1, a_2, \cdots, a_N\right)$$ the decision profile of all the vehicles.
We also assume that the offloaded task’s code already exists at MEC servers, which is achievable since the set of offloadable vehicular tasks should be known in advance and thus their codes can be pre-installed at MEC servers by the offloading service provider. Regarding the communication latency between a vehicle and a BS, we consider uplink transmission time as a major factor due to potentially heavy offloading data, while assuming that downlink transmission time is negligible.
A. Game Slicing Model
As mentioned in Section I, we propose slicing one global offloading game (that covers the entire road) into parallel non-overlapping games, each called a ‘game slice’. Each slice belongs to the set [TeX:] $$\mathbb{G}=\left\{1,2, \cdots, N_{G S}\right\}$$ with [TeX:] $$N_{G S}$$ game slices. In addition, each vehicle belongs to the slice covering its current location on the road, and thus slice g includes a set of vehicles [TeX:] $$\mathcal{N}_g \text { and } \sum_{g=1}^{N_{G S}}\left|\mathcal{N}_g\right|=N.$$ We also assume that the mBS’s channel bandwidth, denoted by [TeX:] $$B_m$$,3 is distributed to the [TeX:] $$N_{G S}$$ game slices such that [TeX:] $$\sum_{g=1}^{N_{G S}} B_m^g=B_m$$, where [TeX:] $$B_m^g$$ is the macro cell channel bandwidth allocated to game slice [TeX:] $$g \in \mathbb{G}$$. Note that Section III-C will explain how [TeX:] $$B_m^g$$ is allocated to the vehicles in the slice, while Section VII will evaluate the impact of game slicing on minimizing the system-wide cost.
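As a concrete illustration of the slicing model, the sketch below partitions vehicles into equal-length road blocks and splits the mBS bandwidth across slices. Both policies are illustrative assumptions of ours: the model only requires non-overlapping slices whose bandwidth shares sum to the total, and the per-vehicle-count split shown here is just one plausible choice.

```python
def slice_vehicles(positions, road_length, n_slices):
    """Assign each vehicle index to the game slice covering its
    position, using equal-length non-overlapping road blocks
    (illustrative uniform slicing)."""
    block = road_length / n_slices
    slices = [[] for _ in range(n_slices)]
    for n, x in enumerate(positions):
        g = min(int(x // block), n_slices - 1)  # clamp the road's far edge
        slices[g].append(n)
    return slices

def split_bandwidth(B_m, slices):
    """Split the mBS bandwidth B_m across slices in proportion to the
    number of vehicles per slice, so the per-slice shares sum to B_m
    (one plausible policy; the model fixes only the sum constraint)."""
    N = sum(len(s) for s in slices)
    return [B_m * len(s) / N if N else 0.0 for s in slices]
```

For example, five vehicles on a 400 m road with four slices yield one slice holding two vehicles and thus a proportionally larger bandwidth share.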
3 [TeX:] $$B_m$$ can also be thought of as the reserved portion of the cellular bandwidth for offloading services.
Fig. 2 shows how the global game can be sliced into [TeX:] $$N_{G S}$$ game slices in a real road environment, where each red box represents the area a single game slice covers. For instance, Fig. 2(a) addresses the scenario where [TeX:] $$N_{G S}=1$$, indicating that all vehicles on the road participate in the single global game. Note that we tried to evenly distribute the traffic load over the multiple slices by making the per-slice area the same. Although this paper assumes [TeX:] $$N_{G S}$$ is a power of 2 for the ease of visualization, how the road is sliced can be determined differently by the offloading service providers.
B. Communication Model: MmWave sBS
We assume that each vehicle can be assigned its own separate line-of-sight (LoS) beam from its associated sBS by considering the following deployment scenario: 1) sBSs are installed at a high position on both sides of the road, and 2) sBSs are densely deployed (by nature) and there exist a large number of phased arrays per sBS. We believe such a scenario reflects reality to some extent, since sBSs can be installed on tall streetlamps (following the RSU deployment scenario in 3GPP TR 37.885, which considers an antenna height of 5 meters), and the number of phased arrays per mmWave BS is up to eight nowadays [39] and tends to increase. According to the experimental evidence in [40], the probability of a link suffering from LoS blockage when assuming the 5 meter sBS height is only 2% in the highway scenario. The paper also showed that more than 99% of blockages disappear by increasing the BS height to 10 meters, regardless of deployment scenarios. Based on this evidence, our orthogonal beam assignment through the LoS path seems reasonable.
By the assumption, a vehicle’s uplink transmission to its sBS is not affected by other vehicles’ transmissions, and hence the uplink data rate [TeX:] $$r_n^s$$ of vehicle n (with [TeX:] $$a_n=1$$) is given as

[TeX:] $$r_n^s=B_s \log _2\left(1+S N R_n^s\right),$$ (1)
where [TeX:] $$B_s$$ is the mmWave channel bandwidth and [TeX:] $$S N R_n^s$$ is the signal to noise ratio (SNR) measured at the sBS of vehicle n. To obtain [TeX:] $$S N R_n^s$$, we introduce the beamforming antenna gains [38] to the omnidirectional large-scale path loss model for a 28 GHz LOS channel in [41], such as
where d (m) is the distance between vehicle n and the sBS, [TeX:] $$\chi_\sigma$$ is normally distributed with zero mean and standard deviation of [TeX:] $$\sigma=3.6(\mathrm{~dB})$$, and [TeX:] $$G_{T X} \text { and } G_{R X}$$ (both in dB) are beamforming antenna gains at the transmitter and the receiver, respectively.4 Note that [TeX:] $$G_{T X} \text { and } G_{R X}$$ may depend on the mmWave beam pattern (e.g., beam width).
4 The Doppler effect has been assumed to be mitigated by utilizing Doppler shift compensation techniques as proposed in [10], [42].
C. Communication Model: Cellular mBS
We assume each vehicle desires to finish its uplink transmission to the mBS within [TeX:] $$t_d$$ ms, where [TeX:] $$t_d$$ is a design parameter set according to the target quality of service (QoS) or quality of experience (QoE) level of the network operator [43]–[45]. Furthermore, we assume ‘[TeX:] $$t_d$$ ms’ is an elastic deadline, i.e., it is a preference but not a requirement for vehicles. Then, we can calculate the necessary amount of bandwidth [TeX:] $$W_n$$ for vehicle n to complete the uplink transmission in time, such as

[TeX:] $$W_n=\frac{b_n}{t_d \log _2\left(1+S N R_n^m\right)},$$ (3)

where [TeX:] $$b_n$$ is the offloading data size and [TeX:] $$S N R_n^m$$ is the SNR observed at the mBS.5 To obtain [TeX:] $$S N R_n^m$$, we can refer to the LTE suburban/rural macro channel model [46].
5 In reality, the offloading data size also includes extra bits for constructing a 4G/5G frame, e.g., the header. Since we consider [TeX:] $$b_n$$ (e.g., uniformly distributed in [10, 30] KB) to be relatively large compared with such overhead, we ignore the extra bits in this formulation. Including them in the analysis, however, is straightforward.
Since [TeX:] $$B_m^g$$, the mBS’s channel bandwidth assigned to slice g, should be shared by the vehicles belonging to the slice, we consider proportional resource allocation where the allocated bandwidth of vehicle [TeX:] $$n \in \mathcal{N}_g$$ is given as

[TeX:] $$B_n^m\left(\boldsymbol{a}^g\right)=\frac{W_n}{\sum_{i \in \mathcal{N}_g} W_i I_{\left(a_i=2\right)}} B_m^g,$$ (4)

where [TeX:] $$I_{(x)}$$ is an indicator function that returns 1 if the event x is true and 0 otherwise. The intuition behind proportional bandwidth allocation is that offloading users are sensitive to latency, and thus it is important to be fair by providing the same level of uplink transmission delay to all macro cell users.
Finally, we calculate the uplink data rate of vehicle n as

[TeX:] $$r_n^m\left(\boldsymbol{a}^g\right)=B_n^m\left(\boldsymbol{a}^g\right) \log _2\left(1+S N R_n^m\right),$$ (5)

where [TeX:] $$\boldsymbol{a}^g$$ is the decision profile of the vehicles in slice g.
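To make the fairness intuition concrete, the sketch below computes the required bandwidth of (3) and the proportional split of (4); the function names are ours. Note that under this allocation every mBS offloader experiences the same uplink transmission time, which is exactly the fairness property stated above.

```python
import math

def required_bandwidth(b_n, t_d, snr_m):
    """W_n in (3): bandwidth needed to deliver b_n bits within t_d
    seconds at the Shannon rate W_n * log2(1 + SNR_n^m)."""
    return b_n / (t_d * math.log2(1.0 + snr_m))

def allocate_bandwidth(B_m_g, decisions, W):
    """Proportional sharing as in (4): vehicles with a_n == 2
    (mBS offloading) split the slice bandwidth B_m^g in proportion
    to their W_n; all other vehicles receive zero."""
    total = sum(W[n] for n, a in enumerate(decisions) if a == 2)
    return [B_m_g * W[n] / total if a == 2 else 0.0
            for n, a in enumerate(decisions)]
```

For example, with decisions `[2, 2, 0]` and `W = [2.0, 6.0, 5.0]`, a slice bandwidth of 8.0 is split as `[2.0, 6.0, 0.0]`: the third vehicle does not offload to the mBS and gets nothing.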
D. Computation Model: Local Computing
When a vehicle chooses local computing, there are two kinds of cost involved: computational latency and energy consumption in computation. Denoting by [TeX:] $$f_n^l$$ (cycles/second) the computing power of vehicle n, the computational latency [TeX:] $$t_n^l$$ is derived as

[TeX:] $$t_n^l=\frac{c_n}{f_n^l}.$$ (6)

Next, denoting by [TeX:] $$\epsilon_n$$ (J/cycle) the energy consumption per CPU cycle, the energy consumption [TeX:] $$e_n^l$$ is given as

[TeX:] $$e_n^l=\epsilon_n c_n.$$ (7)

Using (6) and (7), the local computing cost [TeX:] $$Z_n^l$$ is given as

[TeX:] $$Z_n^l=\lambda t_n^l+(1-\lambda) e_n^l,$$ (8)
where [TeX:] $$\lambda \in[0,1]$$ is the weighting factor between time and energy, to capture a vehicle’s preference between them.6 As [TeX:] $$\lambda$$ gets closer to 1, vehicles care more about time than energy. Note that [TeX:] $$\lambda$$ needs to be well-tuned not only to make a good balance between energy and delay, but also to compensate for their difference in magnitude. In this vein, Section VII will evaluate the influence of [TeX:] $$\lambda$$ by varying its value.
6 Before applying the weighted sum of energy and time, they can be individually normalized for unitless combining. Such an approach, however, is fundamentally the same as ours in (8), as the former is only a scaled version of the latter.
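Putting (6)–(8) together, the local computing cost reduces to a few lines. The sketch below is illustrative and the parameter values in the usage note are hypothetical.

```python
def local_cost(c_n, f_n_l, eps_n, lam):
    """Z_n^l in (8): lambda-weighted sum of local latency (6)
    and local computation energy (7)."""
    t_l = c_n / f_n_l        # (6): latency in seconds
    e_l = eps_n * c_n        # (7): energy in joules
    return lam * t_l + (1.0 - lam) * e_l
```

For instance, a task of 10^9 cycles on a 1 GHz CPU with 1 nJ/cycle and `lam = 0.5` costs 0.5 s-equivalent plus 0.5 J-equivalent; with `lam = 1` only the 1 s latency counts.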
E. Computation Model: Offloading to sBS’s MEC (sMEC)
Unlike local computing, offloading to the sBS or the mBS involves additional uplink transmission overhead in the overall cost. Using (1), we can calculate the time overhead [TeX:] $$t_n^{s, o}$$ incurred by uplink transmission to an sBS as

[TeX:] $$t_n^{s, o}=\frac{b_n}{r_n^s}.$$ (9)

Denoting by [TeX:] $$P_n^s(\mathrm{W})$$ the uplink transmit power of vehicle n to its sBS, the energy consumption [TeX:] $$e_n^{s, o}$$ due to transmission is derived as

[TeX:] $$e_n^{s, o}=P_n^s t_n^{s, o}.$$ (10)

Next, we need to calculate the computing time at the sMEC. Denoting by [TeX:] $$f_s$$ the computing power of the sMEC, the computational latency [TeX:] $$t_n^{s, c}$$ of the offloaded task is given as

[TeX:] $$t_n^{s, c}=\frac{c_n}{f_s}.$$ (11)

Finally, using (9), (10), and (11), the sBS offloading cost [TeX:] $$Z_n^s$$ is derived as

[TeX:] $$Z_n^s=\lambda\left(t_n^{s, o}+t_n^{s, c}\right)+(1-\lambda) e_n^{s, o}.$$ (12)
As previously mentioned, we have ignored the downlink transmission time for receiving the processing result of an offloaded task.
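The sBS offloading cost of (12) can likewise be assembled from (9)–(11) in a few lines; this is an illustrative sketch with hypothetical parameter values.

```python
def sbs_cost(b_n, c_n, r_s, P_s, f_s, lam):
    """Z_n^s in (12): uplink time (9), transmit energy (10), and sMEC
    computing time (11); the downlink time is ignored as in the text."""
    t_o = b_n / r_s          # (9): uplink transmission time
    e_o = P_s * t_o          # (10): transmission energy
    t_c = c_n / f_s          # (11): computing time at the sMEC
    return lam * (t_o + t_c) + (1.0 - lam) * e_o
```

For example, 10^6 bits at 10 Mb/s (0.1 s uplink), 1 W transmit power, and 10^9 cycles on a 10 GHz sMEC (0.1 s compute) give a cost of 0.15 for `lam = 0.5`.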
F. Computation Model: Offloading to mBS’s MEC (mMEC)
The derivation of the mBS offloading cost is very similar to that of sBS offloading, except that the mBS’s communication and computation parameters should be used. First, let us denote by [TeX:] $$f_m$$ the computational capability of the mMEC, and by [TeX:] $$P_n^m$$ the uplink transmit power of vehicle n to the mBS. Then, using (3) and (5), we can derive the overheads of mBS offloading similarly to the sBS case, such as

[TeX:] $$t_n^{m, o}=\frac{b_n}{r_n^m\left(\boldsymbol{a}^g\right)}, \quad e_n^{m, o}=P_n^m t_n^{m, o}, \quad t_n^{m, c}=\frac{c_n}{f_m},$$ (13)

where [TeX:] $$t_n^{m, o}$$ denotes the time overhead incurred by uplink transmission to the mBS, [TeX:] $$e_n^{m, o}$$ denotes the energy consumption due to transmission, and [TeX:] $$t_n^{m, c}$$ denotes the computational latency of the offloaded task. Finally, we can derive the mBS offloading cost [TeX:] $$Z_n^m$$ using (13), such as

[TeX:] $$Z_n^m\left(\boldsymbol{a}^g\right)=\lambda\left(t_n^{m, o}+t_n^{m, c}\right)+(1-\lambda) e_n^{m, o}.$$ (14)
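Mirroring the sBS case, the mBS cost of (14) only swaps in the shared-bandwidth uplink rate of (5). In the sketch below, `B_n` stands for the bandwidth that (4) allocates to vehicle n; the function name and sample values are illustrative.

```python
import math

def mbs_cost(b_n, c_n, B_n, snr_m, P_m, f_m, lam):
    """Z_n^m in (14), with the uplink rate of (5) computed from the
    bandwidth B_n allocated to vehicle n by (4)."""
    r_m = B_n * math.log2(1.0 + snr_m)   # (5): uplink data rate
    t_o = b_n / r_m                      # (13): uplink transmission time
    e_o = P_m * t_o                      # (13): transmission energy
    t_c = c_n / f_m                      # (13): mMEC computing time
    return lam * (t_o + t_c) + (1.0 - lam) * e_o
```

Because `B_n` shrinks as more vehicles in the slice choose mBS offloading, this cost grows with congestion, which is the coupling that drives the game in Section IV.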
IV. POTENTIAL GAME FORMULATION
In this section, we present how to formulate the distributed offloading decision making problem with game theoretic analysis. We first formulate it as a strategic game, and then show that the game is a potential game by deriving its potential function. Note that potential games are known to guarantee the existence of an NE and finite-time convergence to the NE.
By the communication model in Section III, a vehicle’s local computing cost and sBS offloading cost are not affected by other vehicles’ decisions. Therefore, each vehicle can initially compare its local computing cost with its sBS offloading cost, to decide which of the two is better (i.e., smaller). Then, the vehicle compares the chosen option (i.e., local or sBS offloading) with mBS offloading to make the final decision. As discussed earlier, the mBS offloading cost is influenced by other vehicles in the same slice due to the proportional resource allocation mechanism in (4); as more vehicles offload to the mBS, the allocated bandwidth per vehicle decreases and in turn the per-vehicle data rate is impaired, leading to an aggravated offloading cost in (14). Hence, the vehicles choosing mBS offloading should participate in the mBS offloading game until no vehicle wants to deviate from its decision. In this vein, Section V will propose an iterative decision update algorithm to guarantee distributed decision making and convergence.
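The two-step comparison above, iterated until no vehicle deviates, can be sketched as an asynchronous best-response loop. This is an illustrative sketch, not the paper’s GaMiCO algorithm of Section V: the `costs` callback (returning the cost of a candidate decision under the current profile) and the function names are hypothetical.

```python
def best_response(n, decisions, costs):
    """Vehicle n's decision update: first the profile-independent duel
    (local, a=0, vs. sBS, a=1), then the winner vs. mBS offloading
    (a=2), whose cost depends on the current profile."""
    base = min((0, 1), key=lambda a: costs(n, a, decisions))
    return min((base, 2), key=lambda a: costs(n, a, decisions))

def run_to_equilibrium(decisions, costs, max_iters=1000):
    """Asynchronous updates until no vehicle deviates, i.e., a
    pure-strategy NE of the slice game."""
    for _ in range(max_iters):
        changed = False
        for n in range(len(decisions)):
            a_new = best_response(n, decisions, costs)
            if a_new != decisions[n]:
                decisions[n] = a_new
                changed = True
        if not changed:
            break
    return decisions
```

With a toy congestion-style cost where the mBS option grows more expensive as more vehicles pick it, two vehicles typically settle into a split, one on the sBS and one on the mBS.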
A. Game Formulation as a Strategic Game
The strategic game is a non-cooperative decentralized game [47], for which we need to define the set of players, the per-player strategy, and the per-player utility (or cost) function. In game slice g, [TeX:] $$\mathcal{N}_g$$ is the set of players, and the decision of player [TeX:] $$n \in \mathcal{N}_g$$ is given as [TeX:] $$a_n \in \mathcal{A}_n^g=\{0,1,2\},$$ where [TeX:] $$\mathcal{A}_n^g$$ represents the player’s set of strategies. Then, the strategy space [TeX:] $$\mathbb{S}^g$$ is given as [TeX:] $$\mathbb{S}^g=\mathcal{A}_{\mathcal{N}_g(1)}^g \times \mathcal{A}_{\mathcal{N}_g(2)}^g \times \cdots \times \mathcal{A}_{\mathcal{N}_g\left(\left|\mathcal{N}_g\right|\right)}^g,$$ where each element of [TeX:] $$\mathbb{S}^g$$ is called a strategy profile [48]. Note that [TeX:] $$\mathcal{N}_g(k)$$ denotes the kth element of [TeX:] $$\mathcal{N}_g$$. In the sequel, we will omit the slice index g unless otherwise necessary.
Now, the per-player cost function [TeX:] $$Z_n$$ in slice g is determined as

[TeX:] $$Z_n\left(a_n, a_{-n}\right)= \begin{cases}Z_n^l, & a_n=0, \\ Z_n^s, & a_n=1, \\ Z_n^m\left(\boldsymbol{a}^g\right), & a_n=2,\end{cases}$$

where [TeX:] $$a_{-n}=\left(a_1, a_2, \cdots, a_{(n-1)}, a_{(n+1)}, \cdots, a_{\left|\mathcal{N}_g\right|}\right).$$ Note that the slice-wide cost is then obtained as [TeX:] $$\sum_{n \in \mathcal{N}_g} Z_n\left(\boldsymbol{a}^g\right),$$ while the system-wide cost is obtained as [TeX:] $$\sum_{g=1}^{N_{G S}} \sum_{n \in \mathcal{N}_g} Z_n\left(\boldsymbol{a}^g\right).$$
Next, we can formulate our game as a strategic game [TeX:] $$\mathcal{G}^g=\left[\mathcal{N}_g, \mathbb{S}^g,\left\{Z_n\right\}_{n \in \mathcal{N}_g}\right],$$ where each vehicle (i.e., a player) wants to minimize its cost given the others’ decision profile, i.e.,

[TeX:] $$\min _{a_n \in \mathcal{A}_n^g} Z_n\left(a_n, a_{-n}\right), \quad \forall n \in \mathcal{N}_g.$$

Then, the NE of the game is defined as a strategy profile from which no vehicle intends to deviate as long as others’ strategies remain unchanged [47], and the strategy profile [TeX:] $$a^*=\left(a_1^*, a_2^*, \cdots, a_{\left|\mathcal{N}_g\right|}^*\right)$$ is called a pure-strategy NE if and only if

[TeX:] $$Z_n\left(a_n^*, a_{-n}^*\right) \leq Z_n\left(a_n, a_{-n}^*\right), \quad \forall a_n \in \mathcal{A}_n^g, \forall n \in \mathcal{N}_g.$$
B. Verifying the Game to Be a Potential Game
A game is a potential game if it admits a potential function whose change, under any player’s unilateral strategy update, reflects the change in that player’s own cost. The potential game is very useful since it always ensures the existence of an NE. Moreover, any asynchronous improvement update (i.e., only one player updates, and thereby reduces its own cost, at each given decision time) always leads to convergence to an NE in finite time.
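For small games, the ordinal-potential condition can be verified by brute force. The sketch below (illustrative, with hypothetical function names) checks the sign agreement between every unilateral cost change and the corresponding potential change.

```python
from itertools import product

def is_ordinal_potential(strategies, cost, phi, n_players):
    """Return True iff sign(Z_k(a) - Z_k(a')) matches
    sign(phi(a) - phi(a')) for every player k, profile a, and
    unilateral deviation a' of player k (the condition in (18))."""
    for a in product(strategies, repeat=n_players):
        for k in range(n_players):
            for s in strategies:
                if s == a[k]:
                    continue
                a2 = a[:k] + (s,) + a[k + 1:]   # unilateral deviation
                dZ = cost(k, a) - cost(k, a2)
                dphi = phi(a) - phi(a2)
                if (dZ > 0) != (dphi > 0):
                    return False
    return True
```

For example, a one-resource congestion toy game with Rosenthal’s potential passes this check, while a constant candidate function fails it.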
Our game [TeX:] $$\mathcal{G}^g$$ becomes an ordinal potential game if and only if there exists a potential function [TeX:] $$\phi\left(\boldsymbol{a}^g\right): \mathbb{S}^g \mapsto \mathbb{R}$$ such that, [TeX:] $$\forall n \in \mathcal{N}_g$$,

[TeX:] $$Z_n\left(a_n, a_{-n}\right)-Z_n\left(a_n^{\prime}, a_{-n}\right)\gt 0 \Leftrightarrow \phi\left(a_n, a_{-n}\right)-\phi\left(a_n^{\prime}, a_{-n}\right)\gt 0.$$ (18)
To derive our game’s potential function, we first state the dynamics of the game by considering three cases: local computing vs. mBS offloading, sBS offloading vs. mBS offloading, and local computing vs. sBS offloading.
local computing [TeX:] $$\left(a_n=0\right)$$ vs. mBS offloading [TeX:] $$\left(a_n=2\right)$$: Using (8) and (14), [TeX:] $$Z_n^l\lt Z_n^m$$ leads to
where
Equation (19) states that if the threshold [TeX:] $$T_n^{(l, m)}$$ is smaller than the total bandwidth required by the vehicles choosing mBS offloading, vehicle n should prefer local computing over mBS offloading.
sBS offloading [TeX:] $$\left(a_n=1\right)$$ vs. mBS offloading [TeX:] $$\left(a_n=2\right)$$: Using (12) and (14), [TeX:] $$Z_n^s\lt Z_n^m$$ leads to
where
local computing [TeX:] $$\left(a_n=0\right)$$ vs. sBS offloading [TeX:] $$\left(a_n=1\right)$$: Using (8) and (12), [TeX:] $$Z_n^l\lt Z_n^s$$ leads to
which then becomes
by applying (20) and (22).
Now, for vehicle k in slice g, there exist six possible decision changes, from [TeX:] $$a_k \text{ to } a_k^\prime$$. Since a vehicle updates its decision only when doing so reduces its cost, (19), (21), and (24) directly give us the following necessary conditions:
Using these relationships, we can show that the multi-interface computation offloading decision game is a potential game by constructing its potential function for a given decision profile a, as stated in the following theorem.
Theorem 1: The proposed multi-interface computation offloading decision game is an ordinal potential game with a potential function [TeX:] $$\phi\left(\boldsymbol{a}^g\right)$$ given as
Proof: We need to show that (18) holds for all six possible combinations of [TeX:] $$\left(a_k, a_k^{\prime}\right).$$ To do so, we rewrite [TeX:] $$\phi\left(\boldsymbol{a}^g\right)$$ as [TeX:] $$\phi\left(a_k, a_{-k}\right),$$ and for [TeX:] $$i, j \in \mathcal{N}_g$$ we apply the three possible values of [TeX:] $$a_k$$ to (31), yielding
Then, when [TeX:] $$a_k=0 \rightarrow a_k^{\prime}=1$$ happens, it means vehicle k’s sBS offloading cost is smaller than its local computing cost, i.e., [TeX:] $$Z_k\left(a_k, a_{-k}\right)-Z_k\left(a_k^{\prime}, a_{-k}\right)\gt 0.$$ In addition, by (32) and (33), we have
where the last inequality comes from (25). As a result, it satisfies the definition of ordinal potential games in (18). Similarly, the decision update [TeX:] $$a_k=1 \rightarrow a_k^{\prime}=0$$ means [TeX:] $$Z_k\left(a_k, a_{-k}\right)-Z_k\left(a_k^{\prime}, a_{-k}\right)\gt 0 \text {, }$$ which also satisfies (18) such that
where the last inequality comes from (26).
Next, when [TeX:] $$a_k=0 \rightarrow a_k^{\prime}=2$$ happens, vehicle k’s mBS offloading cost is smaller than its local computing cost, i.e., [TeX:] $$Z_k\left(a_k, a_{-k}\right)-Z_k\left(a_k^{\prime}, a_{-k}\right)\gt 0.$$ By (32) and (34), we have
where the last inequality comes from (27), thus satisfying (18). On the other hand, when [TeX:] $$a_k=2 \rightarrow a_k^{\prime}=0$$, (18) is also satisfied since
where the last inequality comes from (28).
Finally, when [TeX:] $$a_k=1 \rightarrow a_k^{\prime}=2$$, (33) and (34) provide
where the last inequality comes from (29), leading to (18). Similarly, according to (30), [TeX:] $$a_k=2 \rightarrow a_k^{\prime}=1$$ leads to
thus satisfying (18).
As a result, all six cases satisfy (18), which completes the proof.
By the property of ordinal potential games, Theorem 1 also confirms that our game has a Nash equilibrium that can be reached in finite time via a proper iterative decision update algorithm; such an algorithm is developed in the next section.
V. GAME-SLICING BASED MULTI-INTERFACE COMPUTATION OFFLOADING ALGORITHM
One key requirement of the ordinal potential game is that decision updates must form an asynchronous improvement process to ensure finite-time convergence to an NE [47], [48]; that is, only one vehicle is allowed to update its decision at a time. To fulfill this requirement, this section proposes an iterative offloading decision update algorithm called game-slicing based multi-interface computation offloading (GaMiCO) that satisfies the asynchronous improvement property. The proposed algorithm also addresses another important issue: how each vehicle obtains the information necessary for its distributed decision update. This issue arises because the mBS communication model in (4) makes each vehicle’s uplink channel resources depend on the aggregate demand of the vehicles offloading to the mBS; hence, each vehicle needs to know the sum demand without knowing which vehicles have chosen mBS offloading.
Game-slicing based multi-interface computation offloading (GaMiCO)
Algorithm 1 provides the pseudo-code of the proposed algorithm. At first, every vehicle assumes that all vehicles in the same slice initially choose local computing, i.e., [TeX:] $$a_n=0, \forall n$$ (line 4), and then determines its initial decision (line 5).7 When the derived decision is either [TeX:] $$a_n=0$$ or [TeX:] $$a_n=1$$, the vehicle exits the algorithm and performs the corresponding operation, i.e., local computing or offloading to the sBS (lines 7 and 8). Otherwise, the vehicle participates in the mBS offloading game by sending the ‘Offloading Request’ message to the mBS, with [TeX:] $$W_n$$ enclosed (line 11). We assume this uplink transmission occurs simultaneously among the vehicles with [TeX:] $$a_n=2$$ in the same uplink subframe of 5G, with each vehicle using its own assigned physical resource block (PRB). After collecting all the requests, the mBS broadcasts the calculated sum demand to the vehicles via the ‘Offloading Response’ message (lines 12–13). The vehicles involved in the game then go through a repeated process with multiple iterations until convergence to an NE is reached (lines 10–26). Specifically, each vehicle re-calculates its best decision among the three options (line 14), and if the result is to deviate from [TeX:] $$a_n=2$$, the vehicle sends the ‘Change Decision’ message to the mBS (line 16). Again, such transmissions from multiple vehicles occur in the same uplink subframe. Then, the mBS randomly selects one vehicle among them, and broadcasts the ‘Change Approval’ message containing the chosen vehicle’s id and the sum demand updated to exclude the chosen vehicle’s [TeX:] $$W_n$$ (line 17). The selected vehicle changes its decision (line 18) while the other vehicles keep their decisions as before (line 21). When the mBS no longer receives any Change Decision message, it broadcasts the ‘Convergence’ message to announce that an NE is reached, and the game finishes (line 26).
7 Assuming [TeX:] $$a_n=0, \forall n$$ at initialization is reasonable since it gives the least mBS offloading cost (the cost grows as more vehicles join mBS offloading). That is, if vehicle n would not choose mBS offloading even when [TeX:] $$a_n=0, \forall n$$, it will never choose it in any circumstances (thus obviating the need to join the mBS offloading game).
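As a rough illustration of Algorithm 1’s message flow, the following sketch mimics the Offloading Request/Response and Change Decision/Approval exchange using hypothetical placeholder costs (fixed local and sBS costs, and an mBS cost growing linearly with the announced sum demand); the paper’s actual costs come from (8), (12), and (14), so this is a control-flow sketch only.

```python
import random

LOCAL, SBS, MBS = 0, 1, 2

class Vehicle:
    def __init__(self, local_cost, sbs_cost, mbs_unit, demand):
        self.local_cost, self.sbs_cost = local_cost, sbs_cost
        self.mbs_unit = mbs_unit    # hypothetical per-unit-demand mBS cost
        self.W = demand             # required bandwidth W_n

    def best(self, sum_demand):
        """Best of the three options, given the mBS's announced sum demand."""
        costs = {LOCAL: self.local_cost, SBS: self.sbs_cost,
                 MBS: self.mbs_unit * sum_demand}
        return min(costs, key=costs.get)

def gamico(vehicles, seed=0):
    rng = random.Random(seed)
    # Initial decision: each vehicle assumes all others compute locally,
    # so the mBS cost is evaluated with its own demand only (line 5).
    decisions = {i: v.best(v.W) for i, v in enumerate(vehicles)}
    players = {i for i, a in decisions.items() if a == MBS}   # Offloading Request
    sum_demand = sum(vehicles[i].W for i in players)          # Offloading Response
    rounds = 0
    while True:
        rounds += 1
        deviators = [i for i in players
                     if vehicles[i].best(sum_demand) != MBS]  # Change Decision
        if not deviators:
            return decisions, rounds                          # Convergence
        chosen = rng.choice(deviators)                        # Change Approval
        players.discard(chosen)
        sum_demand -= vehicles[chosen].W
        v = vehicles[chosen]
        decisions[chosen] = LOCAL if v.local_cost <= v.sbs_cost else SBS
        # all non-chosen vehicles keep their previous decisions

# Example: three identical vehicles initially all prefer the mBS, but the
# aggregated demand prices one of them out.
vehicles = [Vehicle(local_cost=10, sbs_cost=12, mbs_unit=1.0, demand=4)
            for _ in range(3)]
decisions, rounds = gamico(vehicles)
```

Note how the mBS approves only one deviation per round, which is exactly the asynchronous improvement property the potential game requires.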
A. Convergence Analysis of the Algorithm
Although a potential game ensures finite-time convergence, it is practically important to understand how fast the algorithm converges to an NE. To address this, we first investigate the time complexity of the algorithm using big-[TeX:] $$\mathcal{O}$$ notation and then analyze the upper bound of the time required for the algorithm to converge. Because the time complexity is dominated by the algorithm’s most time-consuming parts, we focus on the offloading decision process corresponding to lines 14–24. Denoting by [TeX:] $$M_g$$ the upper bound of the number of iterations until convergence, [TeX:] $$\left|\mathcal{N}_g\right|$$ vehicles perform the argmin operation for up to [TeX:] $$M_g$$ iterations in the worst case. Since the complexity of each iteration is [TeX:] $$\mathcal{O}\left(\left|\mathcal{N}_g\right|\right)$$, the time complexity of the algorithm becomes [TeX:] $$\mathcal{O}\left(M_g \times\left|\mathcal{N}_g\right|\right)$$. Next, we analyze the upper bound of the convergence time of the algorithm. Algorithm 1 requires at most [TeX:] $$\left(3+2 M_g\right)$$ TTIs (transmission time intervals), considering that each uplink/downlink transmission takes one subframe and a subframe corresponds to one TTI. More specifically, the initial exchange of Offloading Request and Offloading Response messages costs two TTIs, each iteration also needs two TTIs for the Change Decision and Change Approval messages, and the final Convergence announcement takes another TTI. Note that in 5G NR, one TTI equals 0.125 ms [49].
To derive [TeX:] $$M_g$$, we consider the upper bound of the potential function [TeX:] $$\phi\left(\boldsymbol{a}^g\right)$$ (denoted by [TeX:] $$P_{\max }$$) and the minimum amount of decrease in [TeX:] $$\phi\left(\boldsymbol{a}^g\right)$$ when a vehicle changes its decision (denoted by [TeX:] $$P_{\min }$$). Then, [TeX:] $$M_g$$ can be determined as
following the approach introduced in [21].
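As a sketch of how the bound in (41) translates into wall-clock time, assume (as in [21]-style arguments) that [TeX:] $$M_g=\left\lceil P_{\max } / P_{\min }\right\rceil$$: the potential starts no higher than [TeX:] $$P_{\max }$$ and must drop by at least [TeX:] $$P_{\min }$$ per update. The numeric values below are made-up placeholders, not the paper’s measured quantities.

```python
import math

def iteration_bound(p_max, p_min):
    # Potential drops by >= p_min per update and starts <= p_max,
    # so at most ceil(p_max / p_min) improving updates can occur.
    return math.ceil(p_max / p_min)

def convergence_time_ms(m_g, tti_ms=0.125):
    # 2 TTIs for Request/Response, 2 per iteration, 1 for Convergence.
    return (3 + 2 * m_g) * tti_ms

m_g = iteration_bound(p_max=50.0, p_min=0.5)   # 100 iterations with these placeholders
bound_ms = convergence_time_ms(m_g)            # (3 + 200) * 0.125 = 25.375 ms
```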
Let us define [TeX:] $$D_n^{\max }$$ for vehicle [TeX:] $$n \in \mathcal{N}_g$$ as
Then, by the form of [TeX:] $$\phi(\boldsymbol{a})$$ in (31), [TeX:] $$P_{\max }$$ is obtained as
Next, the definition of [TeX:] $$P_{\min }$$ implies that [TeX:] $$\phi\left(a_k, a_{-k}\right)-\phi\left(a_k^{\prime}, a_{-k}\right) \geq P_{\min }$$. To obtain [TeX:] $$P_{\min }$$, we define [TeX:] $$D_n^{\min }$$ as
Note that each term inside the min operator states the difference in the potential function due to one of the three types of decision changes, i.e., between local computing and sBS offloading, between local computing and mBS offloading, and between sBS offloading and mBS offloading, corresponding to (35) and (36), (37) and (38), and (39) and (40), respectively. Then, [TeX:] $$P_{\min }$$ is obtained as
In Section VII, we will numerically evaluate the number of iterations required for convergence in practical scenarios, showing that the algorithm converges much faster than the derived bound [TeX:] $$M_g$$ in most cases.
VI. PRICE OF ANARCHY ANALYSIS
The price of anarchy (PoA) measures how closely ‘game theoretic competition’ can approximate the performance of ‘cooperation between game players’ [50]. Following this definition, this section analyzes the efficiency of our multi-interface computation offloading game slicing by quantifying the ratio of the worst-case NE performance to the optimal performance (achieved by a centralized solution), in terms of the system-wide cost [TeX:] $$\sum_{g=1}^{N_{G S}} \sum_{n \in \mathcal{N}_g} Z_n\left(\boldsymbol{a}^g\right)$$.
Let [TeX:] $$\overline{\boldsymbol{a}}^g=\left(\bar{a}_{\mathcal{N}_g(1)}, \bar{a}_{\mathcal{N}_g(2)}, \cdots, \bar{a}_{\mathcal{N}_g\left(\left|\mathcal{N}_g\right|\right)}\right)$$ be the optimal decision profile minimizing the system-wide cost in slice g, and [TeX:] $$\hat{\boldsymbol{a}}^g=\left(\hat{a}_{\mathcal{N}_g(1)}, \hat{a}_{\mathcal{N}_g(2)}, \cdots, \hat{a}_{\mathcal{N}_g\left(\left|\mathcal{N}_g\right|\right)}\right)$$ be the worst-case NE, i.e., the NE that maximizes the system-wide cost [TeX:] $$\sum_{n \in \mathcal{N}_g} Z_n\left(\boldsymbol{a}^g\right)$$ among all NEs of the game in slice g. Then, the PoA is defined as
where a smaller PoA implies that the game performs closer to the optimum.
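The definition can be made concrete with a tiny two-player coordination game (a hypothetical example, not our offloading game), where exhaustive enumeration finds both the optimal profile and the worst-case NE:

```python
from itertools import product

# Hypothetical 2-player cost table: coordinating on 'B' is optimal,
# but coordinating on 'A' is also a (worse) Nash equilibrium.
STRATEGIES = ('A', 'B')
COSTS = {('A', 'A'): (2, 2), ('B', 'B'): (1, 1),
         ('A', 'B'): (3, 3), ('B', 'A'): (3, 3)}

def total_cost(profile):
    return sum(COSTS[profile])

def is_ne(profile):
    """NE: no player can cut its own cost by a unilateral deviation."""
    for k in range(2):
        for s in STRATEGIES:
            alt = tuple(s if i == k else profile[i] for i in range(2))
            if COSTS[alt][k] < COSTS[profile][k]:
                return False
    return True

profiles = list(product(STRATEGIES, repeat=2))
optimal = min(total_cost(p) for p in profiles)                 # (B, B): cost 2
worst_ne = max(total_cost(p) for p in profiles if is_ne(p))    # (A, A): cost 4
poa = worst_ne / optimal                                       # -> 2.0
```

A PoA of 2.0 here means selfish play can cost up to twice the cooperative optimum; the analysis below bounds the analogous ratio for our game.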
Unfortunately, it is mathematically intractable to derive an exact decision profile that achieves the worst-case NE. Moreover, it is infeasible to derive the optimal centralized solution due to the search space growing exponentially with the number of vehicles. Therefore, following the approach presented in [20], [21], we investigate the upper and lower bounds of the PoA instead, to analyze how far the performance of the proposed game can deviate from the optimum. To do so, we define two terms, [TeX:] $$Z_{n, \min }^m \text { and } Z_{n, \max }^m$$, which are the minimum and the maximum mBS offloading cost of vehicle n, respectively, provided that n chooses mBS offloading. [TeX:] $$Z_{n, \min }^m$$ is achieved when only vehicle n selects mBS offloading, so that it can fully utilize the mBS’s channel bandwidth. In such a case, by (4) and (5), the uplink data rate of vehicle n in slice g is maximized as
and thus [TeX:] $$Z_{n, \min }^m$$ is derived as
by applying [TeX:] $$r_{n, \max }^m$$ to [TeX:] $$r_n^m\left(\boldsymbol{a}^g\right)$$ in (14).
[TeX:] $$Z_{n, \max }^m$$ is obtained when all the vehicles in the same slice select mBS offloading, in which case vehicle n’s uplink data rate is minimized as
and hence [TeX:] $$Z_{n, \max }^m$$ becomes
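The two rate extremes can be sketched as follows, assuming a Shannon-capacity rate model and an equal bandwidth split among the mBS-offloading vehicles; the paper’s model in (4)–(5) may instead weight shares by each vehicle’s demand [TeX:] $$W_n$$, and the SNR and vehicle count below are hypothetical.

```python
import math

def shannon_rate_bps(bandwidth_hz, snr_linear):
    # Shannon capacity: r = B * log2(1 + SNR)
    return bandwidth_hz * math.log2(1 + snr_linear)

B_M = 100e6                                  # mBS bandwidth (Section VII value)
SNR = 10.0                                   # hypothetical 10 dB (linear 10)
N = 50                                       # hypothetical number of vehicles

r_max = shannon_rate_bps(B_M, SNR)           # alone: full bandwidth (Z_min case)
r_min = shannon_rate_bps(B_M / N, SNR)       # all N share equally (Z_max case)
```

Under these assumptions the solo rate is N times the fully-shared rate, which is exactly the gap the PoA bounds exploit.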
Now, we can show the two bounds of the PoA in the following theorem.
Theorem 2: The PoA of the proposed multi-interface computation offloading game is bounded as
Proof: Since [TeX:] $$\overline{\boldsymbol{a}}^g$$ minimizes the system-wide cost, the PoA should be lower-bounded by 1. For the upper bound of PoA, we need to find the lower bound of [TeX:] $$\sum_{n \in \mathcal{N}_g} Z_n\left(\overline{\boldsymbol{a}}^g\right)$$ and the upper bound of [TeX:] $$\sum_{n \in \mathcal{N}_g} Z_n\left(\hat{\boldsymbol{a}}^g\right)$$.
If vehicle n performs mBS offloading in the optimal solution [TeX:] $$\overline{\boldsymbol{a}}^g$$, i.e., [TeX:] $$\bar{a}_n=2$$, we have
Otherwise (i.e., [TeX:] $$\bar{a}_n=0 \text { or } 1$$), we have
So, we can conclude that
If vehicle n performs mBS offloading in the NE solution [TeX:] $$\hat{\boldsymbol{a}}^g$$, i.e., [TeX:] $$\hat{a}_n=2$$, we have
Otherwise (i.e., [TeX:] $$\hat{a}_n=0 \text { or } 1$$) we have
So, we can conclude that
Finally, we can obtain the upper bound of the PoA by applying (52) and (53) to (46).
In Section VII, we will numerically evaluate the system-wide cost achieved by the optimal decision set, to show how well the proposed game theoretic approach performs compared to the centralized optimal method.
VII. PERFORMANCE EVALUATION
This section evaluates the performance of the proposed game theoretic mechanism via numerical simulations. Fig. 3 illustrates our simulation scenario, where an mBS with a 1 km coverage radius [51] coexists with multiple sBSs deployed on both sides of a six-lane highway.8 The mBS is [TeX:] $$d_m$$ meters away from the center of the road and the inter-sBS distance is [TeX:] $$d_s$$ meters. We set [TeX:] $$d_m=500 \mathrm{~m}$$ and [TeX:] $$d_s=100 \mathrm{~m}$$ [52], deploying 32 sBSs in the whole network (16 sBSs on each side of the road).
8 We have chosen a highway scenario for the smart vehicles, since it is considered as an early-stage deployment site of autonomous driving due to less unpredictability (e.g., no pedestrians, no traffic lights).
The simulation scenario and topology.
For each sBS, we set the channel bandwidth as [TeX:] $$B_s=100 \mathrm{MHz}$$ and the transmit power as [TeX:] $$P_n^s=23 \mathrm{dBm}$$ [53].9 We also set [TeX:] $$G_{T X}=G_{R X}=24.5 \mathrm{dBi}$$ for the mmWave path loss model, utilizing a [TeX:] $$10^{\circ}$$ beamwidth configuration given by 12 × 12 antenna elements on a 66 mm × 66 mm plate [38], [54].10 For the mBS, we set the transmit power as [TeX:] $$P_n^m=23 \mathrm{dBm}.$$11 Then, we set the mBS channel bandwidth to [TeX:] $$B_m=100 \mathrm{MHz}$$ in Sections VII-A through VII-G (to observe the impact of the arrival rate on game slicing),12 while in Section VII-H we vary the mBS bandwidth to find the optimal number of game slices minimizing the system-wide cost at various vehicle arrival rates, to consider practical scenarios.
9 23 dBm is the maximum total radiated power (TRP) for power class 3.
10 Considering the form factor of a vehicle, the assumed antenna size seems practical.
11 23 dBm is the maximum uplink transmit power of power class 3 for non high power UE (HPUE).
12 We assume the mBS bandwidth is fully utilized, which is possible in suburban/rural highway scenarios considered in this paper.
For the tasks to offload, the input data size [TeX:] $$b_n$$ is uniformly distributed in [10, 30] KB and the required number of CPU cycles is [TeX:] $$c_n=2500$$ (cycles/bit). For local computing, we set [TeX:] $$\epsilon_n=1.2$$ (nJ/cycle) according to [55], [56], and set the computing power of a vehicle as [TeX:] $$f_n^l=2.5 \mathrm{GHz}.$$ For computing at MEC servers, we set the computational capability of the mMEC and the sMEC as [TeX:] $$f_m=10 \mathrm{GHz}$$ [1] and [TeX:] $$f_s=5 \mathrm{GHz},$$ respectively. In addition, considering the system-level requirements of various vehicular network applications [43]–[45], we set the elastic deadline for uplink transmission to the mBS as [TeX:] $$t_d=100 \mathrm{~ms}$$.
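As a quick sanity check of these parameters, assuming the standard cycle-count models where local computing time is [TeX:] $$c_n b_n / f_n^l$$ and energy is [TeX:] $$\epsilon_n c_n b_n$$ (and treating 1 KB as 1,000 bytes, which is an assumption here), a mid-range 20 KB task costs:

```python
# Worked example of the local-computing load under the stated parameters.
b_n_bits = 20_000 * 8      # input size: 20 KB -> 160,000 bits (1 KB = 1,000 B assumed)
c_n = 2500                 # required CPU cycles per bit
f_l = 2.5e9                # local CPU frequency: 2.5 GHz
eps = 1.2e-9               # energy per cycle: 1.2 nJ

cycles = b_n_bits * c_n    # 4.0e8 cycles in total
t_local = cycles / f_l     # local computing time: 0.16 s
e_local = cycles * eps     # local computing energy: 0.48 J
```

A 160 ms local computing time alone exceeds the 100 ms elastic deadline, which illustrates why offloading is attractive for such tasks.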
The arrival and mobility patterns of vehicles are simulated by the widely-used SUMO (Simulation of Urban MObility) simulator [57]. Specifically, we assume the vehicles have an average speed of 80 km/h while arriving at the road section in Fig. 3 with varying arrival rates of {0.2, 0.6, 1.0, 1.4, 1.8} (vehicles/sec), corresponding to an average of {21, 54, 84, 110, 137} (vehicles) on the road. For a given arrival rate, we run one hundred simulations, each with different data size realizations; in each simulation run, we wait until the road is fully occupied by vehicles and then take snapshots of vehicle locations at ten independent time moments. In a snapshot, each vehicle’s mBS and sBS uplink channels are generated by the 3GPP channel model, where a vehicle chooses the closest sBS as its sBS. Then, we run Algorithm 1 in each snapshot, and observe the average performance over one thousand snapshots (one hundred runs, each with ten snapshots).
We set the weighting factor between time and energy to 0.5, while it is varied in Section VII-E to investigate its impact on the system dynamics.
A. Impact of Game Slicing on the System-wide Cost
Fig. 4 illustrates the evolution of the system-wide cost over Algorithm 1’s iterations, for various arrival rates. Depending on the arrival rate, Fig. 5 shows that some values of [TeX:] $$N_{GS}$$ could make the game convergence plus uplink transmission time grow beyond the channel coherence time,13 thus violating the quasi-static assumption in Section III (i.e., game convergence and uplink transmission must be completed within the channel coherence time). Therefore, we have excluded such cases from the corresponding subfigures of Fig. 4 (e.g., (d) and (e)). We then analyze the effect of game slicing on the system-wide cost. When the arrival rate is 0.2, no slicing (i.e., [TeX:] $$N_{GS}=1$$) minimizes the system-wide cost. As the arrival rate gradually increases, however, the cost-minimizing number of game slices [TeX:] $$N_{GS}$$ varies. For the arrival rates {0.6, 1.0, 1.4, 1.8} (corresponding to a total of {54, 84, 110, 137} vehicles, respectively), the optimal [TeX:] $$N_{GS}$$ is found to be {1, 8, 16, 32}. In conclusion, unlike the existing game theoretic work without game slicing, our approach of slicing the game according to the traffic volume is more beneficial in minimizing the system-wide cost. In Section VII-D, we will compare the performance of our algorithm with other state-of-the-art game-theoretic offloading schemes. Then in Section VII-G, we will further investigate the impacts of game slicing on the cost and PoA.
13 Since mBS’s channel coherence time (34.38 ms) is tighter than sBS’s (40.95 ms), we have considered mBS’s channel coherence time.
The evolution of system-wide cost with varying arrival rate.
The time for tasks to be uplink transmitted after the game converged for different [TeX:] $$N_{GS}$$ with varying arrival rate.
B. Convergence Time to a NE with Varying Arrival Rate
In every case of Fig. 4, the system-wide cost tends to decrease with iterations, eventually converging to a minimal value once an NE is reached. For each arrival rate in Fig. 4, we consider the optimal number of game slices minimizing the system-wide cost. The NE in each such case is reached after {2, 13, 14, 16, 12} iterations, corresponding to the five arrival rate values. Since the algorithm consumes (3 + 2 · (the number of iterations until convergence)) TTIs, as derived in Section V-A, the observed iterations correspond to {0.88, 3.63, 3.88, 4.38, 3.38} ms, respectively. On the contrary, in the legacy game theoretic approaches without game slicing (i.e., [TeX:] $$N_{GS}=1$$), the convergence time worsens to {0.88, 3.63, 12.38, 23.63, 32.63} ms for those arrival rates.
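These times follow directly from the TTI accounting in Section V-A; a quick cross-check:

```python
# Cross-check of the reported convergence times: (3 + 2k) TTIs at
# 0.125 ms per TTI, for the observed iteration counts k.
TTI_MS = 0.125
iterations = [2, 13, 14, 16, 12]
times_ms = [(3 + 2 * k) * TTI_MS for k in iterations]
# -> [0.875, 3.625, 3.875, 4.375, 3.375] ms, which round to the
#    {0.88, 3.63, 3.88, 4.38, 3.38} ms quoted above.
```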
We then compare the observed number of iterations until convergence with [TeX:] $$M_g$$, the upper bound of the number of decision iterations required for convergence in (41). For the five arrival rates considered, [TeX:] $$M_g$$ is calculated as {17, 50, 74, 101, 246} iterations, respectively; hence the proposed algorithm converges to an NE much faster than the maximum possible number of iterations predicted by [TeX:] $$M_g$$.
Moreover, Fig. 6 presents the total number of vehicles on the road, the number of vehicles offloading to mBS at NE, and the number of vehicles offloading to sBS at NE, each obtained by averaging over ten snapshots. Naturally, all three numbers tend to increase with arrival rate, while the number of ‘mBS offloading’ vehicles grows more slowly than that of ‘sBS offloading’ vehicles, gradually saturating due to the bandwidth competition in mBS’s uplink channel.
The average number of vehicles with varying arrival rate.
C. Impact of Game Slicing on the Quasi-static Assumption
In this simulation, we show the impact of game slicing on ensuring the quasi-static assumption. Figs. 7 and 8 show the combined time taken for the game to converge and for the tasks to be uplink-transmitted afterwards, with and without game slicing, respectively. Fig. 7 shows that with game slicing, it takes at most 32.89 ms even in the worst case, which is less than the channel coherence time. On the contrary, Fig. 8 shows that without game slicing, it may exceed the channel coherence time as the arrival rate increases. In addition, the combined time shows a large variance among the vehicles in Fig. 7, whereas the variance is negligible in Fig. 8. This is because vehicles belonging to different slices experience different game convergence and uplink transmission times, while vehicles belonging to the same game slice share the same game convergence time and have similar uplink transmission times given by (4).14 In conclusion, with varying arrival rate, the quasi-static assumption can be guaranteed only by applying our game slicing method.
14 Note that in Fig. 8, there is only one slice – the single global game.
The combined time taken for the game to converge and the tasks to be uplink-transmitted, with game slicing.
The combined time taken for the game to converge and the tasks to be uplink-transmitted, without game slicing.
D. Comparison with Other Baseline Offloading Schemes
We evaluate our GaMiCO in comparison with the following baseline offloading schemes, in terms of the system-wide cost and the game convergence time.
Local computing only (LCO): An algorithm where vehicles always compute their tasks locally.
Distributed computation offloading algorithm (DCO): The algorithm in [21] that addresses the problem of making decisions between local computing and mBS offloading, considering how multiple mobile users can efficiently achieve wireless access coordination.
Distributed multi-channel computation offloading algorithm (DMCO): The algorithm in [22] that proposes offloading decision-making between local computing and mBS offloading, releasing the channel resources of transmission-completed tasks for transmission-incomplete tasks.
Regarding the system-wide cost, GaMiCO and DMCO achieve the two best performances: GaMiCO outperforms DMCO at high arrival rates, while DMCO is better at low rates, as shown in Fig. 9. In terms of the game convergence time, however, GaMiCO performs the best among all the algorithms, as shown in Fig. 10. More importantly, the game convergence time remains very small and steady in GaMiCO, whereas it increases quickly with the arrival rate in the other algorithms. In conclusion, the proposed GaMiCO algorithm not only shows the best performance in minimizing the system-wide cost but also successfully suppresses the game convergence time, thus ensuring the quasi-static assumption.
The evolution of system-wide cost with varying arrival rate compared to other game-theoretic approaches.
The evolution of game convergence time with varying arrival rate compared to other game-theoretic approaches.
E. Per-vehicle Decision with Varying Weighting Factor
In this test, we present the impact of the weighting factor on the achieved NE. Fig. 11 shows that as the weighting factor grows, more vehicles try to offload to the mBS, regardless of the arrival rate. This is because the considered task requires heavy computation, and hence the computing latency at a MEC server significantly affects vehicles’ offloading decisions. That is, the larger the weighting factor gets, the more crucial time cost reduction becomes, making the vehicles prefer the mBS’s superior MEC server over the sBS’s MEC server.
The percentage of vehicles offloading to the mBS with varying weighting factor.
F. Impact of Mobility During a Game Period
In this simulation, we show the impact of vehicle mobility on the quasi-static assumption made in Section III. It is possible that a vehicle located at a cell edge leaves its game due to a change of its associated sBS or mBS. We measured the number of such vehicles, with the vehicle arrival rate varying as {0.2, 0.6, 1.0, 1.4, 1.8}. On average, only {0, 1.82, 1.13, 1.63, 1.25}% of the vehicles (respectively) change their cell association during a game period, thus having little impact on the proposed game framework.
G. Goodness of NE Compared to the Optimum
In this simulation, we evaluate the goodness of the achieved NE by comparing it to the optimal solution, in terms of the average system-wide cost (over one thousand snapshots), and also analyze the impact of game slicing on minimizing the system-wide cost. The optimal profile is obtained by a brute-force search method that enumerates all possible decision profiles and measures the cost of each profile.15 Then, we measure the ratio of the average system-wide cost to the centralized optimal cost (based on the obtained optimal profile), denoted by ‘% against centralized optimal cost’, to show how much more cost our scheme incurs than the optimum.
15 With the brute-force search method, the time required to find the optimal profile increases exponentially with the number of vehicles per game. Hence, Fig. 12 omits some of the optimal cost values that incurred prohibitively large computation time.
Optimal vs. System-wide cost.
Fig. 12 suggests that a proper number of game slices (i.e., [TeX:] $$N_{GS}$$) can effectively minimize the system-wide cost, where the best [TeX:] $$N_{GS}$$ varies with the arrival rate. It also shows that minimizing the system-wide cost does not necessarily yield the best performance against the centralized optimal cost, i.e., ‘% against centralized optimal cost’. Nevertheless, the performance loss (i.e., extra cost incurred) at the NE is measured to be at most 3.59%, showing that our algorithm achieves near-optimal performance. Considering that finding the optimal profile requires perfect information on all the vehicles and must deal with an enormous search space, the observed performance loss (incurred by applying game theory) seems quite reasonable.
H. Optimal Number of Game Slices for the Minimal Cost
In this simulation, we observe the tendency of the optimal number of game slices minimizing the system-wide cost when various mBS bandwidths and arrival rates are considered (over one thousand snapshots). Fig. 13 shows that the optimal number of game slices varies according to a given pair of mBS bandwidth and arrival rate, presenting two different trends. First, when the mBS bandwidth is fixed, the optimal number of game slices tends to increase as the arrival rate grows. Similarly, when the arrival rate is fixed, the optimal number of game slices increases as more mBS bandwidth is given, while experiencing sudden drops at 40 MHz (for the arrival rates from 0.4 to 2.0) and around 80–90 MHz (for the arrival rates from 1.0 to 2.0). In fact, such drops are rooted in the randomness of the system-wide cost due to the heterogeneous input data distributions. More specifically, the system-wide cost at 40 MHz turns out to be almost indistinguishable between [TeX:] $$N_{GS}=8$$ and [TeX:] $$N_{GS}=16$$ (only up to 0.83% difference), making the optimal [TeX:] $$N_{GS}$$ 8, while at 30 and 50 MHz it is 16. The sudden drop around 80–90 MHz occurs for the same reason. With a homogeneous distribution, however, no such drop happens, as shown in Fig. 14, since there is no randomness in the input data distribution.
Note that the results in Figs. 13 and 14 can be used to establish optimal offloading service provisioning with minimal system-wide cost.
Optimal [TeX:] $$N_{GS}$$ minimizing the system-wide cost under heterogeneous input data distributions.
Optimal [TeX:] $$N_{GS}$$ minimizing the system-wide cost under homogeneous input data distributions.
VIII. CONCLUSION
This paper considered multi-interface computation offloading in 5G vehicular networks, and proposed how to model it as an ordinal potential game. We further considered game slicing, derived the potential function to justify the potential game formulation, and proposed an offloading decision game algorithm with finite-time convergence to the NE. We analyzed the upper bound of the number of decision iterations towards convergence and the price of anarchy of the proposed game. Our extensive simulations have shown that our algorithm converges to an NE in finite time, achieving performance nearly as good as the centralized optimal solution. It has also been shown that the system-wide cost can be minimized by finding the optimal number of game slices according to various traffic conditions.