

Hoa Tran-Dang and Dong-Seong Kim

DISCO: Distributed Computation Offloading Framework for Fog Computing Networks

Abstract: Fog computing networks have been widely integrated into IoT-based systems to improve quality of service (QoS), such as low service response delay, through efficient offloading algorithms. However, designing an efficient offloading solution still faces many challenges, including the complicated heterogeneity of fog computing devices and the complex structure of computation tasks. In addition, the need for scalable and distributed algorithms with low computational complexity cannot be met by global optimization approaches with centralized information management in dense fog networks. In this regard, this paper proposes a distributed computation offloading framework (DISCO) for offloading splittable tasks using matching theory. Extensive simulation analysis shows that the proposed approaches significantly reduce the average delay in the systems compared to some related works.

Keywords: Distributed algorithm, fog computing, parallel computation, stable matching, task offloading, task partition

I. INTRODUCTION

PRACTICALLY, the Internet of things (IoT) has become an integral element for realizing smart practical systems such as smart cities [1], smart factories [2], [3], smart logistics, and supply chains [4]–[6]. The fundamental aspect of IoT-based systems is to connect all devices through the Internet protocol to exchange high-volume data and process them to create smart services and applications. Owing to their limited computation, network, storage, and energy resources, IoT devices are inadequate for executing all computational tasks, especially tasks with huge volumes and complex data structures. Cloud computing is an essential solution to this problem because it provides powerful resources to fulfill tasks efficiently [7], [8]. However, cloud computing-based solutions do not always meet the expected quality of service (QoS) and quality of experience (QoE) requirements of some classes of IoT applications, especially latency-sensitive ones, because of the long physical distance between the IoT devices and the remote cloud servers, scarce spectrum resources, and intermittent network connectivity.

This context has led to the integration of fog computing in the middle between the IoT and cloud layers to process and offload most tasks on behalf of the cloud servers, thereby allowing the QoS and QoE requirements to be met [9], [10]. Besides providing cloud-like services to end users (EUs), fog computing potentially improves the performance of fog-based systems, for example through service delay reduction [11] and energy saving [12]. Recently, the advance of networking generations (i.e., 5G and beyond) has led to an increasing demand for ubiquitous computing and pervasive service access by a huge number of Internet-connected mobile devices and EUs. Motivated by the necessity of network architecture enhancement, the paradigm of fog radio access networks (F-RANs) has emerged as a promising evolutionary architecture for 5G networks [13], [14], which, along with cloud radio access networks (C-RANs) [15], provides pervasive computing services. Therefore, F-RANs provide great flexibility to satisfy the QoS requirements of various 5G-based services and applications.

However, realizing these benefits of fog computing networks (FCNs) requires efficient resource allocation strategies to perform task offloading operations [16]. Indeed, many factors challenge the design and development of effective offloading strategies, such as the heterogeneity of computing devices and the various types of computational tasks with different requirements [17], including many applications related to artificial intelligence (AI), machine learning (ML), and augmented reality (AR). A large number of centralized optimization techniques and algorithms have been proposed in the literature to provide optimal solutions to the aforementioned resource allocation problems [18], [19]. Typically, offloading multiple tasks of fog nodes (FNs) to multiple neighbor FNs (i.e., helper nodes (HNs)) is modeled as a multi-task multi-helper (MTMH) problem, which deals with the allocation of fog computing resources for processing tasks to achieve single or multiple objectives [18], [20], [21]. However, these solutions require a centralized controller to gather the global system information, thus incurring a significant overhead and high computational complexity. This complexity is further amplified by the rapid increase of the density and heterogeneity of FCNs [22] when dealing with combinatorial integer programming problems [23].

The aforementioned limitations of optimization have led to a second class of game theory-based offloading solutions that can avoid cost-intensive centralized resource management as well as substantially reduce algorithmic complexity. For example, a paired offloading strategy of multiple tasks (POMT) in heterogeneous fog networks in [24] is modeled as a strategic game between the FNs and HNs for deciding which tasks are offloaded by which HNs to minimize the average task execution delay. Game theory is applied and developed to obtain the Nash equilibrium (NE) point for the POMT, at which the fog network can reach near-optimal performance in terms of average delay. A similar model is proposed in POST [25], which, however, considers the division of tasks into subtasks for parallel computation. Despite their potential, these approaches pose several limitations. First, classical game-theoretic algorithms such as best response require some information regarding the actions of other players [26]. Correspondingly, many assumptions are introduced in game theory-based algorithms to simplify the system models, which, in some cases, are impractical. Second, most game-theoretic solutions, for example, Nash equilibrium, investigate one-sided stability notions in which equilibrium deviations are evaluated unilaterally per player. However, in the context of fog computing environments, stability must concern both sides of players (i.e., resource providers and resource requesters).

Ultimately, managing resource allocation effectively in such a complex environment leads to a fundamental shift from traditional centralized mechanisms toward distributed approaches. Recently, matching theory has emerged as a promising technique for solving resource allocation problems in many applications such as wireless communication networks [27] because it can alleviate the shortcomings of game theory and optimization-based approaches. In other words, while the optimization and game theory-based solutions are efficient in some limited scenarios, the matching-based approaches have potential advantages owing to their distributed nature and low computational complexity. However, to reap the benefits of matching theory for task offloading and resource allocation in FCNs, advanced frameworks are needed to handle their intrinsic properties such as the heterogeneity of fog computing devices and the complex structure of computation tasks [28].

In this regard, this paper proposes a distributed computation offloading algorithm called DISCO to minimize the overall task execution delay in FCNs. In summary, this paper provides four key contributions as follows:

·We generalize the model of fog computing networks and model the task offloading problem in the fog network as a many-to-one matching game.

·Unlike the models of related works, we construct the preference profiles of the task set and the HN set according to groups.

·Based on the constructed preference lists (PLs), we design a distributed algorithm based on the deferred acceptance (DA) algorithm to produce a stable matching.

·Intensive simulation analysis is conducted to evaluate the performance of the proposed algorithms.

The remainder of this paper is organized as follows. Section II highlights the primary concepts of the many-to-one matching model and reviews the related works. Section III presents the system model relating to the architecture of heterogeneous fog networks and computational tasks. Section IV describes the design of the distributed computation offloading (DISCO) approach. Section V provides the simulation results and comparative analysis to highlight the performance of the proposed algorithm. Section VI concludes the paper and discusses potential directions for extending the proposed model.

Fig. 1.

An example of an M2O matching established between two sets $\mathcal{X}$ and $\mathcal{Y}$, in which $\mathcal{M}(y_1)=\{x_1, x_2\}$, $\mathcal{M}(y_2)=\{x_4\}$, $\mathcal{M}(y_3)=\{x_3\}$, $\mathcal{M}(y_4)=\{x_6\}$; $x_5$ and $y_5$ are unmatched agents (i.e., $\mathcal{M}(y_5)=\{y_5\}$, $\mathcal{M}(x_5)=\{x_5\}$).

II. PRELIMINARY AND RELATED WORKS

A. Preliminary of Many-to-One (M2O) Matching Model

Although most matching models in the literature deal with two sets of agents, there are other matching models involving only one set (i.e., the stable roommate problem [29]) or three sets of agents [30]. Within the scope of this paper, we consider an M2O matching model of two distinct sets of agents, in which each agent of one side can be matched with multiple agents of the other side but the reverse is not valid. We denote these two sets as $\mathcal{X}=\{x_1, x_2, \cdots, x_n\}$ and $\mathcal{Y}=\{y_1, y_2, \cdots, y_k\}$. Fig. 1 shows an example of an M2O matching.

Generally, an M2O matching can be defined as follows:

Definition 2.1: The outcome of the M2O matching model is a matching $\mathcal{M}: \mathcal{X} \cup \mathcal{Y} \mapsto \mathcal{X} \cup \mathcal{Y}$ such that the three following constraints are satisfied:

$|\mathcal{M}(x)| = 1$ for every agent $x \in \mathcal{X}$, and $\mathcal{M}(x) = \{x\}$ if $x$ is unmatched,

$|\mathcal{M}(y)| = q_y$ for every agent $y \in \mathcal{Y}$; if the number of agents in $\mathcal{M}(y)$ is $k$ and $k < q_y$, then $\mathcal{M}(y)$ will have $q_y - k$ copies of $y$,

$\mathcal{M}(x) = y$ if and only if $x$ is an element of $\mathcal{M}(y)$.

Recall that each agent $y$ has a positive quota $q_y$ to represent the maximum number of agents in the set $\mathcal{X}$ it can be matched with. With this definition, $\mathcal{M}(x) = y$ means that agent $x$ is matched with agent $y$, and, for example, $\mathcal{M}(y) = \{x_1, x_2, y, y\}$ indicates that the agent $y$ with the quota $q_y = 4$ has been matched with two agents $x_1$ and $x_2$ and has two unfilled matching positions.
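
As a concrete check of Definition 2.1, the sketch below (a hypothetical helper, not part of the paper; the quotas are assumed for illustration) encodes the matching of Fig. 1 and verifies the constraints, treating the unfilled quota positions of each $y$ as implicit copies of $y$:

```python
def is_valid_m2o(matching, xs, ys, quota):
    """matching maps each agent y to the set of agents x assigned to it."""
    matched_x = [x for y in ys for x in matching.get(y, set())]
    # Constraint 1: every x belongs to at most one M(y), i.e., |M(x)| = 1
    # (an unmatched x simply has M(x) = {x}).
    if len(matched_x) != len(set(matched_x)):
        return False
    if any(x not in xs for x in matched_x):
        return False
    # Constraint 2: |M(y)| = q_y with unfilled positions as copies of y,
    # so only the upper bound on real assignments needs checking.
    return all(len(matching.get(y, set())) <= quota[y] for y in ys)

xs = ["x1", "x2", "x3", "x4", "x5", "x6"]
ys = ["y1", "y2", "y3", "y4", "y5"]
quota = {"y1": 2, "y2": 1, "y3": 1, "y4": 1, "y5": 1}
fig1 = {"y1": {"x1", "x2"}, "y2": {"x4"}, "y3": {"x3"}, "y4": {"x6"}}
print(is_valid_m2o(fig1, xs, ys, quota))  # True: x5 and y5 stay unmatched
```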

The objective of matching games is to achieve stability instead of optimality, at which no agent has an incentive to deviate from the current matched pairs. A stable matching is defined in the following Definition 2.2.

Definition 2.2: A matching $\mathcal{M}$ is pairwise stable if there is no blocking pair $(x, y)$.

In other words, a matching stably matches the agents in the two sets based on their individual objectives and exchanged information. Each agent individually builds a ranking of the agents on the other side using a preference relation, and then creates its own preference list (PL). Basically, a preference can be calculated based on an objective utility function that quantifies the QoS achieved by a certain matching between two agents of the two sides. Define $\mathcal{P}(x)$ and $\mathcal{P}(y)$ as the preference lists of agents $x \in \mathcal{X}$ and $y \in \mathcal{Y}$, respectively. For instance, $\mathcal{P}(x)=\{y_1, y_2, x, y_4, y_5, \cdots\}$ illustrates that agent $x$ prefers $y_1$ to $y_2$ (denoted as $y_1 \succ_x y_2$), and prefers remaining unmatched over other agents like $y_4$ and $y_5$. A blocking pair is defined as follows:

Definition 2.3: $(x, y)$ is a blocking pair for a matching $\mathcal{M}$ if the three following conditions are satisfied:

·$\mathcal{M}(x) \neq y$,

·$y \succ_x \mathcal{M}(x)$,

·$x \succ_y \mathcal{M}(y)$.

Based on the created PLs, the agents can implement a distributed algorithm called the deferred acceptance (DA) algorithm to achieve the matching outcome. The DA algorithm, also known as the Gale-Shapley algorithm [31], is the classical algorithm to find a stable matching in the marriage problem, and it can also be used to find the matching in the M2O matching model. Basically, the DA algorithm is an iterative procedure, in which the players in one set make proposals to the other set, whose players, in turn, decide to accept or reject these proposals, respecting their quotas and PLs. Following the M2O matching model between the two sets $\mathcal{X}$ and $\mathcal{Y}$, the DA algorithm to find its stable matching includes five phases as follows:

1) All agents $x \in \mathcal{X}$ propose to match with their highest ranked agents $y \in \mathcal{Y}$.

2) Each agent $y$ with quota $q_y$ selects the $q_y$ highest ranked agents $x$ among the ones that proposed in Step 1, stores them in a waiting list, and rejects the rest.

3) The rejected agents $x$ continue proposing to match with the next highest ranked agents $y$ in their PLs.

4) The agents $y$ re-select their $q_y$ highest ranked agents $x$ among those who proposed in Step 3 and those on the waiting list, store them in the waiting list, and reject the rest.

5) The process repeats and terminates when all agents $x$ are either matched or have proposed to all the agents $y$ they would like to match with.

With this procedure, the algorithm admits many distributed implementations, which do not require the players to know each other's preferences. In addition, the DA algorithm is guaranteed to reach a stable matching outcome in polynomial time.
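
The five phases above admit a compact implementation. The following sketch is illustrative (the toy instance of tasks proposing to helpers and all names are assumptions, not the paper's implementation) and runs quota-constrained deferred acceptance:

```python
from collections import defaultdict

def deferred_acceptance(proposer_prefs, acceptor_prefs, quota):
    """M2O deferred acceptance: agents x propose down their PLs; each agent y
    holds up to q_y proposers in a waiting list and rejects the rest."""
    # rank[y][x]: position of x in y's PL (lower index = more preferred)
    rank = {y: {x: i for i, x in enumerate(pl)} for y, pl in acceptor_prefs.items()}
    next_choice = {x: 0 for x in proposer_prefs}   # pointer into each x's PL
    waiting = defaultdict(list)                    # y -> currently held proposers
    free = list(proposer_prefs)                    # proposers still seeking a match
    while free:
        x = free.pop()
        pl = proposer_prefs[x]
        if next_choice[x] >= len(pl):              # PL exhausted: x stays unmatched
            continue
        y = pl[next_choice[x]]
        next_choice[x] += 1
        waiting[y].append(x)                       # y tentatively holds x ...
        waiting[y].sort(key=lambda a: rank[y][a])  # ... ordered by y's PL
        if len(waiting[y]) > quota[y]:
            free.append(waiting[y].pop())          # reject the worst over quota
    return dict(waiting)

# Toy instance: three tasks propose to two helpers with quotas 2 and 1.
tasks = {"t1": ["h1", "h2"], "t2": ["h1", "h2"], "t3": ["h1", "h2"]}
helpers = {"h1": ["t3", "t1", "t2"], "h2": ["t2", "t1", "t3"]}
print(deferred_acceptance(tasks, helpers, {"h1": 2, "h2": 1}))
# → {'h1': ['t3', 't1'], 'h2': ['t2']}
```

The outcome is stable in the sense of Definition 2.2: no task-helper pair would jointly prefer each other over their current assignment.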

B. Related Works

This section highlights the key related works that propose many-to-one matching based offloading solutions to minimize the task execution delay in fog networks. The key concerns focus on modeling the two sets of agents, the formulations to construct the preference profiles, and the stability analysis of distributed matching algorithms. We also examine the models that consider the offloading of splittable tasks.

1) Scheduling for parallel offloading and communication: When the computation tasks are splittable, the parallel processing of multiple subtasks is exploited to reduce the overall task execution delay. In addition, scheduling of subtask offloading and transmission is taken into account at each individual node to reap the benefits of parallel computation. However, these aspects have not been deeply investigated in the literature.

Regarding task division, most existing works only consider dividing each task into two subtasks to reduce the delay and balance the workload among FNs through parallel computation of subtasks. For example, according to the FEMTO model in [32], each task is divided into two subtasks with different data sizes, which are then processed by the IoT node and offloaded to the fog entity. Similarly, in the task offloading methods introduced in [33], [34], each FN divides the tasks into only two parts, which are processed by itself locally and by one HN in parallel. Dividing a task into multiple subtasks is first applied in the POST algorithm presented in [25]. POST is based on game theory concepts to find the optimal matching pairs of subtasks and FNs to achieve the delay minimization objective. Accordingly, the number of subtasks for each task is dynamically optimized depending on the number of idle HNs and tasks in the queues in each time slot. The evaluation analysis shows that POST is able to reach the generalized NE (GNE) fixed point, thus achieving near-optimal performance in terms of average task execution delay. However, it lacks an investigation regarding the scheduling of subtask offloading and transmission at individual FNs.

The work [22] introduces an online algorithm for task offloading in IoT-fog-cloud systems, which applies parallel communication and computation at multiple computing nodes (i.e., fog and cloud servers) to minimize the latency. Considering scenarios with varying task arrival rates, the queuing delays of fog nodes are analyzed to optimize the number of tasks offloaded to other computing nodes to achieve the objective. However, the task transmission and computation at the computing nodes are performed according to a first-come-first-serve (FCFS) policy rather than an optimal task scheduling policy. In addition, task division is not considered to balance the workload of heterogeneous fog computing nodes. Recently, a recursive computation offloading algorithm (RCO) was developed in [23] to jointly perform task offloading and task scheduling for F-RANs. Considering the execution of tasks residing in the mobile device (MD), the tasks can be offloaded to the edge or cloud tiers. The optimization of task scheduling is incorporated in RCO to contribute to the delay reduction objective. However, task division and subtask offloading are not investigated.

2) M2O models for computation offloading: Generally, M2O models are established differently for different application scenarios depending on the sets of agents, the ways to construct the preference lists, and the offloading modes used (i.e., partial offloading or full offloading). This section reviews these proposed M2O matching models to examine the differences.

The work [35] models the full task offloading problem as an M2O matching game between a set $\mathcal{F}$ of FNs and a set of IoT nodes generating a corresponding set $\mathcal{T}$ of computation tasks. Recall that there is a limited number $q$ of tasks offloaded by a certain FN due to its limited resources (i.e., computing, buffer storage). The agents of the sets construct their PLs based on their own utility functions, which account for the communication cost, the waiting time in queues, and the execution delay of tasks. The DA algorithm is applied to achieve a two-sided exchange-stable (2ES) matching between the two sets. The simulation results show that the proposed algorithm outperforms greedy and random offloading solutions in terms of worst total completion time, mean waiting time per task, mean total time per task, and fairness.

A task offloading framework known as METO is presented in [36], aiming to reduce the total energy consumption and the overall full offloading delay in the IoT-fog network. In this network model, each IoT device generates a single task, and the resource of each FN is represented by a number $q$ of virtual resource units (VRUs), each of which is able to process only a single task. This offloading model is equivalent to a form of M2O matching problem between the IoT device set $\mathcal{I}$ and the FN set $\mathcal{F}$, in which $q_i$ is the quota of agent $F_i \in \mathcal{F}$. As multiple criteria (i.e., energy consumption minimization and delay minimization) are jointly considered for the offloading decision-making, METO employs a hybrid CRITIC- and TOPSIS-based technique to produce the preferences of both sets. With this approach, the produced PLs are strict, complete, and transitive, therefore ensuring that a stable matching is obtained. Using a simulation-based comparative analysis, METO shows its advantage in reducing the total energy consumption as well as the overall delay compared to the baseline algorithms, including ME [35], SMETO [37], and a random resource allocation policy.

Similar to METO, a one-to-many matching model between the task set $\mathcal{T}$ and the FN set $\mathcal{F}$ is used in [38] to seek an efficient task offloading algorithm in IoT-fog systems. However, the system considers the presence of fog service providers (SPs), each of which manages the resources of the fog nodes in its domain. Consequently, the task offloading problem is transformed into a student project allocation (SPA) game [39], in which IoT devices (or tasks), FNs, and SPs correspond to students, projects, and lecturers, respectively. The work further presents a DA-based distributed task offloading algorithm called SPATO to tackle the selfishness and rationality of individual agents. In particular, the PLs of agents are constructed using the analytical hierarchy process (AHP) [40], which accounts for multiple criteria of system-wise objectives to obtain the rankings. The simulation results indicate that the proposed algorithms enable the network to achieve a reduced offloading delay and energy consumption as well as minimum outage compared to the random offloading plan and SMETO [37].

The work [33] introduces a DATS algorithm for offloading dispersive tasks in fog and cloud networks. In these networks, the tasks can be processed by either partial or full offloading mode dynamically. DATS incorporates two algorithms to minimize the task offloading delay, namely progressive computing resource competition (PCRM) and synchronized task scheduling (STS). PCRM is an M2O matching-based algorithm to yield an efficient resource allocation strategy between the task set $\mathcal{T}$ and the resource set $\mathcal{H}$ of helpers. A new index called processing efficiency (PE) is defined to support the production of PLs for the helpers. PE encapsulates communication and computation delay to examine the delay-oriented performance of a resource allocation strategy. Meanwhile, the TNs rank the helpers based on the QoS that the helpers can provide. In other words, a TN prefers to match with a helper that minimizes the offloading delay. Second, the STS algorithm is proposed to optimize the subtask assignment and scheduling for each task given the matching obtained by PCRM. An extensive simulation analysis is conducted to evaluate the performance of DATS under the impact of many factors including the task size, the quota of helpers, and the network bandwidth. In summary, DATS can significantly reduce the task offloading delay compared to random and greedy offloading policies.

An M2O game is established in [41] to model the association problem of a fog network, in which each IoT node (user) can be associated with only one cloudlet while a cloudlet can have multiple IoT nodes matched with it. However, there is a limited number of IoT nodes connecting to a cloudlet to respect its maximal workload. In addition, the presence of wireless interference between IoT nodes located in proximate regions when connecting wirelessly to the cloudlets indicates an external effect in the matching game. This externality makes the PLs of agents change whenever a pair of IoT node and cloudlet is matched. The work introduces a concept called swap matching to handle the externalities and then achieve a stable matching outcome. Extensive simulations are provided to show the benefits of the proposed algorithms, which include latency minimization and throughput enhancement compared to the random association approach.

The work [42] concerns the joint optimization problem of radio and computational resource allocation in IoT-fog computing systems to optimize the system performance in terms of service delay, user satisfaction, and system cost. Such a problem involves three entities: the set of IoT EUs, the FNs, and the CSPs (which manage the resources of FNs). From the matching perspective, the mutual interaction of these sets can be modeled as an SPA problem since they correspond to students, projects, and lecturers, respectively. To handle the external effect, a procedure called user-oriented cooperation (UOC) is developed. Fundamentally, UOC is a strategy to remove possible blocking pairs in the matching given the presence of externality by swap operations, which evaluate the change of the utility values of agents. If a swap applied to any two current pairs improves the corresponding utility values, the two pairs are considered blocking ones. In this way, the proposed algorithm can achieve a stable matching with an additional computational cost resulting from the swap operations. Regarding the performance, the simulation results show that the proposed framework enables the system to achieve low service latency as well as minimized outages.

III. SYSTEM MODEL

A. Fog Computing Network

This paper considers an FCN with a flat architecture (i.e., no centralized fog controller) as illustrated in Fig. 2, which includes a set $\mathcal{F}$ of $N$ FNs ($\mathcal{F}=\{F_1, \cdots, F_N\}$) and a cloud data center $C$ providing $M$ virtual machines (VMs). FNs can connect together via either device-to-device (D2D) communication mode or access points (APs) using wired and wireless links for sharing resources. A cloud server can be reached by any FN through APs and gateways (GWs) for providing cloud services such as computation, storage, and caching. For the sake of simplified representation, the presence of APs and GWs is omitted from Fig. 2. The FNs operate in a distributed manner to process and offload tasks. At an arbitrary scheduling time slot, an FN can be in one of three modes: (i) a task node (TN) that has tasks in its queue, (ii) a busy node (BN) that is busy processing tasks, and (iii) a helper node (HN) that has available resources to offload tasks. The state of nodes can change over scheduling times. For example, a BN at a time slot τ will become an HN in the next time slot (τ + 1) if there is no task residing in its queue. Notably, the cloud always serves as an HN because it has a set of virtual machines.

Fig. 2.

An illustrative model of an FCN including three TNs ($F_1, F_2, F_7$), two BNs ($F_6, F_8$), and four HNs ($F_3, F_4, F_5, F_9$), which can connect to the cloud to request cloud services.

To design an efficient offloading policy, the TNs rely on a table of neighbor resources that contains updated information about the available resources of the neighbor HNs. We define $\mathcal{H}_i$ as the set of HNs neighboring a TN $F_i$; thus $\mathcal{H}_i=\{F_p, F_k, \cdots, C\}$. We assume that the fog devices use a common protocol to communicate and update their resource status periodically [43]. The computing capability of an FN is represented through its CPU frequency ($f$ (cycles/s)) and CPU processing density ($\gamma$ (cycles/bit)).

B. Computation Offloading Model

Computation tasks from end-user devices such as IoT sensors and smartphones arrive at the queues of FNs following an exponential distribution. Fig. 3 shows an overview of the task offloading model in the FCN.

At a certain time slot τ, the FCN has a set $\mathcal{T}=\{t_1, t_2, \cdots, t_K\}$ including $K$ computational tasks required to be processed. A single task $t_i$ currently residing in the queue of TN $F_i$ can be processed locally by $F_i$ or fully offloaded to an HN $F_k$ ($F_k \in \mathcal{H}_i$). In addition, a task $t_i$ with a size $a_i$ can be divided into $m=|\mathcal{H}_i|+1$ subtasks, which are denoted as $t_{ik}$ $(k=0, \cdots, N)$. These subtasks then can be processed in parallel by multiple HNs. As illustrated in Fig. 3, two subtasks $t_{14}$ and $t_{13}$ divided from $t_1$ can be computed simultaneously

Fig. 3.

The offloading model includes full and partial offloading.

Fig. 4.

Timeline to calculate the delay to execute an arbitrary task $t_i$.

by $F_4$ and $F_3$, respectively, while $t_{10}$ is computed locally. We define $a_{ik}$ as the data size of subtask $t_{ik}$; thus we have:

(1)
$$\sum_{k=0}^N a_{i k}=a_i, \quad \forall t_i \in \mathcal{T}.$$

In addition, when $a_{ik} = 0$, the HN $F_k$ is not assigned to offload $t_i$.

IV. PROBLEM FORMULATION

The overall delay for executing a task $t_i$ ($D_i$) is the interval from the arrival time of the task ($T_i^a$) at the queue of $F_i$ until the finish processing time ($T_i^f$) at a fog node ($F_i$ or some HNs). As illustrated in Fig. 4, $D_i=T_i^f-T_i^a$.

When partial offloading policies are applied for executing $t_i$, we define $D_{ik}$ as the delay to process the subtask $t_{ik}$. Suppose that there is no data dependency between the subtasks of $t_i$; therefore, the subtasks can be computed in parallel by different HNs. In addition, no aggregation of subtask processing results is required; thus $D_i = \max\{D_{ik}\}$, $k = 0, \cdots, N$. Notably, $D_{ik}=T_{ik}^f-T_i^a$. In this sense, besides the subtask offloading policies (i.e., which FN processes which subtask), the subtask scheduling (i.e., which subtask is transmitted or processed first) has a significant impact on the total task execution delay. Because the data transmission and data processing are not performed simultaneously by FNs, offloading and transmission scheduling must be taken into account to minimize the delay.

Fig. 5.

Two scheduling cases of data communication and computation at a TN.

The following Proposition 4.1 supports deriving an efficient data computation and communication schedule at a TN to reduce the task execution delay.

Proposition 4.1: When the subtasks of a task $t_i$ are processed both by $F_i$ locally and offloaded to HNs, the subtask transmissions to the HNs should be performed before the local subtask computation at $F_i$ starts, to get the benefit of parallel computation.

Proof: Suppose that a task $t_i$ is divided into two subtasks $t_{ii}$ and $t_{ik}$, which are processed locally by $F_i$ and offloaded to $F_k$, respectively.

We consider the first case as illustrated in Fig. 5(a), in which $t_{ii}$ is processed first by $F_i$. The total delay to execute $t_{ii}$ is:

(2)
$$D_{i i}=\frac{a_{i i} \gamma_i}{f_i}.$$

Because data processing and data transmission are not performed simultaneously by $F_i$, the total delay to execute $t_{ik}$, including $D_{ii}$ as the waiting time, is calculated by

(3)
$$D_{i k}=D_{i i}+\frac{a_{i k}}{r_{i k}}+\frac{a_{i k} \gamma_k}{f_k},$$

where $r_{ik}$ is the data rate on the channel between $F_i$ and $F_k$. It can be derived from:

(4)
$$r_{i k}=B \log _2\left(1+\frac{G P_i^t}{B N_0}\right),$$

where $B$ is the bandwidth of the channel, $G$ is the channel gain, $P_i^t$ is the transmission power of $F_i$, and $N_0$ is the noise power spectral density. The channel gain can be obtained given the path loss $PL$ of the channel, which can be adopted from [44]:

(5)
$$P L=20 \log \left(d^{\mathrm{km}}\right)+20 \log \left(B^{\mathrm{kHz}}\right)+32.45 (\mathrm{~dB}).$$

Notably, the propagation delay is neglected since it is much smaller than the transmission delay ($a_{ik}/r_{ik}$).
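
As a numeric sketch of eqs. (4) and (5), the snippet below converts the path loss to a linear channel gain and evaluates the achievable rate; all parameter values (distance, bandwidth, transmit power, noise density) are illustrative assumptions, not taken from the paper:

```python
import math

def channel_rate(d_km, B_hz, P_t, N0):
    # Eq. (5): path loss in dB for a distance d (km) and bandwidth B (kHz)
    pl_db = 20 * math.log10(d_km) + 20 * math.log10(B_hz / 1e3) + 32.45
    G = 10 ** (-pl_db / 10)                       # linear channel gain
    # Eq. (4): Shannon rate in bits/s
    return B_hz * math.log2(1 + G * P_t / (B_hz * N0))

# Assumed: 100 m link, 1 MHz channel, 100 mW transmit power
r_ik = channel_rate(d_km=0.1, B_hz=1e6, P_t=0.1, N0=1e-17)
print(f"r_ik = {r_ik / 1e6:.2f} Mb/s")
```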

Because $D_i=\max \left\{D_{i i}, D_{i k}\right\}$ and $D_{ik} > D_{ii}$, we have $D_i = a_{i i} \gamma_i / f_i + a_{i k} / r_{i k} + a_{i k} \gamma_k / f_k$.

When $t_{ik}$ is transmitted to $F_k$ first in case 2 (Fig. 5(b)), the total delays to execute the subtasks are calculated as follows:

(6)
$$D_{i k}^*=\frac{a_{i k}}{r_{i k}}+\frac{a_{i k} \gamma_k}{f_k},$$

(7)
$$D_{i i}^*=\frac{a_{i k}}{r_{i k}}+\frac{a_{i i} \gamma_i}{f_i}.$$

The total delay to finish [TeX:] $$t_i$$ in case 2 is [TeX:] $$D_i^*$$ = max{[TeX:] $$D_{i i}^*, D_{i k}^*$$}. Obviously, [TeX:] $$D_i^*<D_i$$; hence, the proposition is proved. In other words, the delay is potentially reduced if all subtasks offloaded to HNs are scheduled to be transmitted from the TNs first.
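The two orderings in the proof can be compared numerically. The sketch below encodes case 1 (local computation first, then transmission) and case 2 (transmission first, local computation in parallel with the HN); subtask sizes and node parameters are assumed values within the ranges of Table I.

```python
def case1_delay(a_ii, a_ik, gamma_i, f_i, gamma_k, f_k, r_ik):
    """Case 1: F_i computes t_ii first, then transmits t_ik to F_k."""
    d_ii = a_ii * gamma_i / f_i                       # local computation
    d_ik = d_ii + a_ik / r_ik + a_ik * gamma_k / f_k  # waits, then transmit + remote compute
    return max(d_ii, d_ik)

def case2_delay(a_ii, a_ik, gamma_i, f_i, gamma_k, f_k, r_ik):
    """Case 2: t_ik is transmitted first; F_i computes t_ii afterwards,
    in parallel with F_k's processing."""
    d_ii = a_ik / r_ik + a_ii * gamma_i / f_i   # F_i cannot transmit and compute at once
    d_ik = a_ik / r_ik + a_ik * gamma_k / f_k
    return max(d_ii, d_ik)

# Assumed numbers: 4 Mb per subtask, 500 cycles/bit, 2 GHz CPUs, 1 Mb/s link.
params = dict(a_ii=4e6, a_ik=4e6, gamma_i=500, f_i=2e9,
              gamma_k=500, f_k=2e9, r_ik=1e6)
d1 = case1_delay(**params)
d2 = case2_delay(**params)
print(f"case 1: {d1:.1f} s, case 2: {d2:.1f} s")
```

With these numbers case 1 takes 6.0 s while case 2 takes 5.0 s, illustrating why transmitting offloaded subtasks first always wins: the remote processing overlaps the local work instead of queuing behind it.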

Based on the proved proposition, we define [TeX:] $$o_{j k}^i$$ to represent the transmission scheduling order between subtasks [TeX:] $$t_{i j}$$ and [TeX:] $$t_{i k}$$. Particularly, if [TeX:] $$t_{i j}$$ is scheduled on the wireless link before [TeX:] $$t_{i k}$$, then [TeX:] $$o_{j k}^i$$ = 1; otherwise, [TeX:] $$o_{j k}^i$$ = 0. Since [TeX:] $$t_{i j}$$ is transmitted either before or after [TeX:] $$t_{i k}$$, we have

(8)
[TeX:] $$o_{j k}^i+o_{k j}^i=1, \forall j, k \in \mathcal{H}_i \text { and } j \neq k$$

We define [TeX:] $$T_{i j}^t$$ as the start time for transmitting [TeX:] $$t_{i j}$$. Since the wireless transmission times of two subtasks [TeX:] $$t_{i j}$$ and [TeX:] $$t_{i k}$$ must not overlap, their start transmission times must satisfy:

(9)
[TeX:] $$T_{i j}^t-\left(T_{i k}^t+\frac{a_{i k}}{r_{i k}}\right) \geq-L o_{j k}^i, \forall j, k \in \mathcal{H}_i$$

where L is a sufficiently large positive constant.

Additionally, the start processing time [TeX:] $$T_{i j}^p$$ of [TeX:] $$t_{i j}$$ must not be smaller than its arrival time at the HN [TeX:] $$F_j$$; therefore:

(10)
[TeX:] $$T_{i j}^p \geq T_{i j}^t+\frac{a_{i j}}{r_{i j}}, \forall j \in \mathcal{H}_i$$

Finally, the finish time [TeX:] $$T_{i j}^f$$ of [TeX:] $$t_{i j}$$ is calculated as follows:

(11)
[TeX:] $$T_{i j}^f=T_{i j}^p+\frac{a_{i j} \gamma_j}{f_j}, \forall j \in \mathcal{H}_i$$

Recall that [TeX:] $$D_{i j}=T_{i j}^f-T_i^a$$; therefore [TeX:] $$D_i=\max _j\left\{D_{i j} \mid j \in\right.\left.\mathcal{H}_i \cup\{i\}\right\}$$ = [TeX:] $$\max _j\left\{T_{i j}^f\right\}-T_i^a$$. We denote by [TeX:] $$\mathbf{P}\left(t_i \mid \mathcal{H}_i\right)$$ the problem of minimizing the execution delay of [TeX:] $$t_i$$ given the available computation resource set [TeX:] $$\mathcal{H}_i$$. The optimization problem P is modeled as follows to minimize [TeX:] $$D_i$$:

(12)
[TeX:] $$\begin{array}{rll} \mathbf{P}\left(t_i \mid \mathcal{H}_i\right): & \min _{\boldsymbol{\alpha}_i, \boldsymbol{T}_i^t, \boldsymbol{T}_i^p, \boldsymbol{o}_i} & D_i \\ & \text { s. t. } & (1),(8),(9),(10), \end{array}$$

where [TeX:] $$\boldsymbol{\alpha}_i=\left[a_{i i}, a_{i j}, a_{i k}, \cdots\right]$$, [TeX:] $$\boldsymbol{T}_i^t=\left[T_{i i}^t, T_{i j}^t, T_{i k}^t, \cdots\right]$$, [TeX:] $$\boldsymbol{T}_i^p=\left[T_{i i}^p, T_{i j}^p, T_{i k}^p, \cdots\right]$$, and [TeX:] $$\boldsymbol{O}_i=\left[o_{j k}^i, \cdots\right], \forall j, k \in \mathcal{H}_i$$.
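Problem P couples the split vector with the transmission order. As an illustrative sketch only (not the paper's solver), a brute-force grid search for one TN (index 0) and two HNs shows how the decision variables interact: by Proposition 4.1, all offloaded subtasks are transmitted sequentially before the local part is processed, each HN starts processing on arrival, and the objective is the makespan. All names and numbers are assumptions.

```python
import itertools

def makespan(alpha, order, gammas, fs, rates):
    """Task delay for a split alpha={node: bits} when the offloaded subtasks
    are transmitted sequentially in `order` and the local part (node 0)
    runs only after all transmissions finish."""
    t = 0.0
    finish = []
    for j in order:                                   # sequential transmissions
        t += alpha[j] / rates[j]
        finish.append(t + alpha[j] * gammas[j] / fs[j])   # HN j computes on arrival
    finish.append(t + alpha[0] * gammas[0] / fs[0])       # local computation last
    return max(finish)

def solve_P(a, gammas, fs, rates, steps=50):
    """Grid-search sketch of P(t_i | H_i) over the split and the order."""
    best_d, best_alpha, best_order = float("inf"), None, None
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            k = steps - i - j
            alpha = {0: a * i / steps, 1: a * j / steps, 2: a * k / steps}
            for order in itertools.permutations([1, 2]):
                d = makespan(alpha, order, gammas, fs, rates)
                if d < best_d:
                    best_d, best_alpha, best_order = d, alpha, order
    return best_d, best_alpha, best_order

# Assumed parameters: 8 Mb task; node 0 is the TN, nodes 1-2 are HNs.
d, alpha, order = solve_P(a=8e6,
                          gammas={0: 800, 1: 500, 2: 600},
                          fs={0: 1.0e9, 1: 2.5e9, 2: 2.0e9},
                          rates={1: 2e6, 2: 1.5e6})
print(f"D_i = {d:.2f} s with split {alpha} and order {order}")
```

The grid includes the pure-local split, so the returned delay is never worse than local computation (6.4 s here); the point of the sketch is the structure of the search space, not its efficiency.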

As each fog node selfishly aims to minimize its own task execution delay, minimizing the overall delay of all tasks is infeasible because of the coupling resource problem. In addition, achieving the global optimum (i.e., minimizing the sum of the total delays of all tasks) faces a critical challenge regarding computational complexity, especially in dense fog networks. In the next section, we propose a distributed algorithm based on matching theory that allows the fog nodes to perform task offloading in a distributed manner with low computational complexity.

Fig. 6.

An example of optimal resource allocation solutions for offloading three tasks [TeX:] $$T_1, T_3$$, and [TeX:] $$T_4$$: [TeX:] $$\overline{\mathcal{H}}_1^*=\left\{F_1, F_2, F_5\right\}, \overline{\mathcal{H}}_3^*=\left\{F_2, F_3, F_5, F_9\right\}, \overline{\mathcal{H}}_4^*=\left\{F_4, F_5, F_7, F_9\right\}$$ obtained from their own optimization problems. Because a HN can process only one task/subtask, resource contention occurs at [TeX:] $$F_2, F_5$$, and [TeX:] $$F_9$$.

V. DESCRIPTION OF DISCO FRAMEWORK

A. Overview

Based on the system model, we model the computation offloading problem in the FCNs as an M2O matching game between the set of tasks [TeX:] $$\mathcal{T}$$ and the set of HNs [TeX:] $$\mathcal{H}$$. Unlike the M2O matching models studied in the related work, this paper considers an M2O matching-with-groups model, in which a task [TeX:] $$t_i$$ prefers to match with a group of HNs that can collaboratively offload [TeX:] $$t_i$$ with minimized execution delay. Define [TeX:] $$\overline{\mathcal{H}}_i$$ as a subset of [TeX:] $$\mathcal{H}_i$$, and let [TeX:] $$\overline{\mathcal{H}}_i^*$$ be the most preferred subset of HNs, which yields the minimal execution delay achieved by the solution of problem [TeX:] $$\mathbf{P}\left(t_i \mid \mathcal{H}_i\right)$$. Notably, if [TeX:] $$\overline{\mathcal{H}}_i^*=\left\{F_i\right\}$$, [TeX:] $$t_i$$ is processed locally by [TeX:] $$F_i$$. Basically, for each task [TeX:] $$t_i$$, there is an optimal subset of HNs offering the task offloading solution with minimal delay, which inherently incurs the coupling resource problem illustrated in Fig. 6.

The DISCO framework includes three algorithms, developed to construct the PLs of the agents (i.e., tasks and HNs), produce a stable matching based on the DA algorithm, and derive an optimal task offloading and communication scheduling (OTOS) mechanism.

B. PL Construction

Because not all FNs in the network are connected together, the PLs of agents are incomplete. In addition, HNs have preferences over individual tasks, whereas tasks have preferences over subsets of HNs.

1) PLs of tasks: Faced with a set [TeX:] $$\mathcal{H}_i$$ of HNs, each task [TeX:] $$t_i$$ can rank all subsets of [TeX:] $$\mathcal{H}_i$$ based on the minimal delay they can offer to offload [TeX:] $$t_i$$. However, when [TeX:] $$\left|\mathcal{H}_i\right|$$ is large in dense fog networks, this method is inefficient because it incurs high computational overhead. To overcome this issue, we first determine quotas for the task agents as follows.

Definition 5.1: The quota [TeX:] $$q_i$$ of a task [TeX:] $$t_i$$ is the number of HNs in the optimal subset obtained from problem [TeX:] $$\mathbf{P}\left(t_i \mid \mathcal{H}_i\right)$$. Alternatively, [TeX:] $$q_i=\left|\overline{\mathcal{H}}_i^*\right|$$, and [TeX:] $$q_i \leq\left|\mathcal{H}_i\right|$$. Consequently, the tasks may have different quotas for matching with HN agents, contrary to the model in [33], which considers the same quota for all tasks.

Definition 5.2: When [TeX:] $$q_i$$ > 1, the least preferred agent in the PL of [TeX:] $$t_i$$ is [TeX:] $$F_i$$. This definition specifies that [TeX:] $$t_i$$ is eventually computed locally by [TeX:] $$F_i$$ if there is no optimal solution for offloading [TeX:] $$t_i$$ to HNs. In addition, [TeX:] $$t_i$$ can be executed in the full offloading mode by a single HN; therefore, [TeX:] $$t_i$$ also ranks individual HNs in its PL.

Definition 5.3: A HN [TeX:] $$F_k$$ is acceptable to a task [TeX:] $$t_i$$ if and only if [TeX:] $$D_i^*\left(t_i \mid F_k\right)<D_i^*\left(t_i \mid F_i\right)$$. This means that the delay achieved by full offloading, [TeX:] $$D_i^*\left(t_i \mid F_k\right)$$, is lower than the delay achieved by local computing, [TeX:] $$D_i^*\left(t_i \mid F_i\right)$$.

Finally, we have an acceptable list of HN agents, denoted [TeX:] $$\mathcal{L}_i$$, that [TeX:] $$t_i$$ can match with: [TeX:] $$\mathcal{L}_i=\overline{\mathcal{H}}_i^* \cup\left\{F_k, F_j, \cdots\right\}$$, where [TeX:] $$F_k, F_j \in \mathcal{H}_i$$ are HNs selected according to Definition 5.3. Each [TeX:] $$t_i$$ constructs its own PL from the subsets of [TeX:] $$\mathcal{L}_i$$. Basically, [TeX:] $$\mathcal{P}\left(t_i\right)=\left\{\overline{\mathcal{H}}_i^*, \overline{\mathcal{H}}_i^x, \overline{\mathcal{H}}_i^y, \cdots, F_i\right\}$$, where [TeX:] $$\overline{\mathcal{H}}_i^x, \overline{\mathcal{H}}_i^y$$ are subsets of [TeX:] $$\mathcal{L}_i$$ and [TeX:] $$D_i^*\left(t_i \mid \overline{\mathcal{H}}_i^*\right)<D_i^*\left(t_i \mid \overline{\mathcal{H}}_i^x\right)< D_i^*\left(t_i \mid \overline{\mathcal{H}}_i^y\right)<\cdots<D_i^*\left(t_i \mid F_i\right)$$, with [TeX:] $$D_i^*\left(t_i \mid \overline{\mathcal{H}}_i^x\right)$$ obtained from the solution of [TeX:] $$\mathbf{P}\left(t_i \mid \overline{\mathcal{H}}_i^x\right)$$.
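The PL construction above can be sketched schematically. In the sketch below, the hypothetical callback `delay_of(subset)` stands in for solving [TeX:] $$\mathbf{P}\left(t_i \mid \overline{\mathcal{H}}\right)$$, and the acceptability test of Definition 5.3 is applied to whole subsets rather than single HNs; names are illustrative.

```python
from itertools import combinations

def build_task_pl(candidates, delay_of, local_delay):
    """Rank subsets of acceptable HNs by the delay they achieve for a task.

    candidates:  list of HN identifiers (the acceptable list L_i)
    delay_of:    subset -> achieved delay (stand-in for solving P(t_i | H))
    local_delay: delay of pure local computation, D_i*(t_i | F_i)

    Returns the PL: subsets sorted by increasing delay, with local
    execution appended as the least preferred entry (Definition 5.2).
    """
    entries = []
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            d = delay_of(subset)
            if d < local_delay:          # only subsets beating local computing
                entries.append((d, subset))
    entries.sort()
    return [s for _, s in entries] + [("local",)]

# Toy delay model: more HNs => lower delay; singletons do not beat local.
pl = build_task_pl(["F1", "F2", "F3"], lambda s: 10.0 / len(s), 6.0)
print(pl)
```

Enumerating all subsets is exactly the exponential cost the quota of Definition 5.1 is meant to avoid; the sketch is for clarity, not efficiency.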

2) PLs of HNs: We adopt the concept of processing efficiency (PE) presented in [33] to evaluate the offloading capability of HNs. Recall that the PE of a HN [TeX:] $$F_k$$ with respect to a TN [TeX:] $$F_i$$, denoted [TeX:] $$\rho$$(k, i), is calculated as follows:

(13)
[TeX:] $$\rho(k, i)=\frac{1}{r_{i k}}+\frac{\gamma_k}{f_k}$$

This parameter specifies the efficiency of HN [TeX:] $$F_k$$ in offloading one bit of data sent from a TN [TeX:] $$F_i$$. Based on PE, each HN constructs its PL over individual tasks. The PL of a HN [TeX:] $$F_k$$ has the form [TeX:] $$\mathcal{P}\left(F_k\right)=\left\{t_m, t_n, t_p\right\}$$. Basically, for a pair of tasks [TeX:] $$t_m$$ and [TeX:] $$t_n$$, [TeX:] $$t_m \succ_{F_k} t_n$$ if [TeX:] $$\rho(k, m)<\rho(k, n)$$.
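Equation (13) can be exercised with a few assumed numbers. Since [TeX:] $$\gamma_k / f_k$$ is the same for all tasks served by one HN, the ranking here is driven entirely by the link rates; the task names and rates are illustrative.

```python
def processing_efficiency(r_ik, gamma_k, f_k):
    """PE of HN F_k w.r.t. a TN F_i, eq. (13): seconds to ship and
    process one bit of offloaded data."""
    return 1.0 / r_ik + gamma_k / f_k

# Assumed values: three TNs reach F_k (gamma_k = 500 cycles/bit, f_k = 2 GHz)
# over links of different rates; lower PE means a more preferred task.
pe = {"t_m": processing_efficiency(2e6, 500, 2e9),
      "t_n": processing_efficiency(1e6, 500, 2e9),
      "t_p": processing_efficiency(3e6, 500, 2e9)}
pl = sorted(pe, key=pe.get)
print(pl)   # tasks whose TNs have faster links to F_k come first
```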

C. Matching Algorithm

Based on the PLs of tasks and HNs, DISCO employs the DA algorithm to find a stable matching, as shown in Algorithm 1. We implement the task-optimal stable matching algorithm, in which the tasks propose and the HNs accept or reject the proposals based on their PLs.
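Algorithm 1 appears only as pseudocode; the following is a minimal Python sketch of task-proposing deferred acceptance with group proposals, under two simplifying assumptions: each HN holds at most one task/subtask at a time, and a group proposal is accepted only if every HN in the subset prefers the proposer to its current holder. The names `task_pls` and `hn_pls` are illustrative.

```python
def da_matching(task_pls, hn_pls):
    """Task-proposing deferred acceptance (sketch of Algorithm 1).

    task_pls[t]: ordered list of HN subsets (tuples); the empty tuple ()
                 means local execution and is the least preferred entry.
    hn_pls[h]:   list of tasks ranked by PE, most preferred first.
    """
    next_idx = {t: 0 for t in task_pls}
    holder = {}        # HN -> task currently holding it (or None)
    assigned = {}      # task -> accepted subset, or () for local execution
    free = list(task_pls)
    while free:
        t = free.pop(0)
        while next_idx[t] < len(task_pls[t]):
            subset = task_pls[t][next_idx[t]]
            next_idx[t] += 1
            if subset == ():                     # fall back to local computing
                assigned[t] = ()
                break
            # every HN in the subset must prefer t over its current holder
            if all(holder.get(h) is None or
                   hn_pls[h].index(t) < hn_pls[h].index(holder[h])
                   for h in subset):
                for h in subset:                 # displace less-preferred holders
                    loser = holder.get(h)
                    if loser is not None and loser != t:
                        for h2 in assigned.pop(loser, ()):
                            if holder.get(h2) == loser:
                                holder[h2] = None
                        free.append(loser)       # rejected task proposes again
                    holder[h] = t
                assigned[t] = tuple(subset)
                break
        else:                                    # PL exhausted: compute locally
            assigned[t] = ()
    return assigned

# Toy instance: HN A prefers t2, so t1 is displaced and ends up local.
result = da_matching({"t1": [("A", "B"), ("A",), ()], "t2": [("A",), ()]},
                     {"A": ["t2", "t1"], "B": ["t1", "t2"]})
print(result)
```

A displaced task loses its whole subset and re-enters the proposal queue, which mirrors the rejection step of DA; the loop terminates because each task moves monotonically down its finite PL.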

D. Optimal Task Offloading and Communication Scheduling (OTOS) Algorithm

The aforementioned matching algorithm determines a set of HNs [TeX:] $$\mathcal{H}_i^{\mathcal{M}}$$ used to execute a task [TeX:] $$t_i$$. Then, solving the problem [TeX:] $$\mathbf{P}\left(t_i \mid \mathcal{H}_i^{\mathcal{M}}\right)$$ yields the optimal scheduling policy for subtask offloading and communication, represented by the optimal vectors [TeX:] $$\boldsymbol{\alpha}_i, \mathbf{T}_i^t, \mathbf{T}_i^p$$, and [TeX:] $$\mathbf{O}_i$$.

E. Stability Analysis

Proposition 5.1: The matching produced by Algorithm 1 is pairwise stable.

Proof: The procedure producing the PLs of TNs and HNs shows that each task is always matched with its most preferred subset of HNs such that the delay is minimized. At every step of the algorithm, each task proposes to its most preferred subset of HNs that does not include any HN that has previously rejected it. Consider a task [TeX:] $$t_i$$ and a HN [TeX:] $$F_k$$ at any step such that [TeX:] $$F_k \in C h_i\left(\mathcal{M}\left(t_i\right) \cup F_k\right)$$, where [TeX:] $$\mathcal{M}\left(t_i\right)$$ is the set of HNs that [TeX:] $$t_i$$ is matched with at the current step, and [TeX:] $$C h_i\left(\mathcal{H}_i\right)$$ denotes the most preferred subset of [TeX:] $$\left.\mathcal{H}_i \text { (i.e., } \overline{\mathcal{H}}_i^*\right)$$. At some earlier step of the algorithm, [TeX:] $$t_i$$ proposed to [TeX:] $$F_k$$ and was subsequently rejected, so [TeX:] $$F_k$$ prefers [TeX:] $$\mathcal{M}\left(F_k\right)$$ to [TeX:] $$t_i$$, and thus [TeX:] $$\mathcal{M}$$ is not blocked by the pair ([TeX:] $$t_i, F_k$$). Because this pair is arbitrary, and [TeX:] $$\mathcal{M}$$ is not blocked by any individual agent, [TeX:] $$\mathcal{M}$$ is stable.

Algorithm 1
M2O matching algorithm.

VI. SIMULATION AND PERFORMANCE EVALUATION

A. Simulation Environment Setup

The event-driven simulation framework supported by the SimPy library in Python is used to conduct the simulation scenarios that investigate the performance of DISCO. The considered network includes 100 FNs deployed uniformly in a region of 200 m × 200 m. We assume that all FNs can connect together through wireless communication links within their coverage, which is 40 m for each FN. The cloud provides 50 VMs for offloading tasks, and all FNs can connect to the cloud via associated APs or GWs. Table I summarizes the important parameters and values for the simulation scenario, which are adopted from the related works [25], [33] to simulate and evaluate the performance of the DATS and POST algorithms. Notably, U[x, y] indicates the uniform distribution on the interval [x, y]. Accordingly, to account for the heterogeneity of fog computing nodes, their CPU frequency and processing density are varied in each simulation run. In addition, the task size and the active ratio of TNs are varied to evaluate their impacts on the overall performance of the computation offloading algorithms.

Table 1.

PARAMETERS FOR SIMULATION.
Parameters Values
Number of FNs, [TeX:] $$N$$ 100
Number of VMs provided by cloud, [TeX:] $$M$$ 50
Task size, [TeX:] $$a_i$$ {5, 10, 15} MB
Active ratio of TNs, β {0.1–0.9}
Processing density of FNs, [TeX:] $$\gamma$$ U[500,1000] cycle/bit
CPU frequency of FNs, [TeX:] $$f$$ {1.0, 1.5, 2.0, 2.5} GHz
Processing density of each VM, [TeX:] $$\gamma_c$$ 100 cycle/bit
CPU frequency of each VM, [TeX:] $$f_c$$ 10 GHz
Transmitting power of FNs, [TeX:] $$P^t$$ U[100,350] mW
Link rates from APs and GWs to cloud U[5,10] Gb/s
Noise power spectral density, [TeX:] $$N_0$$ −174 dBm/Hz
Bandwidth, [TeX:] $$B$$ U[15, 90] kHz

Fig. 7.

The average task execution delay offered by the comparative offloading schemes.

All simulation results are averaged over 100 simulation rounds, each lasting 1000 seconds according to the CPU clock. We compare the proposed DISCO approach with DATS and POST, which are the most closely related approaches. All three algorithms employ task division to obtain the benefits of parallel subtask computation. In addition, DATS uses an M2O matching model to schedule and offload multiple tasks to HNs at each decision-making time, but it does not schedule the data communication and data computation of subtasks. Meanwhile, POST applies game theory to find the GNE point for mapping tasks/subtasks to HNs [25].

B. Evaluation and Analysis

Fig. 7 depicts the average delay achieved by POST, DATS, and DISCO when β is varied. When there is a large percentage of HNs in the network (i.e., β is small (0.1, 0.2)), DISCO, DATS, and POST have similar performance. In these scenarios, there are abundant resources to offer optimal solutions for offloading tasks. When β increases, the performance gap between DISCO and DATS, POST becomes larger. When β = 0.9, there is a high workload in the network and a small percentage of available resources (i.e., HNs). Therefore, more tasks are likely to be computed locally by TNs, which leads to a considerable increase of delay for all three algorithms. However, DISCO still outperforms DATS and POST because it allocates the resources better through the stable matching. Concretely, DATS and POST assign the best HNs to tasks in a greedy manner, whereas DISCO uses the principles of stability in matching theory to allocate the resources fairly. In addition, DISCO outperforms the classical game theory-based algorithm (POST) owing to the two-sided stability of matching, in which both sides, tasks and HNs, would like to match at the stability point (as analyzed in Section V.E). Meanwhile, POST concerns only one-sided matching, in which each task wants to select its most preferred HNs for offloading; in other words, POST offers a greedy selection of offloading nodes. Furthermore, the proposed OTOS mechanism embedded in DISCO improves the performance of task offloading compared to the random scheduling approaches used in DATS and POST.

Fig. 8.

The delay reduction ratio offered by DISCO, DATS, and POST.

Fig. 9.

The delay distribution achieved by DISCO, DATS, and POST.

To further examine the potential of DISCO, we evaluate the delay reduction ratio (DRR), defined as the total delay reduction relative to the total delay of local computation. Fig. 8 shows the simulation results for the DRR achieved by the three algorithms. All schemes employ task division and the associated parallel task processing, hence reducing the delay considerably compared with local computing. In particular, DISCO reduces the delay substantially compared to DATS and POST owing to the OTOS mechanism.
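The DRR definition above amounts to one line of arithmetic; the sketch below uses assumed per-task delays (in seconds) purely to make the metric concrete.

```python
def delay_reduction_ratio(local_delays, scheme_delays):
    """DRR: fraction of the total local-computation delay removed
    by an offloading scheme (1.0 = all delay eliminated)."""
    total_local = sum(local_delays)
    return (total_local - sum(scheme_delays)) / total_local

# Assumed toy delays for five tasks: local computing vs. an offloading scheme.
drr = delay_reduction_ratio([6.0, 4.0, 5.0, 7.0, 3.0],
                            [2.5, 1.8, 2.2, 3.0, 1.5])
print(f"DRR = {drr:.2f}")
```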

We evaluate the distribution of task execution delay as shown in Fig. 9. For different active ratios of TNs, DISCO offers a uniform distribution of task delays across individual TNs owing to the fair resource allocation and optimal subtask scheduling. Contrarily, a high variance of delays is present in the DATS and POST algorithms because of the unfairness of HN allocations for offloading tasks. To be specific, in both DATS and POST, some tasks are offloaded by coalitions of the best HNs, while the other tasks may suffer longer execution delays when offloaded by the remaining low-capability HNs. This issue is obvious in networks with more TNs (e.g., β = 0.8), as shown in Fig. 9. The scarcity of HNs leads to the high variance of the delay distribution in the DATS and POST algorithms.

Fig. 10 supports the superior performance of DISCO through the average utilization ratio of HNs. As shown in this figure, the task division employed in the three schemes can increase the utilization ratio of fog computing devices since fogs with limited resources can still process subtasks. However, DISCO uses the resources of HNs better than DATS and POST because the proposed matching algorithm allows the tasks to select subsets of HNs fairly. In other words, a larger percentage of HNs participates in the collaborative offloading. In addition, the utilization of fog resources is affected by the convergence of the algorithms.

Fig. 11 shows the convergence of the related matching algorithms. DISCO outperforms DATS and POST with the minimum number of iterations for convergence. DATS must iterate over subsets of HNs to determine the quota for each TN. POST also requires a large amount of information exchange to reach the equilibrium. The gap becomes larger as the percentage of TNs in the network increases. Consequently, DISCO, as a distributed algorithm with low computational complexity, offers a potential solution for large-scale fog networks.

DATS and POST take longer run times to reach convergence, thus posing an increased delay of subtask processing and offloading. Consequently, the number of BNs slightly increases over subsequent scheduling intervals, which further increases the delay of DATS and POST.

VII. CONCLUSIONS AND FUTURE WORKS

This paper presented DISCO, a distributed computation offloading framework for heterogeneous FCNs that reduces the delay of task execution using matching theory. The intensive simulation results show that DISCO is an effective offloading solution achieving reduced execution delay in fog computing environments in which the fog computing devices are heterogeneous and the tasks are of different kinds. By applying the task division technique and the associated parallel computation, DISCO allows a task to be processed by multiple HNs to significantly reduce the overall task execution delay. The optimal scheduling of subtask transmission and subtask processing also contributes to the delay reduction objective. The matching theory-based principles are used to overcome the rationality and selfishness of individual TNs and simultaneously offer a fair resource allocation.

The model presented in this paper potentially leads to several extensions. First, a many-to-many matching model can be used when a HN can process multiple tasks/subtasks. With this model, a group of HNs can process a set of tasks or subtasks collaboratively to improve the system performance in terms of delay. However, externality must also be considered in this case because the scheduling of tasks or subtasks within each HN matters. Second, two-sided matching can be examined in scenarios where load balancing and energy consumption criteria are considered by the HNs to construct their preference lists. Third, reinforcement learning algorithms [45] are a potential tool to investigate matching problems in which one side may not know its preference relations a priori and can only construct its preference list by interacting with the players of the other side [45].

Biography

Hoa Tran-Dang

Hoa Tran-Dang (M’17) received the B.E. degree in Electrical and Electronics Engineering from Hanoi University of Science and Technology (HUST), Vietnam and the M.S. degree in Electronics Engineering from Kumoh National Institute of Technology (KIT), South Korea, in 2010 and 2012, respectively. He pursued the Ph.D. degree with the University of Lorraine, France, during 2013-2017. He currently works in the NSL laboratory, department of ICT Convergence Engineering at Kumoh National Institute of Technology, South Korea, as a Research Professor. His research interests include wireless sensor networks, Internet of things (IoT), physical Internet, and radio resource management in wireless industrial networks.

Biography

Dong-Seong Kim

Dong-Seong Kim (S’98-M’03-SM’14) received his Ph.D. degree in Electrical and Computer Engineering from Seoul National University, Seoul, Korea, in 2003. From 1994 to 2003, he worked as a Full-time Researcher in ERC-ACI at Seoul National University, Seoul, Korea. From March 2003 to February 2005, he worked as a Postdoctoral Researcher at the Wireless Network Laboratory in the School of Electrical and Computer Engineering at Cornell University, NY. From 2007 to 2009, he was a Visiting Professor with the Department of Computer Science, University of California, Davis, CA. He is currently the Director of the KIT Convergence Research Institute and the ICT Convergence Research Center (ITRC program) supported by the Korean government at Kumoh National Institute of Technology. He is a Senior Member of IEEE and ACM. His current main research interests are real-time IoT, industrial wireless control networks, networked embedded systems, and fieldbus.

References

  • 1 A. Zanella, N. Bui, A. Castellani, L. Vangelista, and M. Zorzi, "Internet of things for smart cities," IEEE Internet Things J., vol. 1, no. 1, pp. 22-32, Feb. 2014.doi:[[[10.48175/ijarsct-5821]]]
  • 2 D. A. Chekired, L. Khoukhi, and H. T. Mouftah, "Industrial IoT data scheduling based on hierarchical fog computing: A key for enabling smart factory," IEEE Trans. Ind. Inform., vol. 14, no. 10, pp. 4590-4602, Oct. 2018.doi:[[[10.1109/tii.2018.2843802]]]
  • 3 D.-S. Kim, T.-D. Hoa, and H.-T. Thien, "On the reliability of industrial internet of things from systematic perspectives: Evaluation approaches, challenges, and open issues," IETE Technical Review, pp. 1-32, Feb. 2022.doi:[[[10.1080/02564602.2022.2028586]]]
  • 4 H. Tran-Dang, N. Krommenacker, P. Charpentier, and D.-S. Kim, "The Internet of things for logistics: Perspectives, application review, and challenges," IETE Technical Review, vol. 39, no. 1, pp. 1-29, Jan. 2022.doi:[[[10.1080/02564602.2020.1827308]]]
  • 5 H. Tran-Dang, N. Krommenacker, P. Charpentier, and D. Kim, "Toward the Internet of things for physical Internet: Perspectives and challenges," IEEE Internet Things J., vol. 7, no. 6, pp. 4711-4736, Feb. 2020.doi:[[[10.1109/jiot.2020.2971736]]]
  • 6 H. Tran-Dang and D.-S. Kim, "The physical Internet in the era of digital transformation: Perspectives and open issues," IEEE Access, vol. 9, pp. 164613-164631, Nov. 2021.doi:[[[10.1109/access.2021.3131562]]]
  • 7 B. P. Rimal, E. Choi, and I. Lumb, "A taxonomy and survey of cloud computing systems," in Proc. IEEE NCM, Aug. 2009.doi:[[[10.1109/ncm.2009.218]]]
  • 8 A. Botta, W. de Donato, V. Persico, and A. Pescapé, "Integration of cloud computing and internet of things: A survey," Future Generation Comput. Syst., vol. 56, pp. 684-700, Mar. 2016.doi:[[[10.1016/j.future.2015.09.021]]]
  • 9 F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the Internet of things," in Proc. ACM MCC, 2012.doi:[[[10.1145/2342509.2342513]]]
  • 10 S. Sarkar, S. Chatterjee, and S. Misra, "Assessment of the suitability of fog computing in the context of Internet of things," IEEE Trans. Cloud Comput., vol. 6, no. 1, pp. 46-59, Jan. 2018.doi:[[[10.1109/tcc.2015.2485206]]]
  • 11 A. Yousefpour, G. Ishigaki, R. Gour, and J. P. Jue, "On reducing IoT service delay via fog offloading," IEEE Internet Things J., vol. 5, no. 2, pp. 998-1010, Apr. 2018.doi:[[[10.1109/jiot.2017.2788802]]]
  • 12 S. Sarkar and S. Misra, "Theoretical modelling of fog computing: A green computing paradigm to support IoT applications," IET Netw., vol. 5, no. 2, pp. 23-29, Mar. 2016.doi:[[[10.1049/iet-net.2015.0034]]]
  • 13 Y.-J. Ku et al., "5G radio access network design with the fog paradigm: Confluence of communications and computing," IEEE Commun. Mag., vol. 55, no. 4, pp. 46-52, Apr. 2017.doi:[[[10.1109/mcom.2017.1600893]]]
  • 14 M. Peng, S. Yan, K. Zhang, and C. Wang, "Fog-computing-based radio access networks: Issues and challenges," IEEE Netw., vol. 30, no. 4, pp. 46-53, Jul. 2016.doi:[[[10.1109/mnet.2016.7513863]]]
  • 15 T.Q.S. Quek et al., Cloud Radio Access Networks: Principles, Technologies, and Applications. Cambridge University Press, 2017.custom:[[[https://www.cambridge.org/core/books/cloud-radio-access-networks/3F5462788889D55CA51EFE8C7A23CFF2]]]
  • 16 M. Aazam, S. Zeadally, and K. A. Harras, "Offloading in fog computing for IoT: Review, enabling technologies, and research opportunities," Future Generation Comput. Syst., vol. 87, pp. 278-289, Oct. 2018.doi:[[[10.1016/j.future.2018.04.057]]]
  • 17 K. H. Abdulkareem et al., "A review of fog computing and machine learning: Concepts, applications, challenges, and open issues," IEEE Access, vol. 7, pp. 153123-153140, Oct. 2019.doi:[[[10.1109/access.2019.2947542]]]
  • 18 L. Liu, Z. Chang, X. Guo, S. Mao, and T. Ristaniemi, "Multiobjective optimization for computation offloading in fog computing," IEEE Internet Things J., vol. 5, no. 1, pp. 283-294, Dec. 2017.doi:[[[10.1109/jiot.2017.2780236]]]
  • 19 H. Tran-Dang and D.-S. Kim, "FRATO: Fog resource based adaptive task offloading for delay-minimizing IoT service provisioning," IEEE Trans. Parallel Distrib. Syst., vol. 32, no. 10, pp. 2491-2508, Mar. 2021.doi:[[[10.1109/tpds.2021.3067654]]]
  • 20 J. Yao and N. Ansari, "Fog resource provisioning in reliability-aware IoT networks," IEEE Internet Things J., vol. 6, no. 5, pp. 8262-8269, Oct. 2019.doi:[[[10.1109/jiot.2019.2922585]]]
  • 21 A. Yousefpour et al., "Fogplan: A lightweight qos-aware dynamic fog service provisioning framework," IEEE Internet Things J., vol. 6, no. 3, pp. 5080-5096, Jun. 2019.doi:[[[10.1109/jiot.2019.2896311]]]
  • 22 G. Lee, W. Saad, and M. Bennis, "An online optimization framework for distributed fog network formation with minimal latency," IEEE Trans. Wireless Commun., vol. 18, no. 4, pp. 2244-2258, Oct. 2019.doi:[[[10.1109/twc.2019.2901850]]]
  • 23 K. Guo, M. Sheng, T. Q. S. Quek, and Z. Qiu, "Task offloading and scheduling in fog ran: A parallel communication and computation perspective," IEEE Wireless Commun. Letters, vol. 9, no. 2, pp. 215-218, Oct. 2020.doi:[[[10.1109/lwc.2019.2948860]]]
  • 24 Y. Yang et al., "POMT: Paired offloading of multiple tasks in heterogeneous fog networks," IEEE Internet Things J., vol. 6, no. 5, pp. 8658-8669, Jun. 2019.doi:[[[10.1109/jiot.2019.2922324]]]
  • 25 Z. Liu, Y . Yang, K. Wang, Z. Shao, and J. Zhang, "Post: Parallel offloading of splittable tasks in heterogeneous fog networks," IEEE Internet Things J., vol. 7, no. 4, pp. 3170-3183, Jan. 2020.doi:[[[10.1109/jiot.2020.2965566]]]
  • 26 S. Durand and B. Gaujal, "Complexity and optimality of the best response algorithm in random potential games," in Proc. SAGT, Sep. 2016.doi:[[[10.1007/978-3-662-53354-3_4]]]
  • 27 E. A. Jorswieck, "Stable matchings for resource allocation in wireless networks," in Proc. IEEE DSP, Jul. 2011.doi:[[[10.1109/icdsp.2011.6004983]]]
  • 28 H. Tran-Dang and D.-S. Kim, "A survey on matching theory for distributed computation offloading in IoT-fog-cloud systems: Perspectives and open issues," IEEE Access, vol. 10, pp. 118353-118369, Nov. 2022.doi:[[[10.1109/access.2022.3219427]]]
  • 29 D. Gusfield, "The structure of the stable roommate problem: Efficient representation and enumeration of all stable assignments," SIAM J. Comput., vol. 17, no. 4, pp. 742-769, Jul. 2006.doi:[[[10.1137/0217048]]]
  • 30 K. Eriksson, J. Sjöstrand, and P. Strimling, "Three-dimensional stable matching with cyclic preferences," Math. Social Sci., vol. 52, no. 1, pp. 77-87, Jun. 2006.doi:[[[10.1007/s11590-020-01557-4]]]
  • 31 D. Gale and L. S. Shapley, "College admissions and the stability of marriage," The American Math. Monthly, vol. 69, no. 1, pp. 9-15, Jan. 1962.doi:[[[10.4169/amer.math.monthly.120.05.386]]]
  • 32 G. Zhang et al., "FEMTO: Fair and energy-minimized task offloading for fog-enabled IoT networks," IEEE Internet Things J., vol. 6, no. 3, pp. 4388-4400, Dec. 2018.doi:[[[10.1109/jiot.2018.2887229]]]
  • 33 Z. Liu, X. Yang, Y. Yang, K. Wang, and G. Mao, "DATS: Dispersive stable task scheduling in heterogeneous fog networks," IEEE Internet Things J., vol. 6, no. 2, pp. 3423-3436, Dec. 2018.doi:[[[10.1109/jiot.2018.2884720]]]
  • 34 M. Mukherjee et al., "Latency-driven parallel task data offloading in fog computing networks for industrial applications," IEEE Trans. Industrial Inform., vol. 16, no. 9, pp. 6050-6058, Dec. 2020.doi:[[[10.1109/tii.2019.2957129]]]
  • 35 F. Chiti, R. Fantacci, and B. Picano, "A matching theory framework for tasks offloading in fog computing for IoT systems," IEEE Internet Things J., vol. 5, no. 6, pp. 5089-5096, Sep. 2018.doi:[[[10.1109/jiot.2018.2871251]]]
  • 36 C. Swain et al., "METO: Matching-theory-based efficient task offloading in IoT-fog interconnection networks," IEEE Internet Things J., vol. 8, no. 16, pp. 12705-12715, Sep. 2020.doi:[[[10.1109/jiot.2020.3025631]]]
  • 37 Y. Zu et al., "SMETO: Stable matching for energy-minimized task offloading in cloud-fog networks," in Proc. IEEE VTC-Fall, Sep. 2019.doi:[[[10.1109/vtcfall.2019.8891292]]]
  • 38 C. Swain, M. N. Sahoo, and A. Satpathy, "Spato: A student project allocation based task offloading in IoT-fog systems," in Proc. IEEE ICC, Jun. 2021.doi:[[[10.1109/icc42927.2021.9500367]]]
  • 39 D. J. Abraham, R. W. Irving, and D. F. Manlove, "Two algorithms for the student-project allocation problem," J. Discrete Algorithms, vol. 5, no. 1, pp. 73-90, Mar. 2007.doi:[[[10.1016/j.jda.2006.03.006]]]
  • 40 A. Satpathy, M. N. Sahoo, L. Behera, C. Swain, and A. Mishra, "Vmatch: A matching theory based vdc reconfiguration strategy," in Proc. IEEE CLOUD, Oct. 2020.doi:[[[10.1109/cloud49709.2020.00031]]]
  • 41 M. Ali, N. Riaz, M. I. Ashraf, S. Qaisar, and M. Naeem, "Joint cloudlet selection and latency minimization in fog networks," IEEE Trans. Industrial Informatics, vol. 14, no. 9, pp. 4055-4063, Apr. 2018.doi:[[[10.1109/tii.2018.2829751]]]
  • 42 Y. Gu, Z. Chang, M. Pan, L. Song, and Z. Han, "Joint radio and computational resource allocation in IoT fog computing," IEEE Trans. Veh. Technol., vol. 67, no. 8, pp. 7475-7484, Mar. 2018.doi:[[[10.1109/tvt.2018.2820838]]]
  • 43 M. Al-khafajiy et al., "Improving fog computing performance via fog-2-fog collaboration," Future Generation Comput. Syst., vol. 100, pp. 266-280, Nov. 2019.doi:[[[10.1016/j.future.2019.05.015]]]
  • 44 Y. Yu, J. Zhang, and K. B. Letaief, "Joint subcarrier and CPU time allocation for mobile edge computing," in Proc. IEEE GLOBECOM, 2016.doi:[[[10.1109/glocom.2016.7841937]]]
  • 45 H. Tran-Dang, S. Bhardwaj, T. Rahim, A. Musaddiq, and D.-S. Kim, "Reinforcement learning based resource management for fog computing environment: Literature review, challenges, and open issues," J. Commun. Netw., vol. 24, no. 1, pp. 83-98, Feb. 2023.doi:[[[10.23919/jcn.2021.000041]]]