## Hoa Tran-Dang and Dong-Seong Kim

Table I. Fog specification & resource status.

| Fog node | M (MB) | f (GHz) | [TeX:] $$\gamma$$ (cycles/bit) | W (s) |
|---|---|---|---|---|
| [TeX:] $$F_1$$ | 128 | 5 | 500 | 1.2 |
| [TeX:] $$F_2$$ | 256 | 10 | 600 | 2.15 |
| [TeX:] $$F_3$$ | 128 | 15 | 750 | 0.55 |
| [TeX:] $$F_4$$ | 256 | 10 | 1000 | 2.39 |

This FCS model can be deployed in many practical IoT systems, such as smart cities, smart factories, and smart warehouse management systems. Generally, these systems include multiple domains for providing IoT services and applications, and each FCS is designed to serve a specific set of services in a single domain, such as smart parking services or smart inventory monitoring in warehouses.

To make efficient offloading decisions, [TeX:] $$F_0$$ relies on a neighbor resource table that contains updated information about the available resources of the HNs. We assume that the fog devices of the FCS use a common protocol to communicate and to update their resource status periodically [35]. We also assume that the resource states of the HNs remain unchanged during an offloading decision. Table I shows an example of the neighbor resource table stored by [TeX:] $$F_0$$, which records the resource states of its neighbors in terms of the memory capacity of the queue buffer (M), the CPU frequency (f), the CPU processing density ([TeX:] $$\gamma$$), and the expected waiting time in the queue (W).
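The neighbor resource table can be sketched as a small data structure kept by the fog controller. The following is a hypothetical Python sketch (the field names `M_mb`, `f_ghz`, `gamma`, `W_s` and the helper functions are illustrative, not part of the paper); the entries mirror Table I:

```python
# Hypothetical neighbor resource table kept by F0 (values from Table I).
NEIGHBOR_TABLE = {
    "F1": {"M_mb": 128, "f_ghz": 5,  "gamma": 500,  "W_s": 1.20},
    "F2": {"M_mb": 256, "f_ghz": 10, "gamma": 600,  "W_s": 2.15},
    "F3": {"M_mb": 128, "f_ghz": 15, "gamma": 750,  "W_s": 0.55},
    "F4": {"M_mb": 256, "f_ghz": 10, "gamma": 1000, "W_s": 2.39},
}

def update_status(table, node, **fields):
    """Apply a periodic status update reported by a helper node (HN)."""
    table[node].update(fields)

def processing_delay_s(entry, a_bits):
    """Delay for a_bits on one node: queue wait W + a * gamma / f."""
    return entry["W_s"] + a_bits * entry["gamma"] / (entry["f_ghz"] * 1e9)
```

For instance, an 8 Mb subtask on [TeX:] $$F_3$$ would take 0.55 s of queue wait plus 0.4 s of processing under this model.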

Table II.

| Notation | Description |
|---|---|
| [TeX:] $$t_i$$ | Task [TeX:] $$i$$ |
| [TeX:] $$a_i$$ | Size of [TeX:] $$t_i$$ (bits) |
| [TeX:] $$t_{i k}$$ | Subtask divided from [TeX:] $$t_i$$ |
| [TeX:] $$a_{i k}$$ | Size of [TeX:] $$t_{i k}$$ (bits) |
| [TeX:] $$T_i^a$$ | Arrival time of [TeX:] $$t_i$$ at [TeX:] $$F_0$$ |
| [TeX:] $$T_i^t$$ | Start time of transmitting [TeX:] $$t_i$$ to an offloadee by [TeX:] $$F_0$$ |
| [TeX:] $$T_i^p$$ | Start time of processing [TeX:] $$t_i$$ by an offloadee |
| [TeX:] $$T_i^f$$ | Finish time of processing [TeX:] $$t_i$$ by an offloadee |
| [TeX:] $$D_i, D_{i k}$$ | Total delays for executing [TeX:] $$t_i$$ and [TeX:] $$t_{i k}$$ |
| [TeX:] $$W_i$$ | Expected waiting time in the queue of [TeX:] $$F_i$$ |
| [TeX:] $$M_i$$ | Memory capacity of the queue buffer of [TeX:] $$F_i$$ |
| [TeX:] $$\lambda$$ | Arrival rate of tasks at the queue of [TeX:] $$F_0$$ (tasks/s) |
| [TeX:] $$F_i$$ | Fog node [TeX:] $$i$$ |
| [TeX:] $$r_{i k}$$ | Data transmission rate from [TeX:] $$F_i$$ to [TeX:] $$F_k$$ |
| [TeX:] $$f_i$$ | CPU frequency of [TeX:] $$F_i$$ (cycles/s) |
| [TeX:] $$\gamma_i$$ | CPU processing density of [TeX:] $$F_i$$ (cycles/bit) |

B. Task Model

Computation tasks arrive at the queue of [TeX:] $$F_0$$ following a Poisson process with an average rate of λ tasks per second (tasks/s), i.e., the inter-arrival times are exponentially distributed. At a given time, there is a set [TeX:] $$\mathcal{T}=\left\{t_1, t_2, \cdots, t_K\right\}$$ of [TeX:] $$K$$ computational tasks residing in the queue of [TeX:] $$F_0$$. A single task [TeX:] $$t_i$$ can be processed by either [TeX:] $$F_0$$ or [TeX:] $$F_k(k=1, \cdots, N)$$. In addition, a task [TeX:] $$t_i$$ with size [TeX:] $$a_i$$ can be divided into [TeX:] $$m=N+1$$ subtasks [TeX:] $$\text { (i.e., } t_{i k}, k=0, \cdots, N)$$, which can be processed in parallel by different fog nodes. As illustrated in Fig. 1, two subtasks [TeX:] $$t_{21}$$ and [TeX:] $$t_{22}$$ divided from [TeX:] $$t_2$$ can be computed simultaneously by [TeX:] $$F_2$$ and [TeX:] $$F_3$$, respectively. When m = 1, the entire task is processed by a single fog node. We define [TeX:] $$a_{i k}$$ as the data size of subtask [TeX:] $$t_{i k}$$; thus, we have

[TeX:] $$a_i=\sum_{k=0}^N a_{i k}.$$

With this definition, [TeX:] $$a_{i k}>0$$ if [TeX:] $$t_{i k}$$ is computed by [TeX:] $$F_k$$, and [TeX:] $$a_{i k}=0$$ if [TeX:] $$F_k$$ is not involved in processing [TeX:] $$t_i$$.
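The size-conservation property of task division can be sketched in a few lines. This is a hypothetical helper (the function name and the fractional-share interface are assumptions for illustration): subtask sizes are produced from fractional shares and must sum back to the task size.

```python
def split_task(a_i, shares):
    """Divide a task of a_i bits among fog nodes F0..FN according to
    fractional shares; a zero share means that node is not involved."""
    assert abs(sum(shares) - 1.0) < 1e-9, "shares must sum to 1"
    sizes = [a_i * s for s in shares]
    # Conservation: the subtask sizes sum back to the task size.
    assert abs(sum(sizes) - a_i) < 1e-6
    return sizes
```

For example, an 8 Mb task split as (0.5, 0.25, 0.25, 0) keeps half locally at F0, offloads a quarter each to two HNs, and leaves the last HN idle.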

C. Problem Formulation

This paper aims to develop an online approach for [TeX:] $$F_0$$ to offload each individual task upon its arrival at the queue of [TeX:] $$F_0$$ so that the execution delay of the task is minimized. Based on the resource table, the algorithm also considers dividing the task dynamically, thereby exploiting the parallel computation of subtasks to achieve the delay reduction objective. In addition, we assume that data transmission and data processing are not performed simultaneously by [TeX:] $$F_0$$; thus, the algorithm requires an efficient scheduling mechanism for subtasks. The next section formulates an optimization problem that combines task offloading and subtask scheduling to minimize the execution delay of an individual task.

For the sake of clarity, the key notations used in this paper are summarized in Table II.

The overall delay [TeX:] $$D_i$$ for executing task [TeX:] $$t_i$$ is the interval from the arrival time [TeX:] $$T_i^a$$ of the task at the queue of [TeX:] $$F_0$$ until the end of its processing time [TeX:] $$T_i^f$$ at an offloadee. As illustrated in Fig. 2, [TeX:] $$D_i=T_i^f-T_i^a$$.

Suppose that there is no data dependency between the subtasks of [TeX:] $$t_i$$; the subtasks can therefore be computed in parallel by different HNs. We also assume that the results of processing the subtasks do not need to be aggregated because their sizes are small and the aggregation time is negligible. Therefore, [TeX:] $$D_i=\max \left\{D_{i k}\right\}, k=0, \cdots, N$$, where [TeX:] $$D_{i k}=T_{i k}^f-T_i^a$$. In this sense, besides subtask offloading (i.e., which fog node processes which subtask), subtask scheduling (i.e., which subtask is transmitted or processed first) has a significant impact on the total task execution delay.

The priority of tasks is not considered during processing; thus, they are processed according to the FCFS policy in the queue of [TeX:] $$F_0$$. In addition, to minimize the execution delay of individual tasks, scheduling is employed for the subtasks of a task. Because data transmission and data processing are not performed simultaneously by [TeX:] $$F_0$$, an optimal schedule is required to minimize the delay.

Lemma 4.1 specifies an efficient schedule for exploiting the parallel computation of subtasks to reduce the delay.

Lemma 4.1: When the subtasks of a task are processed by both [TeX:] $$F_0$$ and HNs, data transmission is performed before data computation to obtain the benefit of parallel computation.

Proof: For illustrative simplicity, suppose that task [TeX:] $$t_i$$ is divided into two subtasks, [TeX:] $$t_{i 0}$$ and [TeX:] $$t_{i 1}$$, which are processed by [TeX:] $$F_0$$ and [TeX:] $$F_1$$, respectively.

We consider the first scenario, as illustrated in Fig. 3, in which the queue of [TeX:] $$F_1$$ is empty at the time of the offloading decision (i.e., [TeX:] $$W_1$$ = 0). We consider Case 1, in which [TeX:] $$t_{i 0}$$ is processed first by [TeX:] $$F_0$$. The total delay to execute [TeX:] $$t_{i 0}$$ is:

[TeX:] $$D_{i 0}=\frac{a_{i 0} \gamma_0}{f_0}.$$

Because data processing and data transmission are not handled simultaneously by [TeX:] $$F_0$$, the total delay to execute [TeX:] $$t_{i 1}$$, which includes [TeX:] $$D_{i 0}$$ as the waiting time, is calculated by [TeX:] $$D_{i 1}=D_{i 0}+a_{i 1} / r_{01}+a_{i 1} \gamma_1 / f_1$$. Because [TeX:] $$D_i=\max \left\{D_{i 0}, D_{i 1}\right\}, D_i=a_{i 0} \gamma_0 / f_0+a_{i 1} / r_{01}+a_{i 1} \gamma_1 / f_1$$. When [TeX:] $$t_{i 1}$$ is transmitted first in Case 2, the total delays to execute the subtasks are calculated as follows:

[TeX:] $$D_{i 0}^*=\frac{a_{i 1}}{r_{01}}+\frac{a_{i 0} \gamma_0}{f_0}, \quad D_{i 1}^*=\frac{a_{i 1}}{r_{01}}+\frac{a_{i 1} \gamma_1}{f_1}.$$

The total delay to finish [TeX:] $$t_i$$ in Case 2 is [TeX:] $$D_i^*=\max \left\{D_{i 0}^*, D_{i 1}^*\right\}$$. Obviously, [TeX:] $$D_i^*<D_i$$; thus, the lemma is proved.
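The two schedules of the first scenario can be compared numerically. This is a hypothetical sketch; the parameter values (bits, cycles/bit, Hz, bit/s) are illustrative assumptions, not values from the paper:

```python
def case1_delay(a0, a1, g0, f0, g1, f1, r01):
    """Case 1: F0 processes t_i0 first; t_i1 waits, then is sent to F1."""
    d_i0 = a0 * g0 / f0                      # local processing of t_i0
    d_i1 = d_i0 + a1 / r01 + a1 * g1 / f1    # wait, transmit, process on F1
    return max(d_i0, d_i1)

def case2_delay(a0, a1, g0, f0, g1, f1, r01):
    """Case 2: t_i1 is transmitted first; F0 and F1 then compute in parallel."""
    d_i0 = a1 / r01 + a0 * g0 / f0           # F0 computes after transmitting
    d_i1 = a1 / r01 + a1 * g1 / f1           # F1 computes right after reception
    return max(d_i0, d_i1)
```

With, e.g., two 4 Mb subtasks, a 50 Mb/s link, and the CPU figures of Table I, Case 2 finishes strictly earlier than Case 1, as the lemma states.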

We now examine the second scenario where the queue of [TeX:] $$F_1$$ is not empty (i.e., [TeX:] $$W_1$$ > 0), as shown in Fig. 4.

In Case 1, [TeX:] $$D_{i 0}=a_{i 0} \gamma_0 / f_0$$ because [TeX:] $$t_{i 0}$$ is executed first by [TeX:] $$F_0$$. Moreover, the delay to finish [TeX:] $$t_{i 1}$$ is obtained by:

[TeX:] $$D_{i 1}=\max \left\{W_1, D_{i 0}+\frac{a_{i 1}}{r_{01}}\right\}+\frac{a_{i 1} \gamma_1}{f_1}.$$

Obviously, [TeX:] $$D_i$$ is minimized when [TeX:] $$W_1<D_{i 0}+a_{i 1} / r_{01}$$; thus, [TeX:] $$D_i=a_{i 0} \gamma_0 / f_0+a_{i 1} / r_{01}+a_{i 1} \gamma_1 / f_1$$.

In Case 2 of the second scenario (i.e., [TeX:] $$t_{i 1}$$ is transmitted first to [TeX:] $$F_1$$), the delay to finish [TeX:] $$t_{i 0}$$ is [TeX:] $$D_{i 0}^*=a_{i 1} / r_{01}+a_{i 0} \gamma_0 / f_0$$. The execution delay of [TeX:] $$t_{i 1}$$ is as follows:

[TeX:] $$D_{i 1}^*=\frac{a_{i 1}}{r_{01}}+\frac{a_{i 1} \gamma_1}{f_1} \text { if } W_1 \leq \frac{a_{i 1}}{r_{01}}, \quad \bar{D}_{i 1}^*=W_1+\frac{a_{i 1} \gamma_1}{f_1} \text { otherwise. }$$

Recall that the delay to finish [TeX:] $$t_i$$ is obtained by [TeX:] $$D_i^*=\max \left\{D_{i 0}^*, D_{i 1}^*\right\} \text { or } D_i^*=\max \left\{D_{i 0}^*, \bar{D}_{i 1}^*\right\}$$. Therefore, if [TeX:] $$D_i^*=D_{i 0}^* \text { or } D_i^*=D_{i 1}^*$$, then [TeX:] $$D_i^*<D_i$$. This means that Case 2 is beneficial for delay reduction compared with Case 1. If [TeX:] $$D_i^*=\bar{D}_{i 1}^*$$, we also have [TeX:] $$D_i^*<D_i$$ because the condition [TeX:] $$W_1<a_{i 0} \gamma_0 / f_0+a_{i 1} / r_{01}$$ still holds (see (5)).

Consequently, DCTO has the potential to reduce the delay only if all subtasks scheduled to be offloaded to HNs are transmitted first from [TeX:] $$F_0$$.

We now consider a more general scenario in which task [TeX:] $$t_i$$ is divided into three subtasks [TeX:] $$t_{i 0}, t_{i 1}, t_{i 2}$$. We suppose that these subtasks are processed by [TeX:] $$F_0$$ locally and offloaded to [TeX:] $$F_1$$ and [TeX:] $$F_2$$, respectively. Based on the aforementioned proof, for any two subtasks [TeX:] $$t_{i 0}$$ and [TeX:] $$t_{i 1}$$, which are processed locally by [TeX:] $$F_0$$ and offloaded to [TeX:] $$F_1$$, the benefit of delay reduction is enabled when [TeX:] $$t_{i 1}$$ is transmitted to [TeX:] $$F_1$$ before [TeX:] $$t_{i 0}$$ is processed. Now, assume that for the three subtasks this benefit can be achieved when [TeX:] $$t_{i 2}$$ is transmitted last to [TeX:] $$F_2$$, after the processing of [TeX:] $$t_{i 0}$$ is completed. In this case, the total delay [TeX:] $$D_{i 2}$$ to finish [TeX:] $$t_{i 2}$$ is the sum of the data transmission delay of [TeX:] $$t_{i 1}$$, the data processing delay of [TeX:] $$t_{i 0}$$, the data transmission delay of [TeX:] $$t_{i 2}$$, and the data processing delay of [TeX:] $$t_{i 2}$$ at [TeX:] $$F_2$$. Obviously, [TeX:] $$D_{i 2} \geq \max \left\{D_{i 0}, D_{i 1}\right\}$$. Therefore, the assumption of transmitting [TeX:] $$t_{i 2}$$ last cannot yield the benefit of delay reduction.

Based on the lemma, we define [TeX:] $$o_{j k}^i$$ to represent the scheduling order between subtasks [TeX:] $$t_{i j}$$ and [TeX:] $$t_{i k}$$ for wireless transmission. In particular, if [TeX:] $$t_{i j}$$ is scheduled on the wireless link before [TeX:] $$t_{i k}$$, we have [TeX:] $$o_{j k}^i$$ = 1; otherwise, [TeX:] $$o_{j k}^i$$ = 0. Because [TeX:] $$t_{i j}$$ is transmitted either before or after [TeX:] $$t_{i k}$$, we have:

[TeX:] $$o_{j k}^i+o_{k j}^i=1.$$

We define [TeX:] $$T_{i j}^t$$ as the start time for transmitting [TeX:] $$t_{i j}$$. There is no overlap between the wireless transmission times of the two subtasks [TeX:] $$t_{i j}$$ and [TeX:] $$t_{i k}$$; therefore, their start transmission times must satisfy the following:

[TeX:] $$T_{i k}^t \geq T_{i j}^t+\frac{a_{i j}}{r_{0 j}}-L\left(1-o_{j k}^i\right), \quad \forall j \neq k,$$

where [TeX:] $$L$$ is a sufficiently large constant, and [TeX:] $$r_{00}=\infty$$ reflects that a subtask processed locally by [TeX:] $$F_0$$ incurs no transmission delay. Notably, the propagation delay is neglected because it is much smaller than the transmission delay ([TeX:] $$a_{i k} / r_{0 n}$$).
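The pairwise no-overlap requirement on the shared wireless link can be checked directly. The following is a hypothetical sketch, where `Tt`, `dur`, and `o` are illustrative stand-ins for the start times, transmission durations, and ordering binaries, and `L` plays the role of the large constant:

```python
def transmissions_feasible(Tt, dur, o, L=1e9):
    """Check pairwise big-M no-overlap constraints on the wireless link.
    Tt[j]: start time of transmission j; dur[j]: its duration;
    o[(j, k)] = 1 if transmission j is scheduled before k, else 0."""
    n = len(Tt)
    for j in range(n):
        for k in range(n):
            if j == k:
                continue
            # If j precedes k, then k may not start before j finishes;
            # otherwise the constraint is relaxed by the large constant L.
            if Tt[k] < Tt[j] + dur[j] - L * (1 - o[(j, k)]):
                return False
    return True
```

A schedule where the second transmission starts exactly when the first finishes is feasible; starting it earlier violates the constraint.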

Additionally, the start processing time [TeX:] $$T_{i n}^p$$ of [TeX:] $$t_{i n}$$ must not be smaller than its arrival time at the offloadee or the finish time of the last task in the queue of the offloadee (see Fig. 5). Therefore,

[TeX:] $$T_{i n}^p \geq \max \left\{T_{i n}^t+\frac{a_{i n}}{r_{0 n}}, W_n\right\},$$

where [TeX:] $$W_n$$ is the expected waiting time in the queue of [TeX:] $$F_n$$.

Because all the tasks in the queue of [TeX:] $$F_n(n=1, \cdots, N)$$ are transmitted only from [TeX:] $$F_0$$, the expected waiting time in the queue of [TeX:] $$F_n$$ can be obtained by:

[TeX:] $$W_n=\frac{\gamma_n}{f_n} \sum_i a_i,$$

where [TeX:] $$\sum_i a_i$$ is the total data size of the tasks residing in the queue of [TeX:] $$F_n$$.
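The expected waiting time can be sketched as the time to drain the bits already queued at an HN, consistent with the description above; the function name and units are illustrative assumptions:

```python
def expected_wait_s(queued_bits, gamma, f_hz):
    """Expected queue waiting time W_n of an HN: all queued bits must be
    processed at gamma cycles/bit on a CPU of f_hz cycles/s."""
    return gamma * sum(queued_bits) / f_hz
```

For example, two queued 4 Mb tasks at 600 cycles/bit on a 10 GHz CPU yield a 0.48 s wait.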

In addition, the buffer sizes of the queues of the fog devices are limited; thus,

[TeX:] $$a_{i n}+\sum_i a_i \leq M_n, \quad n=0, \cdots, N.$$

The end time [TeX:] $$T_{i j}^f$$ to finish [TeX:] $$t_{i j}$$ is calculated as follows:

[TeX:] $$T_{i j}^f=T_{i j}^p+\frac{a_{i j} \gamma_j}{f_j}.$$

Recall that [TeX:] $$T_{i n}^f=T_{i n}^p+\frac{a_{i n} \gamma_n}{f_n}$$; therefore, [TeX:] $$D_i=\max _j\left\{D_{i j} \mid j=0, \cdots, N\right\}=\max _j\left\{T_{i j}^f\right\}-T_i^a$$. The optimization problem [TeX:] $$\mathbf{P}$$ is modeled as follows to minimize [TeX:] $$D_i$$:

[TeX:] $$\mathbf{P}: \min _{a_i, T_i^t, T_i^p, o_i} D_i,$$

where [TeX:] $$a_i=\left[a_{i 0}, a_{i 1}, \cdots, a_{i N}\right], T_i^t=\left[T_{i 0}^t, \cdots, T_{i N}^t\right], T_i^p=\left[T_{i 0}^p, \cdots, T_{i N}^p\right]$$, and [TeX:] $$o_i=\left[o_{01}^i, \cdots, o_{0 N}^i, \cdots, o_{N 0}^i, \cdots, o_{N N-1}^i\right]$$. All the constraints introduced in the problem are linear except constraint (9). However, it can be relaxed to a linear form by using the big-M method [36]. Then, the problem [TeX:] $$\mathbf{P}$$ can be solved by mixed-integer linear programming (MILP), thus requiring low computational complexity compared with the algorithms in the related works. The solutions to the problem [TeX:] $$\mathbf{P}$$ indicate dynamic approaches that specify which tasks should be divided and into how many subtasks, which subtask is processed by which fog device, and also the transmission and processing order of the subtasks. In the next section, many simulation scenarios are conducted to examine the performance of the proposed offloading schemes under the impact of various factors.
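The joint decision captured by [TeX:] $$\mathbf{P}$$ (how to split a task, which node gets each subtask, and in which order to transmit) can be illustrated by a small brute-force search. This is a hypothetical sketch over a discretized split grid for one task and two HNs with transmit-first scheduling, not the MILP solved in the paper; all names and parameter values are assumptions:

```python
from itertools import permutations

def best_offload(a, nodes, r, steps=10):
    """Exhaustively search discretized splits and transmission orders for
    one task of `a` bits over F0 plus two HNs. `nodes` maps name ->
    (gamma, f, W); `r` maps HN name -> link rate from F0.
    Returns (best delay, (sizes, transmission order))."""
    names = list(nodes)                       # e.g. ["F0", "F1", "F2"]
    best = (float("inf"), None)
    grid = [k / steps for k in range(steps + 1)]
    for s1 in grid:
        for s2 in grid:
            s0 = 1.0 - s1 - s2
            if s0 < -1e-9:
                continue
            sizes = dict(zip(names, (a * s0, a * s1, a * s2)))
            hns = [n for n in names[1:] if sizes[n] > 0]
            for order in permutations(hns):   # order on the shared link
                t = 0.0                       # F0 transmits first (Lemma 4.1)
                finish = []
                for n in order:
                    t += sizes[n] / r[n]      # serial wireless transmissions
                    g, f, w = nodes[n]
                    start = max(t, w)         # possibly wait in the HN queue
                    finish.append(start + sizes[n] * g / f)
                if sizes[names[0]] > 0:       # local part runs after all sends
                    g0, f0, _ = nodes[names[0]]
                    finish.append(t + sizes[names[0]] * g0 / f0)
                d = max(finish)
                if d < best[0]:
                    best = (d, (sizes, order))
    return best
```

By construction the search never returns a delay worse than keeping the whole task at F0, since the no-division option is part of the grid.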

A. Simulation Environment Setup

The SimPy library in Python, an event-driven framework, is used to design process-based discrete-event simulation scenarios for the studied FCS, which investigate the performance of DCTO and provide a comparative study with the related algorithms. In addition, we use the CVX solver [37] to solve the optimization problem. Table III summarizes the important parameters and values for the simulation scenarios, where U(x, y) indicates the uniform distribution on the interval [x, y].
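The Poisson arrival process driving the simulation can be sketched without any dependency (the paper uses SimPy; this stand-alone generator only reproduces the exponential inter-arrival times and is a hypothetical sketch):

```python
import random

def simulate_arrivals(lam, horizon_s, seed=0):
    """Generate task arrival times at F0's queue over [0, horizon_s] at
    rate `lam` tasks/s, using exponential inter-arrival times."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam)   # exponential gap => Poisson arrivals
        if t > horizon_s:
            return arrivals
        arrivals.append(t)
```

Over a 100 s round at λ = 2 tasks/s, roughly 200 arrivals are generated on average.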

All the simulation results are averaged over 100 simulation rounds, and each round lasts 100 seconds according to the CPU clock. We compare the proposed DCTO approach with RCO and POST. Recall that RCO schedules and offloads multiple tasks at each decision-making time, and task division is not employed. In addition, the queuing states of the fog devices and the task request rate are not taken into account by the RCO algorithm. Meanwhile, POST uses game theory to find the GNE point for mapping tasks/subtasks to the fog devices [26]. The impact of the task request rate, task scheduling, and queuing states is also not considered in POST.

B. Evaluation and Analysis

We conduct the first simulation scenario, in which all the queues of the fog nodes are initially empty. Fig. 6 depicts the average delay achieved by POST, RCO, and DCTO when λ and N are varied.

DCTO always outperforms RCO and POST because it can dynamically adjust the task offloading decision according to the states of the available fog resources. When the queues are empty at the beginning of the simulation, there is no benefit in reducing the delay through task division at low task arrival rates (i.e., λ = {0.5, 1}). Therefore, DCTO and RCO obtain similar performance and outperform POST owing to the optimal scheduling of tasks. When the task rate increases, the delay is increasingly impacted by the queuing states of the fog devices. Task division and the associated parallel computation of subtasks are beneficial to cope with this situation. In addition, increasing the number of HNs helps slow the growth of the average delay because more available resources are utilized to process the tasks/subtasks. In particular, DCTO, which incorporates fog resource awareness and optimal task/subtask scheduling, is able to minimize the average execution delay. Notably, as the number of HNs increases (N = 6, 7), DCTO is able to efficiently exploit the available fog resources to perform the parallel computation, thus keeping the delay increase gradual even when the task arrival rate increases.

Fig. 7 supports this claim through the average utilization ratio (U) of the HNs. As shown in this figure, the task division employed in the POST and DCTO schemes can increase the utilization ratio of the fog computing devices because the resource-limited fogs can process subtasks. When the request rate is low, the task division mechanism is not significantly beneficial.

Although both DCTO and POST use task division and parallel computation of subtasks to improve fog resource utilization, their delay reduction benefits are not the same. This is because DCTO includes the optimal subtask scheduling based on the resource states of the HNs to minimize the delay. In addition, POST takes more running time, iterating to reach the near-optimal task-resource mapping.

In the second simulation scenario, we analyze the impact of the queuing state (i.e., the waiting time in the queue) on the average delay. Initially, all the queues of the fog nodes hold a number p of tasks, where p is randomly selected from the set {1, 2, 3, 4}. In addition, the task size is randomly selected from {3, 4, 5} MB. Notably, the expected waiting times (W) in the queues of the HNs can be derived as in (11). Fig. 8 shows the average task execution delay obtained by DCTO, RCO, and POST.

Besides the improved performance of DCTO compared with RCO and POST, the simulation results show that the parallel computation and scheduling employed simultaneously in DCTO can effectively decrease the impact of the queuing states of the fogs. Although task division and parallel subtask computation are used in POST, their efficiency is not maximized. This is because POST only tries to obtain the optimal mapping between tasks/subtasks and the available fog resources through time slots. This mechanism is negatively influenced by the waiting times in the queues of the fog devices. In other words, POST, without scheduling and resource allocation, is unable to exploit the fog resources maximally because some fogs with queued tasks are not allocated to perform the task offloading. Meanwhile, DCTO divides the tasks adaptively, aiming to schedule and allocate the fog resources for offloading the subtasks effectively. RCO is also unable to maximize the use of fog resources because task division is not employed. For example, some fogs with less available computation and storage resources may not be allocated to process single tasks in the RCO algorithm, whereas they can still process subtasks under DCTO if scheduled efficiently.

To examine the impact of task size, we conduct the third simulation scenario, in which all the tasks have the same size (i.e., [TeX:] $$a_i$$ = a). Two types of tasks are considered: medium tasks (a = 5 MB) and heavy tasks (a = 10 MB).

The simulation results shown in Fig. 9 indicate that when all the tasks are medium (a = 5 MB), the performance of DCTO is not significantly better than that of RCO because dividing medium-size tasks is not beneficial. In addition, DCTO only minimizes the execution delay of each single task, whereas RCO can schedule multiple tasks, thus achieving a better average delay reduction when the task arrival rate is low. However, when the task arrival rate is high, longer queuing delays appear at some powerful fog devices; thus, the task division employed in DCTO is essential to balance the workload and to reduce the delay through parallel computation. As a result, the performance of DCTO is slightly improved. As expected, the heavy tasks (i.e., a = 10 MB) have a significant impact on the fog computing environment because some fogs with limited resources are individually unable to process the whole data of a single task. Therefore, the system performance is negatively affected by the workload imbalance of heterogeneous fog devices. Although optimal resource allocation is employed in RCO, its delay is higher than that of DCTO, which exploits task division and parallel computation to decrease the task execution delay effectively.

This paper proposes DCTO, a dynamic collaborative task offloading approach for FCSs to reduce the delay of task execution. DCTO is used by the fog controller of the FCS to derive the offloading solution dynamically based on the state of the available resources of the HNs. By applying the task division technique and the associated parallel computation, DCTO allows a task to be processed by multiple fog computing devices to further reduce the overall task execution delay. The optimal scheduling of subtask transmission and subtask processing is also integrated into the DCTO policy to contribute to the delay reduction objective. Intensive simulation results show that DCTO is an effective offloading solution for achieving reduced execution delay in fog computing environments in which the resources of the fog computing devices are heterogeneous and complicated. In particular, DCTO outperforms the existing related offloading schemes in scenarios where the task request rate is high and the task sizes are large.

On the basis of the proposed dynamic offloading mechanism, there are many possible directions for developing and extending the DCTO algorithm in future research. First, stronger capabilities of the fog computing nodes, including F0 and the HNs, should be considered, allowing multiple tasks to be offloaded at each offloading decision. In this case, the scheduling model must be redesigned to jointly schedule the order of tasks as well as the subtasks of different tasks. In addition, if the fog nodes can perform data transmission and data processing simultaneously, the task/subtask scheduling should be adjusted accordingly. Another direction for future research is to investigate DCTO in large-scale systems. In a large-scale model, more domains can coexist, requiring more fog computing systems to be deployed. In such a case, the collaboration between systems in different domains can be considered to enhance the flexibility of the overall IoT systems as well as the fog resource utilization.

Hoa Tran-Dang (M'17) received the B.E. degree in Electrical and Electronics Engineering from Hanoi University of Science and Technology (HUST), Vietnam, and the M.S. degree in Electronics Engineering from Kumoh National Institute of Technology (KIT), South Korea, in 2010 and 2012, respectively. He pursued the Ph.D. degree at the University of Lorraine, France, from 2013 to 2017. He currently works in the NSL laboratory, Department of ICT Convergence Engineering, at Kumoh National Institute of Technology, South Korea, as a Research Professor. His research interests include the Internet of things (IoT), edge/fog computing, machine learning, artificial intelligence, distributed intelligence, physical internet, and resource management.

Dong-Seong Kim (S'98-M'03-SM'14) received his Ph.D. degree in Electrical and Computer Engineering from Seoul National University, Seoul, Korea, in 2003. From 1994 to 2003, he worked as a full-time researcher in ERC-ACI at Seoul National University, Seoul, Korea. From March 2003 to February 2005, he worked as a postdoctoral researcher at the Wireless Network Laboratory in the School of Electrical and Computer Engineering at Cornell University, NY. From 2007 to 2009, he was a Visiting Professor with the Department of Computer Science, University of California, Davis, CA. He is currently the Director of the KIT Convergence Research Institute and the ICT Convergence Research Center (ITRC program) supported by the Korean government at Kumoh National Institute of Technology. He is a Senior Member of the IEEE and ACM. His current main research interests are real-time IoT, industrial wireless control networks, networked embedded systems, and fieldbus.

- [1] A. Zanella, N. Bui, A. Castellani, L. Vangelista, and M. Zorzi, "Internet of things for smart cities," *IEEE Internet Things J.*, vol. 1, no. 1, pp. 22-32, 2014.
- [2] Y. Saleem, N. Crespi, M. H. Rehmani, and R. Copeland, "Internet of things-aided smart grid: Technologies, architectures, applications, prototypes, and future research directions," *IEEE Access*, vol. 7, pp. 62962-63003, 2019.
- [3] D. A. Chekired, L. Khoukhi, and H. T. Mouftah, "Industrial IoT data scheduling based on hierarchical fog computing: A key for enabling smart factory," *IEEE Trans. Ind. Informat.*, vol. 14, no. 10, pp. 4590-4602, 2018.
- [4] H. Tran-Dang, N. Krommenacker, P. Charpentier, and D.-S. Kim, "The Internet of things for logistics: Perspectives, application review, and challenges," *IETE Tech. Review*, pp. 1-29.
- [5] H. Tran-Dang, N. Krommenacker, P. Charpentier, and D. Kim, "Toward the Internet of things for physical Internet: Perspectives and challenges," *IEEE Internet Things J.*, vol. 7, no. 6, pp. 4711-4736, 2020.
- [6] J. Jin, J. Gubbi, S. Marusic, and M. Palaniswami, "An information framework for creating a smart city through internet of things," *IEEE Internet Things J.*, vol. 1, no. 2, pp. 112-121, 2014.
- [7] H. Tran-Dang and D. Kim, "An information framework for Internet of things services in physical internet," *IEEE Access*, vol. 6, pp. 43967-43977, 2018.
- [8] B. P. Rimal, E. Choi, and I. Lumb, "A taxonomy and survey of cloud computing systems," in *Proc. NCM*, 2009.
- [9] A. Botta, W. de Donato, V. Persico, and A. Pescapé, "Integration of cloud computing and internet of things: A survey," *Future Generation Comput. Syst.*, vol. 56, pp. 684-700, 2016.
- [10] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the internet of things," in *Proc. ACM MCC*, 2012.
- [11] S. Sarkar, S. Chatterjee, and S. Misra, "Assessment of the suitability of fog computing in the context of internet of things," *IEEE Trans. Cloud Comput.*, vol. 6, no. 1, pp. 46-59, 2018.
- [12] A. V. Dastjerdi and R. Buyya, "Fog computing: Helping the Internet of things realize its potential," *Comput.*, vol. 49, no. 8, pp. 112-116, 2016.
- [13] M. Aazam, S. Zeadally, and K. A. Harras, "Offloading in fog computing for IoT: Review, enabling technologies, and research opportunities," *Future Generation Comput. Syst.*, vol. 87, pp. 278-289, 2018.
- [14] H. Tran-Dang and D.-S. Kim, "FRATO: Fog resource based adaptive task offloading for delay-minimizing IoT service provisioning," *IEEE Trans. Parallel Distrib. Syst.*, vol. 32, no. 10, pp. 2491-2508, 2021.
- [15] T. Mattson, B. Sanders, and B. Massingill, *Patterns for Parallel Programming*, 1st ed. Addison-Wesley Professional, 2004.
- [16] Y.-S. Jiang and W.-M. Chen, "Task scheduling in grid computing environments," in *Advances in Intelligent Systems and Computing*, Springer International Publishing, 2014, pp. 23-32.
- [17] A. Elgazar, K. Harras, M. Aazam, and A. Mtibaa, "Towards intelligent edge storage management: Determining and predicting mobile file popularity," in *Proc. IEEE MobileCloud*, 2018.
- [18] L. Liu, Z. Chang, X. Guo, S. Mao, and T. Ristaniemi, "Multiobjective optimization for computation offloading in fog computing," *IEEE Internet Things J.*, vol. 5, no. 1, pp. 283-294, 2018.
- [19] J. Yao and N. Ansari, "Fog resource provisioning in reliability-aware IoT networks," *IEEE Internet Things J.*, vol. 6, no. 5, pp. 8262-8269, 2019.
- [20] A. Yousefpour et al., "FogPlan: A lightweight QoS-aware dynamic fog service provisioning framework," *IEEE Internet Things J.*, vol. 6, no. 3, pp. 5080-5096, 2019.
- [21] Y. Yang et al., "POMT: Paired offloading of multiple tasks in heterogeneous fog networks," *IEEE Internet Things J.*, vol. 6, no. 5, pp. 8658-8669, 2019.
- [22] A. Yousefpour, G. Ishigaki, R. Gour, and J. P. Jue, "On reducing IoT service delay via fog offloading," *IEEE Internet Things J.*, vol. 5, no. 2, pp. 998-1010, 2018.
- [23] G. Zhang et al., "FEMTO: Fair and energy-minimized task offloading for fog-enabled IoT networks," *IEEE Internet Things J.*, vol. 6, no. 3, pp. 4388-4400, 2019.
- [24] Z. Liu, X. Yang, Y. Yang, K. Wang, and G. Mao, "DATS: Dispersive stable task scheduling in heterogeneous fog networks," *IEEE Internet Things J.*, vol. 6, no. 2, pp. 3423-3436, 2019.
- [25] M. Mukherjee et al., "Latency-driven parallel task data offloading in fog computing networks for industrial applications," *IEEE Trans. Ind. Informat.*, vol. 16, no. 9, pp. 6050-6058, 2020.
- [26] Z. Liu, Y. Yang, K. Wang, Z. Shao, and J. Zhang, "POST: Parallel offloading of splittable tasks in heterogeneous fog networks," *IEEE Internet Things J.*, vol. 7, no. 4, pp. 3170-3183, 2020.
- [27] H. Tran-Dang and D.-S. Kim, "A survey on matching theory for distributed computation offloading in IoT-fog-cloud systems: Perspectives and open issues," *IEEE Access*, vol. 10, pp. 118353-118369.
- [28] H. Tran-Dang and D.-S. Kim, "DISCO: Distributed computation offloading framework for fog computing networks," *J. Commun. Netw.*, pp. 1-11, 2023.
- [29] H. Tran-Dang and D.-S. Kim, "A many-to-one matching based task offloading (MATO) scheme for fog computing-enabled IoT systems," in *Proc. ATC*, 2022.
- [30] G. Lee, W. Saad, and M. Bennis, "An online optimization framework for distributed fog network formation with minimal latency," *IEEE Trans. Wireless Commun.*, vol. 18, no. 4, pp. 2244-2258, 2019.
- [31] K. Guo, M. Sheng, T. Q. S. Quek, and Z. Qiu, "Task offloading and scheduling in fog RAN: A parallel communication and computation perspective," *IEEE Wireless Commun. Lett.*, vol. 9, no. 2, pp. 215-218, 2020.
- [32] H. Tran-Dang and D.-S. Kim, "Dynamic task offloading approach for task delay reduction in the IoT-enabled fog computing systems," in *Proc. IEEE INDIN*, 2022.
- [33] H. Tran-Dang and D.-S. Kim, "Reinforcement learning for computational offloading in fog-based IoT systems: Applications, and open research issues," in *Proc. RIVF*, 2022.
- [34] H. Tran-Dang, S. Bhardwaj, T. Rahim, A. Musaddiq, and D.-S. Kim, "Reinforcement learning based resource management for fog computing environment: Literature review, challenges, and open issues," *J. Commun. Netw.*, vol. 24, no. 1, pp. 83-98.
- [35] M. Al-khafajiy et al., "Improving fog computing performance via fog-2-fog collaboration," *Future Generation Comput. Syst.*, vol. 100, pp. 266-280, 2019.
- [36] I. Griva, S. Nash, and A. Sofer, *Linear and Nonlinear Optimization*, 2nd ed., ser. Other Titles in Applied Mathematics. Society for Industrial and Applied Mathematics.
- [37] M. Grant and S. Boyd, "CVX: Matlab software for disciplined convex programming, version 2.1," http://cvxr.com/cvx.