"Essentially, all models are wrong, but some are useful."

Collaboration Model of Fog and Cloud



∑ λs ≤ ∑ µ

Ts ≤ sd,  ∀s ∈ S

The best available nodes are those that can provide the service with minimal delay. To find these fog nodes, Algorithm 2 is developed. It identifies the best fog node to handle the overload of a congested fog node, and then offloads that overload from the congested node. In other words, the goal of the algorithm is to answer the question of where to offload.

Algorithm 2: Service Offloading
Input: FogNode (Fn); FogLoad (Fi); OverLoad (Oi)
Parameters: FogCapacity (Fc); Propagation (Dp)
Initialisation: Fn = ∅; Fc = ∅; Fi = ∅; Oi = ∅
Result: Share the overload with the best available node

Procedure 1. Determine the best available node by
 1.  FL = list{}                                ▷ initiate fog list
 2.  FL = list[Fn] ← getFogNodes(out: (Fn, Fc))
 3.  FL = sort(FL, by Fc desc)
 4.  for each Fn ∈ FL do
 5.      if Fn ← (Fi > Fc_max) then             ▷ remove busy node
 6.          FL = pop(Fn)
 7.      else
 8.          Ts = Σ[T_queue + T_pro + T_t]
 9.          if (Ts < sd) then
10.              list.add(Fn, Ts)
11.              continue
12.          else
13.              FL = pop(Fn)
14.          end
15.      end
16.  end
17.  return FL
18.  End
19.  Procedure 2. Handover the Overload by
20.  if FL ≠ ∅ then
21.      Fn = min[FL(Ts, Dp)]
22.      Fi = Fi + Oi
23.  else
24.      goto: Procedure 1
25.  end
26.  End



The first part of Algorithm 2, Procedure 1, shows the process of finding the best available node(s) for handling the overload identified in Algorithm 1. Lines 2-3 of the algorithm initiate the list of active fog nodes in the domain alongside each node's capacity and current load (i.e., queue size). The list of available fog nodes is then refined by removing the nodes that are already busy with other services (i.e., λi = µ), as per lines 6-8. The remaining part of Procedure 1, lines 9-18, computes the time required for a service request to be run on each of the available fog nodes. If the time is within the limit allowed for the service (i.e., before sd), the algorithm keeps the fog node in the list and logs the expected service time against the fog node ID, as per lines 9-12. If Ts on Fn is greater than sd, then Fn is removed from the list, as per lines 13-15. The second part of Algorithm 2, Procedure 2, receives the list of best available nodes. If the list is not empty, there is at least one fog node able to take the overload for processing. If there is more than one node in the list, the system directs the overload to the fog node that can provide the minimal Ts and has the lowest propagation delay Dp, as per lines 21-23. It is worth noting that there are no intermediate processes to be executed between Procedures 1 and 2; hence, Procedure 2 runs immediately after Procedure 1.
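To make the two procedures concrete, the following is a minimal Python sketch of the selection and handover logic. The evaluation in this thesis is MATLAB-based; the FogNode fields, the deadline s_d, and the per-node delay estimates below are illustrative assumptions rather than the FRAMES implementation.

from dataclasses import dataclass

@dataclass
class FogNode:
    node_id: str
    capacity: float   # Fc: maximum load (queue size) the node can take
    load: float       # Fi: current load (queue size)
    t_queue: float    # expected queueing delay (s)
    t_proc: float     # expected processing delay (s)
    t_trans: float    # expected transmission delay (s)
    d_prop: float     # Dp: propagation delay from the congested node (s)

def best_available_nodes(nodes, s_d):
    """Procedure 1: drop busy nodes, keep (node, Ts) pairs that meet the deadline s_d."""
    fl = sorted(nodes, key=lambda n: n.capacity, reverse=True)  # sort by Fc, descending
    candidates = []
    for n in fl:
        if n.load > n.capacity:                    # busy node (Fi > Fc_max): remove it
            continue
        ts = n.t_queue + n.t_proc + n.t_trans      # Ts = T_queue + T_pro + T_t
        if ts < s_d:                               # deadline can be met: log (Fn, Ts)
            candidates.append((n, ts))
    return candidates

def handover_overload(candidates, overload):
    """Procedure 2: hand the overload to the node with minimal Ts and lowest Dp."""
    if not candidates:
        return None                                # corresponds to "goto Procedure 1"
    best, _ = min(candidates, key=lambda pair: (pair[1], pair[0].d_prop))
    best.load += overload                          # Fi = Fi + Oi
    return best

For example, handover_overload(best_available_nodes(nodes, s_d=0.5), overload=20.0) returns the chosen node, or None if every node is busy or would miss the deadline.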

5.4 System Evaluation

In this section, the Fog-2-Fog coordination model is evaluated through a MATLAB-based simulation. The simulation settings and functions are built according to FRAMES, which aims to provide an optimal fog workload with minimal latency for IoT services. A scientific and comprehensive network latency analysis has been carried out, including the time delays to process heavy-packets, light-packets, and mixed packet types, as well as the latency per fog node according to its capacity. This is to demonstrate the superior performance of the proposed Fog-2-Fog coordination model. The results have been validated against two benchmark algorithms: the Random Walk Algorithm (RWA) [132, 133] and the Neighbouring Fogs Algorithm (NFA) [165]. The simulation settings are presented in the following subsection, followed by a discussion of the achieved simulation results.

5.4.1 Experiment Configurations



This section describes the adopted MATLAB simulation settings along with the setup parameters. The configuration settings follow the model proposed in Section 5.3; they specify the network topology, propagation and transmission delays, link bandwidth, and fog node capabilities, as follows:

  • Network topology: this has been modelled as an undirected graph that represents a fog mesh network at the fog layer. Fifteen fog nodes (fn = 15) were used in the simulation, and the same topology with 15 fog nodes is kept throughout all experiments and during the evaluation of all algorithms. These nodes are connected through internal communication links based on the links' transmission speed. Moreover, the links between nodes are weighted by the propagation time between them; for instance, if Dp between fog1 and fog2 is two seconds, then the weight of the link between the two nodes is 2 (fog1 --2-- fog2). Also, the services arriving at the fog layer are assigned to the fog node with the smallest Dp to the source, i.e., the smallest distance. It is worth noting that using a random topology (i.e., fog nodes joining and leaving at run-time) has no explicit effect, as FRAMES, through the portal and pinger utilities, notifies the other fog nodes whenever an update is available. Thus, when a fog node gets congested and needs to offload a request, it has access only to the fog nodes reported by FRAMES, no matter whether there are 10, 15, or 20 of them.

  • Network bandwidth: the link bandwidth depends on the type of service; thus, heavy-packets produced by heavy services require more bandwidth compared to light-packets generated by light services. Therefore, for light-packets (e.g., data packets from sensors) the communication bandwidth used has a transmission rate of 250 Kbps [161] (equivalent to 2.0 × 10^6 hertz); such communication protocols are IEEE 802.15.4 and ZigBee. For heavy-packets (e.g., data packets from a camera), the communication bandwidth used has a transmission rate of 54 Mbps [160] (equivalent to 4.3 × 10^8 hertz); such a communication protocol is IEEE 802.11a/g. The transmission rate between the fog nodes is expected to be higher, around ~100 Mbps [14].

  • Transmission and propagation delays: the transmission delay Dt for a packet depends on the packet size lp alongside the associated upload bandwidth bl. Hence, an average packet size is imposed, which varies according to the type of packet (i.e., heavy- and light-packets). The average packet size for light-packets is 0.1 KB, while the average packet size for heavy-packets is 80 KB [14]. With regard to the propagation delay Dp, the packet round-trip time (i.e., τrt) is adopted in line with [14]:

τrt = 0.03 × ld + 5

where ld is the distance in km, and τrt is in ms.

  • Fog node capabilities: these consider the service rate µ, which varies from one fog node to another. The capability of a fog node highly affects its processing capacity (i.e., performance). Thus, a fog node's capability is determined by its CPU frequency, which ranges between 0.2 GHz and 1.5 GHz [166]. A minimal sketch encoding the above configuration values is given after this list.
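As referenced above, the following Python snippet collects the configuration values listed in this subsection and computes the two delay terms. The constant and function names, and the halving of the round-trip time to obtain a one-way delay, are our own illustrative assumptions rather than the FRAMES code.

LIGHT_BW_BPS = 250e3              # 250 Kbps for light-packets (IEEE 802.15.4 / ZigBee) [161]
HEAVY_BW_BPS = 54e6               # 54 Mbps for heavy-packets (IEEE 802.11a/g) [160]
FOG_TO_FOG_BW_BPS = 100e6         # ~100 Mbps between fog nodes [14]
LIGHT_PKT_BITS = 0.1 * 1024 * 8   # 0.1 KB average light-packet [14]
HEAVY_PKT_BITS = 80 * 1024 * 8    # 80 KB average heavy-packet [14]
CPU_RANGE_HZ = (0.2e9, 1.5e9)     # fog node CPU frequency range [166]

def transmission_delay_s(packet_bits, bandwidth_bps):
    """Dt = lp / bl: time to push a packet of size lp onto a link of bandwidth bl."""
    return packet_bits / bandwidth_bps

def round_trip_time_ms(distance_km):
    """tau_rt = 0.03 * ld + 5, with ld in km and the result in ms [14]."""
    return 0.03 * distance_km + 5.0

def propagation_delay_s(distance_km):
    """One-way propagation delay Dp, taken here as half the round-trip time (an assumption)."""
    return round_trip_time_ms(distance_km) / 2.0 / 1000.0

print(transmission_delay_s(HEAVY_PKT_BITS, HEAVY_BW_BPS))  # ~0.012 s for an 80 KB packet over 54 Mbps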

5.4.2 Benchmark Algorithms

In order to validate the results achieved by the proposed Fog-2-Fog coordination model and the offloading algorithms, two benchmark algorithms have been considered:

  1. Random Walk Algorithm (RWA) [132, 133], which imposes that arriving service requests are assigned to the fog node nearest to the data source. If that fog node is congested, it offloads the service randomly to another fog node. In this scenario, each fog node within the domain is assumed to have the same probability of being selected.

  2. Neighbouring Fogs Algorithm (NFA) [165], which imposes that the congested fog node offloads the overload to the nearest fog node with a larger capacity.

Moreover, our comparison also includes the typical service distribution based on assigning a service's packets to the node nearest to the IoT thing, with No Offloading Algorithm (NOA). We refer to the proposed offloading algorithm as Optimal Fog Algorithm (OFA).
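For comparison, a minimal Python sketch of the two benchmark selection policies could look as follows. It reuses the FogNode class from the earlier sketch and assumes a d_prop dictionary of pairwise propagation delays; both are illustrative assumptions, not the benchmarks' original implementations.

import random

def rwa_target(congested, nodes, rng=None):
    """RWA: offload to a uniformly random other node (every node equally likely)."""
    rng = rng or random.Random()
    candidates = [n for n in nodes if n is not congested]
    return rng.choice(candidates)

def nfa_target(congested, nodes, d_prop):
    """NFA: offload to the nearest node whose capacity is larger than the congested node's."""
    candidates = [n for n in nodes
                  if n is not congested and n.capacity > congested.capacity]
    if not candidates:
        return None
    return min(candidates, key=lambda n: d_prop[(congested.node_id, n.node_id)])

NOA needs no selection function, since it simply keeps every packet on the node nearest to the IoT thing.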

5.4.3 Performance Evaluation and Discussion

The performance metric we used is the average service time, which reflects the efficiency of service completion time (i.e., the amount of delay/latency). The lower the average service time (min[τs]), the better the service efficiency, QoS, and QoE.
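Computed over a simulation run, the metric reduces to a simple mean; the helper below is an illustrative formulation, not taken from the simulation code.

def average_service_time(ts_per_request):
    """min[tau_s] objective: mean service completion time over all served requests (lower is better)."""
    return sum(ts_per_request) / len(ts_per_request)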

Figure 5.7 illustrates the performance of our OFA based on the average response time for all received service requests according to the service's packet type. It also compares the results of OFA with the results obtained from the other algorithms mentioned in Section 5.4.2. The simulation settings for this experiment are as follows:

  • Fog nodes with different capabilities; hence, nodes vary in their service rate µ.

  • Fog node capability is based on CPU frequency, with a minimum of 200 × 10^6 hertz (0.2 GHz), incremented by 100 MHz until it reaches the maximum CPU capability of 15 × 10^8 hertz (1.5 GHz).

  • Service arrival rate λ = 3 × 10^2 packets per second as in [3]; λ is fixed during the experiment to ensure all algorithms face the same traffic arrival rate. A small sketch of this parameter sweep follows the list.
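As referenced in the last bullet, the Python sketch below enumerates the CPU sweep and the fixed arrival rate. The mapping from CPU frequency to service rate via a fixed cycles-per-packet cost is an assumption made only for illustration; the thesis' MATLAB simulation may model service rates differently.

ARRIVAL_RATE = 3e2                 # lambda: 300 packets per second, fixed [3]
CPU_STEPS_HZ = [200e6 + i * 100e6 for i in range(14)]  # 0.2 GHz .. 1.5 GHz in 100 MHz steps
CYCLES_PER_PACKET = 1e6            # assumed processing cost per packet (cycles)

def service_rate(cpu_hz, cycles_per_packet=CYCLES_PER_PACKET):
    """mu: packets per second a node can serve under the assumed per-packet cost."""
    return cpu_hz / cycles_per_packet

for f in CPU_STEPS_HZ:
    mu = service_rate(f)
    rho = ARRIVAL_RATE / mu        # utilisation; rho >= 1 marks a congested node (mu < lambda)
    print(f"CPU {f / 1e9:.1f} GHz  mu = {mu:.0f} pkt/s  rho = {rho:.2f}")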

Figures 5.7a, 5.7b, and 5.7c are grouped by packet type: heavy-packets versus light-packets versus mixed packets. In Figure 5.7a, the packet type is mixed (MTP), having a random number of heavy and light packets. However, the random number is fixed throughout the experiment to ensure consistency across all algorithms. In Figures 5.7b and 5.7c, the packets are set to either all heavy-packets (AHP) or all light-packets (ALP). This is to examine the performance under different scenarios. In Figure 5.7 the vertical axis represents the average latency per algorithm to serve all arriving services, and the horizontal axis is the number of iterations carried out to ensure that the obtained results are consistent and not random. It is clear that OFA has the lowest service latency among the algorithms across all iterations and with all types of packets. It is obvious that NOA has the largest service time because it does not consider offloading when a fog node becomes congested; hence, we end up with small-capacity nodes holding large queues (i.e., µ < λi) and large-capacity nodes holding small queues. The performance of RWA and NFA is better than NOA, but their service times are still higher than our OFA's. However, RWA has the worst performance with MTP and AHP as it randomly offloads the overload; it is a relatively blind algorithm, since it considers neither the current fog workload (fw) nor the propagation delay (Dp) between fog nodes.



