2020MohammedPhD1 - Copy
Chapter Summary
This chapter focuses on the practicality and management of fog computing. Although fog computing is recognised as a computing model that suits IoT systems and applications, it is still not widely used because the spatial and temporal dynamics of IoT things' distribution make the management and distribution of fog nodes difficult. These dynamics also cause the computation loads on fog nodes to vary significantly: some fog nodes may be lightly loaded while others are heavily loaded, causing fog congestion and hence latency. In this chapter, a novel Fog Resource manAgeMEnt Scheme (FRAMES) has been proposed to crystallise fog distribution and management with appropriate service load distribution and allocation, and to reallocate service load via service request offloading among participating fog nodes within one domain in order to: i) achieve minimal latency for IoT services, and ii) allocate minimal load on fog nodes. FRAMES enables Fog-2-Fog coordination, which proves to be a feasible solution for fog traffic management via service request offloading in a fog-based network architecture, with the aim of minimising the average response time of real-time IoT services. The extensive experiments show that FRAMES and its proposed offloading algorithms significantly impact the overall latency of IoT services; with proper resource management and accurate offloading decisions, the service response time is significantly improved. The number of fog nodes and their capacities also affect service delays. This chapter has addressed RO3 by investigating the barriers that might impede fog computing in terms of resource management and the approach to providing fog services, and RO4 in terms of the design and development of a comprehensive solution that manages fog network resources. Hence, this chapter fulfils RQ3 and RQ4.
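The Fog-2-Fog reallocation described above can be illustrated with a minimal sketch. This is not the FRAMES implementation itself; the names (FogNode, route_request) and the congestion-threshold rule are assumptions introduced only to show the idea of offloading a request from a congested fog node to the least-utilised peer within one domain.

```python
# Minimal sketch of Fog-2-Fog request offloading (not the FRAMES algorithm).
# FogNode, route_request, and the 0.8 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    capacity: int   # maximum concurrent service requests
    load: int = 0   # requests currently being served

    def utilisation(self) -> float:
        return self.load / self.capacity

def route_request(local: FogNode, peers: list[FogNode],
                  threshold: float = 0.8) -> FogNode:
    """Serve locally while under the congestion threshold; otherwise
    offload to the least-utilised peer in the same fog domain."""
    if local.utilisation() < threshold:
        target = local
    else:
        # Only peers that are less utilised than the local node qualify.
        candidates = [p for p in peers if p.utilisation() < local.utilisation()]
        target = min(candidates, key=FogNode.utilisation, default=local)
    target.load += 1
    return target
```

For example, a node at 8/10 capacity (utilisation 0.8) would offload an incoming request to a peer at 2/10, whereas a node at 3/10 would serve it locally.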
From the experimental results, it is clear that the proposed OFA achieves the lowest service response time in comparison with RWA and NFA. Moreover, OFA outperformed RWA and NFA not only in latency but also in distributing service packets over fog nodes according to their capabilities. In general, if all fog nodes are lightly loaded, offloading is unnecessary, and if all fog nodes are heavily loaded, offloading will not reduce the delay. Offloading helps only when there is a high degree of variance among the fog nodes. OFA has the potential to achieve a sustainable network paradigm, highlighting the significance and benefits of adopting the fog computing paradigm. Having discussed the fog-cloud collaboration model in the previous chapter and the Fog-2-Fog model in this chapter, the next chapter looks at securing the IoT environment for node interactions and resource sharing among fog nodes through the fog COMputIng Trust manageMENT (COMITMENT) model.

Notes:
1. One-hop with {T^C, T^F, T^{C|F}}, assuming that processing data at cloud nodes differs from processing data at fog nodes in terms of speed, privacy, etc. An example of T^C is batch processing of CCTV data; an example of T^F is processing CCTV frames as they are captured in real time; an example of T^{C|F} is processing CCTV data with no time sensitivity (i.e., delay is acceptable).
2. Cisco blog on IoT, "From Cloud to Fog Computing": https://blogs.cisco.com/perspectives/iot-from-cloud-to-fog-computing
3. mosquitto.org, open-source MQTT broker and part of the Eclipse IoT project, v3.1.1.
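The observation that offloading pays off only when node loads diverge can be sketched as a simple variance test. This is a hedged illustration of that remark, not the thesis's decision algorithm; the variance threshold value is an assumption.

```python
# Illustrative sketch: offload only when fog-node utilisations diverge.
# The 0.05 variance threshold is an assumption, not a value from the thesis.
from statistics import pvariance

def should_offload(utilisations: list[float],
                   var_threshold: float = 0.05) -> bool:
    """Decide whether Fog-2-Fog offloading can reduce delay.

    If every node is lightly loaded, local service is already fast; if
    every node is saturated, no peer has spare capacity. Only when the
    utilisation spread is large does moving requests between nodes help."""
    return pvariance(utilisations) > var_threshold

print(should_offload([0.1, 0.1, 0.2]))    # uniformly light -> False
print(should_offload([0.9, 0.95, 0.9]))   # uniformly heavy -> False
print(should_offload([0.1, 0.9, 0.2]))    # high variance   -> True
```

A real scheme would combine this with per-node capacities and network cost, but the variance test captures why offloading helped in the experiments only under uneven load.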