The deployment of fog computing has not only helped end-users offload delay-sensitive tasks but has also reduced the burden on cloud back-end systems that process variable workloads arriving from user equipment. However, because fog nodes are constrained in resources and computational capability, processing computationally intensive tasks within the defined timelines is highly challenging. Offloading such tasks to the cloud instead burdens the uplink, resulting in high resource costs and delays in task processing. Existing studies have attempted to handle the task allocation problem in fog–cloud networks, but most methods are computationally expensive and incur high resource costs under execution-time constraints. The proposed work aims to balance resource cost and time complexity by exploring collaboration among host machines across fog nodes. It introduces task scheduling and optimal resource allocation using game-theoretic coalition formation and pay-off computation, and encourages coalitions among host machines to handle variable traffic efficiently. Experimental results show that the proposed approach for task scheduling and optimal resource allocation in fog computing outperforms the existing system by 56.71% in task processing time, 47.56% in unused computing resources, 8.33% in resource cost, and 37.2% in unused storage.
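The abstract does not detail its pay-off computation; as an illustration only, a classic game-theoretic pay-off division for coalitions is the Shapley value, sketched below for a toy coalition of host machines. The `capacity` utility function and host names are hypothetical, not taken from the paper:

```python
from itertools import permutations
from math import factorial

def shapley_payoffs(players, value):
    """Average each player's marginal contribution over all join orders."""
    payoff = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            grown = coalition | {p}
            payoff[p] += value(grown) - value(coalition)  # marginal contribution
            coalition = grown
    n_orders = factorial(len(players))
    return {p: v / n_orders for p, v in payoff.items()}

# Hypothetical additive game: each host machine contributes its spare capacity.
capacity = {"hostA": 1.0, "hostB": 2.0, "hostC": 3.0}
value = lambda coalition: sum(capacity[p] for p in coalition)
payoffs = shapley_payoffs(list(capacity), value)
```

For an additive utility the Shapley value returns each machine's own contribution; a non-additive utility (e.g., synergy from load balancing) would redistribute the surplus among coalition members.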
Fog computing is well suited to filtering and compressing data before sending it to a cloud server, and it offers an alternative way to reduce the complexity of medical image processing while steadily improving its dependability. Medical images are produced by imaging modalities such as X-rays, computed tomography (CT) scans, magnetic resonance imaging (MRI) scans, and ultrasound (US); these images are large and require substantial storage, a problem commonly addressed through compression. Much work has been done in this area, but before adding more techniques to fog deployments, achieving a high compression ratio (CR) in a shorter time is required, so that less network traffic is consumed. This study implements an image compression technique using the Le Gall 5/3 integer wavelet transform (IWT) and a set partitioning in hierarchical trees (SPIHT) encoder. MRI images are used in the experiments. The suggested technique compresses medical images with an improved CR and lower compression time. The proposed approach achieves an average CR of 84.8895% and a peak signal-to-noise ratio (PSNR) of 40.92 dB. Compared with IWT with Huffman coding, it reduces compression time by 36.7434 s and outperforms that approach's CR of 72.36% by 12%. The suggested work's shortcoming is that the high CR degrades the quality of the medical images. PSNR values can be raised, and further effort can be directed at compressing colored and 3-dimensional medical images.
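As a sketch of the transform stage only (the SPIHT encoder is not shown), the 1-D Le Gall 5/3 integer wavelet transform can be written as two integer lifting steps; integer lifting makes the transform exactly invertible, which is what permits lossless operation before any quantization. The clamped boundary handling below is one common choice, not necessarily the paper's:

```python
def legall53_forward(signal):
    """One level of the 1-D Le Gall 5/3 integer lifting transform.
    Returns (lowpass s, highpass d); even-length input, clamped boundaries."""
    n = len(signal) // 2
    even, odd = signal[0::2], signal[1::2]
    # Predict step: detail = odd - floor((left even + right even) / 2)
    d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    # Update step: approx = even + floor((left detail + right detail + 2) / 4)
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    return s, d

def legall53_inverse(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    n = len(s)
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    odd = [d[i] + ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    signal = [0] * (2 * n)
    signal[0::2], signal[1::2] = even, odd
    return signal
```

Python's `>>` floors negative integers, matching the floor operations in the standard lifting formulation, so a round trip reconstructs the input exactly.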
Fog computing is a form of distributed computing that moves data storage and computation closer to the network edge. While it offers numerous advantages, it also introduces several challenges, particularly in security. An Intrusion Detection System (IDS) plays a crucial role in securing fog computing environments by monitoring network traffic and system activities for signs of malicious behavior, and several techniques can be employed to enhance intrusion detection in such environments. Accordingly, this paper proposes a Shepard Neuro-Fuzzy Network (ShNFN) for intrusion detection in fog computing. Initially, in the cloud layer, the input data are passed through a data transformation step that converts unstructured data into structured form; this transformation uses the Box-Cox transformation. Next, feature selection is performed using information gain and symmetric uncertainty, which quantify the relationship between two variables. The data are then classified by the proposed ShNFN, which is obtained by fusing two networks: the Cascade Neuro-Fuzzy Network (Cascade NFN) and the Shepard Convolutional Neural Network (ShCNN). The physical process is executed at the endpoint layer, and intrusion detection is finally accomplished in the fog layer by the proposed ShNFN method. The performance of ShNFN-based intrusion detection is measured by recall, F-measure, and precision, for which the proposed method achieves 93.3%, 92.5%, and 94.8%, respectively.
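The Box-Cox transformation and the information-gain/symmetric-uncertainty feature scores named above have standard textbook forms, which can be sketched as follows (a minimal sketch; the discretized toy inputs in the tests are illustrative, not the paper's data):

```python
import math
from collections import Counter

def box_cox(x, lam):
    """Box-Cox power transform for x > 0: log(x) at lambda = 0, else (x^lam - 1)/lam."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def entropy(values):
    """Shannon entropy (bits) of a discrete sequence."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def info_gain(feature, target):
    """H(target) - H(target | feature) for discrete variables."""
    n = len(feature)
    cond = 0.0
    for v in set(feature):
        subset = [t for f, t in zip(feature, target) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(target) - cond

def symmetric_uncertainty(feature, target):
    """Normalized gain in [0, 1]: 2 * IG / (H(feature) + H(target))."""
    denom = entropy(feature) + entropy(target)
    return 2 * info_gain(feature, target) / denom if denom else 0.0
```

Symmetric uncertainty reaches 1 for perfectly correlated variables and 0 for independent ones, which is why it is convenient for ranking candidate features against the class label.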
Fog computing extends storage and computation resources closer to end devices. Many time-sensitive Internet of Things (IoT) applications require low response time, so reducing latency in IoT networks is an essential task. To this end, fog computing keeps data production and consumption in close proximity: fog nodes must be placed at the network edge, near the end devices, so that latency is minimized. Selecting optimal fog node locations from a very large number of possibilities while minimizing latency is a combinatorial optimization problem, and the fog node placement problem is NP-hard, involving a huge discrete search space. NP-hard problems are often addressed with heuristics and approximation algorithms; since combinatorial optimization can be viewed as searching for the best element of a discrete set, in principle any metaheuristic can be applied. We therefore propose metaheuristic-based methods, applying Simulated Annealing (SA), the Genetic Algorithm (GA), and Particle Swarm Optimization (PSO) to design fog node placement algorithms. GA is observed to give better solutions, but since it may get stuck in local optima, the hybrid Genetic Algorithm and Simulated Annealing (GA-SA) and the hybrid Genetic Algorithm and Particle Swarm Optimization (GA-PSO) were compared with GA. Extensive simulations show that the hybrid GA-SA node placement algorithm outperforms the other baseline algorithms in terms of response time for IoT applications.
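A minimal sketch of the simulated-annealing half of such a placement algorithm is shown below; the 1-D link-latency model, candidate sites, and geometric cooling schedule are illustrative assumptions, not the paper's setup:

```python
import math
import random

def placement_cost(chosen, devices, sites):
    """Latency proxy: each device communicates with its nearest chosen fog site."""
    return sum(min(abs(d - sites[c]) for c in chosen) for d in devices)

def anneal_placement(devices, sites, k, t0=10.0, cooling=0.95, steps=500, seed=0):
    """Pick k of the candidate sites, minimizing total device-to-site distance."""
    rng = random.Random(seed)
    current = rng.sample(range(len(sites)), k)
    cur_cost = placement_cost(current, devices, sites)
    best, best_cost = list(current), cur_cost
    t = t0
    for _ in range(steps):
        # Neighbour move: swap one chosen site for an unchosen candidate.
        out_idx = rng.randrange(k)
        cand = rng.choice([i for i in range(len(sites)) if i not in current])
        nxt = list(current)
        nxt[out_idx] = cand
        nxt_cost = placement_cost(nxt, devices, sites)
        delta = nxt_cost - cur_cost
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta <= 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
            current, cur_cost = nxt, nxt_cost
            if cur_cost < best_cost:
                best, best_cost = list(current), cur_cost
        t *= cooling
    return best, best_cost

# Toy instance: two clusters of devices, five candidate fog sites, place k = 2 nodes.
devices = [1, 2, 3, 50, 51, 52]
sites = [0, 2, 25, 51, 80]
best, best_cost = anneal_placement(devices, sites, k=2)
```

A GA-SA hybrid, as compared in the paper, would typically use such an annealing pass to refine individuals produced by genetic crossover and mutation.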
An Internet of Things (IoT) device that automatically measures water consumption can help prevent excessive water usage or leaks. However, equipping many residences or condominiums with multiple IoT devices can lead to extra energy consumption and more network congestion. We propose controlling the energy consumption of an IoT water consumption management system by dynamically adjusting its duty cycle. By analyzing the energy consumption of the developed prototype under varying duty cycles, we calculated how much energy could be saved by controlling the antenna and the water flow sensor used in the IoT device. While controlling the antenna offered some energy savings, reducing the water flow sensor's consumption can have a dramatic impact on overall IoT energy consumption and battery longevity. Our results show up to 69% extra energy savings compared to merely putting the antenna in sleep mode. There is an observable trade-off in saving so much energy: water reading error rates rise alongside the extra energy savings.
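The duty-cycle trade-off can be illustrated with simple average-current arithmetic; the current draws and battery capacity below are hypothetical figures, not measurements from the prototype:

```python
def avg_current_ma(duty_cycle, active_ma, sleep_ma):
    """Time-weighted average current for a device active a fraction of the time."""
    return duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma

def battery_life_hours(capacity_mah, avg_ma):
    """Idealized battery longevity: capacity divided by average draw."""
    return capacity_mah / avg_ma

# Hypothetical sensor node: 80 mA active (radio + flow sensor), 0.5 mA asleep.
always_on = avg_current_ma(1.0, 80.0, 0.5)    # 80.0 mA average
duty_10pct = avg_current_ma(0.1, 80.0, 0.5)   # 8.45 mA average
life_gain = battery_life_hours(2000, duty_10pct) / battery_life_hours(2000, always_on)
```

The same arithmetic explains the paper's error-rate trade-off: lowering the duty cycle shrinks the average draw almost linearly, but the sensor is then blind to flow events that occur while it sleeps.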
The Internet of Things (IoT) creates large numbers of datasets that are handled in cloud data centers, and IoT services suffer greater delays when data travels longer distances to the cloud. Node deployment improves the performance of a multi-tier IoT-Fog environment by finding minimum-distance placements with low latency. Several node deployment strategies have been discussed previously, but they do not provide good results. To overcome these issues, an Efficient and Multi-Tier Node Deployment Strategy using the Variable Tangent Search Optimization Algorithm (VTSOA) is proposed for an IoT-Fog environment. This multi-tier strategy consists of several layers: the IoT device layer, the fog layer, and the cloud layer. The IoT device layer collects data from external devices and transmits it to the fog layer, which contains several nodes; sending data all the way to the cloud would increase latency. Therefore, VTSOA-based node deployment is performed in the fog layer to find minimum-distance nodes for effective communication. The proposed approach is implemented in MATLAB, and its performance is compared with various optimization algorithms.
Recent advances in Internet technology have shifted end-users' focus from traditional mobile applications to Internet of Things (IoT)-based service-oriented smart applications (SAs). These SAs use edge devices to obtain different types of fog services and provide real-time responses to end-users. The fog computing environment extends its services to the edge network layer and hosts SAs that require low latency, and the growing number of latency-aware SAs raises the issue of effective resource allocation in the fog environment. In this paper, we propose an effective multi-criteria decision-making (MCDM) based solution for resource ranking and resource allocation in the fog environment. The proposed algorithms implement modified versions of the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and the Analytical Hierarchy Process (AHP), and consider Quality of Experience (QoE) parameters, i.e., network bandwidth, average latency, and cores, for ranking and mapping resources. The results reveal that the proposed approach utilizes 70% of resources and reduces response time by an average of 7.5 s compared with the cloud and fog models, respectively. Similarly, the completion time of the proposed framework is the lowest, differing from the cloud and fog models by 9 s and 16 s.
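The standard (unmodified) TOPSIS procedure on which the proposed algorithms build can be sketched as follows; the resource matrix, weights, and benefit/cost flags are illustrative assumptions (bandwidth and cores as benefit criteria, latency as a cost criterion), not the paper's data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix[i][j]: value of criterion j for alternative i;
    benefit[j]: True if higher is better (bandwidth), False for costs (latency)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points per criterion direction.
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical fog resources: [bandwidth Mbps, latency ms, cores]
resources = [[100, 10, 8], [50, 50, 4], [80, 20, 6]]
scores = topsis(resources, [0.4, 0.4, 0.2], [True, False, True])
```

In an AHP+TOPSIS pipeline such as the paper's, the weight vector would come from AHP pairwise comparisons rather than being fixed by hand as here.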
Fog computing acts as an intermediate layer that reduces communication delays between end-users and the cloud by processing end-user requests locally on fog devices. The primary aim of fog devices is thus to ensure the authenticity of incoming network traffic; however, these devices are susceptible to malicious attacks. An efficient Intrusion Detection System (IDS) or Intrusion Prevention System (IPS) is necessary for secure, efficient fog operation, and IDSs are a fundamental component of any security system, such as Internet of Things (IoT) and fog networks, for ensuring Quality of Service (QoS). Although various machine learning and deep learning models have proven effective for intrusion detection, managing incremental data remains complex. The main intent of this paper is therefore to implement an effective model for intrusion detection on a fog computing platform. Initially, intrusion data are collected from diverse benchmark sources. Data cleaning is then performed to identify and remove errors and duplicate records, creating a reliable dataset; this improves the quality of the training data and enables accurate decision making. Conceptual and temporal features are extracted, and, to reduce data length and training complexity, optimal feature selection is performed using an improved meta-heuristic termed Modified Active Electrolocation-based Electric Fish Optimization (MAE-EFO). With the optimally selected features, incremental learning-based detection is accomplished by an Incremental Deep Neural Network (I-DNN). This deep learning model optimizes the testing weights using the proposed MAE-EFO, with the objective of minimizing the error between predicted and actual results, thus enhancing performance on new incremental data. Validation of the proposed model on benchmark and other datasets achieves attractive performance compared with other state-of-the-art IDSs.
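The I-DNN and MAE-EFO components are specific to the paper; as a generic stand-in for the incremental-learning idea (updating a model sample by sample as traffic arrives, instead of retraining from scratch), a minimal online perceptron over hypothetical two-feature traffic records might look like this:

```python
def perceptron_partial_fit(w, b, x, y, lr=0.1):
    """One incremental update: adjust weights only when the sample is misclassified."""
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
    if pred != y:
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        b = b + lr * y
    return w, b

# Hypothetical labeled stream: +1 = benign traffic record, -1 = intrusion.
stream = [([2, 2], 1), ([3, 1], 1), ([2, 3], 1),
          ([-2, -1], -1), ([-1, -2], -1), ([-3, -2], -1)]
w, b = [0.0, 0.0], 0.0
for _ in range(50):                      # replay the stream as new data "arrives"
    for x, y in stream:
        w, b = perceptron_partial_fit(w, b, x, y)
```

The appeal of such per-sample updates in a fog IDS is that the model keeps adapting to new incremental data without the cost of full retraining; the paper's I-DNN pursues the same goal with a deep network.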
Fog computing is emerging as a dominant modern computing model for delivering Internet of Things (IoT) computations; it extends the cloud computing paradigm to make it possible to execute IoT requests at the network edge. In such an independent and dispersed environment, resource allocation is vital, so scheduling is key to enhancing efficiency and allotting resources properly to tasks. This paper offers a distinct task scheduling algorithm for the fog computing environment that aims to minimize makespan and maximize resource utilization. The algorithm orders tasks by their mean Suffrage value, yielding high resource utilization and reduced makespan. The proposed algorithm is compared with several existing scheduling algorithms, and test results confirm that it achieves a higher resource utilization rate and lower makespan than other familiar algorithms.
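The classic Suffrage heuristic on which a mean-Suffrage variant would build can be sketched as follows (the expected-time-to-compute matrix is a toy example; the paper's mean-Suffrage ordering itself is not reproduced here):

```python
def suffrage_schedule(etc):
    """etc[t][m]: expected execution time of task t on machine m.
    Repeatedly assign the task that would 'suffer' most if denied its best machine."""
    n_machines = len(etc[0])
    ready = [0.0] * n_machines           # when each machine next becomes free
    unassigned = set(range(len(etc)))
    schedule = []
    while unassigned:
        best = None                      # (suffrage, task, machine, completion)
        for t in sorted(unassigned):
            times = sorted((ready[m] + etc[t][m], m) for m in range(n_machines))
            # Suffrage: gap between best and second-best completion time.
            suffrage = times[1][0] - times[0][0] if n_machines > 1 else 0.0
            if best is None or suffrage > best[0]:
                best = (suffrage, t, times[0][1], times[0][0])
        _, task, machine, completion = best
        ready[machine] = completion
        schedule.append((task, machine))
        unassigned.remove(task)
    return schedule, max(ready)

# Toy instance: 3 tasks on 2 fog machines.
schedule, makespan = suffrage_schedule([[3, 5], [4, 2], [6, 6]])
```

Prioritizing high-suffrage tasks protects the assignments that matter most, which is how the heuristic keeps machines evenly utilized and the makespan low.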
In the dynamic field of fog computing, there is a clear trend toward exploiting local, resource-rich nodes to bypass traditional cloud infrastructure limitations. This study introduces a method for enhancing the reliability of Wi-Fi systems, which are crucial in fog computing, by integrating quality-focused user feedback. While traditional metrics such as availability, performance, and security parameters are central to defining Quality of Service (QoS), our research also integrates user feedback as a key, albeit secondary, factor in assessing Wi-Fi node trustworthiness. The method combines system-generated QoS metrics with user feedback to increase the objectivity of trust assessment, connecting technical service quality with user experience and strengthening trust and reliability in fog computing. Using cloud theory techniques, our model employs both backward and forward cloud generators: the backward generator converts QoS data into qualitative insights, while the forward generator combines these insights with user feedback to evaluate Wi-Fi node service quality thoroughly. Integrating user feedback allows a more dynamic and responsive system evaluation, addressing the limitations of previous models by providing a comprehensive assessment that aligns technical service quality with user experience.
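The backward and forward cloud generators come from cloud-model theory; a common textbook formulation estimates the digital characteristics (Ex, En, He) from numeric samples and then regenerates "cloud drops" with membership degrees. The sketch below uses that generic formulation with hypothetical Wi-Fi latency samples, not the paper's feedback-weighted variant:

```python
import math
import random
import statistics

def backward_cloud(samples):
    """Estimate expectation Ex, entropy En, hyper-entropy He from numeric samples."""
    ex = statistics.mean(samples)
    en = math.sqrt(math.pi / 2) * statistics.mean(abs(x - ex) for x in samples)
    he = math.sqrt(max(statistics.pvariance(samples) - en ** 2, 0.0))
    return ex, en, he

def forward_cloud(ex, en, he, n_drops, seed=0):
    """Generate (value, membership) cloud drops from the digital characteristics."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n_drops):
        en_prime = max(abs(rng.gauss(en, he)), 1e-12)  # per-drop entropy sample
        x = rng.gauss(ex, en_prime)
        membership = math.exp(-((x - ex) ** 2) / (2 * en_prime ** 2))
        drops.append((x, membership))
    return drops

# Hypothetical Wi-Fi latency observations (ms) for one node:
ex, en, he = backward_cloud([12.0, 15.0, 11.0, 14.0, 13.0])
drops = forward_cloud(ex, en, he, 100)
```

In a trust pipeline like the paper's, the backward pass would summarize each node's QoS history, and the forward pass would fuse those summaries with user-feedback scores into a final trust value.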
The rapid growth of the Industrial Internet of Things (IIoT) offers unique scope for developing large-scale networking that connects many interconnected nodes to the Internet. The majority of current IIoT technologies rely on a unified architecture, which is easier to maintain but cannot support immutable and verifiable transactions between different groups. The blockchain framework provides many features desirable for large-scale IIoT technologies, such as decentralization, reliability, traceability, and immutability. This chapter proposes an IIoT blockchain-based infrastructure designed to support immutable and verifiable transactions. However, when blockchain technology is applied to the IIoT framework, the required storage space poses a challenge for cluster-based IIoT architectures. The proposed framework has a layered storage structure in which most of the blockchain resides in clouds at the Global, Fog, and Edge levels, while nearly all notable nodes are processed in an overlay network superimposed on independent industrial IoT networks. The framework links low-level IIoT networks, the blockchain overlay network, and the combined cloud architecture through two connectors: the blockchain interface and the fog interface port, which are interconnected for continuous data transmission. The blockchain interface in the overlay network builds blockchain blocks from the information gathered at IIoT nodes, and the cloud interface reconciles the constraints of optimizing the blockchain between the overlay network and the clouds. A test case is provided to demonstrate the efficiency of the proposed Edge Central Network Repository in a practical IIoT example.
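The block-building role of the blockchain interface can be illustrated with a generic hash-linked chain over IIoT readings; this is a minimal sketch, not the chapter's actual overlay protocol, and the field names are hypothetical:

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """Create a block whose hash commits to both the payload and the previous block."""
    body = {"prev": prev_hash, "data": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    """Recompute every hash and check that each block links to its predecessor."""
    prev = "0" * 64                      # genesis sentinel
    for block in chain:
        body = {"prev": block["prev"], "data": block["data"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

# Hypothetical sensor readings gathered by the blockchain interface:
chain = []
prev = "0" * 64
for reading in [{"node": "n1", "temp": 21.5}, {"node": "n2", "temp": 22.1}]:
    block = make_block(prev, reading)
    chain.append(block)
    prev = block["hash"]
```

Because each hash commits to the previous block, tampering with any stored reading invalidates every later block, which is the immutability property the chapter relies on; the storage challenge it addresses is where (Edge, Fog, or Global cloud) these ever-growing blocks should reside.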