Fog and cloud computing have established themselves as important components of the computational domain, providing a wide range of server capabilities as virtualized, scalable services. Cloud data centers use virtual machine (VM) consolidation to pack VMs onto a smaller number of physical servers in order to enhance resource utilization and energy efficiency. Incorrect VM placement, on the other hand, can result in frequent VM migrations and continual on-off switching of physical machines (PMs), lowering service quality and increasing energy usage. To address this issue, we present an effective and efficient VM consolidation strategy based on Sailfish Optimization. Performance evaluation of the proposed method shows reductions in power consumption, SLA violations, and execution time of up to 25%, 22%, and 6%, respectively, and an increase in resource utilization of up to 17% compared with existing models.
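As a rough illustration of how a metaheuristic such as Sailfish Optimization can be applied to VM consolidation, the sketch below scores a candidate VM-to-PM placement with a fitness that combines a linear power model and an overload penalty standing in for SLA risk. The names, constants, and the plain random search loop are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a fitness function for metaheuristic VM consolidation.
# Constants and names are illustrative, not taken from the paper.
import random

NUM_PMS, NUM_VMS = 4, 10
PM_CAPACITY = 100.0                                    # CPU capacity of each PM (arbitrary units)
VM_DEMAND = [random.uniform(5, 30) for _ in range(NUM_VMS)]

def power(util):
    """Simple linear power model: idle power plus a utilization-proportional term."""
    P_IDLE, P_MAX = 70.0, 250.0
    return 0.0 if util == 0 else P_IDLE + (P_MAX - P_IDLE) * util

def fitness(placement):
    """Lower is better: total power plus a penalty for overloaded hosts (proxy for SLA risk)."""
    load = [0.0] * NUM_PMS
    for vm, pm in enumerate(placement):
        load[pm] += VM_DEMAND[vm]
    utils = [l / PM_CAPACITY for l in load]
    total_power = sum(power(u) for u in utils)
    sla_penalty = sum(max(0.0, u - 1.0) for u in utils) * 1000.0
    return total_power + sla_penalty

# A Sailfish-style optimizer would iteratively update a population of placements
# with predator/prey moves and keep the fittest; here we just sample randomly.
best = min((tuple(random.randrange(NUM_PMS) for _ in range(NUM_VMS)) for _ in range(500)), key=fitness)
print("best placement:", best, "fitness:", round(fitness(best), 2))
```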
This study explores methods for optimizing an engineering cost construction system using cloud computing algorithms. A synthesis of relevant studies at home and abroad shows that current engineering cost construction systems generally suffer from slow data processing, high computational complexity, and insufficient accuracy. In terms of system design, specific technologies, methods, and algorithms are adopted. First, a stable cloud computing platform is established to ensure the reliability and stability of the system. Then, techniques such as data mining and machine learning are used to process and analyze large amounts of engineering data and achieve accurate prediction of engineering costs. At the same time, a series of algorithms is designed to optimize the computational efficiency and accuracy of the system. The effectiveness and feasibility of the proposed system are verified through experimental design and analysis. The results show that an engineering cost construction system based on cloud computing algorithms can significantly improve efficiency and accuracy, has high application value, and provides a new solution for the engineering construction industry.
In sports events, a large amount of data closely related to the event is generated owing to the involvement of a wide range of audiences, media, and participants. These data contain rich emotional information, reflecting the emotional tendencies of the group toward the event. Especially during a sports event crisis, changes in group emotions have a significant impact on the direction and outcome of the event. Therefore, in order to effectively monitor and analyze emotional dynamics in sports events, a cloud computing based method for identifying group emotions in sports event crises is proposed. Data related to the event, including audience feedback, media reports, and social media comments, are collected and integrated into a unified cloud computing platform. The data are then cleaned and denoised to eliminate irrelevant information and noise. To unify data formats and standardize processing, an optimal segmentation function is adopted that adaptively segments data based on its characteristics and distribution, thereby improving the accuracy and efficiency of data processing. On this basis, features related to crisis emotion recognition in sports events are extracted, completing the data preprocessing. Next, based on the dynamic evolution structure of crisis public opinion in sports events, group emotion recognition indicators are constructed and the weight of each indicator is calculated. These indicators and weights together constitute the group emotion recognition model for sports event crises. Through cloud computing technology, real-time monitoring and recognition of group emotions during sports event crises is achieved. The experimental results show that the recognition trend produced by this method is highly consistent with the actual development trend of the event, demonstrating excellent stability and convergence. A comparison of the average fitness and mean square error of different methods shows that the recognition performance of this method is significantly better than that of the methods in the reference literature. For specific competition events, the recognition results of this method are fully consistent with the actual situation, accurately identifying positive and negative emotions. In addition, this method can effectively identify athletes' emotional tendencies in different types of competitions and has good robustness and practical application value. It therefore provides a more reliable and effective tool for research and application in related fields, with broad application prospects.
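For illustration only, the snippet below shows one simple way indicator scores and weights could be combined into a single group-emotion score; the indicator names, weights, and thresholds are invented and are not taken from the proposed model.

```python
# Illustrative combination of emotion-recognition indicators with weights
# into a single group-emotion score; values are invented for the example.
indicators = {"audience_feedback": 0.42, "media_reports": -0.10, "social_media": 0.15}  # scores in [-1, 1]
weights    = {"audience_feedback": 0.5,  "media_reports":  0.2,  "social_media": 0.3}   # sum to 1

score = sum(weights[k] * indicators[k] for k in indicators)
label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
print(f"group emotion score = {score:.3f} -> {label}")
```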
To improve the sharing of online physical education teaching resources, the integration of these resources is designed using MOOC resource information fusion technology and a closed frequent item fusion calculation method, improving the intelligent scheduling and management of online physical education teaching resources. An online physical education teaching resource integration model based on big data information fusion and cluster scheduling is proposed. The factors of the online physical education teaching demand scale are initialized, the structure of common factors of the teaching resources is determined by factor analysis, the indicators are orthogonally rotated using the maximum variance method, a cloud computing model is adopted to analyze the integration structure of the resources, and the feature quantities of cross frequent item rules of the resources are extracted. A parallel K-means clustering model for resource integration is constructed, the fuzzy correlation degree features of the resources are calculated, the extracted correlation features are processed with C-means clustering for big data fusion, and multi-objective optimization is adopted for resource integration and adaptive scheduling, improving the adaptive scheduling ability of online physical education resources. SPSS statistical software is used for empirical analysis, and the results show that the method achieves a high degree of integration of online physical education teaching resources, promoting the development of online physical education teaching.
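The snippet below is a generic, non-parallel K-means sketch of the clustering step the integration model relies on; the feature vectors, cluster count, and initialization are illustrative assumptions, and the paper's parallel K-means and C-means fusion variants are not reproduced.

```python
# Generic K-means sketch for clustering fused resource feature vectors.
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]          # random initial centers
    for _ in range(iters):
        # assign each feature vector to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # recompute centers; keep the old center if a cluster becomes empty
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

X = np.random.default_rng(1).random((12, 3))    # toy feature vectors for teaching resources
labels, centers = kmeans(X, k=3)
print(labels)
```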
Fog computing is widely used to filter and compress data before sending it to a cloud server, and it enables an alternative method to reduce the complexity of medical image processing and steadily improve its dependability. Medical images are produced by imaging modalities such as X-rays, computed tomography (CT) scans, magnetic resonance imaging (MRI) scans, and ultrasound (US); these images are large and require a huge amount of storage. This problem is addressed through compression, and much work has been done in this area. However, before adding more techniques to fog, a high compression ratio (CR) must be achieved in a shorter time, thereby consuming less network traffic. The Le Gall 5/3 integer wavelet transform (IWT) and a set partitioning in hierarchical trees (SPIHT) encoder were used to implement the image compression technique in this study, with MRI images used in the experiments. The suggested technique compresses the medical image with an improved CR and less compression time (CT). The proposed approach achieves an average CR of 84.8895% and a peak signal-to-noise ratio (PSNR) of 40.92 dB. Using Huffman coding, the proposed approach reduces the CT by 36.7434 s compared to the IWT, and in terms of CR it outperforms IWT with Huffman coding by 12%, the existing approach achieving a CR of 72.36%. The shortcoming of the suggested work is that the high CR causes a decline in the quality of the medical images. PSNR values can be raised, and further effort can be directed at compressing colored and 3-dimensional medical images.
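The following sketch shows a plain 1-D forward Le Gall 5/3 integer lifting transform of the kind named above (a 2-D transform applies it along rows and then columns before SPIHT encoding); the boundary handling and the toy signal are simplifying assumptions and this is not the study's code.

```python
# Minimal sketch of the 1-D forward Le Gall 5/3 integer lifting transform.
def legall53_forward(x):
    """Return (approximation, detail) coefficients for an even-length integer signal."""
    n = len(x)
    assert n % 2 == 0 and n >= 2
    def xe(i):                                   # symmetric (mirror) extension at the borders
        if i < 0: i = -i
        if i > n - 1: i = 2 * (n - 1) - i
        return x[i]
    # predict step: detail (high-pass) coefficients
    d = [xe(2*i + 1) - ((xe(2*i) + xe(2*i + 2)) >> 1) for i in range(n // 2)]
    # update step: approximation (low-pass) coefficients
    s = [xe(2*i) + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n // 2)]
    return s, d

s, d = legall53_forward([10, 12, 15, 14, 9, 8, 8, 11])   # toy row of pixel values
print("approx:", s, "detail:", d)
```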
Ensuring the confidentiality of data transferred over cloud networks, including sensitive information such as medical images and monetary transactions, is significant and requires strict security mechanisms to protect personal information from hackers and malicious users, since security breaches affect the privacy and reputation of the user. Hence, an image-based cryptographic method is developed to improve security performance. Here, a time-bound encryption approach is established for protecting confidential medical images in e-healthcare. It further employs a Lupus Coyote optimization (LCO) algorithm that utilizes systematic chaotic maps (CM), which operate on parameters that may be either discrete- or continuous-time, and the inception of chaotic signals prevents prohibited accesses. The proposed LCO optimizes the pixel-assembling process by determining the right parameter values for formulating the chaotic patterns, providing a robust cryptographic method. Finally, the permutation and diffusion properties are controlled across the entire system to deal with the complexity of pixel shuffling and substitution operations. Additionally, the correlation between neighboring pixels is disrupted, which makes it harder for attackers to extract information, and the diffusion algorithm helps eliminate the storage issue. Thus, the performance of the LCO-based time-bound image encryption using the CM model is improved over other systems: the performance metrics cosine similarity (CS), histogram correlation (HC), mean square error (MSE), peak signal-to-noise ratio (PSNR), root mean square error (RMSE), and structural similarity index measure (SSIM) attained values of 86.53, 0.935, 4.12, 39.51 dB, 4.54, and 0.94, respectively, for 50 images with a population size of 250.
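As a hedged sketch of the general chaotic-map permutation-and-diffusion mechanism the abstract builds on, the code below shuffles pixels by the sort order of a logistic-map sequence and XORs them with a chaotic keystream; the map, its parameters, and the toy image are assumptions, and the LCO parameter search itself is not shown.

```python
# Generic chaotic-map permutation (confusion) and XOR diffusion sketch.
import numpy as np

def logistic_sequence(x0, r, n):
    """Generate n values of the logistic map x <- r*x*(1-x), a common chaotic source."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def encrypt(img, x0=0.3141, r=3.99):
    flat = img.flatten().astype(np.uint8)
    chaos = logistic_sequence(x0, r, flat.size)
    perm = np.argsort(chaos)                        # permutation order derived from the chaotic sequence
    shuffled = flat[perm]
    keystream = (chaos * 255).astype(np.uint8)      # diffusion: XOR with a chaotic keystream
    cipher = shuffled ^ keystream
    return cipher.reshape(img.shape), perm

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # stand-in for a medical image
cipher, perm = encrypt(img)
print(cipher)
```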
Cloud-based solutions for software development activities have been emerging in the last decade. This study aims to develop a hybrid technology adoption model for cloud use in software development activities, based on the Technology Acceptance Model (TAM), the Technology–Organization–Environment (TOE) framework, and the proposed Personal–Organization–Project (POP) extension. The methodology is a questionnaire-based survey; data are collected through personally administered questionnaire sessions with developers and managers, resulting in 268 responses covering 84 software development projects from 30 organizations in Turkey, selected by considering company and project sizes and geographical proximity to allow face-to-face response collection. Structural Equation Modeling (SEM) is used for statistical evaluation and hypothesis testing. The final model, reached after modifications, was found to explain the intention to adopt and use the cloud for software development meaningfully. To the best of our knowledge, this is the first study to identify and understand the factors that affect the intention to develop software on the cloud. The developed hybrid model was validated for use in further technology adoption studies. Upon modifying the conceptual model and discovering new relations, a novel model is proposed that draws the relationships between the identified factors and actual use, intention to use, and perceived suitability. Practical and social implications are drawn from the results to help organizations and individuals make decisions on cloud adoption for software development.
The rising incidence of heart disease requires effective and robust prediction algorithms, especially in Internet of Things (IoT)-cloud-based smart healthcare frameworks. This study presents a novel method for forecasting cardiovascular disease using superior data preprocessing, feature selection, and deep learning techniques. First, preprocessing is performed using the Z-score min–max normalization technique to ensure consistent data scaling and standardize the dataset. After preprocessing, an innovative hybrid feature selection technique that combines Black Widow Optimization (BWO) and Influencer Buddy Optimization (IBO) is utilized. By achieving equilibrium between exploration and exploitation, the BWO-IBO technique enhances feature selection and extracts the most pertinent information for heart disease prediction. The Gates-Controlled Deep Unfolding Network (GCDUN), tuned by the Crayfish Optimization Algorithm (COA), forms the prediction framework. Through a gates-controlled mechanism and a COA component that speeds up network parameter tuning based on crayfish behavior, GCDUN-COA enriches feature representation and improves the decision plane. The fusion of IoT and a cloud-based framework further enhances data collection, processing, and remote monitoring, making the system highly scalable and efficient for clinical use. In predicting cardiac disease, the recommended method shows improved F1-score, specificity, accuracy, recall, and precision, consistently achieving above 99% across all performance metrics. By providing prompt diagnosis and intervention via an intelligent, adaptive prediction system, an IoT-driven cloud-based medical technology has the potential to revolutionize cardiac care.
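One plausible reading of the Z-score min–max preprocessing step is sketched below: features are first standardized and then rescaled to [0, 1]. The toy feature matrix and the exact order of the two operations are assumptions, not details taken from the study.

```python
# Sketch of combined Z-score standardization followed by min-max rescaling to [0, 1].
import numpy as np

def zscore_minmax(X, eps=1e-12):
    z = (X - X.mean(axis=0)) / (X.std(axis=0) + eps)                      # Z-score per feature
    return (z - z.min(axis=0)) / (z.max(axis=0) - z.min(axis=0) + eps)    # min-max per feature

X = np.array([[63., 145., 233.],
              [37., 130., 250.],
              [41., 140., 204.]])    # toy clinical feature rows (age, blood pressure, cholesterol)
print(zscore_minmax(X))
```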
Smart Grid Cyber Physical Systems (SG-CPS) have a substantial impact on power grid infrastructure upgrading. Nonetheless, owing to the sophisticated nature of the infrastructure and the critical demand for resilient intrusion prevention systems, protecting its security against data integrity attacks is a significant challenge. The simulation and assessment of security performance in SG-CPS present substantial hurdles in real-world power grid systems, owing mostly to experimental constraints, which necessitates novel ways to improve distribution chain security. This research introduces a novel approach employing a Deep Adversarial Probabilistic Neural Network (DAPNN)-based intrusion prevention system in a cloud environment. Combining Bayesian Probabilistic Neural Networks (BEPNNs) with adversarial training and rule-based decision-making enhances precision and resilience. The major goal of this research is to detect and counteract false data injection (FDI) attacks that have the potential to compromise the integrity of power grid data. The proposed methodology for intrusion detection in SG-CPS combines BEPNNs with adversarial training, and the addition of rule-based decision-making improves the system's precision and resilience. The IEEE 24-bus system provides data points covering normal operating conditions, contingency scenarios, and intentional attacks. The training procedure uses a BEPNN for feature extraction together with adversarial training approaches, and the intrusion detection system incorporates rule-based decision-making logic. The cloud infrastructure used in the study is Microsoft Azure. The results show that the DAPNN-based intrusion prevention system is effective in detecting and mitigating FDI attacks in SG-CPS, outperforming in terms of accuracy, precision, recall, and F-measure and thereby improving the security of the power grid infrastructure.
Cloud service companies can increase the return on their investments by lowering the energy usage of their data centers, but they must also ensure that the services they supply satisfy the diverse needs of their customers. In this paper, we provide a resource management technique to lower cloud data centers' energy consumption and Service Level Agreement (SLA) violations. Current methods also do not take into account SLA violations caused by workload. We developed a novel strategy to address these problems, which is highly beneficial for decreasing SLA violations as well as lowering energy loss. The results were significantly better than the current state-of-the-art approaches in terms of OSLAV, Virtual Machine (VM) migrations, throughput, average response time, and performance degradation. With the proposed method, the average values obtained for OSLAV, performance degradation, throughput, average response time, and VM migrations are 2.3%, 6.76%, 37%, 247.5 ms, and 582, respectively.
Federated Internet of Things (IoT) presents both unprecedented opportunities and challenges in security and data management. This study explores the integration of big data analytics and Quantum Computing as potential solutions to address security concerns within the Federated IoT ecosystem. The study examines the implications of leveraging big data analytics to process and analyze the massive volume of data generated by IoT devices. Advanced analytics techniques, including machine learning and anomaly detection algorithms, are employed to enhance the detection and mitigation of security threats such as unauthorized access, data breaches and malicious attacks. Furthermore, the study investigates the role of Quantum Computing infrastructure in providing scalable and reliable resources for securely storing, processing and transmitting IoT data. By offloading computational tasks to quantum-based platforms, the aim is to alleviate the burden on edge devices while ensuring robust security measures are in place to safeguard sensitive information. A comprehensive review of existing literature and case studies identifies key challenges and opportunities in implementing big data and Quantum Computing solutions within the Federated IoT environment. The study also proposes potential frameworks and methodologies for integrating these technologies effectively, considering factors such as data privacy, scalability and interoperability. Overall, this research aims to advance secure IoT systems by leveraging big data analytics and cloud computing. By addressing security concerns proactively and adopting innovative approaches, the goal is to create a more resilient and trustworthy Federated IoT ecosystem, benefiting society at large.
The AREA of a schedule for executing DAGs is the average number of DAG-chores that are eligible for execution at each step of the computation. AREA maximization is a new optimization goal for schedules that execute DAGs within computational environments, such as Internet-based computing, clouds, and volunteer computing projects, that are dynamically heterogeneous, in the sense that the environments' constituent computers can change their effective powers at times and in ways that are not predictable. This paper is motivated by the thesis that, within dynamically heterogeneous environments, DAG-schedules that have larger AREAs execute a computation-DAG with smaller completion time under many circumstances; this thesis is supported by preliminary simulation-based experiments. While every DAG admits an AREA-maximizing schedule, it is likely computationally difficult to find such a schedule for an arbitrary DAG. Earlier work has shown how to craft AREA-maximizing schedules efficiently for a number of families of DAGs whose structures are reminiscent of many scientific computations. The current paper extends this work by showing how to efficiently craft AREA-maximizing schedules for series-parallel DAGs, a family that models a multithreading computing paradigm. The techniques for crafting these schedules promise to apply also to other large families of recursively defined DAGs. Moreover, the ability to derive these schedules efficiently leads to an efficient AREA-oriented heuristic for scheduling arbitrary DAGs.
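To make the AREA notion concrete, the sketch below computes the average number of eligible chores over the steps of a given execution order for a small diamond DAG; the data structures are illustrative, and this is only one direct reading of the definition, not the paper's algorithms for crafting AREA-maximizing schedules.

```python
# Illustrative computation of a schedule's AREA: the average number of DAG-chores
# eligible for execution at each step of a given (topological) execution order.
def area(dag, schedule):
    """dag: {node: set of predecessors}; schedule: nodes in execution order."""
    done = set()
    eligible_counts = []
    for node in schedule:
        eligible = [v for v in dag if v not in done and dag[v] <= done]
        eligible_counts.append(len(eligible))
        done.add(node)
    return sum(eligible_counts) / len(schedule)

# Diamond DAG: a -> b, a -> c, and both b and c -> d
dag = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
print(area(dag, ["a", "b", "c", "d"]))   # 1.25 for this order
```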
In recent years, intensive use of computing has been a central strategy of investigation in several scientific research projects. Progress in computing technology has opened unprecedented opportunities for the systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago.
This paper focuses on the strategies in use: it reviews the various components required for an effective solution that ensures the storage, long-term preservation, and worldwide distribution of the large quantities of data involved in a large scientific research project.
The paper also describes several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.
The emerging cloud computing model has recently gained a lot of interest both from commercial companies and from the research community. XtreemOS is a distributed operating system for large-scale wide-area dynamic infrastructures spanning multiple administrative domains. XtreemOS, which is based on the Linux operating system, has been designed as a Grid operating system providing native support for virtual organizations. In this paper, we discuss the positioning of XtreemOS technologies with regard to cloud computing. More specifically, we investigate a scenario where XtreemOS could help users take full advantage of clouds in a global environment including their own resources and cloud resources. We also discuss how the XtreemOS system could be used by cloud service providers to manage their underlying infrastructure. This study shows that the XtreemOS distributed operating system is a highly relevant technology in the new era of cloud computing where future clouds seamlessly span multiple bare hardware providers and where customers extend their IT infrastructure by provisioning resources from different cloud service providers.
Commercial cloud offerings, such as Amazon's EC2, let users allocate compute resources on demand, charging based on reserved time intervals. While this gives great flexibility to elastic applications, users lack guidance for choosing between multiple offerings in order to complete their computations within given budget constraints. In this work, we present BaTS, our budget-constrained scheduler. Using a small task sample, BaTS can estimate costs and makespan for a given bag of tasks on different cloud offerings. It provides the user with a choice of options before execution and then schedules the bag according to the user's preferences. BaTS requires no a priori information about task completion times. We evaluate BaTS by emulating different cloud environments on the DAS-3 multi-cluster system. Our results show that BaTS correctly estimates budget and makespan for the scenarios investigated; the user-selected schedule is then executed within the given budget limitations.
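A toy sketch of the sample-based estimation idea follows: run a small task sample, estimate the mean runtime, and extrapolate makespan and cost per offering. The prices, machine counts, and the simple linear extrapolation are invented for illustration and do not reflect BaTS's actual estimator or charging granularity.

```python
# Toy extrapolation of makespan and cost for a bag of tasks from a small runtime sample.
def estimate(sample_runtimes, n_tasks, machines, price_per_hour):
    mean_rt = sum(sample_runtimes) / len(sample_runtimes)        # estimated seconds per task
    makespan_s = mean_rt * n_tasks / machines                    # idealized parallel completion time
    cost = machines * (makespan_s / 3600.0) * price_per_hour     # simple per-machine-hour charge
    return makespan_s, cost

sample = [118, 95, 130, 102, 111]                                # runtimes of sampled tasks (seconds)
for machines, price in [(10, 0.09), (40, 0.38)]:                 # two hypothetical cloud offerings
    m, c = estimate(sample, n_tasks=1000, machines=machines, price_per_hour=price)
    print(f"{machines:>3} machines: ~{m/3600:.1f} h makespan, ~${c:.2f} total cost")
```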
Rapid advances in cloud computing have made the vision of utility computing a near-reality, but only in certain domains. For science and engineering parallel or distributed applications, on-demand access to resources within grids and clouds is hampered by two major factors: communication performance and paradigm mismatch issues. We propose a framework for addressing the latter aspect via software adaptations that attempt to reconcile model and interface differences between application needs and resource platforms. Such matching can greatly enhance flexibility in choice of execution platforms — a key characteristic of utility computing — even though they may not be a natural fit or may incur some performance loss. Our design philosophy, middleware components, and experiences from a cross-paradigm experiment are described.
Scientists today are exploring the use of new tools and computing platforms to do their science. They are using workflow management tools to describe and manage complex applications and are evaluating the features and performance of clouds to see if they meet their computational needs. Although today, hosting is limited to providing virtual resources and simple services, one can imagine that in the future entire scientific analyses will be hosted for the user. The latter would specify the desired analysis, the timeframe of the computation, and the available budget. Hosted services would then deliver the desired results within the provided constraints. This paper describes current work on managing scientific applications on the cloud, focusing on workflow management and related data management issues. Frequently, applications are not represented by single workflows but rather as sets of related workflows, known as workflow ensembles. Thus, hosted services need to be able to manage entire workflow ensembles, evaluating tradeoffs between completing as many high-value ensemble members as possible and delivering results within a certain time and budget. This paper gives an overview of existing hosted science issues, presents the current state of the art on resource provisioning that can support it, as well as outlines future research directions in this field.
Hadoop is a widely used open-source implementation of MapReduce, a popular programming model for parallel processing of large-scale data-intensive applications in a cloud environment. Sharing Hadoop clusters involves a tradeoff between fairness and data locality. When launching a local task is not possible, the Hadoop Fair Scheduler (HFS) with delay scheduling postpones the node allocation for a while for the job that is to be scheduled next according to fairness, in order to achieve high locality. This waiting is wasted when the desired locality cannot be achieved within a reasonable period. In this paper, a modified delay scheduling in HFS is proposed and implemented in Hadoop. It avoids this waiting when achieving locality is not possible. Instead of blindly waiting for a local node, the proposed algorithm first estimates the time to wait for a local node for the job and avoids waiting whenever locality cannot be achieved within the predefined delay threshold, while accomplishing the same locality. The performance of the proposed algorithm is evaluated through extensive experiments, and it works significantly better in terms of response time and fairness, achieving up to 20% speedup and improving fairness by up to 38% in certain cases.
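A minimal sketch of the scheduling decision described above, assuming the scheduler can estimate when slots on data-local nodes will free up; the function and variable names are illustrative and are not taken from the Hadoop code base or the paper's implementation.

```python
# Wait for a data-local node only if the estimated wait fits within the delay threshold.
def should_wait(estimated_free_times, local_nodes, delay_threshold):
    """estimated_free_times: {node: seconds until a task slot frees up on that node}."""
    waits = [estimated_free_times[n] for n in local_nodes if n in estimated_free_times]
    if not waits:
        return False, None                      # no local node will free up: launch non-locally now
    best = min(waits)
    return best <= delay_threshold, best        # wait only if locality is achievable in time

ok, wait = should_wait({"node1": 12.0, "node2": 4.5, "node3": 30.0},
                       local_nodes={"node2", "node3"}, delay_threshold=10.0)
print("wait for local node" if ok else "schedule non-locally", wait)
```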
With on-demand access to compute resources, pay-per-use, and elasticity, the cloud evolved into an attractive execution environment for High Performance Computing (HPC). Whereas elasticity, which is often referred to as the most beneficial cloud-specific property, has been heavily used in the context of interactive (multi-tier) applications, elasticity-related research in the HPC domain is still in its infancy. Existing parallel computing theory as well as traditional metrics to analytically evaluate parallel systems do not comprehensively consider elasticity, i.e., the ability to control the number of processing units at runtime. To address these issues, we introduce a conceptual framework to understand elasticity in the context of parallel systems, define the term elastic parallel system, and discuss novel metrics for both elasticity control at runtime as well as the ex-post performance evaluation of elastic parallel systems. Based on the conceptual framework, we provide an in-depth analysis of existing research in the field to describe the state-of-the-art and compile our findings into a research agenda for future research on elastic parallel systems.
This paper presents a hybrid approach based on discrete Particle Swarm Optimization (PSO) and chaotic strategies for solving the multi-objective task scheduling problem in cloud computing. The main purpose is to allocate the submitted tasks to the available resources in the cloud environment with minimum makespan (i.e., schedule length) and processing cost while maximizing resource utilization without violating the Service Level Agreement (SLA) between users and cloud providers. The main challenges faced by PSO when used to solve scheduling problems are premature convergence and trapping in local optima. This paper presents an enhanced PSO algorithm hybridized with chaotic map strategies, called the Enhanced Particle Swarm Optimization based on Chaotic Strategies (EPSOCHO) algorithm. The proposed approach uses two chaotic map strategies, the sinusoidal iterator and the Lorenz attractor, to enhance the PSO algorithm and obtain good convergence and diversity when optimizing task scheduling in cloud computing. The approach is simulated and implemented in the CloudSim simulator. Its performance is compared with the standard PSO algorithm, the improved PSO algorithm with longest job to fastest processor (LJFP-PSO), and the improved PSO algorithm with minimum completion time (MCT-PSO) using different sizes of tasks and various benchmark datasets. The results clearly demonstrate the efficiency of the proposed approach in terms of makespan, processing cost, and resource utilization.
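The sketch below illustrates one of the named strategies, injecting a sinusoidal-iterator chaotic sequence into the PSO velocity update in place of one of the uniform random factors, on a toy continuous objective; the constants, the objective, and the continuous encoding are assumptions and differ from the paper's discrete task-scheduling formulation.

```python
# Continuous PSO with a sinusoidal chaotic iterator feeding one velocity-update factor.
import random, math

def sinusoidal(x):                      # sinusoidal chaotic iterator: x <- 2.3 * x^2 * sin(pi * x)
    return 2.3 * x * x * math.sin(math.pi * x)

def pso_chaotic(obj, dim=4, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=obj)[:]
    chaos = 0.7                                          # chaotic state, kept in (0, 1)
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                chaos = sinusoidal(chaos)
                r1, r2 = abs(chaos) % 1.0, random.random()   # chaotic factor replaces one rand()
                vel[i][d] = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if obj(pos[i]) < obj(pbest[i]):
                pbest[i] = pos[i][:]
                if obj(pbest[i]) < obj(gbest):
                    gbest = pbest[i][:]
    return gbest

print(pso_chaotic(lambda x: sum(v * v for v in x)))      # toy objective: sphere function
```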