We consider the problem of scheduling an application on a parallel computational platform. The application is a particular task graph, either a linear chain of tasks or a set of independent tasks. The platform is made of identical processors whose speed can be dynamically modified. It is also subject to failures: if a processor is slowed down to decrease energy consumption, it has a higher chance to fail. Therefore, the scheduling problem requires us to re-execute or replicate tasks (i.e., execute the same task twice, either on the same processor or on two distinct processors) in order to increase reliability. It is a tri-criteria problem: the goal is to minimize the energy consumption while enforcing a bound on the total execution time (the makespan) and a constraint on the reliability of each task. Our main contribution is to propose approximation algorithms for linear chains of tasks and for independent tasks. For linear chains, we design a fully polynomial-time approximation scheme. However, we show that there exists no constant-factor approximation algorithm for independent tasks unless P=NP, and we propose in this case an approximation algorithm with a relaxation on the makespan constraint.
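The abstract does not state the underlying cost and reliability formulas. A common formulation in this line of work (assumed here for illustration; the paper's exact model may differ) combines a cubic dynamic-power model with a failure rate that grows exponentially as a processor is slowed down:

```latex
% Illustrative model only -- assumed, not taken from the paper.
% Task i has weight w_i and is executed at speed s_i.
E = \sum_i w_i\, s_i^{2}
    \quad\text{(energy: power } s_i^{3} \text{ for duration } w_i/s_i\text{)}
\qquad
\sum_i \frac{w_i}{s_i} \le M
    \quad\text{(makespan bound for a linear chain)}

\lambda(s) = \lambda_0\, 10^{\, d\,\frac{s_{\max}-s}{s_{\max}-s_{\min}}}
\qquad
R_i = e^{-\lambda(s_i)\, w_i/s_i} \ge R_{\min}
    \quad\text{(per-task reliability constraint)}
```

Under such a model, re-executing task i at the same speed raises its reliability to 1 - (1 - R_i)^2 at roughly twice the energy, which is precisely the energy/makespan/reliability trade-off the proposed algorithms must navigate.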
This paper presents a hybrid approach based on discrete Particle Swarm Optimization (PSO) and chaotic strategies for solving the multi-objective task scheduling problem in cloud computing. The main purpose is to allocate the submitted tasks to the available resources in the cloud environment with minimum makespan (i.e., schedule length) and processing cost while maximizing resource utilization, without violating the Service Level Agreement (SLA) between users and cloud providers. The main challenges faced by PSO when used to solve scheduling problems are premature convergence and trapping in local optima. This paper presents an enhanced PSO algorithm hybridized with chaotic map strategies, called the Enhanced Particle Swarm Optimization based on Chaotic Strategies (EPSOCHO) algorithm. Our approach employs two chaotic map strategies, the sinusoidal iterator and the Lorenz attractor, to enhance the PSO algorithm in order to obtain good convergence and diversity when optimizing task scheduling in cloud computing. The proposed approach is simulated and implemented in the CloudSim simulator. Its performance is compared with the standard PSO algorithm, the improved PSO algorithm with longest job to fastest processor (LJFP-PSO), and the improved PSO algorithm with minimum completion time (MCT-PSO) using different sizes of tasks and various benchmark datasets. The results clearly demonstrate the efficiency of the proposed approach in terms of makespan, processing cost, and resource utilization.
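As a concrete illustration of the idea, the sketch below replaces the uniform random factors in a discrete PSO velocity update with a sinusoidal chaotic iterator, one of the two maps named in the abstract. It is a minimal, assumed reconstruction: the fitness shown is makespan only (EPSOCHO also weighs processing cost and resource utilization), the map constant a = 2.3 and all PSO parameters are illustrative, and the Lorenz-attractor variant is omitted.

```python
import numpy as np

def sinusoidal_map(x, a=2.3):
    """Sinusoidal chaotic iterator: x_{k+1} = a * x_k^2 * sin(pi * x_k)."""
    return a * x**2 * np.sin(np.pi * x)

def chaotic_pso(etc, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal discrete PSO for task->VM mapping; chaotic sequences replace
    the usual uniform randoms r1, r2 in the velocity update.
    etc[t, v] is the expected time to compute task t on VM v."""
    n_tasks, n_vms = etc.shape
    rng = np.random.default_rng(seed)
    pos = rng.random((swarm, n_tasks)) * n_vms   # continuous positions
    vel = np.zeros_like(pos)
    chaos = rng.random(swarm) * 0.4 + 0.5        # seed inside the chaotic band

    def makespan(p):
        assign = p.astype(int)                   # floor position to a VM index
        load = np.zeros(n_vms)
        for t, v in enumerate(assign):
            load[v] += etc[t, v]
        return load.max()

    pbest = pos.copy()
    pbest_f = np.array([makespan(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        chaos = sinusoidal_map(chaos)            # advance chaotic state for r1
        r1 = chaos[:, None]
        chaos = sinusoidal_map(chaos)            # advance again for r2
        r2 = chaos[:, None]
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, n_vms - 1e-9)
        f = np.array([makespan(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return pbest[pbest_f.argmin()].astype(int), pbest_f.min()
```

The usual argument for such maps, consistent with the abstract's motivation, is that the ergodic, non-repeating chaotic sequence keeps perturbing the swarm and so helps PSO escape premature convergence.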
Scientific workflow is a common model for organizing large scientific computations. It borrows the concept of workflow from business activities to manage the complicated processes in scientific computing automatically or semi-automatically. Workflow scheduling, which maps tasks in workflows to parallel computing resources, has been studied extensively over the years. In recent years, with the rise of cloud computing as a new large-scale distributed computing model, it has become of great significance to study the workflow scheduling problem in the cloud. Compared with traditional distributed computing platforms, cloud platforms have unique characteristics such as the self-service resource management model and the pay-as-you-go billing model; therefore, workflow scheduling in the cloud needs to be reconsidered. When scheduling workflows in clouds, the monetary cost and the makespan of the workflow executions concern both the cloud service providers (CSPs) and the customers. In this paper, we study a series of cost-and-time-aware workflow scheduling algorithms in cloud environments, aiming to provide researchers with a choice of appropriate cloud workflow scheduling approaches in various scenarios. We conducted a broad review of different cloud workflow scheduling algorithms and categorized them based on their optimization objectives and constraints. We also discuss possible future research directions for cloud workflow scheduling.
The main objective of the proposed methodology is multi-objective job scheduling using a hybridization of the Whale and Bat optimization algorithms (WBAT), which is used to modify existing solutions and adopt new, better solutions based on the objective function. The scheduling function in the proposed job scheduling strategy first creates a set of jobs and cloud nodes to generate the population by assigning jobs to cloud nodes randomly, and evaluates the fitness function, which minimizes the makespan and maximizes the quality of jobs. Second, the function iterates to regenerate populations based on WBAT behavior so as to produce the best job schedule, i.e., the one giving minimum makespan and good quality of jobs. The experimental results show that the proposed method outperforms other job scheduling methods.
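The two-step loop described above (random job-to-node population, then WBAT-guided regeneration) can be sketched as follows. The whale- and bat-style operators shown are simplified stand-ins, since the abstract does not give the actual update equations, and the "quality of jobs" term is omitted from the fitness.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(schedule, runtimes):
    """Negative makespan (higher is better); runtimes[j, n] = job j's time on
    node n. The paper's 'quality of jobs' term is omitted, as its exact form
    is not given in the abstract."""
    load = np.zeros(runtimes.shape[1])
    for j, n in enumerate(schedule):
        load[n] += runtimes[j, n]
    return -load.max()

def wbat_schedule(runtimes, pop_size=40, iters=300):
    """Skeleton of the described loop: random job->node population, then
    iterative regeneration guided by the best schedule (a simplified stand-in
    for the full WBAT operators)."""
    n_jobs, n_nodes = runtimes.shape
    pop = rng.integers(0, n_nodes, size=(pop_size, n_jobs))
    fit = np.array([fitness(s, runtimes) for s in pop])
    for _ in range(iters):
        best = pop[fit.argmax()]
        copy_mask = rng.random(pop.shape) < 0.5   # whale-style move toward best
        cand = np.where(copy_mask, best, pop)
        pulse = rng.random(pop.shape) < 0.05      # bat-style random pulses
        cand[pulse] = rng.integers(0, n_nodes, size=pulse.sum())
        cand_fit = np.array([fitness(s, runtimes) for s in cand])
        keep = cand_fit > fit                     # greedy replacement
        pop[keep], fit[keep] = cand[keep], cand_fit[keep]
    return pop[fit.argmax()], -fit.max()          # best schedule, its makespan
```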
Cloud computing has emerged as a globally networked resource model that shares data with users easily. With the widespread availability of network technologies, user requests increase day by day. Nowadays, a foremost complication in cloud technology is task scheduling. Load placement and the arrangement of tasks are two important parameters in the cloud domain, which determine the Quality of Service (QoS). In this paper, we formulate the joint minimization of makespan and energy consumption in task scheduling using the Local Pollination-based Gray Wolf Optimizer (LPGWO) algorithm, a hybrid that combines the Gray Wolf Optimizer (GWO) algorithm and the Flower Pollination Algorithm (FPA). GWO's best-guided search factor is used to increase convergence speed, while FPA distributes solution information to the next packet of candidate solutions using the local pollination concept. Chaotic mapping and opposition-based learning (OBL) are used to provide suitable initial candidates for task solutions. The experiments delivered better task scheduling results under both low and high heterogeneity of physical machines, and comparisons with simulation results showed fast convergence together with minimal makespan and energy consumption.
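The initialization and information-sharing steps named in the abstract lend themselves to a short sketch. Assumptions: the logistic map stands in for the unnamed chaotic map, and the local-pollination rule is the standard FPA formula x_i + eps * (x_j - x_k); the GWO encircling update and the LPGWO-specific coupling are not shown.

```python
import numpy as np

def logistic_sequence(n, x0=0.7, r=4.0):
    """Chaotic logistic sequence; the abstract does not name the specific
    map, so the classic logistic map is assumed here."""
    xs = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1.0 - x0)
        xs[i] = x0
    return xs

def chaotic_obl_init(pop_size, dim, lo, hi):
    """Chaotic initialization plus opposition-based learning (OBL): for every
    chaotic candidate x, also generate its opposite lo + hi - x; the fitter
    half of the combined pool would seed the LPGWO population."""
    chaos = logistic_sequence(pop_size * dim).reshape(pop_size, dim)
    x = lo + chaos * (hi - lo)
    return np.vstack([x, lo + hi - x])

def local_pollination(pop, rng):
    """FPA local-pollination step: x_i <- x_i + eps * (x_j - x_k), with
    eps ~ U(0, 1) and j, k random peers, spreading information locally."""
    j, k = rng.integers(0, len(pop), size=2)
    eps = rng.random(pop.shape[1])
    return pop + eps * (pop[j] - pop[k])
```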
The emergence of cloud computing in the big data era has exerted a substantial impact on our daily lives. Conventional reliability-aware workflow scheduling (RWS) is capable of improving or maintaining system reliability through fault-tolerance techniques such as replication and checkpointing-based recovery. However, the fault-tolerance techniques used in RWS inevitably result in higher system energy consumption, longer execution times, and worse thermal profiles, which in turn lead to a decreased hardware lifespan. To mitigate the lifetime-energy-makespan issues of RWS in cloud computing systems for big data, we propose a novel methodology that decomposes the complicated problem under study: it provides three procedures to address the energy consumption, execution makespan, and hardware lifespan issues of cloud systems executing real-time workflow applications. We conduct extensive simulation experiments to validate the proposed methodology for RWS. Simulation results clearly show that the proposed RWS strategies outperform comparative approaches in reducing energy consumption, shortening execution makespan, and prolonging system lifespan while maintaining high reliability. The improvement in energy saving, reduction in makespan, and increase in lifespan can be up to 23.8%, 18.6%, and 69.2%, respectively. The results also show the potential of the proposed method for developing a distributed big data analysis system serving satellite signal processing, earthquake early warning, and so on.
Task scheduling is one of the most difficult problems associated with cloud computing, since it belongs to the nondeterministic polynomial time (NP)-hard class of problems. Various heuristic as well as meta-heuristic approaches have been used to find near-optimal solutions. Task scheduling basically deals with allocating tasks to the most efficient machines for optimal utilization of the computing resources, resulting in a better makespan. As per the literature, various meta-heuristic algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), and their hybrids have been applied. In this paper, we present a novel meta-heuristic technique, genetic algorithm enabled particle swarm optimization (PSOGA), a hybrid of the PSO and GA algorithms. PSOGA uses the diversification property of PSO and the intensification property of GA. The proposed algorithm shows its superiority over the other techniques considered by achieving a smaller makespan in the majority of cases, yielding up to a 22.2% improvement in system performance, and it also converges faster than the others.
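A minimal sketch of the GA half of such a hybrid is given below: after each PSO velocity-and-position update, the swarm of task-to-VM vectors would be passed through a GA phase of selection, crossover, and mutation. Operators and rates here are illustrative assumptions; the abstract does not specify PSOGA's exact settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def ga_phase(swarm, fitness, pc=0.8, pm=0.02, n_vms=8):
    """GA intensification applied to the PSO swarm each generation:
    tournament selection, one-point crossover, and uniform mutation on
    task->VM assignment vectors. fitness[i] is particle i's makespan
    (smaller is better), so selection keeps the shorter schedules."""
    n, d = swarm.shape
    a, b = rng.integers(0, n, (2, n))              # binary tournament
    parents = np.where((fitness[a] < fitness[b])[:, None], swarm[a], swarm[b])
    children = parents.copy()
    for i in range(0, n - 1, 2):
        if rng.random() < pc:                      # one-point crossover
            cut = rng.integers(1, d)
            children[i, cut:], children[i + 1, cut:] = \
                parents[i + 1, cut:].copy(), parents[i, cut:].copy()
    mutate = rng.random(children.shape) < pm       # uniform mutation
    children[mutate] = rng.integers(0, n_vms, size=mutate.sum())
    return children
```

In the full loop, the swarm returned by ga_phase would be re-evaluated and fed back into the next PSO update, combining PSO's diversification with GA's intensification as the abstract describes.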
This paper proposes a new tri-objective scheduling algorithm, called the Heterogeneous Reliability-Driven Energy-Efficient Duplication-based (HRDEED) algorithm, for heterogeneous multiprocessors. The goal of the algorithm is to minimize the makespan (schedule length) and energy consumption while maximizing the reliability of the generated schedule. Duplication is employed in order to minimize the makespan. Since there is strong interest among researchers in high-performance schedules that consume less energy, the proposed algorithm incorporates energy consumption as an objective. Moreover, in order to deal with processor and link failures, a system reliability model is proposed. The three objectives, i.e., minimizing the makespan and energy while maximizing the reliability, are reconciled using the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), a popular Multi-Criteria Decision-Making (MCDM) technique employed here to rank the generated Pareto-optimal schedules. Simulation results demonstrate the capability of the proposed algorithm to generate short, energy-efficient, and reliable schedules: energy consumption can be reduced by 5–47% and reliability improved by 1–5%, at the cost of a 1–3% increase in makespan.
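TOPSIS itself is standard and can be stated compactly: normalize the criteria matrix, locate the ideal and anti-ideal points, and rank schedules by relative closeness to the ideal. The sketch below applies it to (makespan, energy, reliability) triples; the equal weights and the example numbers are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank candidate schedules with TOPSIS.
    scores[i, j]: value of schedule i on criterion j; weights sum to 1;
    benefit[j] is True when larger is better (reliability) and False when
    smaller is better (makespan, energy)."""
    norm = scores / np.linalg.norm(scores, axis=0)    # vector normalization
    v = norm * weights                                # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    closeness = d_worst / (d_best + d_worst)
    return np.argsort(-closeness)                     # best schedule first

# e.g. three Pareto schedules as (makespan, energy, reliability):
scores = np.array([[100.0, 50.0, 0.990],
                   [ 90.0, 70.0, 0.970],
                   [110.0, 40.0, 0.995]])
order = topsis(scores, np.array([1/3, 1/3, 1/3]),
               np.array([False, False, True]))
```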
This paper develops a hybrid grey wolf optimization algorithm (HGWO) for solving the job shop scheduling problem (JSP) with the objective of minimizing the makespan. Firstly, to make the GWO suitable for the discrete nature of the JSP, an encoding mechanism is proposed to implement continuous encoding of the discrete scheduling problem, and a ranked-order value (ROV) rule is used to convert between individual positions and operation permutations. Secondly, a heuristic algorithm is combined with a random rule to initialize the population, ensuring the quality and diversity of the initial solutions. Thirdly, a variable neighborhood search algorithm is embedded to improve the local search ability of the algorithm. In addition, to further improve solution quality, genetic operators (crossover and mutation) are introduced to balance exploitation and exploration. Finally, experimental results on 23 benchmark instances demonstrate the effectiveness of the proposed algorithm.
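The ROV conversion is the pivotal step that lets a continuous optimizer like GWO drive a discrete JSP, and a common implementation is short enough to show. The sketch assumes the widely used job-repetition encoding; the paper's exact ROV variant may differ.

```python
import numpy as np

def rov_decode(position, n_jobs, n_machines):
    """Ranked-order-value (ROV) rule: rank the continuous position values in
    ascending order; each dimension's rank gives its place in the operation
    permutation. Taking ranks modulo n_jobs yields a job-repetition encoding
    in which each job j appears n_machines times (once per operation)."""
    ranks = np.argsort(np.argsort(position))   # 0-based ROV of each dimension
    return ranks % n_jobs                      # operation sequence of job indices

# e.g. 3 jobs x 2 machines -> a 6-dimensional wolf position:
pos = np.array([0.8, 0.1, 0.5, 0.9, 0.3, 0.6])
print(rov_decode(pos, n_jobs=3, n_machines=2))   # -> [1 0 2 2 1 0]
```

Because the ranks form a permutation of 0..(n_jobs * n_machines - 1), every job index appears exactly n_machines times, so any continuous position decodes to a feasible operation sequence.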
In this paper, a bi-population competition adaptive interior search algorithm (BCAISA) based on a reinforcement learning strategy is proposed for the classical flexible job shop scheduling problem (FJSP) to optimize the makespan. First, the scheduling solution is represented using a machine-job-based two-segment integer encoding, and various heuristic rules are applied to generate the initial population. Secondly, a bi-population mechanism partitions the population into two distinct sub-populations, tailored respectively to machine assignment and operation permutation and employing different search strategies, to facilitate efficient parallel search; a competition mechanism enables information exchange between the two sub-populations. Thirdly, the ISA is adapted to the discrete scheduling problem by discretizing a series of search operators, including composition optimization, mirror search, and random walk. A Q-learning-based approach dynamically adjusts a key parameter to strike a balance between global exploration and local exploitation. Finally, extensive experiments are conducted on 10 well-known FJSP benchmark instances, with the design of experiments (DOE) method used to determine the algorithm’s parameters. The computational results first validate the effectiveness of the four improvement strategies; the BCAISA is then compared with fifteen published algorithms and outperforms them on 50% of the benchmark instances. Additionally, according to the relative percentage deviation (RPD) from state-of-the-art results, the BCAISA exhibits superior performance, highlighting its effectiveness for the classical FJSP. To enhance practical applicability, future work will broaden the scope of the ISA to more complex real-world problems.
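The Q-learning component can be illustrated with a small tabular sketch that adapts one parameter online. The state binning, action set, and reward below are assumptions for illustration; the abstract does not specify BCAISA's exact design.

```python
import numpy as np

class QParamTuner:
    """Tabular Q-learning that adapts one algorithm parameter online, in the
    spirit of BCAISA's Q-learning-based adjustment. States: coarse
    search-progress bins; actions: candidate parameter values (both assumed
    here, not taken from the paper)."""
    def __init__(self, n_states=5, actions=(0.1, 0.3, 0.5, 0.7, 0.9),
                 alpha=0.1, gamma=0.9, eps=0.2, seed=0):
        self.q = np.zeros((n_states, len(actions)))
        self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps
        self.rng = np.random.default_rng(seed)

    def choose(self, state):
        if self.rng.random() < self.eps:            # explore
            return int(self.rng.integers(len(self.actions)))
        return int(self.q[state].argmax())          # exploit

    def update(self, state, action, reward, next_state):
        target = reward + self.gamma * self.q[next_state].max()
        self.q[state, action] += self.alpha * (target - self.q[state, action])

# Per ISA iteration (illustrative): a = tuner.choose(s); run the search with
# parameter tuner.actions[a]; reward = relative makespan improvement;
# tuner.update(s, a, reward, s_next).
```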