Scientific workflow is a common model for organizing large scientific computations. It borrows the concept of workflow from business activities to manage complicated scientific computing processes automatically or semi-automatically. Workflow scheduling, which maps the tasks in a workflow to parallel computing resources, has been studied extensively over the years. In recent years, with the rise of cloud computing as a new large-scale distributed computing model, studying the workflow scheduling problem in the cloud has become highly significant. Compared with traditional distributed computing platforms, cloud platforms have unique characteristics such as a self-service resource management model and a pay-as-you-go billing model; workflow scheduling therefore needs to be reconsidered for the cloud. When scheduling workflows in clouds, both the cloud service providers (CSPs) and the customers are concerned with the monetary cost and the makespan of workflow executions. In this paper, we study a series of cost- and time-aware workflow scheduling algorithms for cloud environments, aiming to give researchers a guide to appropriate cloud workflow scheduling approaches in various scenarios. We conducted a broad review of cloud workflow scheduling algorithms and categorized them by their optimization objectives and constraints. We also discuss possible future research directions in cloud workflow scheduling.
The main objective of the proposed methodology is multi-objective job scheduling using a hybrid whale and bat optimization algorithm (WBAT), which modifies existing solutions and adopts improved ones based on the objective function. The scheduling function in the proposed strategy first creates a set of jobs and cloud nodes, generates a population by assigning jobs to cloud nodes at random, and evaluates a fitness function that minimizes the makespan and maximizes the quality of jobs. Second, it iteratively regenerates the population according to WBAT behavior to produce the best job schedule, i.e., the one with minimum makespan and good job quality. The experimental results show that the proposed method outperforms other job scheduling methods.
Cloud computing has reshaped globally networked resources and lets users share data easily. With the widespread availability of network technologies, user requests increase day by day, and task scheduling is now a foremost challenge in cloud technology. Load placement and task arrangement are two important parameters in the cloud domain that determine the Quality of Service (QoS). In this paper, we formulate the joint minimization of makespan and energy consumption in task scheduling using the Local Pollination-based Gray Wolf Optimizer (LPGWO) algorithm, a hybrid of the Gray Wolf Optimizer (GWO) and the Flower Pollination Algorithm (FPA). GWO contributes its strong search behavior to increase convergence speed, while FPA distributes data to the next packet of candidate solutions using the local pollination concept. Chaotic mapping and opposition-based learning (OBL) are used to provide suitable initial candidate task solutions. The experiments delivered better task scheduling results under both low and high heterogeneity of physical machines, and the comparison with simulation results showed faster convergence together with lower makespan and energy consumption.
The emergence of cloud computing in the big data era has had a substantial impact on our daily lives. Conventional reliability-aware workflow scheduling (RWS) can improve or maintain system reliability through fault tolerance techniques such as replication and checkpointing-based recovery. However, these techniques inevitably result in higher system energy consumption, longer execution times, and worse thermal profiles, which in turn shorten hardware lifespan. To mitigate these lifetime, energy, and makespan issues of RWS in cloud computing systems for big data, we propose a novel methodology that decomposes the studied problem: three procedures address the energy consumption, execution makespan, and hardware lifespan issues in cloud systems executing real-time workflow applications. We implement numerous simulation experiments to validate the proposed methodology for RWS. Simulation results clearly show that the proposed RWS strategies outperform comparative approaches in reducing energy consumption, shortening execution makespan, and prolonging system lifespan while maintaining high reliability: the improvements in energy saving, makespan reduction, and lifespan increase reach up to 23.8%, 18.6%, and 69.2%, respectively. The results also show the method's potential for building distributed big data analysis systems serving satellite signal processing, earthquake early warning, and similar applications.
In this paper, we study the unrelated parallel machine scheduling problem with the objective of minimizing the makespan, which is NP-hard. We apply Simulated Annealing (SA) and Tabu Search (TS) with a Neighborhood Search (NS) based on the structure of the problem. We also use a modified SA algorithm that gives better results than traditional SA, and we develop an effective heuristic for the problem: Squeaky Wheel Optimization (SWO) hybridized with TS. Experimental results average 2.52% above the lower bound, are obtained within acceptable timescales, and improve on the current best results for the problem.
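The abstract does not detail the neighborhood structure or the modification to SA; as a generic illustration of simulated annealing on the unrelated parallel machine problem (not the paper's modified algorithm), a minimal Python sketch with a move-one-job neighborhood might look like:

```python
import math
import random

def makespan(assign, p):
    """Makespan of an assignment: assign[j] = machine of job j,
    p[i][j] = processing time of job j on machine i."""
    loads = [0.0] * len(p)
    for j, i in enumerate(assign):
        loads[i] += p[i][j]
    return max(loads)

def sa_unrelated(p, iters=20000, t0=10.0, alpha=0.9995, seed=0):
    """Plain SA: move one random job to a random machine, accept
    worse moves with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    m, n = len(p), len(p[0])
    cur = [rng.randrange(m) for _ in range(n)]
    cur_cost = makespan(cur, p)
    best, best_cost = cur[:], cur_cost
    t = t0
    for _ in range(iters):
        j = rng.randrange(n)
        old, cur[j] = cur[j], rng.randrange(m)
        cost = makespan(cur, p)
        if cost <= cur_cost or rng.random() < math.exp((cur_cost - cost) / t):
            cur_cost = cost
            if cost < best_cost:
                best, best_cost = cur[:], cost
        else:
            cur[j] = old  # reject the move
        t *= alpha  # geometric cooling
    return best, best_cost
```

The cooling schedule, iteration budget, and move operator are illustrative assumptions; a TS or SWO hybrid would replace the acceptance rule with tabu lists or priority re-sequencing.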
Task scheduling is one of the most difficult problems associated with cloud computing, since it belongs to the nondeterministic polynomial time (NP)-hard class of problems. Various heuristic and meta-heuristic approaches have been used to find near-optimal solutions. Task scheduling deals with allocating tasks to the most suitable machines so as to utilize computing resources optimally and achieve a better makespan. In the literature, meta-heuristic algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), and their hybrids have been applied. In this paper, we present a novel meta-heuristic technique, genetic algorithm enabled particle swarm optimization (PSOGA), a hybrid of PSO and GA. PSOGA uses the diversification property of PSO and the intensification property of GA. The proposed algorithm outperforms the other techniques considered, producing a smaller makespan in the majority of cases, improving system performance by up to 22.2%, and converging faster than the alternatives.
This paper proposes a new tri-objective scheduling algorithm, the Heterogeneous Reliability-Driven Energy-Efficient Duplication-based (HRDEED) algorithm, for heterogeneous multiprocessors. The goal is to minimize the makespan (schedule length) and energy consumption while maximizing the reliability of the generated schedule. Duplication is employed to minimize the makespan. Because researchers are strongly interested in high-performance schedules that also consume less energy, the proposed algorithm incorporates energy consumption as an objective, and a system reliability model is proposed to deal with processor and link failures. The three objectives are balanced using the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), a popular Multi-Criteria Decision-Making (MCDM) technique, which ranks the generated Pareto optimal schedules. Simulation results demonstrate the capability of the proposed algorithm to generate short, energy-efficient, and reliable schedules: energy consumption can be reduced by 5–47% and reliability improved by 1–5%, at the cost of a 1–3% increase in makespan.
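TOPSIS itself is a standard MCDM procedure; as an illustration of how Pareto schedules could be ranked on (makespan, energy, reliability), a minimal sketch follows (the criterion directions and equal weights are assumptions, not taken from the paper):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix[a][c]: value of alternative a on criterion c;
    benefit[c]: True if larger is better (e.g. reliability),
    False if smaller is better (e.g. makespan, energy)."""
    n_alt, n_cri = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(matrix[a][c] ** 2 for a in range(n_alt)))
             for c in range(n_cri)]
    v = [[weights[c] * matrix[a][c] / norms[c] for c in range(n_cri)]
         for a in range(n_alt)]
    ideal = [max(col) if benefit[c] else min(col) for c, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[c] else max(col) for c, col in enumerate(zip(*v))]
    scores = []
    for a in range(n_alt):
        d_best = math.sqrt(sum((v[a][c] - ideal[c]) ** 2 for c in range(n_cri)))
        d_worst = math.sqrt(sum((v[a][c] - worst[c]) ** 2 for c in range(n_cri)))
        scores.append(d_worst / (d_best + d_worst))
    return scores  # higher closeness coefficient = better schedule
```

A schedule that dominates another on all three criteria always receives the higher closeness score, which is why TOPSIS is a reasonable tie-breaker over a Pareto front.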
Scheduling is a major means of improving shop-floor productivity. The job shop problem falls in this category and is combinatorial in nature. Research on optimizing the job shop problem is one of the most significant and promising areas of optimization. This paper presents an application of the Ant Colony Optimization metaheuristic to the job shop problem. The main characteristics of this model are positive feedback and distributed computation. Parameter settings strongly influence solution quality on job shop instances. An algorithm is introduced that improves the basic ant colony system through a pheromone updating strategy and analyzes solution quality for different parameter values. We present a statistical analysis for parameter tuning and compare the quality of the solutions obtained by the proposed method with competing algorithms from the literature on well-known job shop scheduling benchmarks.
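The abstract does not specify the improved pheromone updating strategy; a minimal sketch of the standard evaporation-plus-deposit update that such strategies build on is shown below (the evaporation rate rho, deposit constant q, and edge-list path encoding are illustrative assumptions):

```python
def update_pheromone(tau, ants, rho=0.1, q=1.0):
    """Global pheromone update: evaporate every trail by factor (1 - rho),
    then deposit q / makespan on the edges of each ant's solution,
    so schedules with smaller makespan deposit more pheromone."""
    for i in range(len(tau)):
        for j in range(len(tau[i])):
            tau[i][j] *= (1 - rho)
    for path, makespan in ants:
        for (i, j) in path:
            tau[i][j] += q / makespan
    return tau
```

Elitist or rank-based variants restrict the deposit step to the best ants; the evaporation step is what prevents premature convergence to early trails.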
This paper discusses flow shop scheduling problems using the shortest processing time, earliest due date (EDD), Nawaz-Enscore-Ham (NEH), NEH-EDD, and modified-NEH methods. The objective of this research is to determine the performance of these methods in minimizing makespan and total tardiness. Processing times and due dates were randomly generated, and computational studies were performed in Microsoft Visual Basic 6.0 on small and medium data sets. Efficiency index, relative error, and run time measure the performance of each method. Experimental results showed that NEH performs best at minimizing makespan in both data sets: 53.35 time units for small data sets and 83.803 time units for medium data sets. NEH-EDD performs best at minimizing total tardiness, with 9.37 time units for small data sets and 231.02 time units for medium data sets. Modified-NEH, the proposed method for minimizing makespan and total tardiness simultaneously, gives good results on both objectives: makespans of 57.15 time units for small data sets and 88.107 time units for medium data sets, and total tardiness of 14.21 time units for small data sets and 246.57 time units for medium data sets.
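The NEH insertion logic is standard; a compact Python sketch is given below (this is not the paper's VB6 implementation, and the modified-NEH variant is not reproduced):

```python
def fs_makespan(seq, p):
    """Makespan of job sequence seq in a permutation flow shop;
    p[j][m] = processing time of job j on machine m."""
    n_m = len(p[0])
    c = [0.0] * n_m  # running completion time on each machine
    for j in seq:
        c[0] += p[j][0]
        for m in range(1, n_m):
            c[m] = max(c[m], c[m - 1]) + p[j][m]
    return c[-1]

def neh(p):
    """NEH: order jobs by decreasing total processing time, then insert
    each job at the position minimizing the partial-sequence makespan."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in order:
        seq = min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)),
                  key=lambda s: fs_makespan(s, p))
    return seq, fs_makespan(seq, p)
```

NEH-EDD replaces the initial ordering with earliest due date; that one-line change is why the family of variants compared in the paper shares this insertion skeleton.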
We address a generalization of the classical job-shop problem called the hybrid job-shop problem, with minimization of makespan and mean flow time as criteria. In the hybrid job-shop, machines of type k are available for processing a specific subset 𝒪k of the given operations, and each set 𝒪k may be partitioned into subsets for processing on the machines of type k. Solving the hybrid job-shop problem involves two subproblems: assigning all operations from the set 𝒪k to the machines of type k, and finding optimal sequences of the operations on each machine. In this paper, a genetic algorithm is developed to solve these two subproblems simultaneously, using a special chromosome based on a mixed graph model. We compare our genetic algorithm with a branch-and-bound algorithm and three other recent heuristic algorithms from the literature. Computational results on benchmark instances with 10 jobs and up to 50 machines show that the proposed genetic algorithm is efficient for both criteria: compared with the other heuristics, it most often gives an optimal solution, and its average percentage deviation from the optimal function value is about 4%.
This paper develops a new harmony search (HS) algorithm for the hybrid flow shop scheduling problem with multiprocessor tasks, which is known to be NP-hard in the strong sense; the goal is to minimize the makespan. The proposed HS starts by applying an opposition-based learning (OBL) method to increase the diversity of the initial harmony memory (HM). Furthermore, we use novel improvisation rules for producing new harmonies based on crossover and mutation operators, and neighborhood techniques are employed to enhance the search and improve solution quality. The proposed HS is compared with three algorithms from the literature, and the computational experiments demonstrate its effectiveness in terms of makespan.
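The paper's exact OBL initialization is not spelled out in the abstract; the usual construction evaluates each random vector together with its opposite lb + ub − x and keeps the fitter half of the combined pool, as in this sketch (the fitness function and bounds are illustrative):

```python
import random

def obl_init(pop_size, dim, lb, ub, fitness, seed=0):
    """Opposition-based initialization: for each random harmony x,
    also generate its opposite x' = lb + ub - x, then keep the
    pop_size fittest vectors of the doubled pool (minimization)."""
    rng = random.Random(seed)
    pool = []
    for _ in range(pop_size):
        x = [rng.uniform(lb, ub) for _ in range(dim)]
        opp = [lb + ub - xi for xi in x]
        pool.extend([x, opp])
    pool.sort(key=fitness)
    return pool[:pop_size]
```

The idea is that either a candidate or its opposite is, on average, closer to the optimum than a fresh random draw, so the initial HM starts from a more diverse and fitter base.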
Scheduling jobs and tools is a significant problem for manufacturing systems. Inefficient job scheduling and tool loading planning may result in underutilization of capital-intensive machines and a high level of machine idle time; efficient scheduling of jobs and tools therefore lets a manufacturing system increase machine utilization and decrease idle time. This paper addresses the joint scheduling of machines and tools with alternate machines in a multimachine flexible manufacturing system (FMS) to minimize makespan (MSN). Because tools are expensive, only one copy of each tool type is made available in the FMS; the tools are stored in a central tool magazine (CTM), which shares and serves them to several machines to avoid the cost of duplicating tools at every machine. The problem is to select machines from the alternates for job operations, allocate tools to the job operations, and sequence the job operations on the machines to minimize MSN. This paper presents a nonlinear mixed integer programming (MIP) formulation of this simultaneous scheduling problem and a crow search algorithm (CSA), built on crows’ intelligent behavior, for solving it. The results show that CSA provides better solutions than the Jaya algorithm and that using alternate machines for the operations can reduce MSN.
Scheduling jobs and tools is a significant problem for manufacturing systems. Inefficient job scheduling and tool loading planning may result in underutilization of capital-intensive machines and high machine idle time. Tools are transferred between machines by a tool transporter (TT), so efficient scheduling of jobs, the TT, and the tools with alternate machines enables a manufacturing system to increase machine utilization and decrease idle time. This paper addresses the concurrent scheduling of machines, the TT, and tools with alternate machines in a multi-machine flexible manufacturing system, where tools are costly and only one copy of each tool type is available, taking tool transfer times into account for makespan (MS) minimization. The tools are placed in a central tool magazine, which shares and serves them to many machines to reduce the cost of duplicating tools at each machine. The problem is to assign machines from the alternates, tools, and the corresponding TT trips, including deadheading and loaded trips, to job operations so as to minimize the MS. The problem is formulated as a nonlinear mixed-integer program, and a flower pollination algorithm (FPA) is employed to solve it. The results show that FPA outperforms the Jaya algorithm and that using alternate machines for the operations reduces the MS, indicating improved resource utilization.
This paper addresses the simultaneous scheduling of machines and automated guided vehicles (AGVs) with alternative machines in a multi-machine flexible manufacturing system (FMS) to produce optimal sequences that minimize the makespan (MKSN). The problem is highly complex because it involves selecting machines for job operations, sequencing the operations on the machines, and assigning AGVs and their associated trips, including deadheading and loaded trip times, to the operations. This paper offers a nonlinear mixed integer programming (MIP) formulation for modeling the problem and the symbiotic organisms search algorithm (SOSA) for solving it. For verification, an industrial problem from a manufacturing company is employed. The results show that SOSA outperforms the existing methods, including the Jaya algorithm, and that using alternate machines for the operations can reduce the MKSN.
In this paper, a multi-objective permutation flow shop scheduling problem with sequence-independent setup times is studied. The bi-objective function we consider is a linear combination of the makespan and the total flow time, with a weight for each criterion. We propose a set of exact and approximate methods to minimize the makespan and total flow time in our flow shop optimization case; our main goal is to find the job sequence that minimizes both criteria. We solve the problem with a mixed integer linear programming (MILP) model and a collection of efficient metaheuristics for different instance sizes. Three metaheuristics are used: the Genetic Algorithm (GA), the Iterated Local Search (ILS) algorithm, and the Iterated Greedy (IG) algorithm, each implemented with two neighborhood-exploration schemes. To test the efficacy of our approach, series of instances with n jobs and m machines were generated randomly, ranging from small to relatively large. The simulations show that, for medium and large instances, the neighborhood-based IG achieves the best performance compared with the other metaheuristics.
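All of the listed methods share one evaluation routine; a sketch of it is given below, with the sequence-independent setup simply added to each operation's processing time (an assumption — the abstract does not fix the setup model):

```python
def flowshop_objectives(seq, p, s):
    """Completion-time recursion for a permutation flow shop.
    p[j][m]: processing time of job j on machine m;
    s[j][m]: sequence-independent setup time, added to processing.
    Returns (makespan, total flow time)."""
    n_m = len(p[0])
    c = [0.0] * n_m
    total_flow = 0.0
    for j in seq:
        c[0] += s[j][0] + p[j][0]
        for m in range(1, n_m):
            c[m] = max(c[m], c[m - 1]) + s[j][m] + p[j][m]
        total_flow += c[-1]  # completion time of job j
    return c[-1], total_flow

def weighted_objective(seq, p, s, w=0.5):
    """Linear combination of the two criteria with weight w on makespan."""
    cmax, tft = flowshop_objectives(seq, p, s)
    return w * cmax + (1 - w) * tft
```

The MILP and all three metaheuristics then differ only in how they search over `seq` while this function scores each candidate.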
Non-permutation flow shop scheduling (NPFS) is an extension of traditional permutation flow shop scheduling with a broader solution space. Reinforcement learning has proven effective for flow shop scheduling problems thanks to its powerful combinatorial optimization capabilities. However, designing and training an end-to-end policy network is complex, leading to long online training times and limited adaptability of offline training. To overcome these problems, we introduce an NPFS dynamic decision-making process and propose a novel NPFS method that combines the Q-learning algorithm with a priority dispatching rule (PDR) set. The dynamic decision-making process decomposes the entire scheduling process into multiple sub-job queues, on which PDRs perform better because the queues are small. Using Q-learning, well-performing PDRs are assigned to the sub-scheduling queues based on how the sub-job sequences are generated, yielding an optimized NPFS strategy. The limited number of PDRs in the set and the small number of sub-job queues make Q-learning efficient on the NPFS problem. Finally, we demonstrate the superiority of the proposed method in a series of numerical experiments, using the machine-based assignment PDR method and the NSGA-II algorithm as performance benchmarks.
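The paper's state and reward design is not given in the abstract; as a toy illustration of selecting a PDR from a rule set via Q-learning, the sketch below uses a single state, two rules, and negative total flow time as the reward (all assumptions). On a single machine, SPT provably minimizes total flow time, so its Q-value should come out ahead:

```python
import random

# a tiny PDR set: shortest / longest processing time first
PDRS = {"SPT": lambda jobs: sorted(jobs),
        "LPT": lambda jobs: sorted(jobs, reverse=True)}

def train_pdr_selector(episodes=500, eps=0.2, alpha=0.1, seed=1):
    """Toy Q-learning over a PDR set: one state, actions = PDRs,
    reward = negative total flow time of the dispatched sub-job queue."""
    rng = random.Random(seed)
    q = {name: 0.0 for name in PDRS}
    for _ in range(episodes):
        jobs = [rng.randint(1, 9) for _ in range(6)]  # one sub-job queue
        # epsilon-greedy action selection over the rule set
        name = (rng.choice(list(PDRS)) if rng.random() < eps
                else max(q, key=q.get))
        t, flow = 0, 0
        for p in PDRS[name](jobs):
            t += p
            flow += t  # completion time of this job
        q[name] += alpha * (-flow - q[name])  # single-step Q-update
    return q
```

A faithful reproduction would carry a richer state (queue features, machine loads) and a larger PDR set, but the selection-and-update loop has this shape.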
This paper develops a hybrid grey wolf optimization algorithm (HGWO) for solving the job shop scheduling problem (JSP) with the objective of minimizing the makespan. First, to adapt the GWO to the discrete nature of the JSP, an encoding mechanism implements continuous encoding of the discrete scheduling problem, and a ranked-order value (ROV) rule performs the conversion between individual positions and operation permutations. Second, a heuristic algorithm and a random rule are combined to initialize the population, ensuring the quality and diversity of the initial solutions. Third, a variable neighborhood search algorithm is embedded to improve the local search ability of the algorithm. In addition, genetic operators (crossover and mutation) are introduced to balance exploitation and exploration and further improve solution quality. Finally, experimental results on 23 benchmark instances demonstrate the effectiveness of the proposed algorithm.
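The ROV rule is a common trick for applying continuous metaheuristics to permutation problems; a minimal sketch (the 0-based rank convention is an assumption):

```python
def rov(position):
    """Ranked-order-value rule: map a continuous position vector to a
    discrete permutation by ranking its components ascending, so the
    smallest component receives rank 0, the next smallest rank 1, etc."""
    order = sorted(range(len(position)), key=lambda i: position[i])
    ranks = [0] * len(position)
    for rank, i in enumerate(order):
        ranks[i] = rank
    return ranks
```

Because ranking is invariant to monotone changes, the wolves can keep updating real-valued positions while every individual still decodes to a valid operation permutation.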
In this paper, a bi-population competition adaptive interior search algorithm (BCAISA) based on a reinforcement learning strategy is proposed for the classical flexible job shop scheduling problem (FJSP) to optimize the makespan. First, the scheduling solution is represented using a machine-job-based two-segment integer encoding, and various heuristic rules are applied to generate the initial population. Second, a bi-population mechanism partitions the population into two sub-populations tailored to machine assignment and operation permutation, respectively, each employing different search strategies to enable efficient parallel search; a competition mechanism facilitates information exchange between them. Third, the ISA is adapted to the discrete scheduling problem by discretizing a series of search operators, including composition optimization, mirror search, and random walk, and a Q-learning-based approach dynamically adjusts a key parameter to balance global exploration and local exploitation. Finally, extensive experiments are conducted on 10 well-known FJSP benchmark instances, with the design of experiments (DOE) method used to determine the algorithm's parameters. The computational results first validate the effectiveness of the four improvement strategies; BCAISA is then compared with fifteen published algorithms and outperforms them on 50% of the benchmark instances. In terms of relative percentage deviation (RPD) from state-of-the-art results, BCAISA also exhibits superior performance, highlighting its effectiveness for the classical FJSP. To enhance practical applicability, future work will broaden the scope of the ISA to more complex real-world problems.
The Flow Shop Scheduling Problem (FSSP) is commonly faced by master production scheduling planners in Flexible Manufacturing Systems (FMS). The planner must find an optimal schedule for carrying out a set of jobs so as to satisfy a predefined objective (e.g., makespan). All jobs are processed on a production line composed of a set of shared machines, and in the same sequence. To analyze this problem properly, it must be represented in a way that exposes the relationships among the operations. Thus, an FMS exhibiting the FSSP can be modeled with Petri nets (PNs), a powerful tool for modeling and analyzing discrete event systems; the makespan can then be obtained by simulating the PN through token game animation. In this work, we propose a new way to calculate the makespan of the FSSP based on timed place PNs.