In this paper, a multi-objective permutation flow shop scheduling problem with sequence-independent setup times is studied. The bi-objective function we consider is a linear combination of the makespan and the total flow time, with a weight factor for each criterion. We propose a set of exact and approximate methods to minimize these two criteria; our main goal is to find the sequence of jobs that minimizes both the makespan and the total flow time. The problem is solved with a mixed integer linear programming (MILP) model and a collection of efficient metaheuristics for different instance sizes. Three metaheuristics are used: the Genetic Algorithm (GA), the Iterative Local Search (ILS) algorithm and the Iterated Greedy (IG) algorithm, each proposed with two ways of exploring the neighborhood. To test the efficacy of our resolution approach, several series of instances with n jobs and m machines are generated randomly, ranging from small to relatively large. The analysis of the simulations shows that, for medium- and large-scale instances, the neighborhood-exploration variant of IG achieves the best performance compared with the other metaheuristics.
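To make the bi-objective concrete, the following is a minimal sketch (not the authors' code) of how a single permutation can be evaluated in a permutation flow shop: completion times follow the standard recursion, and the makespan and total flow time are combined with assumed weights. Any sequence-independent setup time is assumed to be folded into the processing times.

```python
# Illustrative sketch: evaluating the weighted bi-objective for one job
# permutation in a permutation flow shop. p[i][j] is the processing time of
# job j on machine i; the weights are assumptions, not the paper's values.

def evaluate(permutation, p, w_makespan=0.5, w_flowtime=0.5):
    m = len(p)                 # number of machines
    n = len(permutation)       # number of jobs
    C = [[0.0] * n for _ in range(m)]
    for k, job in enumerate(permutation):
        for i in range(m):
            prev_machine = C[i - 1][k] if i > 0 else 0.0
            prev_job = C[i][k - 1] if k > 0 else 0.0
            C[i][k] = max(prev_machine, prev_job) + p[i][job]
    makespan = C[m - 1][n - 1]
    total_flow_time = sum(C[m - 1][k] for k in range(n))
    return w_makespan * makespan + w_flowtime * total_flow_time

# Example: 2 machines, 3 jobs
p = [[3, 2, 4],   # machine 1
     [2, 5, 1]]   # machine 2
print(evaluate([0, 1, 2], p))   # weighted sum of makespan 11 and flow time 26
```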
In this paper, we consider the scheduling of high-multiplicity jobs on parallel multi-purpose machines with setup times and machine available times, with the objective of minimizing the makespan. High multiplicity means that jobs are partitioned into several groups and all jobs within a group are identical. Whenever processing switches from a job of one group to a job of another group, a setup time is needed. Multi-purpose machines imply that each job can only be processed by a specific subset of the machines, called its processing set. A mixed integer programming model is formulated for this NP-hard problem, and a heuristic is proposed to solve it. Lower bounds are developed to evaluate the heuristic algorithm. Extensive numerical computations show that the heuristic generates solutions with makespan within 2% of the lower bounds on average and outperforms CPLEX 12.6 on large-scale and complex problems.
This paper presents a hybrid approach based on discrete Particle Swarm Optimization (PSO) and chaotic strategies for solving the multi-objective task scheduling problem in cloud computing. The main purpose is to allocate the submitted tasks to the available resources in the cloud environment with minimum makespan (i.e. schedule length) and processing cost while maximizing resource utilization, without violating the Service Level Agreement (SLA) between users and cloud providers. The main challenges faced by PSO when applied to scheduling problems are premature convergence and trapping in local optima. This paper therefore presents an enhanced PSO algorithm hybridized with chaotic map strategies, called the Enhanced Particle Swarm Optimization based Chaotic Strategies (EPSOCHO) algorithm. Our approach employs two chaotic map strategies, the sinusoidal iterator and the Lorenz attractor, to enhance the PSO algorithm and obtain good convergence and diversity when optimizing task scheduling in cloud computing. The proposed approach is simulated and implemented in the CloudSim simulator. Its performance is compared with the standard PSO algorithm, the improved PSO algorithm with longest job to fastest processor (LJFP-PSO), and the improved PSO algorithm with minimum completion time (MCT-PSO) using different task sizes and various benchmark datasets. The results clearly demonstrate the efficiency of the proposed approach in terms of makespan, processing cost and resource utilization.
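The abstract names two chaotic generators; a minimal sketch of both is given below. How EPSOCHO actually injects the chaotic sequences into PSO (for example, into the inertia weight or in place of the uniform random numbers of the velocity update) is an assumption and is not stated in the abstract.

```python
import numpy as np

# Illustrative sketch of the two chaotic generators named in the abstract.
# Parameter values (a, sigma, rho, beta, dt) are common defaults, not the
# paper's settings.

def sinusoidal_iterator(x0=0.7, a=2.3, steps=100):
    """Sinusoidal chaotic map: x_{k+1} = a * x_k^2 * sin(pi * x_k)."""
    xs = [x0]
    for _ in range(steps - 1):
        xs.append(a * xs[-1] ** 2 * np.sin(np.pi * xs[-1]))
    return np.array(xs)

def lorenz_attractor(state=(1.0, 1.0, 1.0), sigma=10.0, rho=28.0,
                     beta=8.0 / 3.0, dt=0.01, steps=100):
    """Lorenz system integrated with a simple Euler step."""
    x, y, z = state
    traj = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj.append((x, y, z))
    return np.array(traj)

# One possible usage: a chaotic sequence replaces the uniform random numbers
# in the PSO velocity update to improve swarm diversity.
print(sinusoidal_iterator(steps=5))
print(lorenz_attractor(steps=3))
```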
In this paper, a bi-population competition adaptive interior search algorithm (BCAISA) based on a reinforcement learning strategy is proposed for the classical flexible job shop scheduling problem (FJSP) to optimize the makespan. First, the scheduling solution is represented using a machine-job-based two-segment integer encoding method, and various heuristic rules are then applied to generate the initial population. Second, a bi-population mechanism is introduced to partition the population into two distinct sub-populations, specifically tailored for machine assignment and operation permutation and each employing different search strategies, to facilitate efficient parallel search. A competition mechanism is introduced to facilitate information exchange between the two sub-populations. Third, the ISA is adapted for the discrete scheduling problem by discretizing a series of search operators, including composition optimization, mirror search, and random walk. A Q-learning-based approach is proposed to dynamically adjust a key parameter, aiming to strike a balance between global exploration and local exploitation. Finally, extensive experiments are conducted on 10 well-known benchmark instances of the FJSP. The design of experiments (DOE) method is employed to determine the algorithm's parameters. Based on the computational results, the effectiveness of the four improvement strategies is first validated. The BCAISA is then compared with fifteen published algorithms. The comparative data demonstrate that our algorithm outperforms the other algorithms on 50% of the benchmark instances. Additionally, according to the relative percentage deviation (RPD) from the state-of-the-art results, the BCAISA also exhibits superior performance. This highlights the effectiveness of our algorithm for solving the classical FJSP. To enhance its practical application, the scope of the ISA will be broadened in future work to more complex problems in real-world scenarios.
The emergence of cloud computing in the big data era has exerted a substantial impact on our daily lives. Conventional reliability-aware workflow scheduling (RWS) is capable of improving or maintaining system reliability through fault-tolerance techniques such as replication and checkpoint-based recovery. However, the fault-tolerance techniques used in RWS inevitably result in higher system energy consumption, longer execution time, and worse thermal profiles, which in turn lead to a decreased hardware lifespan. To mitigate the lifetime-energy-makespan issues of RWS in cloud computing systems for big data, we propose a novel methodology that decomposes the complicated problem under study. In this methodology, we provide three procedures to address the energy consumption, execution makespan, and hardware lifespan issues in cloud systems executing real-time workflow applications. We conduct numerous simulation experiments to validate the proposed methodology for RWS. Simulation results clearly show that the proposed RWS strategies outperform comparative approaches in reducing energy consumption, shortening execution makespan, and prolonging system lifespan while maintaining high reliability. The improvement in energy saving, reduction in makespan, and increase in lifespan can be up to 23.8%, 18.6%, and 69.2%, respectively. The results also show the potential of the proposed method for developing a distributed big data analysis system serving satellite signal processing, earthquake early warning, and similar applications.
Non-permutation flow shop scheduling (NPFS) is an extension of traditional permutation flow shop scheduling with a broader solution space. The effectiveness of reinforcement learning in solving flow shop scheduling problems has been demonstrated through its powerful combinatorial optimization capabilities. However, designing and training an end-to-end policy network is complex, leading to long online training times and limited adaptability of offline training. To overcome these problems, we introduce an NPFS dynamic decision-making process and propose a novel NPFS method that combines the Q-learning algorithm with a priority dispatching rule (PDR) set. The NPFS dynamic decision-making process decomposes the entire schedule into multiple sub-job queues. PDRs demonstrate better scheduling performance when applied to smaller job queues. Using the Q-learning algorithm, the best-performing PDRs are assigned to the sub-scheduling queues according to the generation process of the sub-job sequences, resulting in an optimized NPFS strategy. The limited number of PDRs in the PDR set and the small number of sub-job queues in the NPFS process make Q-learning efficient for solving the NPFS problem. Finally, we demonstrate the superiority of the proposed NPFS method through a series of numerical experiments, using the machine-based assignment PDR method and the NSGA-II algorithm as performance benchmarks.
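A minimal sketch of the general idea follows, assuming a tabular Q-learning agent whose states are sub-queue indices and whose actions are PDRs. The PDR set (SPT/LPT/FIFO), the reward signal (negative partial makespan via a user-supplied `simulate` callable), and the hyperparameters are illustrative assumptions, not the paper's design.

```python
import random
from collections import defaultdict

# Generic tabular Q-learning sketch for picking one PDR per sub-job queue.
# `simulate(s, pdr)` is a hypothetical callable returning the partial
# makespan obtained when sub-queue s is dispatched with the chosen PDR.

PDRS = ["SPT", "LPT", "FIFO"]

def q_learning(num_subqueues, episodes, simulate, alpha=0.1, gamma=0.9, eps=0.2):
    Q = defaultdict(float)                               # Q[(subqueue, pdr)]
    for _ in range(episodes):
        for s in range(num_subqueues):
            if random.random() < eps:
                a = random.choice(PDRS)                  # explore
            else:
                a = max(PDRS, key=lambda p: Q[(s, p)])   # exploit
            reward = -simulate(s, a)                     # lower makespan -> higher reward
            next_best = (max(Q[(s + 1, p)] for p in PDRS)
                         if s + 1 < num_subqueues else 0.0)
            Q[(s, a)] += alpha * (reward + gamma * next_best - Q[(s, a)])
    # Final policy: the best PDR found for each sub-queue
    return [max(PDRS, key=lambda p: Q[(s, p)]) for s in range(num_subqueues)]
```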
Scientific workflows are a common model for organizing large scientific computations. They borrow the concept of workflow from business activities to manage complicated processes in scientific computing automatically or semi-automatically. Workflow scheduling, which maps tasks in workflows to parallel computing resources, has been extensively studied over the years. With the rise of cloud computing as a new large-scale distributed computing model, it has become important to study the workflow scheduling problem in the cloud. Compared with traditional distributed computing platforms, cloud platforms have unique characteristics such as the self-service resource management model and the pay-as-you-go billing model; therefore, workflow scheduling in the cloud needs to be reconsidered. When scheduling workflows in clouds, the monetary cost and the makespan of workflow executions are of concern to both cloud service providers (CSPs) and customers. In this paper, we study a series of cost- and time-aware workflow scheduling algorithms in cloud environments, aiming to provide researchers with a choice of appropriate cloud workflow scheduling approaches for various scenarios. We conducted a broad review of cloud workflow scheduling algorithms and categorized them based on their optimization objectives and constraints. We also discuss possible future research directions for cloud workflow scheduling.
Scheduling jobs and tools is a significant problem for manufacturing systems. Inefficient job scheduling and tool loading planning may result in the underutilization of capital-intensive machines and a high level of machine idle time. Efficient scheduling of jobs and tools therefore enables a manufacturing system to increase machine utilization and decrease idle time. This paper addresses the joint scheduling of machines and tools with alternate machines in a multi-machine flexible manufacturing system (FMS) to minimize the makespan (MSN). Because tools are expensive, only one copy of each tool type is made available in the FMS. The tools are stored in a central tool magazine (CTM), which shares and serves them to several machines to avoid the cost of duplicating tools at every machine. The problem is to select machines from the alternate machines for job operations, allocate tools to the job operations, and sequence the job operations on the machines so as to minimize the MSN. This paper presents a nonlinear mixed integer programming (MIP) formulation to model this simultaneous scheduling problem and a crow search algorithm (CSA), built on the intelligent behavior of crows, for solving it. The results show that the CSA provides better solutions than the Jaya algorithm and that the use of alternate machines for the operations can reduce the MSN.
This paper addresses the simultaneous scheduling of machines and automated guided vehicles (AGVs) with alternative machines in a multi-machine flexible manufacturing system (FMS) to produce optimal sequences that minimize the makespan (MKSN). This problem is highly complex because it involves selecting machines for job operations (jb-ons), sequencing jb-ons on the machines, assigning AGVs, and assigning the associated trips, such as AGV deadheaded-trip and loaded-trip times, to jb-ons. This paper offers a nonlinear mixed integer programming (MIP) formulation for modeling the problem and the symbiotic organisms search algorithm (SOSA) for solving it. An industrial problem from a manufacturing company is used for verification. The results show that SOSA outperforms the existing methods and the Jaya algorithm, and that using alternate machines for the operations can reduce the MKSN.
Scheduling jobs and tools is a significant problem for manufacturing systems. Inefficient job scheduling and tool loading planning may result in the underutilization of capital-intensive machines and a high level of machine idle time. Tools are transferred between machines by a tool transporter (TT). Efficient scheduling of jobs, the TT, and the tools with alternate machines therefore enables a manufacturing system to increase machine utilization and decrease idle time. This paper addresses the concurrent scheduling of machines, the TT, and tools with alternate machines in a multi-machine flexible manufacturing system, taking tool transfer times into account, for makespan (MS) minimization; because tools are costly, only one copy of each tool variety is made available. The tools are placed in a central tool magazine, which shares and serves them to many machines to avoid the cost of duplicating tools at each machine. The problem is to assign machines from the alternate machines, tools, and the corresponding TT trips, including the deadheading trip and the loaded trip, to job operations so as to minimize the MS. The problem is formulated as a nonlinear mixed-integer program, and a flower pollination algorithm (FPA) is employed to solve it. The results show that the FPA outperforms the Jaya algorithm and that the use of alternate machines for the operations reduces the MS. The reduction in MS indicates improved resource utilization.
This paper develops a hybrid grey wolf optimization algorithm (HGWO) for solving the job shop scheduling problem (JSP) with the objective of minimizing the makespan. Firstly, to make the GWO suitable for the discrete nature of the JSP, an encoding mechanism is proposed to implement continuous encoding of the discrete scheduling problem, and a ranked-order value (ROV) rule is used to convert between individual positions and operation permutations. Secondly, a heuristic algorithm and a random rule are combined to initialize the population, ensuring the quality and diversity of the initial solutions. Thirdly, a variable neighborhood search algorithm is embedded to improve the local search ability of the algorithm. In addition, to further improve solution quality, genetic operators (crossover and mutation) are introduced to balance exploitation and exploration. Finally, experimental results on 23 benchmark instances demonstrate the effectiveness of the proposed algorithm.
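The ranked-order value (ROV) rule mentioned above is a standard continuous-to-discrete mapping; a minimal sketch is given below. How the resulting permutation is decoded into JSP operations is paper-specific and not reproduced here.

```python
import numpy as np

# Sketch of the ranked-order-value (ROV) rule: a continuous position vector
# is mapped to a discrete permutation by ranking its components in ascending
# order (ties broken by index).

def rov(position):
    order = np.argsort(position, kind="stable")   # indices of ascending values
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(position) + 1)
    return ranks                                  # smallest value -> rank 1

print(rov(np.array([0.8, 0.1, 0.5, 0.3])))        # -> [4 1 3 2]
```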
This paper develops a new harmony search (HS) algorithm for the hybrid flow shop scheduling problem with multiprocessor tasks, which is known to be NP-hard in the strong sense. The goal is to minimize the makespan. The proposed HS starts by applying an opposition-based learning (OBL) method to increase the diversity of the initial harmony memory (HM). Furthermore, we use novel improvisation rules to produce new harmonies based on crossover and mutation operators, and neighborhood techniques are employed to enhance the search and improve solution quality. The proposed HS is compared with three algorithms from the literature, and the computational experiments demonstrate its effectiveness in terms of makespan.
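Opposition-based learning is a standard diversification technique; the sketch below shows one common way to seed a harmony memory with it, assuming real-valued harmony vectors in a box [lb, ub] and a placeholder fitness function. The paper's schedule decoding and exact OBL variant are not specified in the abstract.

```python
import numpy as np

# Sketch of OBL-based initialization of the harmony memory: each random
# harmony x in [lb, ub] gets an "opposite" lb + ub - x, and the better half
# of the combined pool is kept (minimization). `fitness` is a placeholder.

def obl_init(hm_size, dim, lb, ub, fitness, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(hm_size, dim))
    X_opp = lb + ub - X                          # opposite population
    pool = np.vstack([X, X_opp])
    scores = np.apply_along_axis(fitness, 1, pool)
    best = np.argsort(scores)[:hm_size]          # keep the best harmonies
    return pool[best]

# Example with a dummy fitness function
print(obl_init(4, 3, 0.0, 1.0, fitness=lambda x: np.sum(x ** 2)))
```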
Fog computing is emerging as a dominant modern computing model for delivering Internet of Things (IoT) computations; it extends the cloud computing paradigm to make it possible to execute IoT requests at the network edge. In such an independent and dispersed environment, resource allocation is vital, and scheduling becomes a challenge for improving efficiency and allocating resources to tasks properly. This paper offers a distinct task scheduling algorithm for the fog computing environment that aims to reduce the makespan and maximize resource utilization. The algorithm prioritizes tasks based on the mean Sufferage value. The suggested algorithm achieves high resource utilization and reduces the makespan. It is compared with several existing scheduling algorithms for performance evaluation, and the test results confirm that it attains a higher resource utilization rate and a lower makespan than other well-known algorithms.
The n-job, m-machine permutation flow shop scheduling problem with proportional deterioration is considered, in which all machines process the jobs in the same order, i.e., a permutation schedule. Proportional deterioration means that a job's actual processing time is an increasing function proportional to a linear function of its start time. The objective is to minimize the makespan, i.e., the maximum completion time. We show that some special cases of the problem are polynomially solvable when certain dominance relationships among the m−1 machines are satisfied. For the general case, we propose a heuristic algorithm and report computational experiments.
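As an illustration of the model, the makespan of a permutation can be computed as below under a common proportional-linear deterioration assumption: the actual processing time of job j on machine i, started at time t, is b[i][j] * (a + c*t). This is an assumption consistent with the abstract's description, not necessarily the paper's exact definition.

```python
# Illustrative makespan computation for a permutation flow shop with
# proportional-linear deterioration. b[i][j] is the normal (basic) processing
# requirement of job j on machine i; a and c are deterioration coefficients.

def makespan_deteriorating(permutation, b, a=1.0, c=0.1):
    m, n = len(b), len(permutation)
    C = [[0.0] * n for _ in range(m)]
    for k, job in enumerate(permutation):
        for i in range(m):
            start = max(C[i - 1][k] if i > 0 else 0.0,
                        C[i][k - 1] if k > 0 else 0.0)
            C[i][k] = start + b[i][job] * (a + c * start)
    return C[m - 1][n - 1]

b = [[2, 1, 3],
     [1, 2, 1]]
print(makespan_deteriorating([0, 1, 2], b))
```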
Task scheduling is one of the most difficult problems associated with cloud computing, as it belongs to the class of NP-hard problems. Various heuristic and meta-heuristic approaches have been used to find the optimal solution. Task scheduling deals with allocating tasks to the most efficient machines so as to use computing resources optimally and obtain a better makespan. In the literature, various meta-heuristic algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO) and their hybrids have been applied. In this paper, we present a novel meta-heuristic technique, genetic algorithm enabled particle swarm optimization (PSOGA), a hybrid of PSO and GA. PSOGA uses the diversification property of PSO and the intensification property of GA. The proposed algorithm outperforms the other techniques considered, yielding a shorter makespan in the majority of cases, with up to a 22.2% improvement in system performance, and converges faster than the others.
We consider two-machine flow shop problems with exact delays. In this model, there are two machines: an upstream machine and a downstream machine. Each job j has two operations: the first must be processed on the upstream machine and the second on the downstream machine, subject to the constraint that the time interval between the completion of the first operation and the start of the second is exactly equal to a given job-dependent delay. We concentrate on the makespan and total completion time objectives. For the makespan objective, we first show that the problem is strongly NP-hard even if there are only two possible delay values. We then show that some special cases of the problem are solvable in polynomial time. Finally, we design efficient approximation algorithms for the general case and some special cases. For the total completion time objective, we give an optimal polynomial-time algorithm for one special case and an efficient approximation algorithm for another.
Cloud computing is the revolution in globally networked resources, allowing data to be shared with users easily. With the widespread availability of network technologies, user requests increase day by day. Nowadays, the foremost complication in Cloud technology is task scheduling. Load placement and the arrangement of tasks are two important parameters in the Cloud domain that determine the Quality of Service (QoS). In this paper, we formulate the minimization of makespan and energy consumption in task scheduling using the Local Pollination-based Gray Wolf Optimizer (LPGWO) algorithm, a hybrid in which the Gray Wolf Optimizer (GWO) algorithm and the Flower Pollination Algorithm (FPA) are combined. GWO's best searching factor is used to increase convergence speed, and FPA distributes the data to the next packet of candidate solutions using the local pollination concept. Chaotic mapping and opposition-based learning (OBL) are used to provide suitable initial candidate task solutions. The experiments deliver better task scheduling results under both low and high heterogeneity of physical machines. The comparative simulation results show that the proposed algorithm achieves faster convergence as well as lower makespan and energy consumption.
This paper addresses online scheduling of malleable parallel jobs to minimize the maximum completion time, i.e., the makespan. It is assumed that the execution time of a job Jj with processing time pj is pj/k + (k-1)c if the job is assigned to k machines, where c > 0 is a constant setup time. We consider online algorithms for the scheduling problem on two identical machines: job Jj can be processed on one machine with execution time pj, or alternatively on two machines in parallel with execution time pj/2 + c. For the asymptotic competitive ratio, we provide an improved online algorithm with makespan no more than (3/2)C* + c/2, where C* is the optimal makespan. For the strict competitive ratio, we propose an online algorithm with a competitive ratio of 1.54, which is close to the lower bound of 1.5.
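A small sketch of the stated execution-time model follows. The `prefer_parallel` rule only illustrates the sequential-versus-parallel trade-off on an idle system; it is not the paper's online algorithm.

```python
# Execution-time model from the abstract: a malleable job with processing
# time p assigned to k machines runs for p/k + (k-1)*c, where c is a fixed
# setup time. On two machines the options are p (one machine) and p/2 + c.

def execution_time(p, k, c):
    return p / k + (k - 1) * c

def prefer_parallel(p, c):
    # Parallel execution finishes earlier on an idle system iff p/2 + c < p,
    # i.e. iff p > 2c (ignoring what the other machine could do meanwhile).
    return execution_time(p, 2, c) < execution_time(p, 1, c)

print(execution_time(10, 1, 1), execution_time(10, 2, 1))  # 10.0 vs 6.0
print(prefer_parallel(10, 1), prefer_parallel(1.5, 1))     # True, False
```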
In this paper, we consider a three-machine permutation flow shop scheduling problem with shortening job processing times, with the objective of minimizing the makespan. Shortening job processing times means that a job's processing time is a nonincreasing function of its execution start time. Optimal solutions are obtained for some special cases. For the general case, several dominance properties and two lower bounds are developed to construct a branch-and-bound (B&B) algorithm. Furthermore, we propose a heuristic algorithm to overcome the inefficiency of the branch-and-bound algorithm.
This paper proposes a new tri-objective scheduling algorithm called the Heterogeneous Reliability-Driven Energy-Efficient Duplication-based (HRDEED) algorithm for heterogeneous multiprocessors. The goal of the algorithm is to minimize the makespan (schedule length) and energy consumption while maximizing the reliability of the generated schedule. Duplication is employed to minimize the makespan. Because there is strong interest among researchers in high-performance schedules that consume less energy, the proposed algorithm incorporates energy consumption as an objective. Moreover, to deal with processor and link failures, a system reliability model is proposed. The three objectives, i.e., minimizing the makespan and energy while maximizing the reliability, are balanced by employing the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), a popular Multi-Criteria Decision-Making (MCDM) technique used to rank the generated Pareto-optimal schedules. Simulation results demonstrate the capability of the proposed algorithm to generate short, energy-efficient and reliable schedules: HRDEED improves both energy consumption and reliability while keeping the makespan low. Specifically, energy consumption can be reduced by 5-47% and reliability improved by 1-5%, with a 1-3% increase in makespan.
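TOPSIS itself is a standard MCDM procedure; a minimal sketch of ranking candidate schedules on the three criteria is given below. The equal weights and the example numbers are assumptions for illustration only.

```python
import numpy as np

# Minimal TOPSIS sketch for ranking candidate schedules on three criteria:
# makespan (cost), energy (cost), reliability (benefit).

def topsis(matrix, weights, benefit_mask):
    X = np.asarray(matrix, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = norm * weights                            # weighted normalized matrix
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                # closeness: higher is better

# Rows: candidate schedules; columns: makespan, energy, reliability
schedules = [[120, 50, 0.97],
             [110, 60, 0.95],
             [130, 45, 0.99]]
scores = topsis(schedules, weights=np.array([1/3, 1/3, 1/3]),
                benefit_mask=np.array([False, False, True]))
print(np.argsort(-scores))   # schedule indices ranked best-first
```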