In this paper, we study single-machine scheduling problems with DeJong's learning effect and a deteriorating maintenance activity, where DeJong's learning effect is a special position-based learning effect and the duration of the maintenance activity is a linearly increasing function of its starting time. Our goal is to determine the sequence of all jobs and the position of the maintenance activity so as to minimize several performance measures. For the makespan and the total completion time, we show that both problems can be solved in polynomial time. For the total weighted completion time, we develop a pseudo-polynomial time dynamic programming algorithm for a special case. For the maximum lateness, we show that the earliest due date first (EDD) order can perform badly in the general case, and we develop a polynomial time algorithm for a special case.
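To make the model concrete, here is a minimal sketch, under assumed notation, of evaluating a schedule in this setting: each job's actual processing time follows a position-based DeJong curve, and the maintenance duration grows linearly with its starting time. The parameters M (incompressibility factor), a (learning index), d0 and b are illustrative, and the sketch ignores whether the learning counter resets after maintenance, a modeling choice the paper would fix explicitly.

```python
def makespan(p, maint_after=None, M=0.3, a=-0.2, d0=2.0, b=0.1):
    """p: basic processing times in sequence order; maintenance starts
    right after the job in position maint_after (None = no maintenance)."""
    t = 0.0
    for r, pj in enumerate(p, start=1):
        t += pj * (M + (1 - M) * r ** a)   # DeJong position-based effect
        if maint_after == r:
            t += d0 + b * t                # duration linear in its start time
    return t

def best_spt_schedule(p):
    seq = sorted(p)                        # SPT sequence as a natural candidate
    # maintenance after the last job never helps the makespan, so try 1..n-1
    options = [None] + list(range(1, len(seq)))
    k = min(options, key=lambda o: makespan(seq, o))
    return makespan(seq, k), k
```

Enumerating the position of the maintenance activity over a fixed job order, as above, mirrors the kind of polynomial-time search that makes the makespan and total completion time versions tractable.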
With the use of digital innovations such as artificial intelligence (AI), the fourth industrial revolution has reshaped the manufacturing industry. Yet managing enhanced production scheduling remains challenging, since rescheduling decisions are unpredictable and stochastic disruptions occur. This study therefore presents an innovative, digital-technology-driven scheduling mechanism that leverages AI to tackle the difficulties of rescheduling in production scheduling. A genetic-assisted beetle swarm optimization (GBSO) strategy is introduced for the flexible job shop problem (FJSP) with sequence-dependent setups and constrained dual resources. Rescheduling sequences are then identified using a support vector machine (SVM). The proposed procedures are tested in a Python environment, and their effectiveness is examined in terms of scheduling latency and rescheduling rate. The results demonstrate that the developed methodologies achieve the best performance in rescheduling frequency and scheduling latency, thereby enhancing manufacturing operations.
This paper addresses single-machine scheduling problems with past-sequence-dependent (psd) delivery times, due-window assignments and deteriorating jobs simultaneously. The psd delivery time of a job is proportional to the job's waiting time, and deteriorating jobs mean that the processing time of a job is a proportionally increasing function of its starting time. For common, slack and different due-window assignments, the goal is to minimize the weighted sum of earliness, tardiness, due-window starting time and due-window size. Several optimal properties of the problems are established, and based on these properties, polynomial time algorithms are proposed to solve the problems optimally.
In this paper, we deal with a proportionate flow shop problem with different due-window assignments and position-dependent weights, where the schedule is a permutation. The objective is to determine an optimal job sequence and the due windows of all jobs such that the weighted sum of earliness and tardiness and of the starting times and sizes of all due windows is minimized, where each weight is associated not with a job but with the position in which a job is scheduled, i.e., the weights are position-dependent. Using a series of optimal properties, we prove that the problem can be solved in O(n^2) time, where n is the number of jobs.
The manufacturing industry has evolved with the emergence of cloud computing, leading to the development of cloud manufacturing, a customer-oriented paradigm. The integration of scheduling and logistics in cloud manufacturing is paramount and distinguishes it from traditional manufacturing. Tasks can have three different structures, so this study presents three models that integrate scheduling and logistics for tasks with sequential, parallel, and loop structures, respectively. These models aim to minimize the total cost of the manufacturing system, which includes the implementation cost of subtasks, the cost of logistics services among factories in different geographical locations, the cost of logistics services for delivering orders (tasks) to customers, and the earliness/tardiness cost of orders. Small- and medium-sized numerical examples are solved using the CPLEX solver in GAMS. However, due to the high complexity of the proposed models, a genetic algorithm is developed to solve large examples. To showcase the significance of the main features of the proposed models, two comparable models, each with one feature removed, are employed. In addition to the factors considered to bring the models closer to reality, a sensitivity analysis is conducted to derive effective guidelines for cloud manufacturing managers.
In wireless sensor networks (WSNs), barrier coverage is used to detect objects crossing a protected area or to monitor an area of interest, and it is a critical application within WSNs. Due to cost considerations, sensor nodes are often randomly deployed along the boundary of the monitoring area, constructing multiple barriers to maximize network lifetime. These barriers operate based on a sleep-wakeup schedule. However, a new security issue, known as the barrier-breach problem, can arise during this schedule. In this paper, we propose a novel barrier coverage construction algorithm (BCC_VL) that addresses the barrier-breach problem using Virtual Lines (VLs), a concept that has not been explored before. By employing VLs, our algorithm can construct barrier coverage without any breaches. We conducted simulations to validate the proposed method, and the results were compared and verified against the MaxFlow model. Compared with the MaxFlow_RM method, our approach demonstrated superior performance in identifying barrier coverage.
The core value of cloud manufacturing is to enable the optimal allocation of manufacturing resources across enterprises in a wide-area environment, and scheduling is the key technology for realizing this value. At present, enterprises are integrated into the cloud platform at large scale, which poses great difficulties for cloud manufacturing scheduling; how to perform collaborative scheduling between the platform and the enterprises is therefore a key question. Moreover, in the cloud manufacturing environment, distributed enterprises with autonomy on the edge side connect their own resources to the cloud manufacturing service platform and carry out collaborative scheduling with it. Platform-enterprise collaborative scheduling thus provides support for large-scale resources and services within the cloud. Given this, the paper provides a platform-enterprise collaborative model for studying the scheduling of large-scale resources and services in cloud manufacturing. The model considers platform-based service scheduling and enterprise-based resource scheduling, and the collaborative scheduling mechanisms of the cloud service and enterprise resource are investigated: the former completes the scheduling of cloud services while collaborating on tasks with the latter, and the latter completes the scheduling of enterprise resources while delivering scheduling information to the former. Since deep reinforcement learning (DRL) has been widely applied to cloud manufacturing scheduling, a platform-enterprise collaborative scheduling algorithm based on a dueling deep Q-network with prioritized replay (CE-PDDQN) is proposed. To evaluate the effectiveness of the proposed algorithm, this paper selects DQN and dueling DQN as baselines for the experiments. The experimental results show that the CE-PDDQN algorithm obtains a better scheduling scheme after training and learning, and that it is adaptive and scalable.
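For readers unfamiliar with the dueling architecture underlying CE-PDDQN, below is a minimal PyTorch sketch of a dueling Q-network head. The layer sizes are placeholders, and the prioritized replay buffer and the paper's scheduling state and action encodings are omitted entirely.

```python
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        # standard dueling combiner: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')
        return v + a - a.mean(dim=1, keepdim=True)
```

Separating V(s) from A(s, a) lets the network learn which scheduling states are valuable independently of which dispatching action is chosen, which is why the dueling variant often trains faster than plain DQN.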
Parafrase-2 is a vectorizing/parallelizing compiler implemented as a source-to-source code restructurer. This paper discusses the organization of Parafrase-2 and the goals of the project. Specific topics discussed are: dependence analysis, timing and overhead analysis, interprocedural analysis, automatic scheduling, and the graphical user interface.
The parallelizing compiler for the B-HIVE loosely coupled multiprocessor system uses a medium-grain model to minimize communication overhead. A medium-grain model is shown to be an optimum way of merging fine-grain operations into parallel tasks such that the parallelism obtained at the fine-grain level is retained while communication overhead is decreased. A new communication model is introduced in this paper, allowing additional overlap between computation and communication. Simulation results indicate that the medium-grain communication model shows promise for automatic parallelization on a loosely coupled multiprocessor system.
In this paper we discuss different approaches for exploiting parallelism in the ICCG method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that, under certain circumstances, ignoring symmetry and storing all nonzero elements of a sparse matrix can reduce solution time substantially.
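The static analysis mentioned above is commonly realized as level scheduling; the sketch below, a simplified illustration rather than the paper's implementation, groups the rows of a sparse lower-triangular system into wavefronts that can each be solved in parallel.

```python
def level_schedule(L_rows):
    """L_rows[i]: column indices j < i where row i of L has a nonzero,
    i.e. the earlier rows that row i depends on in the solve Lx = b."""
    level = [0] * len(L_rows)
    for i, deps in enumerate(L_rows):
        level[i] = 1 + max((level[j] for j in deps), default=-1)
    wavefronts = {}
    for i, l in enumerate(level):
        wavefronts.setdefault(l, []).append(i)
    return [wavefronts[l] for l in sorted(wavefronts)]

# e.g. level_schedule([[], [0], [0], [1, 2]]) -> [[0], [1, 2], [3]]
```

All rows in a wavefront depend only on rows in earlier wavefronts, so each wavefront is one parallel step of the triangular solve.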
It has already been demonstrated that cost-effective multiprocessor designs may be obtained by combining processors of different speeds in the same architecture (a heterogeneous architecture), so that the serial and critical portions of the application benefit from a fast single processor. In such an environment, the problem of assigning tasks to processors becomes very important. This paper presents a systematic way to build static heuristic scheduling algorithms. Using this strategy, several algorithms are proposed and their performance is compared through simulation. One of the proposed algorithms is shown to achieve substantial performance gains as the degree of heterogeneity of the architecture increases.
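As a flavor of such static heuristics (not the paper's construction), here is a sketch of an LPT-style list scheduler for independent tasks on processors of different speeds; the earliest-finish-time placement rule is an assumption for illustration.

```python
def lpt_heterogeneous(work, speeds):
    """work: task sizes; speeds: processor speeds. Returns the makespan."""
    avail = [0.0] * len(speeds)            # next free time of each processor
    for w in sorted(work, reverse=True):   # largest tasks first
        # place the task on the processor that would finish it earliest
        p = min(range(len(speeds)), key=lambda i: avail[i] + w / speeds[i])
        avail[p] += w / speeds[p]
    return max(avail)
```

On a heterogeneous machine this rule naturally steers long, serial-critical tasks to the fast processor, which is exactly the effect such heuristics aim to exploit.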
The primary goal of processor scheduling is to assign the tasks of a parallel program to processors so as to minimize the execution time. Most existing approaches to processor scheduling for multiprocessors assume that the execution time of each task is fixed and independent of the schedule. In this paper, we argue that the execution time of a given task is not fixed but depends critically on the performance of the caches, which have become an essential component of shared-memory multiprocessors. We therefore propose a scheduling algorithm, called the data distribution constrained scheduling algorithm, which tries to maximize the number of cache hits by scheduling tasks so that the task that brings a memory block into the cache and the tasks that subsequently access the same memory block are executed on the same processor.
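A toy sketch of the affinity principle at work (not the paper's actual algorithm, which also honors data-distribution constraints): each task is routed to the processor that already holds the most of the memory blocks the task touches, so later accesses hit in that processor's cache.

```python
def affinity_schedule(tasks, n_procs):
    """tasks: list of sets of memory-block ids, in issue order."""
    owner, assignment = {}, []
    loads = [0] * n_procs
    for blocks in tasks:
        votes = [0] * n_procs
        for blk in blocks:
            if blk in owner:
                votes[owner[blk]] += 1
        # most resident blocks wins; break ties toward the lighter processor
        proc = max(range(n_procs), key=lambda p: (votes[p], -loads[p]))
        assignment.append(proc)
        loads[proc] += 1
        for blk in blocks:
            owner.setdefault(blk, proc)    # the first toucher keeps the block
    return assignment
```

The `owner` map plays the role of a crude cache model: the task that first brings a block in pins it to that processor for subsequent consumers.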
In this paper, we consider the problem of solving sparse linear systems occurring in finite difference applications (or N × N grid problems, N being the size of the linear system). We propose a new algorithm for the problem based on the Cholesky factorization, a symmetric variant of Gaussian elimination tailored to symmetric positive definite systems. The algorithm employs a new technique, called bidirectional factorization, that produces the complete solution vector by solving only one triangular system after the factorization phase, instead of the two triangular systems required by the existing Cholesky approach. The effectiveness of the new algorithm is demonstrated by comparing its performance with that of the existing Cholesky factorization for solving regular finite difference grid problems on hypercube multiprocessors.
Prediction is a critical component in achieving good application execution performance. The development of adequate and accurate prediction models is especially difficult in local-area clustered environments, where resources are distributed and performance varies due to the presence of other users in the system. This paper discusses the use of stochastic values to parameterize cluster application performance models. Stochastic values represent a range of likely behavior and can be used effectively as model parameters. We describe two representations for stochastic model parameters and demonstrate their effectiveness in predicting the behavior of several applications under different workloads on a contended network of workstations.
This work presents approximation algorithms for scheduling the tasks of a parallel application that are subject to precedence constraints. The tasks considered are malleable, which means that they may be executed on a varying number of processors in parallel. The objective criterion is the makespan, i.e., the largest task completion time. We demonstrate a close relationship between this scheduling problem and one of its subproblems, the allotment problem. By exploiting this relationship, we design a polynomial time approximation algorithm with performance guarantee arbitrarily close to (3+√5)/2 ≈ 2.61803 for the special case of series-parallel precedence constraints and for the special case of precedence constraints of bounded width. These special cases cover the important situation of tree-structured precedence constraints. For arbitrary precedence constraints, we give a polynomial time approximation algorithm with performance guarantee 3+√5 ≈ 5.23607.
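A rough sketch of the two-phase structure such algorithms share, under assumed ingredients: a concave speedup model, a placeholder allotment rule (the paper instead derives the allotment from the allotment subproblem), and greedy list scheduling of the resulting rigid tasks.

```python
import math

def exec_time(work, procs):
    return work / math.sqrt(procs)          # assumed concave speedup model

def naive_allotment(works, m):
    # placeholder rule: give larger tasks more processors, capped at m
    top = max(works)
    return [max(1, min(m, round(m * w / (2 * top)))) for w in works]

def list_schedule(works, preds, allot, m):
    """Greedy list scheduling of rigid tasks under precedence constraints.
    preds[i]: tasks that must finish before task i; requires allot[i] <= m."""
    n = len(works)
    finish, started, done = [0.0] * n, set(), set()
    running, t, free = [], 0.0, m
    while len(done) < n:
        for i in range(n):                   # start every ready task that fits
            if i not in started and allot[i] <= free \
               and all(p in done for p in preds[i]):
                free -= allot[i]
                started.add(i)
                running.append((t + exec_time(works[i], allot[i]), i))
        running.sort()                       # jump to the next completion
        t, i = running.pop(0)
        free += allot[i]
        done.add(i)
        finish[i] = t
    return max(finish)                       # makespan
```

The quality of the final makespan hinges almost entirely on the allotment phase, which is precisely the relationship between the two problems that the paper exploits.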
We study the scheduling problem on a batch machine which is capable of processing a batch of jobs at a time. For a batch machine of capacity c, we designed an algorithm for minimizing the total completion time in O(n^{6c}) time (for sufficiently large c). This improves the best previous time bound of O(n^{c(c-1)}) given by Brucker et al. [1]. We also designed a new algorithm with running time .
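As background for the batching structure these bounds refine, here is the classic textbook dynamic program for the unbounded-capacity case (batch size at least n), which runs in O(n^2); it is not the paper's bounded-capacity algorithm.

```python
def min_total_completion_unbounded(p):
    """1 | p-batch, b >= n | sum C_j: jobs are SPT-sorted, a batch's length is
    its largest (last) job, and F[j] is the best cost over the first j jobs."""
    p = sorted(p)
    n = len(p)
    F = [0.0] + [float("inf")] * n
    for j in range(1, n + 1):
        for i in range(j):
            # jobs i+1..j form one batch of length p[j-1]; starting it delays
            # the completion of all n - i jobs not yet finished by p[j-1]
            F[j] = min(F[j], F[i] + (n - i) * p[j - 1])
    return F[n]
```

The bounded-capacity problem is harder precisely because a batch may hold at most c jobs, which breaks this simple "suffix delay" accounting.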
The dominant trend in scientific computing today is the establishment of platforms that span multiple institutions to support applications at unprecedented scales. On most distributed computing platforms, a requirement for achieving high performance is the careful scheduling of distributed application components onto the available resources. While scheduling has been an active area of research for many decades, most of the platform models traditionally used in scheduling research, and in particular network models, break down for platforms spanning wide-area networks. In this paper we examine network modeling issues for large-scale platforms from the perspective of scheduling. The main challenge we address is the development of models that are sophisticated enough to be more realistic than those traditionally used in the field, yet simple enough to remain amenable to analysis. In particular, we discuss issues of bandwidth sharing and topology modeling. While these models can be used to define and reason about realistic scheduling problems, we show that they also provide a good basis for fast simulation, the typical method for evaluating scheduling algorithms, as demonstrated in our implementation of the SIMGRID simulation framework.
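As a concrete instance of the bandwidth-sharing models under discussion, the sketch below computes max-min fair rates by progressive filling. The flow and link encoding is an assumption for illustration, and real wide-area (TCP) sharing behavior can deviate from such idealized models.

```python
def max_min_share(flows, capacity):
    """flows[f]: set of link ids flow f crosses (each flow crosses >= 1 link);
    capacity: link id -> capacity. Returns flow id -> max-min fair rate."""
    cap = dict(capacity)
    alloc, active = {}, set(range(len(flows)))
    while active:
        # the bottleneck link saturates first: least remaining capacity
        # per active flow crossing it
        loads = {l: sum(1 for f in active if l in flows[f]) for l in cap}
        rate, link = min((cap[l] / n, l) for l, n in loads.items() if n > 0)
        for f in {f for f in active if link in flows[f]}:
            alloc[f] = rate
            for l in flows[f]:
                cap[l] -= rate
            active.discard(f)
    return alloc

# e.g. max_min_share([{0}, {0, 1}], {0: 10, 1: 4}) -> {1: 4.0, 0: 6.0}
```

Flows constrained by a saturated link are frozen at the bottleneck rate, and the remaining capacity is then shared among the flows that are still unconstrained.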
Today, large-scale parallel systems are available at low cost. Many such powerful systems have been installed all over the world, and the number of users is steadily increasing. The difficulty of using them efficiently grows with the complexity of the interactions between more and more architectural constraints and the diversity of the applications. The design of efficient parallel algorithms has to be reconsidered under the influence of new parameters of such platforms (namely cluster, grid and global computing), which are characterized by a larger number of heterogeneous processors, often organized in several hierarchical sub-systems. At each step of the evolution of the parallel processing field, researchers have designed adequate computational models whose objective is to abstract the real world in order to analyze the behavior of algorithms.
In this paper, we will investigate two complementary computational models that have been proposed recently: Parallel Task (PT) and Divisible Load (DL). The Parallel Task model (tasks that require more than one processor for their execution) is a promising alternative for scheduling parallel applications, especially in the case of slow communication media. The basic idea is to consider the application at a coarse level of granularity. Another way of looking at the problem (in some sense a dual view) is the Divisible Load model, in which an application is considered as a collection of a large number of elementary, sequential computing units that will be distributed among the available resources. Unlike the PT model, the DL model corresponds to a fine level of granularity. We will focus on the PT model and discuss how to mix it with simple Divisible Load scheduling (see the allocation sketch after this abstract).
Since the main difficulty in distributing the load among the processors (usually known as the scheduling problem) in actual systems comes from handling communications efficiently, these two models allow us to account for communications implicitly or to mask them altogether, thus leading to more tractable problems.
We will show that in spite of the enormous complexity of the general scheduling problem on new platforms, it is still useful to study theoretical models. We will focus on the links between models and actual implementations on a regional grid with more than 500 processors.
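To make the Divisible Load side concrete, here is a minimal single-round allocation sketch for a star-shaped master-worker platform with linear communication and computation costs, using the classical optimality principle that all workers finish at the same time; the single-port, zero-latency setting is an assumption of the sketch.

```python
def divisible_load_fractions(c, w):
    """c[i]: time to send one load unit to worker i (workers served in order);
    w[i]: time for worker i to process one unit. Returns the load fractions."""
    # equal finish times give alpha_i * w_i = alpha_{i+1} * (c_{i+1} + w_{i+1}),
    # so express every alpha_i as a multiple of alpha_0 and normalize
    k = [1.0]
    for i in range(1, len(c)):
        k.append(k[-1] * w[i - 1] / (c[i] + w[i]))
    total = sum(k)
    return [ki / total for ki in k]

# e.g. divisible_load_fractions([1, 1], [2, 2]) -> [0.6, 0.4]
```

Faster or better-connected workers automatically receive larger fractions, and reordering the workers changes the result, which is where the scheduling questions studied under the DL model begin.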
The scheduling problem deals with the optimal assignment of a set of tasks to the processing elements of a distributed system such that the total execution time is minimized. One approach to the scheduling problem is task clustering, which assigns tasks to clusters, each of which runs on a single processor. This paper aims to show the feasibility of using Genetic Algorithms for task clustering to solve the scheduling problem. Genetic Algorithms are robust optimization and search techniques, used in this work to solve the task-clustering problem. The proposed approach shows great promise for a wide range of clustering instances.
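One plausible GA encoding for this problem (an illustrative sketch, not the paper's design): gene i holds the cluster of task i, fitness combines load balance with the communication cut, and standard one-point crossover and point mutation evolve the population.

```python
import random

def fitness(chrom, cost, comm):
    """cost[i]: execution cost of task i; comm[(i, j)]: communication volume
    between tasks i and j. Lower fitness is better."""
    load = {}
    for task, cl in enumerate(chrom):
        load[cl] = load.get(cl, 0) + cost[task]
    cut = sum(v for (i, j), v in comm.items() if chrom[i] != chrom[j])
    return max(load.values()) + cut

def ga_cluster(cost, comm, k, pop=30, gens=200, mut=0.2):
    n = len(cost)                            # assumes n >= 2 tasks
    popu = [[random.randrange(k) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda c: fitness(c, cost, comm))
        elite = popu[: pop // 2]             # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = random.sample(elite, 2)
            cut_pt = random.randrange(1, n)
            child = a[:cut_pt] + b[cut_pt:]  # one-point crossover
            if random.random() < mut:        # point mutation
                child[random.randrange(n)] = random.randrange(k)
            children.append(child)
        popu = elite + children
    return min(popu, key=lambda c: fitness(c, cost, comm))
```

The encoding guarantees every chromosome is a valid clustering, which keeps the crossover and mutation operators simple.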
Dynamically reconfigurable architectures offer extremely fast solutions to various problems. The Circuit Switched Tree (CST) is an important interconnect used to implement such architectures. A CST has a binary tree structure with processing elements (PEs) as leaves and switches as internal nodes. PEs communicate among themselves using the links of the tree. Key components for successful communication are scheduling individual communications and configuring the CST switches. This paper presents a scheduling and configuration algorithm for communications on a CST where conflicts necessitate multiple rounds of routing to perform all communications. The algorithm is distributed and requires only local information, yet it captures the global picture to ensure proper communication. The paper also explains how to apply the algorithm to an important class, "well-nested communications", for which the algorithm is optimal and efficient.