For long-term continuous monitoring of bridge-related indicators, a reasonably complete set of acquisition equipment must be installed on the bridge to feed back the various information parameters of the structure. However, the large number of parameters needed to describe the bridge makes the monitoring system complex and bloated, and the huge volume of collected data and the complicated computations it requires further increase the difficulty of operating the system. Accordingly, more scientific and reasonable indicators, a lightweight data structure, and stable data transmission and analysis programs should be chosen to improve the accuracy of continuous monitoring. To establish a stable and efficient bridge monitoring system, we optimize with the distance coefficient-effective independence algorithm. We then compute the relevant strain-environment information with a neural network model, strengthen deep-learning training through the YOLOv5s model, and improve the attention-based task scheduling strategy, thereby addressing the relatively low computing power of embedded systems. Different weights are assigned to each fused feature map, and the nodes at the highest and lowest levels are deleted, yielding a concise and efficient lightweight network model; multiple iterations are performed to achieve deeper feature fusion. The complexity of the model is thus effectively reduced while monitoring performance is improved. Finally, experimental analysis shows that, compared with the traditional fusion model, the improved fusion network structure for bridge health monitoring reduces the number of parameters by 7.37%, increases detection speed by 18.2%, reduces the amount of computation by 42.92%, and reaches an average detection accuracy of 95.33%. The method is verified to effectively improve the accuracy of the detection data and the ability to control risk by learning from a small number of labeled samples. It also has practical significance and market value for the design and optimization of bridge health monitoring systems, and it is suitable for the monitoring data of large-scale construction projects.
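The abstract does not give the fusion formula; a minimal sketch of the kind of weighted feature-map fusion it describes (normalized non-negative per-input weights, in the style popularized by BiFPN-like networks) might look like the following, where the array shapes and weight values are illustrative assumptions, not the paper's exact design:

    import numpy as np

    def weighted_feature_fusion(feature_maps, weights, eps=1e-4):
        """Fuse same-shape feature maps with normalized non-negative weights.

        feature_maps: list of arrays with identical shape (C, H, W).
        weights: one learnable scalar per input map.
        """
        w = np.maximum(weights, 0.0)      # keep weights non-negative
        w = w / (w.sum() + eps)           # fast normalized fusion
        return sum(wi * fm for wi, fm in zip(w, feature_maps))

    # Illustrative use: fuse two pyramid levels resized to a common shape.
    p3 = np.random.rand(64, 40, 40)
    p4_upsampled = np.random.rand(64, 40, 40)
    fused = weighted_feature_fusion([p3, p4_upsampled], np.array([0.7, 0.3]))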
Once triggered, Hardware Trojans (HTs) can damage the performance or functionality of a heterogeneous multi-core system (HMCS) to fulfill the attacker's intentions, and existing methods cannot guarantee that a system is 100% Trojan-free prior to deployment. This paper introduces a task mapping strategy based on redundant application execution to address the HT threat in HMCSs. The primary aim is to minimize the probability of HT triggering during computing unit (CU) operation while also minimizing the makespan of applications under security constraints. Initially, the probability of HT activation is converted into a function of CU runtime. Subsequently, a security-enhanced NSGA-II algorithm (SEA-NSGA-II) is devised to solve this multi-objective optimization problem. Specifically, the algorithm introduces security-enhanced adjustable heterogeneous earliest finish time (SEA-HEFT), security-enhanced adjustable heterogeneous minimum trigger probability (SEA-HMTP), and security-enhanced adjustable random scheduling (SEA-RS) to accelerate convergence and increase the diversity of solutions. Extensive experiments on real-world benchmarks and randomly generated applications demonstrate that SEA-NSGA-II solutions have higher HV values, along with better convergence and diversity. On real-world benchmarks, the HV value of SEA-NSGA-II is higher by an average of 54.677% and 2113.778% compared with DRHEFT and NSGA-II, respectively; on randomly generated applications, the HV values of SEA-NSGA-II are likewise higher than those of DRHEFT and NSGA-II.
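The abstract does not state the exact runtime conversion; one common way to tie a trigger probability to CU runtime, given here purely as an illustrative assumption, is an exponential model with a per-unit-time trigger rate lambda_k for CU k:

    P_k(t) = 1 - e^{-\lambda_k t}, \qquad
    P_{\mathrm{app}} = 1 - \prod_{k \in \mathrm{CUs}} \bigl(1 - P_k(t_k)\bigr)

Under such a model, minimizing the accumulated busy time t_k of suspect CUs directly lowers the application-level trigger probability, which is the quantity traded off against makespan in the multi-objective formulation.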
The deployment of fog computing has not only enabled task offloading for end-users requiring delay-sensitive task provisioning but also reduced the burden on cloud back-end systems of processing the variable workloads arriving from user equipment. However, given the constrained resources and computational capabilities of fog nodes, processing computation-intensive tasks within the defined timelines is highly challenging, and offloading such tasks to the cloud burdens the upload link, resulting in high resource costs and delays in task processing. Existing research has made considerable attempts to handle the task allocation problem in fog-cloud networks, but most methods are computationally expensive and incur high resource costs under execution-time constraints. The proposed work aims to balance resource cost and time complexity by exploring collaboration among host machines over fog nodes. It introduces task scheduling and optimal resource allocation using the coalition formation methods of game theory and pay-off computation, and it encourages the formation of coalitions among host machines to handle variable traffic efficiently. Experimental results show that the proposed approach for task scheduling and optimal resource allocation in fog computing outperforms the existing system by 56.71% in task processing time, 47.56% in unused computing resources, 8.33% in resource cost, and 37.2% in unused storage.
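The abstract names pay-off computation without giving formulas; as one standard pay-off division rule from coalitional game theory, a Shapley-value sketch over a small set of host machines could look like the following, where the value function v (pooled capacity with a superadditive bonus) is a hypothetical placeholder, not the paper's scheme:

    import math
    from itertools import permutations

    def shapley_values(players, v):
        """Average marginal contribution of each player over all join orders."""
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = set()
            for p in order:
                phi[p] += v(coalition | {p}) - v(coalition)
                coalition.add(p)
        n_orders = math.factorial(len(players))
        return {p: total / n_orders for p, total in phi.items()}

    # Hypothetical value function: pooled capacity of cooperating hosts,
    # slightly superadditive so larger coalitions pay off.
    capacity = {"h1": 4, "h2": 2, "h3": 3}
    v = lambda coalition: sum(capacity[h] for h in coalition) ** 1.2
    print(shapley_values(list(capacity), v))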
Parallel programs may be represented as a set of interrelated sequential tasks. When multiprocessors are used to execute such programs, the parallel portion of the application can be sped up by an appropriate allocation of processors to the tasks of the application. Given a parallel application defined by a task precedence graph, the goal of task scheduling (or processor assignment) is thus the minimization of the application's makespan. In a heterogeneous multiprocessor system, task scheduling consists of determining which tasks will be assigned to each processor, as well as the execution order of the tasks assigned to each processor. In this work, we apply the tabu search metaheuristic to the task scheduling problem in a heterogeneous multiprocessor environment under precedence constraints. The topology of the Mean Value Analysis solution package for product-form queueing networks is used as the framework for performance evaluation. We show that tabu search obtains much better results, i.e., shorter completion times, improving by 20 to 30% the makespan obtained by the most appropriate algorithm previously published in the literature.
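As a minimal sketch of the tabu search loop applied to scheduling (the neighborhood move, tabu tenure, aspiration rule, and makespan evaluator are illustrative assumptions, not the paper's exact design):

    import random
    from collections import deque

    def tabu_search(initial, neighbors, makespan, iters=1000, tenure=20):
        """Generic tabu search: take the best non-tabu neighbor each step."""
        current = best = initial
        tabu = deque(maxlen=tenure)            # short-term memory of moves
        for _ in range(iters):
            candidates = [(makespan(s), move, s)
                          for move, s in neighbors(current)
                          if move not in tabu or makespan(s) < makespan(best)]
            if not candidates:
                break
            cost, move, current = min(candidates, key=lambda c: c[0])
            tabu.append(move)                  # forbid reversing this move
            if cost < makespan(best):
                best = current
        return best

    # Toy usage: assign 6 independent tasks (per-processor costs) to 2 processors.
    costs = [(3, 5), (2, 4), (4, 4), (6, 3), (2, 2), (5, 1)]

    def mk(assign):
        loads = [0, 0]
        for t, p in enumerate(assign):
            loads[p] += costs[t][p]
        return max(loads)

    def nbrs(assign):                          # move one task to the other processor
        for t in range(len(assign)):
            s = list(assign)
            s[t] ^= 1
            yield ("move", t, s[t]), tuple(s)

    print(tabu_search((0,) * 6, nbrs, mk))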
A directed acyclic task graph (DAG) contains a set of tasks that access a set of data items and perform certain computations on them. The problem of DAG scheduling, which optimizes the assignment of tasks onto the given processors, has been studied extensively in the literature. We have developed a DAG scheduling system called PYRROS that maps the computation of task graphs onto message-passing machines [24]. In this paper we present a schedule-executing model that incorporates several optimization strategies to reduce communication overhead and improve memory utilization. We study the correctness of task graph execution using this method, generalize the result to the iterative execution of a task graph, and present experimental results on an nCUBE-2 parallel machine.
In this paper, we report a performance gap between a schedule with small makespan on the task scheduling model and the corresponding parallel program on distributed-memory parallel machines. The main reason for the gap is the software overhead of interprocessor communication; consequently, speedup ratios of schedules on the model do not approximate well those of parallel programs on the machines. The purpose of this paper is to obtain a task scheduling algorithm that generates a schedule with both a small makespan and a good approximation to the corresponding parallel program.
For this purpose, we propose the algorithm BCSH, which generates only bulk synchronous schedules. In these schedules, no-communication phases and communication phases appear alternately, and all interprocessor communication occurs only in the latter phases, so the corresponding parallel programs can easily exploit the message packaging technique. Message packaging reduces the many per-message software overheads for messages from a source processor to the same destination processor to roughly a single overhead, improving the performance of a parallel program significantly.
Finally, we show experimental results on the performance gaps of BCSH, Kruatrachue's algorithm DSH, and Ahmad et al.'s algorithm ECPFD. The schedules produced by DSH and ECPFD are known for their small makespans, but message packaging cannot be applied effectively to the corresponding programs. The results show that a bulk synchronous schedule with small makespan has the advantages that the gap is small and the corresponding program is a high-performance parallel one.
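A minimal sketch of the message packaging idea behind bulk synchronous schedules, under the simplifying assumption of a fixed per-message software overhead ALPHA and a per-byte transfer cost BETA (both values illustrative):

    from collections import defaultdict

    ALPHA = 100.0   # software overhead per message (assumed units)
    BETA = 0.1      # transfer cost per byte (assumed units)

    def phase_cost(messages, packaged):
        """Cost of one communication phase.

        messages: list of (src, dst, nbytes) generated in the phase.
        packaged: if True, all messages sharing (src, dst) pay ALPHA once.
        """
        if not packaged:
            return sum(ALPHA + BETA * n for _, _, n in messages)
        bundles = defaultdict(int)
        for src, dst, n in messages:
            bundles[(src, dst)] += n       # pack into one message per pair
        return sum(ALPHA + BETA * n for n in bundles.values())

    msgs = [(0, 1, 64), (0, 1, 128), (0, 1, 32), (0, 2, 64)]
    print(phase_cost(msgs, packaged=False))    # four ALPHA overheads
    print(phase_cost(msgs, packaged=True))     # two ALPHA overheads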
The development of FPGAs that can be programmed to implement custom circuits by modifying memory has inspired researchers to investigate how FPGAs can be used as a computational resource in systems designed for high-performance applications. When such FPGA-based systems are composed of arrays of chips, or of chips that can be partially reconfigured, the programmable array space can be partitioned among several concurrently executing tasks. If partition sizes are adapted to the needs of tasks, then array resources become fragmented as tasks with varying requirements are processed, and tasks may end up waiting despite sufficient, albeit fragmented, resources being available. We examine the problem of repartitioning the system (rearranging a subset of the executing tasks) at run-time in order to allow waiting tasks to enter the system sooner. In this paper, we introduce the problems of identifying and scheduling feasible task rearrangements when tasks are moved by reloading, and we show that both problems are NP-complete. We develop two very different heuristic approaches to finding and scheduling suitable rearrangements. The first method, known as Local Repacking, attempts to minimize the size of the subarray needing rearrangement; candidate subarrays are repacked using known bin packing algorithms, and task movements are scheduled so as to minimize delays to their execution. The second approach, called Ordered Compaction, constrains the movements of tasks in order to efficiently identify and schedule feasible rearrangements. The heuristics are compared by time complexity and by the resulting system performance on simulated task sets. The results indicate that considerable scheduling advantages can be gained for acceptable computational effort, although the benefits may be jeopardized by delays to moving tasks when the average cost of reloading tasks becomes significant relative to task service periods. We indicate directions for future research to mitigate the cost of moving executing tasks.
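The abstract leaves the repacking routine to "known bin packing algorithms"; as one such routine, a first-fit-decreasing sketch (simplified here to a single dimension of the array, with task widths and a column budget as assumptions) might look like:

    def first_fit_decreasing(task_widths, columns):
        """Pack task widths into rows of at most `columns` cells each."""
        rows = []                                  # remaining space per row
        placement = {}
        for tid, w in sorted(enumerate(task_widths),
                             key=lambda tw: tw[1], reverse=True):
            for r, free in enumerate(rows):
                if w <= free:                      # first row that fits
                    rows[r] -= w
                    placement[tid] = r
                    break
            else:
                rows.append(columns - w)           # open a new row
                placement[tid] = len(rows) - 1
        return placement, len(rows)

    print(first_fit_decreasing([5, 3, 4, 2, 6], columns=8))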
Genetic algorithms (GAs) have been widely applied to scheduling problems, and their performance advantages are well recognized. However, practitioners are often troubled by parameter setting when tuning GAs, and the population size (PS) has been shown to greatly affect their efficiency. Although some population sizing models exist in the literature, reasonable population sizing for task scheduling is rarely addressed. In this paper, building on the PS-deciding model proposed by Harik, we present a model relating the success ratio to the PS for a GA applied to time-critical task scheduling, where the efficiency of GAs matters even more than in other kinds of problems. Through proper simplifications and approximations, our model requires only parameters that are easy to obtain, which makes it practical to apply. Finally, the model is verified through experiments.
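For reference, the gambler's-ruin population sizing model of Harik et al. that the paper builds on is usually stated as:

    n = -2^{k-1} \ln(\alpha) \, \frac{\sigma_{bb} \sqrt{\pi m'}}{d}

where alpha is the acceptable failure probability, k the building-block size, sigma_bb the building-block fitness standard deviation, m' the number of competing building blocks, and d the fitness signal between the best and second-best building block; the paper's contribution is simplifying and approximating such quantities so they are easy to obtain for task scheduling.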
This paper presents a hybrid approach based on discrete Particle Swarm Optimization (PSO) and chaotic strategies for solving the multi-objective task scheduling problem in cloud computing. The main purpose is to allocate the submitted tasks to the available resources in the cloud environment with minimum makespan (i.e., schedule length) and processing cost while maximizing resource utilization, without violating the Service Level Agreement (SLA) between users and cloud providers. The main challenges PSO faces when used to solve scheduling problems are premature convergence and trapping in local optima. This paper therefore presents an enhanced PSO algorithm hybridized with chaotic map strategies, called the Enhanced Particle Swarm Optimization based on Chaotic Strategies (EPSOCHO) algorithm. The proposed approach applies two chaotic map strategies, the sinusoidal iterator and the Lorenz attractor, to enhance the PSO algorithm and obtain good convergence and diversity when optimizing task scheduling in cloud computing. The approach is simulated and implemented in the Cloudsim simulator, and its performance is compared with the standard PSO algorithm, the improved PSO algorithm with longest-job-to-fastest-processor (LJFP-PSO), and the improved PSO algorithm with minimum completion time (MCT-PSO) using different task sizes and various benchmark datasets. The results clearly demonstrate the efficiency of the proposed approach in terms of makespan, processing cost, and resource utilization.
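The two chaotic sources named in the abstract can be sketched as follows; the control parameters (a = 2.3 for the sinusoidal iterator, the classical Lorenz constants) and the use of the sequences to perturb particle positions are illustrative assumptions:

    import numpy as np

    def sinusoidal_iterator(x0=0.7, a=2.3, n=100):
        """Sinusoidal chaotic map: x_{k+1} = a * x_k^2 * sin(pi * x_k)."""
        xs = [x0]
        for _ in range(n - 1):
            xs.append(a * xs[-1] ** 2 * np.sin(np.pi * xs[-1]))
        return np.array(xs)

    def lorenz(n=1000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """Euler-integrated Lorenz attractor; x-coordinate as chaotic signal."""
        x, y, z = 1.0, 1.0, 1.0
        xs = []
        for _ in range(n):
            x, y, z = (x + dt * sigma * (y - x),
                       y + dt * (x * (rho - z) - y),
                       z + dt * (x * y - beta * z))
            xs.append(x)
        return np.array(xs)

    # e.g., normalize either sequence to [0, 1] to seed or perturb particles.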
While the concept of a virtual metacomputer over a networked collection of heterogeneous computer systems has slowly emerged, these systems still have shortcomings. For example, some are still incapable of balancing the load among the workstations, a problem accentuated by the heterogeneity of the computers. In this paper, we first discuss the design issues of our centralized task scheduler and then present our implementation details. Some new library routines are provided for using the task scheduler, which is layered above PVM and therefore retains its portability. Since load balancing is considered in task scheduling, our approach has also proven more effective than the existing PVM round-robin task allocation scheme.
The multiprocessor scheduling problem with communication delays that we consider in this paper consists of finding a static schedule of an arbitrary task graph onto a homogeneous multiprocessor system such that the total execution time (i.e., the time when all tasks are completed) is minimized. The task graph contains precedence relations as well as communication delays (or data transfer times) between tasks executed on different processors. The multiprocessor architecture is assumed to contain identical processors connected in an arbitrary way, defined by a symmetric matrix of minimum distances between every pair of processors. A solution is represented by a feasible permutation of tasks; to obtain the objective function value (i.e., schedule length, or makespan), the feasible permutation has to be transformed into an actual schedule by some heuristic method, as sketched below. For solving this NP-hard problem, we develop basic tabu search and variable neighborhood search heuristics, in which various types of reduced Or-opt-like neighborhood structures are used for local search. A genetic search approach based on the same solution space is also developed. Comparative computational results on random graphs with up to 500 tasks and 8 processors are reported. On average, variable neighborhood search outperforms the other metaheuristics. In addition, a detailed performance analysis of both the proposed solution representation and the heuristic methods is presented.
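A minimal sketch of such a decoding step, turning a feasible (precedence-respecting) task permutation into an actual schedule with an earliest-start heuristic; the data structures and the uniform handling of communication delays are illustrative assumptions, not the paper's exact decoder:

    def decode(perm, preds, dur, comm, n_proc):
        """Greedy decoder: place tasks in permutation order on the processor
        giving the earliest start; return (makespan, placement)."""
        proc_free = [0.0] * n_proc          # next free time per processor
        finish, placed_on = {}, {}
        for t in perm:                      # perm respects precedence
            best = None
            for p in range(n_proc):
                ready = max([0.0] + [finish[u] + (0 if placed_on[u] == p
                                                  else comm[(u, t)])
                                     for u in preds[t]])
                start = max(ready, proc_free[p])
                if best is None or start < best[0]:
                    best = (start, p)
            start, p = best
            proc_free[p] = start + dur[t]
            finish[t], placed_on[t] = proc_free[p], p
        return max(finish.values()), placed_on

    # Toy DAG: 0 -> 1, 0 -> 2, {1, 2} -> 3, unit communication delays.
    preds = {0: [], 1: [0], 2: [0], 3: [1, 2]}
    dur = {0: 2, 1: 3, 2: 2, 3: 1}
    comm = {(u, v): 1 for u in dur for v in dur}
    print(decode([0, 1, 2, 3], preds, dur, comm, n_proc=2))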
Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. This paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task-priority scheduling list is built, and each task is assigned to the processor that yields the minimum cumulative earliest finish time (EFT), so that task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm offers strong optimization ability, simplicity and feasibility, and fast convergence, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
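For reference, the earliest-finish-time quantity that such list schedulers minimize is standardly defined (HEFT-style) as:

    EFT(t_i, p_j) = w_{i,j} + \max\Bigl( \mathrm{avail}(p_j),\;
        \max_{t_k \in \mathrm{pred}(t_i)} \bigl( AFT(t_k) + c_{k,i} \bigr) \Bigr)

where w_{i,j} is the execution time of task t_i on processor p_j, avail(p_j) the time p_j becomes free, AFT(t_k) the actual finish time of a scheduled predecessor, and c_{k,i} the communication cost (taken as zero when t_k and t_i share a processor).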
Integration of safety-critical tasks with different certification requirements onto a common hardware platform has become a growing tendency in the design of real-time and embedded systems. In the past decade, great efforts have been made to develop techniques for handling uncertainties in task worst-case execution time, quality-of-service, and schedulability of mixed-criticality systems. However, few works take fault tolerance as a design requirement. In this paper, we address the scheduling of fault-tolerant mixed-criticality systems to ensure the safety of tasks at different criticality levels in the presence of transient faults, adopting task re-execution as the fault-tolerance technique. Extensive simulations were performed to validate the effectiveness of our algorithm. Simulation results show that our algorithm yields up to 15.8% and 94.4% improvement in system reliability and schedule feasibility, respectively, as compared to existing techniques, contributing to a safer system.
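The abstract gives no formulas; under the usual transient-fault model with Poisson fault arrivals at rate lambda, the reliability gain from one re-execution of a task of length C can be sketched as:

    R_0 = e^{-\lambda C}, \qquad R_{\text{re-exec}} = 1 - (1 - R_0)^2

so a single re-execution squares the failure probability of the task, at the cost of the extra C time units that the schedulability analysis must absorb.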
The present paper describes a model for task scheduling in cloud computing based on a hybrid self-adaptive learning global search algorithm and firefly algorithm (HSLGSAFA). The proposed hybrid combines the gravitational search algorithm (GSA), which has been applied successfully to task scheduling, augmented with a self-adaptive learning (SL) strategy, with the firefly algorithm (FA). The basic scheme of our approach is to exploit the benefits of both the SLGSA and the firefly algorithm while avoiding their disadvantages. In HSLGSAFA, each dimension of a solution represents a task, and a solution as a whole encodes the priorities of all tasks. The vital issue is how to allocate users' tasks so as to maximize the income of the Infrastructure-as-a-Service (IaaS) provider while guaranteeing Quality of Service (QoS); the generated solution can assure user-level QoS and improve the IaaS provider's credibility and economic benefit. The HSLGSAFA method also governs the hybridization process and a suitable fitness function for the scheduling task. According to the evaluated results, our algorithm consistently outperforms the traditional algorithms.
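For reference, the attraction step of the standard firefly algorithm that the hybrid incorporates moves firefly i toward a brighter firefly j as:

    x_i \leftarrow x_i + \beta_0 \, e^{-\gamma r_{ij}^2} (x_j - x_i) + \alpha \, \epsilon_i

where r_ij is the distance between the two fireflies, beta_0 the attractiveness at zero distance, gamma the light-absorption coefficient, and alpha*epsilon_i a random perturbation; how the hybrid interleaves this step with the GSA update is not detailed in the abstract.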
This paper presents a novel design of a coprocessor that performs hardware-accelerated task scheduling for embedded real-time systems consisting of mixed-criticality real-time tasks. The proposed solution is based on the Robust Earliest Deadline (RED) algorithm and on previously developed hardware architectures for real-time task scheduling. Thanks to the hardware implementation of the scheduler in the form of a coprocessor, the scheduler operations (i.e., instructions) always complete in two clock cycles regardless of the actual, or even maximum, number of tasks in the system. The proposed scheduler was verified using a simplified version of UVM, applying billions of randomly generated instructions as inputs. Chip area costs are evaluated by synthesis for an Intel FPGA Cyclone V and for a 28-nm TSMC ASIC. Three versions of real-time task schedulers were compared: an EDF-based scheduler designed for hard real-time tasks only, a GED-based scheduler, and the proposed RED-based scheduler, which is suitable for tasks of various criticalities. According to the synthesis results, the RED-based scheduler consumes more LUTs and occupies a larger chip area than the original EDF-based scheduler with equivalent parameters. However, the RED-based scheduler handles variations in task execution times better, achieves higher CPU utilization, and can schedule hard real-time, soft real-time, and non-real-time tasks combined in one system, which is not possible with the former algorithms.
The aging effect induced by negative bias temperature instability (NBTI) is a universal issue in electronic equipment. The NBTI aging effect can increase the path delay of a network-on-chip (NoC) device, lowering the processor core's operating frequency and in turn degrading its performance. Under this circumstance, aging-aware task scheduling becomes a complex and challenging problem in advanced multicore systems. This paper presents an aging-aware scheduling method that incorporates the NBTI aging effect into the task scheduling framework for mesh-based NoCs. The proposed method relies on an NBTI aging model to evaluate the degradation of each core's operating frequency and thereby establish the task scheduling model under aging. Taking into account core performance degradation and the communication overheads among cores, we develop a meta-heuristic scheduling strategy based on the particle swarm optimization algorithm to minimize the total execution time of all tasks. Experimental results show that the schedule obtained by the aging-aware algorithm has shorter completion time and higher throughput than the non-aging-aware case: on average, the makespan is reduced by 13.55% and the throughput increased by 21.73% across a variety of benchmark applications.
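The abstract does not reproduce the aging model; a commonly used long-term NBTI form, given here as an illustrative assumption rather than the paper's exact model, predicts a threshold-voltage shift that grows as a fractional power of stress time and a resulting frequency loss via the alpha-power law:

    \Delta V_{th} \propto A \cdot t^{n} \quad (n \approx 1/6), \qquad
    f \propto \frac{(V_{dd} - V_{th})^{\alpha}}{V_{dd}}

so as Delta V_th accumulates on a heavily used core, its attainable frequency f drops, which is exactly the degradation the scheduler must anticipate when mapping tasks.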
Cloud computing has transformed globally networked resources, making data easily available to users, and with the widespread availability of network technologies, user requests increase day by day. Nowadays, the foremost complication in cloud technology is task scheduling: the placement and arrangement of tasks are two important parameters in the cloud domain that determine the Quality of Service (QoS). In this paper, we formulate the minimization of makespan and energy consumption in task scheduling using the Local Pollination-based Gray Wolf Optimizer (LPGWO) algorithm, a hybrid in which the Gray Wolf Optimizer (GWO) and the Flower Pollination Algorithm (FPA) are combined. GWO contributes its best searching factor to increase convergence speed, while FPA distributes the data to the next packet of candidate solutions using the local pollination concept. Chaotic mapping and opposition-based learning (OBL) are used to provide suitable initial candidates for task solutions. The experiments delivered better task scheduling results under both low and high heterogeneity of physical machines, and comparison with the simulation results showed improved convergence with lower makespan and energy consumption.
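For reference, the grey wolf position update that supplies the "best searching factor" mentioned above is standardly written as:

    \vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}, \qquad
    \vec{X}_i = \vec{X}_{\{\alpha,\beta,\delta\}} - \vec{A}_i \cdot \vec{D}_i, \qquad
    \vec{D}_i = \bigl| \vec{C}_i \cdot \vec{X}_{\{\alpha,\beta,\delta\}} - \vec{X}(t) \bigr|

with coefficient vectors A = 2a r_1 - a and C = 2 r_2, where r_1, r_2 are uniform random vectors and a decreases linearly from 2 to 0 over the iterations; how LPGWO interleaves this with FPA's local pollination step is specific to the paper.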
As an important component of computer systems, GPUs are being used ever more widely thanks to their support for general-purpose computing. Beyond performance, their energy consumption and environmental impact have gradually attracted the concern of researchers, computer architects, and developers. Current research considers only single-task scheduling for saving energy and lacks a focus on saving energy by scheduling the tasks as a whole. In view of these shortcomings, we propose a METS (Minimizing Execution Time Slot) approach to reduce energy by rationally allocating tasks across GPUs. It first collects the number of tasks and the corresponding estimated performance information; it then decides, based on the number of tasks, whether to treat the problem as a 0-1 knapsack problem or to use a FIFO method, as sketched below. We conducted experiments on a typical platform to verify the proposed approach. The experimental results show that METS saves on average 8.43% of energy compared with existing approaches, demonstrating that the proposed METS method is effective, reasonable, and feasible.
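A minimal sketch of the 0-1 knapsack reduction that METS falls back to when enough tasks are queued; the capacity and value semantics (e.g., execution-time slots versus estimated per-task benefit) are illustrative assumptions:

    def knapsack_01(weights, values, capacity):
        """Classic DP: best total value selecting items within capacity."""
        best = [0] * (capacity + 1)
        keep = [[False] * (capacity + 1) for _ in weights]
        for i, (w, v) in enumerate(zip(weights, values)):
            for c in range(capacity, w - 1, -1):   # downward: each item once
                if best[c - w] + v > best[c]:
                    best[c] = best[c - w] + v
                    keep[i][c] = True
        chosen, c = [], capacity
        for i in range(len(weights) - 1, -1, -1):  # reconstruct the selection
            if keep[i][c]:
                chosen.append(i)
                c -= weights[i]
        return best[capacity], chosen[::-1]

    # Tasks sized by time slots, valued by estimated energy saving.
    print(knapsack_01(weights=[2, 3, 4, 5], values=[3, 4, 5, 6], capacity=6))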
This paper proposes a cloud computing-based approach to efficiently process the massive data produced in the intelligent machine tool diagnosis flow. By collecting and extracting the vibration, power, and other useful system signals during the machining operation, the cutting-process samples and cutting-gap samples of machine tools can be accurately segmented, in order to construct a set of signal samples that effectively and completely characterizes the level of tool wear. We propose a visual detection method that relies on local threshold segmentation to predict tool wear status: the machine tool image is divided into several small blocks, each block is segmented to obtain its own segmentation threshold (the local threshold of that block), and the detection method then scans the whole image using the maximum local threshold among all blocks, as sketched below. Considering the complicated flow of visual detection and the high volume of machine tool diagnosis data, we further propose a big data processing approach implemented on a cloud computing architecture. By modeling the workflow of the proposed visual detection method as a directed acyclic graph, we develop a scheduling model that minimizes the execution time of massive tool-diagnosis data processing with the available cloud computing resources. An effective metaheuristic based on the artificial bee colony search strategy is developed to solve the formulated scheduling problem. Experimental results on a cloud-based system demonstrate that the visual detection method enhances the accuracy of tool wear detection and that the cloud-based approach significantly reduces the execution time of the tool diagnosis flow by means of distributed computing.
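A minimal sketch of the block-wise thresholding step just described; the block size, the choice of the block mean as the per-block threshold (rather than, say, Otsu's method), and the final binarization are illustrative assumptions:

    import numpy as np

    def max_local_threshold(image, block=32):
        """Split the image into blocks, take each block's threshold (here:
        its mean intensity), and binarize the image with the maximum."""
        h, w = image.shape
        local_thresholds = [
            image[r:r + block, c:c + block].mean()
            for r in range(0, h, block)
            for c in range(0, w, block)
        ]
        t = max(local_thresholds)          # scan with the largest threshold
        return (image >= t).astype(np.uint8), t

    img = (np.random.rand(128, 128) * 255).astype(np.float32)
    mask, t = max_local_threshold(img)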
Reliability and energy efficiency are two conflicting objectives in designing task scheduling for most real-time multiprocessor systems-on-chip (MPSoCs): addressing and improving one of them may degrade the other, and vice versa. In this paper, we examine these challenges with the goal of reaching an optimal state of energy consumption and reliability. We present a novel scheduling technique that adapts to the limitations of real-time systems while achieving near-optimal energy consumption and reliability, by minimizing the overlap of tasks and adjusting both the processor speeds and the number of backups of each task. The proposed scheme reduces energy consumption on average by 11% to 42% compared to previous state-of-the-art techniques while keeping reliability at a high level.
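The trade-off the abstract describes is usually formalized with a frequency-dependent power and fault-rate model; one common form, given here as an assumption rather than the paper's exact model, is:

    E = \sum_i C_{\mathrm{eff}} \, V_i^2 \, f_i \, t_i, \qquad
    \lambda(f) = \lambda_0 \cdot 10^{\frac{d\,(1 - f)}{1 - f_{\min}}}

so lowering frequency and voltage saves energy but raises the transient fault rate lambda(f), which a scheme like this one counteracts by tuning the number of backups per task.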