Accurately predicting population patterns is essential to spatial planning along a township's development trajectory. This paper examines the basic principles and application domains of population forecasting methods in urban spatial planning and describes the applicability of a genetically evolved BP neural network to population-size prediction. The study first uses a genetic algorithm to refine the initial weights and structure of the BP neural network, improving its accuracy and generalization ability on demographic data. Empirical results show that the method delivers superior predictive performance on multiple township demographic data sets, especially under complex population dynamics. Benchmarked against traditional forecasting models, it also shows significant gains in accuracy, stability, and adaptability. These results suggest that combining GA-driven evolution with BP neural networks provides a more robust and precise tool for population prediction.
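The GA-over-BP idea above can be sketched as follows: a plain genetic algorithm searches for good initial weights of a small one-hidden-layer network. The data, network size, and GA settings below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for township demographic features and population target.
X = rng.uniform(0, 1, (64, 3))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]).reshape(-1, 1)

H = 5                         # hidden units
DIM = 3 * H + H + H + 1       # W1, b1, W2, b2 flattened into one chromosome

def unpack(v):
    i = 0
    W1 = v[i:i + 3 * H].reshape(3, H); i += 3 * H
    b1 = v[i:i + H];                   i += H
    W2 = v[i:i + H].reshape(H, 1);     i += H
    return W1, b1, W2, v[i:]

def mse(v):
    W1, b1, W2, b2 = unpack(v)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2   # BP-style forward pass
    return float(np.mean((pred - y) ** 2))

# Generational GA: elitism, tournament selection, uniform crossover, mutation.
pop = rng.normal(0, 1, (40, DIM))
for gen in range(60):
    fit = np.array([mse(ind) for ind in pop])
    nxt = [pop[fit.argmin()].copy()]                # keep the elite unchanged
    while len(nxt) < len(pop):
        a = pop[min(rng.integers(0, 40, 3), key=lambda i: fit[i])]
        b = pop[min(rng.integers(0, 40, 3), key=lambda i: fit[i])]
        child = np.where(rng.random(DIM) < 0.5, a, b)   # uniform crossover
        child += rng.normal(0, 0.1, DIM) * (rng.random(DIM) < 0.2)
        nxt.append(child)
    pop = np.array(nxt)

best = min(pop, key=mse)
print(f"best initial-weight MSE: {mse(best):.5f}")
```

In the full method these evolved weights would seed ordinary backpropagation training, which then fine-tunes from a better starting point than random initialization.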
The construction of medical centers demands the integration of advanced technologies for accurate cost estimation, resource allocation, and workflow optimization, making these tasks increasingly intricate. Artificial intelligence (AI) plays a transformative role by enabling sophisticated path planning and prediction. AI-driven methodologies, including machine learning (ML) algorithms and intelligent path-planning systems, were explored to optimize the construction process and expenditure assessment. Datasets from 200 medical center construction projects were used, with preprocessing comprising mean imputation for missing data, min-max scaling for normalization, and dimensionality reduction via principal component analysis (PCA). AI-based path-planning techniques were applied to optimize resources, reduce construction delays, and streamline site logistics. Furthermore, the study developed the Smart Tasmanian Devil-Enhanced Adaptive Gradient Boosting System (STD-AGB), which reached a direct cost-assessment accuracy of 97% in the early stage of project initiation. Among the 12 critical factors affecting cost and construction efficiency, site preparation and electrical works showed standard percentage errors of 8.02% and 5.11%, respectively. These results underscore the potential role of AI in refining path planning so that improved outcomes can be achieved and innovation in the intelligent construction of medical centers can be encouraged. A robust framework for integrating AI into healthcare infrastructure development, guaranteeing optimal workflows and reduced risk, was further established.
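The three preprocessing stages named above (mean imputation, min-max scaling, PCA) can be sketched with plain numpy; the data here is random and the feature count and component count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for the 200 project records (8 cost-driver features).
data = rng.normal(50, 10, (200, 8))
data[rng.random(data.shape) < 0.05] = np.nan   # ~5% missing entries

# 1) Mean imputation: replace NaNs with the column mean.
col_mean = np.nanmean(data, axis=0)
filled = np.where(np.isnan(data), col_mean, data)

# 2) Min-max scaling to [0, 1].
lo, hi = filled.min(axis=0), filled.max(axis=0)
scaled = (filled - lo) / (hi - lo)

# 3) PCA via SVD of the centred matrix, keeping k principal components.
centred = scaled - scaled.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
k = 4
reduced = centred @ Vt[:k].T               # (200, 4) low-dimensional features

explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by {k} components: {explained:.2%}")
```

The reduced matrix would then feed the downstream cost-assessment model in place of the raw features.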
Resource allocation is a pivotal concern in swarm robotic systems. This study delves into the chaotic dynamics within the yield game process of resource-providing robots in such systems, analyzing system bifurcation and stability across multiple parameters. We propose a market-based model for swarm robotic systems, employing game theory to formulate a yield game model among resource-providing robots. Through numerical analysis and simulation, we identify stable regions of system parameters and present bifurcation diagrams under varied parameters. Our findings indicate that increasing cost parameters or decreasing decision parameters stabilizes the yield and utility of resource-providing robots, while the opposite destabilizes them. Moreover, we examine the intricate influence of cost parameters on system stability by comparing linear and exponential resource cost functions. Results reveal that exponential cost functions lead to heightened system chaos and parameter sensitivity. These insights offer crucial theoretical groundwork and practical directions for the development and deployment of swarm robot systems.
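The bifurcation behaviour described above (stable yield for small decision parameters, chaos for large ones) can be illustrated with a one-dimensional gradient-adjustment yield map. This is an assumed reduction for illustration, not the paper's exact model; a, c are demand/cost parameters and v plays the role of the decision (adjustment-speed) parameter.

```python
import numpy as np

a, c = 4.0, 1.0   # linear demand intercept and unit resource cost (assumed)

def step(q, v):
    # robot adjusts yield q in proportion to marginal profit a - c - 2q
    return q + v * q * (a - c - 2.0 * q)

def attractor(v, burn=500, keep=50, q0=0.3):
    """Iterate past the transient, then sample the long-run attractor."""
    q = q0
    for _ in range(burn):
        q = step(q, v)
    pts = []
    for _ in range(keep):
        q = step(q, v)
        pts.append(q)
    return pts

# As v grows, the fixed point q* = (a - c) / 2 loses stability through a
# period-doubling cascade into chaos, mirroring the abstract's finding that
# larger decision parameters destabilize the yield.
for v in (0.3, 0.7, 0.9):
    pts = attractor(v)
    print(f"v={v:.1f}: {len(set(np.round(pts, 6)))} distinct attractor points")
```

Sweeping v finely and plotting the sampled points against v would give the bifurcation diagram; the same loop with an exponential cost in `step` would show the heightened sensitivity the abstract reports.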
Cloud computing has attracted significant attention because of the growing service demands of businesses that outsource computationally intensive tasks to data centers. Meanwhile, the infrastructure of a data center comprises hardware resources that consume a great deal of energy and release harmful levels of carbon dioxide. Cloud data centers demand massive amounts of electrical power as modern applications and organizations grow. To prevent resource waste and promote energy efficiency, virtual machines (VMs) must be dispersed over numerous physical machines (PMs) in a cloud data center. The actual allocation of VMs to PMs can involve complex decision-making, such as considering resource utilization, load balancing, performance requirements, and system constraints. Advanced techniques, such as intelligent placement algorithms or dynamic resource allocation, may be employed to optimize resource utilization and achieve efficient VM distribution across multiple PMs. Cloud service providers aim to lower operational expenses by reducing energy consumption while offering clients competitive services. Minimizing large-scale data center power usage while maintaining quality of service (QoS), especially for social media-based cloud computing systems, is crucial. Consolidating VMs has been highlighted as a promising method for improving resource efficiency and saving energy in data centers. This research provides a deep learning-augmented reinforcement learning (RL)-based, energy-efficient and QoS-aware virtual machine consolidation (VMC) approach to address these challenges. The proposed deep learning modified reinforcement learning-virtual machine consolidation (DLMRL-VMC) model can motivate both cloud providers and customers to distribute cloud infrastructure resources to achieve high CPU utilization and good energy efficiency as measured by power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE).
The suggested model, DLMRL-VMC, offers a VM placement approach based on resource usage and dynamic energy consumption to determine the best-matched host, together with a VM selection strategy, Average Utilization Migration Time (AUMT). Based on AUMT, deep learning modified reinforcement learning (DLMRL) chooses a VM with a low average CPU utilization and a short migration time. The DLMRL-VMC energy-efficient resource allocation strategy is evaluated on CloudSim VM traces to attain good PUE and CPU utilization.
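The AUMT selection criterion (low average CPU utilization, short migration time) can be sketched as a simple scoring rule. The normalised-sum score, the migration-time estimate, and the sample VMs below are assumptions for illustration; the paper's DLMRL agent learns this choice rather than computing it directly.

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    avg_cpu_util: float    # average CPU utilisation over the observation window
    ram_mb: float          # memory to copy during live migration
    bandwidth_mbps: float  # available migration bandwidth

    @property
    def migration_time(self):
        # first-order estimate: RAM size (bits) / migration bandwidth (seconds)
        return self.ram_mb * 8 / self.bandwidth_mbps

def aumt_select(vms):
    """Pick the migration candidate with the lowest combined score of
    average utilisation and migration time (both min-max normalised)."""
    utils = [v.avg_cpu_util for v in vms]
    times = [v.migration_time for v in vms]
    def norm(x, xs):
        lo, hi = min(xs), max(xs)
        return 0.0 if hi == lo else (x - lo) / (hi - lo)
    return min(vms, key=lambda v: norm(v.avg_cpu_util, utils)
                                  + norm(v.migration_time, times))

vms = [VM("vm1", 0.72, 4096, 1000),
       VM("vm2", 0.15, 1024, 1000),
       VM("vm3", 0.40, 8192, 1000)]
print(aumt_select(vms).name)   # vm2: lowest utilisation, shortest migration
```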
The deployment of fog computing has not only helped in task offloading for the end-users toward delay-sensitive task provisioning but also reduced the burden for cloud back-end systems to process variable workloads arriving from the user equipment. However, due to the constraints on the resources and computational capabilities of the fog nodes, processing the computational-intensive task within the defined timelines is highly challenging. Also, in this scenario, offloading tasks to the cloud creates a burden on the upload link, resulting in high resource costs and delays in task processing. Existing research studies have considerably attempted to handle the task allocation problem in fog–cloud networks, but the majority of the methods are found to be computationally expensive and incur high resource costs with execution time constraints. The proposed work aims to balance resource costs and time complexity by exploring collaboration among host machines over fog nodes. It introduces the concept of task scheduling and optimal resource allocation using coalition formation methods of game theory and pay-off computation. The work also encourages the formation of coalitions among host machines to handle variable traffic efficiently. Experimental results show that the proposed approach for task scheduling and optimal resource allocation in fog computing outperforms the existing system by 56.71% in task processing time, 47.56% in unused computing resources, 8.33% in resource cost, and 37.2% in unused storage.
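The coalition-formation and pay-off computation idea can be sketched with a toy characteristic function over host machines. The capacities, the synergy bonus, and the use of the Shapley value as the pay-off division are illustrative assumptions, not the paper's exact model.

```python
from itertools import permutations

# Hypothetical characteristic function: the value of a coalition of hosts is
# its total capacity plus an assumed collaboration bonus from load sharing.
capacity = {"h1": 4.0, "h2": 6.0, "h3": 10.0}

def value(coalition):
    s = frozenset(coalition)
    if not s:
        return 0.0
    return sum(capacity[h] for h in s) + 2.0 * (len(s) - 1)

def shapley(players):
    """Exact Shapley value: average marginal contribution over all orderings
    (one standard pay-off division for coalition games)."""
    pay = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        seen = set()
        for p in order:
            pay[p] += value(seen | {p}) - value(seen)
            seen.add(p)
    return {p: pay[p] / len(orders) for p in players}

phi = shapley(list(capacity))
print({p: round(x, 3) for p, x in phi.items()})
```

The efficiency property (pay-offs sum to the grand-coalition value) is what makes such a division a sensible incentive for hosts to join the coalition and handle variable traffic together.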
The goal of this work is to study the portfolio problem, which consists in finding a good combination of multiple heuristics given a set of problem instances to solve. We are interested in a parallel context where the resources are assumed to be discrete and homogeneous, and where it is not possible to allocate a given resource (processor) to more than one heuristic. The objective is to minimize the average completion time over the whole set of instances. In this paper, we extend existing analyses of the problem. More precisely, we provide a new complexity result for the restricted version of the problem, then generalize previous approximation schemes; in particular, we improve them using a guess-approximation technique. Experimental results are also provided using a benchmark of instances on SAT solvers.
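For small cases, the portfolio objective can be evaluated by brute force. The runtime matrix and the linear speed-up assumption below are illustrative; an instance is considered solved as soon as the fastest running heuristic finishes it.

```python
from itertools import product

# Hypothetical runtimes t[h][i]: time heuristic h needs on instance i with one
# processor; with p processors we assume (for illustration) linear speed-up t/p.
t = [[10.0, 2.0, 8.0],   # heuristic 0
     [3.0, 9.0, 4.0]]    # heuristic 1
m = 4                    # homogeneous, indivisible processors

def avg_completion(alloc):
    # each processor belongs to exactly one heuristic; an instance finishes
    # when the fastest heuristic that received processors solves it
    per_instance = []
    for i in range(len(t[0])):
        times = [t[h][i] / alloc[h] for h in range(len(t)) if alloc[h] > 0]
        per_instance.append(min(times))
    return sum(per_instance) / len(per_instance)

best = min((a for a in product(range(m + 1), repeat=len(t)) if sum(a) == m),
           key=avg_completion)
print(best, round(avg_completion(best), 3))
```

Enumeration grows exponentially with the number of heuristics, which is why the paper's approximation schemes matter beyond toy sizes.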
In this paper, we address the problem of k-out-of-ℓ exclusion, a generalization of the mutual exclusion problem, in which there are ℓ units of a shared resource and any process can request up to k units (1 ≤ k ≤ ℓ). A protocol is self-stabilizing if, starting from an arbitrary configuration, whether an initial state or one resulting from corruption, the protocol resumes normal behavior within finite time. We propose the first deterministic self-stabilizing distributed k-out-of-ℓ exclusion protocol in message-passing systems for asynchronous oriented tree networks that assumes bounded local memory for each process.
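The k-out-of-ℓ resource semantics can be sketched with a centralised counting monitor. Note this captures only the specification (ℓ units, at most k per process), not the paper's contribution: the self-stabilizing, message-passing, tree-network protocol itself.

```python
import threading

class KOutOfL:
    """Centralised sketch of k-out-of-ℓ exclusion semantics: ℓ units exist
    and a process may hold between 1 and k of them at a time."""
    def __init__(self, ell, k):
        self.free, self.k = ell, k
        self.cv = threading.Condition()

    def acquire(self, units):
        assert 1 <= units <= self.k
        with self.cv:
            while self.free < units:   # block until enough units are free
                self.cv.wait()
            self.free -= units

    def release(self, units):
        with self.cv:
            self.free += units
            self.cv.notify_all()       # wake waiters to re-check availability

res = KOutOfL(ell=5, k=3)
held = []

def worker(units):
    res.acquire(units)
    held.append(units)
    res.release(units)

threads = [threading.Thread(target=worker, args=(u,)) for u in (3, 2, 2, 1)]
for th in threads: th.start()
for th in threads: th.join()
print(sorted(held), res.free)
```

A self-stabilizing distributed version must reach this behaviour from any corrupted state without a central monitor, which is what makes the problem hard.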
The Resource Allocation Problem with Time-Dependent Penalties (RAPTP) is a variant of uncapacitated resource allocation problems, generally referred to as uncapacitated facility allocation or uncapacitated facility location problems (UFLP). The work in this paper is motivated by that of Du, Lu and Xu [7], who considered facility location problems with submodular penalties and presented a 3-approximation primal-dual algorithm. This paper assumes that each unallocated demand point incurs a penalty that increases over time, represented by a function x(ti, pi), where ti and pi are the elapsed time and priority of demand point di. Since the problem is considered for emergency service allocation, every demand point should be allocated to some facility or resource within a stipulated time limit, beyond which it may lose its purpose; the penalty incurred by a demand point is therefore counted only up to that threshold and is assumed to remain constant afterwards. By exploiting the properties of time-dependent penalties, a 4-approximation primal-dual algorithm based on the LP framework is proposed, which is the first constant-factor approximation algorithm for RAPTP.
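The capped, time-dependent penalty shape can be written out directly. The linear growth rate and the particular form below are assumptions for illustration; the abstract only fixes that x(ti, pi) grows with elapsed time, scales with priority, and saturates at the threshold.

```python
# Hypothetical time-dependent penalty x(t_i, p_i): grows linearly with elapsed
# time t_i at a rate set by priority p_i, then stays constant once the
# stipulated threshold T is passed, as assumed in the abstract.
T = 30.0   # time limit after which an unallocated demand point loses purpose

def penalty(t_i, p_i, rate=1.0):
    return rate * p_i * min(t_i, T)

print(penalty(10, 2))   # still growing with elapsed time
print(penalty(50, 2))   # capped at the threshold value
```

The saturation is what the primal-dual analysis exploits: each demand point's dual contribution is bounded, enabling the constant approximation factor.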
Resource constrained scheduling problems are concerned with the allocation of limited resources to tasks over time. The solution to these problems is often a sequence, a resource allocation, and a schedule. When human workers are incorporated as a renewable resource, the allocation is defined as the number of workers assigned to perform each task. In practice, however, this solution does not adequately address how individual workers are to be assigned to tasks. This paper, therefore, provides mathematical models and heuristic techniques for solving this multi-period precedence constrained assignment problem. Results of an extensive numerical investigation are also presented.
We introduce a method for maximizing the run-out time for a system where the number of components available to make repairs is finite, and some of the components may be substituted for one another. The objective is to maximize the time at which the earliest run-out of any component occurs. The approach proposed here is to find the minimum time horizon such that no feasible allocation exists for a related linear programming problem. An adaptive version of this algorithm is proposed as a heuristic for the stochastic problem.
We consider a single-machine scheduling problem in which the processing time of a job is a function of its position in a sequence, its starting time, and its resource allocation. The objective is to find the optimal sequence of jobs and the optimal resource allocation separately. We consider two goals: minimizing a cost function comprising makespan, total completion time, total absolute differences in completion times, and total resource cost; and minimizing a cost function comprising makespan, total waiting time, total absolute differences in waiting times, and total resource cost. The problem is modeled as an assignment problem and can thus be solved in polynomial time. Some extensions of the problem are also shown.
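The assignment-problem modelling used in such positional scheduling results can be sketched for the total-completion-time term: with an assumed learning-effect model where job j in position r takes p_j * r^a, the cost of that placement is (n - r + 1) * p_j * r^a, and an optimal sequence is an optimal assignment of jobs to positions (here found by brute force on a tiny instance).

```python
from itertools import permutations

# Positional (learning-effect) model: job j in position r takes p[j] * r**a.
p = [5.0, 3.0, 8.0, 2.0]    # hypothetical basic processing times
a = -0.2                    # learning index: actual times shrink later on
n = len(p)

# Total completion time = sum over positions r of (n - r + 1) * actual time,
# so sequencing is an assignment problem with the cost below.
def cost(j, r):             # r is the 1-based position
    return (n - r + 1) * p[j] * r ** a

best = min(permutations(range(n)),
           key=lambda seq: sum(cost(j, r + 1) for r, j in enumerate(seq)))
print(best)
```

In the papers, the same cost matrix is fed to a polynomial-time assignment algorithm (e.g. Hungarian) instead of enumeration; here the positional multipliers are decreasing, so the optimum pairs longer jobs with later positions.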
We introduce a search game in which a hider has partial information about a searcher's resources. The hider can be a terrorist trying to hide and the searcher can be special forces trying to catch him. The terrorist does not know the number of forces involved in the search, only its distribution. We model this situation as a noncooperative game. In a related setup, motivated by wireless network applications, the terrorist inserts a malicious node into a network, reducing network connectivity and thereby undermining its security, while the network operator applies appropriate measures to detect malicious nodes and maintain network performance. We investigate how the hider's information about the total search resources available can influence the behavior of both players. For the case where the distribution has two mass points, we prove that the game has a unique equilibrium; moreover, we explicitly describe this equilibrium, its structure, and some other properties.
We consider a single-machine common due-window assignment scheduling problem, in which the processing time of a job is a function of its position in a sequence and its resource allocation. The window location and size, along with the associated job schedule that minimizes a certain cost function, are to be determined. This function is made up of costs associated with the window location, window size, earliness, and tardiness. For two different processing time functions, we provide a polynomial time algorithm to find the optimal job sequence and resource allocation, respectively.
This paper considers single-machine scheduling with learning effect, deteriorating jobs and convex resource dependent processing times, i.e., the processing time of a job is a function of its starting time, its position in a sequence and its convex resource allocation. The objective is to find the optimal sequence of jobs and the optimal convex resource allocation separately to minimize a cost function containing makespan, total completion (waiting) time, total absolute differences in completion (waiting) times and total resource cost. It is proved that the problem can be solved in polynomial time.
In this paper, flow shop resource allocation scheduling with learning effects and position-dependent weights in a two-machine no-wait setting is considered. Under common due date assignment and slack due date assignment rules, a bi-criteria analysis is provided. Optimality properties and polynomial time algorithms are developed to solve four versions of the problem. For a special case, it is proved that the problem can be optimally solved by a lower-order algorithm.
In a centralized decision-making environment, the central unit supervises all the operating units' production activities and, from a global viewpoint, pursues the overall goals of the entire organization by allocating available resources to them. However, current models of resource allocation under centralized decision-making commonly exhibit non-unique optimal solutions and neglect the competition and trade-offs among the DMUs, each of which seeks to maximize its own aggregated output. This makes the allocation result imbalanced and unacceptable to the DMUs. To overcome this problem, this paper introduces Nash bargaining game theory to develop a new centralized resource allocation model that not only accounts for the overall goals of the organization from a global perspective, but also considers competition and trade-offs among all the DMUs. Finally, an empirical example demonstrates the applicability of the proposed approach. The results show that our Nash bargaining game model not only guarantees the uniqueness of the optimal resource allocation, but also improves its balance, making it acceptable to all the DMUs.
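The Nash bargaining principle behind such models can be sketched in its simplest form: split a resource so as to maximize the product of each party's gain over its disagreement point. The two-DMU setup, linear utilities, and numbers below are illustrative assumptions, far simpler than a DEA-based model.

```python
# Nash bargaining sketch for splitting R resource units between two DMUs with
# hypothetical linear utilities u_i(x) = x and disagreement points d_i
# (what each DMU secures if no agreement is reached).
R, d1, d2 = 10.0, 2.0, 3.0

def nash_product(x1):
    x2 = R - x1
    return max(x1 - d1, 0.0) * max(x2 - d2, 0.0)

# Grid search over the split; the bargaining solution maximizes the product
# of gains over the disagreement point, which also makes it unique here.
grid = [i / 1000 * R for i in range(1001)]
x1_star = max(grid, key=nash_product)

# Closed form for this linear case: each DMU gets its disagreement payoff
# plus an equal share of the surplus R - d1 - d2.
surplus = R - d1 - d2
print(x1_star, d1 + surplus / 2)
```

The uniqueness of the maximizer of the (strictly concave) Nash product is the mechanism by which the paper's model avoids the non-unique allocations of earlier centralized models.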
In this paper, a due-window assignment flow shop scheduling problem with learning effects and resource allocation is considered. In a two-machine no-wait flow shop setting, the goal is to determine the due-window starting time, the due-window size, the optimal resource allocation, and the optimal sequence of all jobs. A bicriteria analysis of the problem is provided, where the first criterion minimizes the scheduling cost (including the earliness-tardiness penalty, due-window starting time, and due-window size of all jobs) and the second minimizes the resource consumption cost. It is shown that four versions combining the scheduling cost and resource consumption cost can be solved in polynomial time.
Scheduling problems with variable processing times and past-sequence-dependent delivery times are considered on a single machine. The delivery time of a job depends on its waiting time before processing. A job's actual processing time depends on its position in the sequence, its starting time, and its allocation of non-renewable resources. Under a linear resource consumption function, the goal is to determine the optimal sequence and optimal resource allocation such that the sum of the scheduling cost and total resource consumption cost is minimized. Under a convex resource consumption function, three versions of the scheduling cost and total resource consumption cost are discussed. We prove that all four versions can be solved in polynomial time. Applications of the scheduling cost are also given, involving the makespan, total completion time, total absolute differences in completion times (TADC), and total absolute differences in waiting times (TADW).
In this study, the due-window assignment single-machine scheduling problem with resource allocation is considered, where the processing time of a job is controllable as a linear or convex function of the amount of resource allocated to it. Under common due-window and slack due-window assignments, our goal is to determine the optimal sequence of all jobs, the due-window start time, the due-window size, and the optimal resource allocation such that the sum of the scheduling cost (including the weighted earliness/tardiness penalty, weighted number of early and tardy jobs, weighted due-window start time, and due-window size) and the resource consumption cost is minimized. We analyze the optimality properties and provide polynomial-time solutions for the problem under four versions of due-window assignment and resource allocation function.
We consider a single machine scheduling problem with slack due date assignment in which the actual processing time of a job is determined simultaneously by its position in the sequence, its resource allocation function, and a rate-modifying activity. The problem is to determine the optimal job sequence, the optimal common flow allowance, the optimal amount of resource allocation, and the position of the rate-modifying activity such that one of two constrained objective cost functions is minimized: the total penalty cost (comprising earliness, tardiness, and common flow allowance) subject to an upper bound on the total resource cost, or the total resource cost subject to an upper bound on the total penalty cost. We show that both optimization problems can be solved in polynomial time.