With the explosion of online gaming platforms and the resulting deluge of user information, keeping user data secure and private is more important than ever. To protect gaming ecosystems and maintain user confidence, Heuristic Predictive Modeling provides a proactive security strategy that enables early detection and mitigation of potential risks. The ever-changing nature of games, the wide variety of user interactions, and the constantly evolving strategies of cybercriminals pose unique challenges for data management and security in modern gaming environments. This research proposes Heuristic Predictive Modeling for Gaming Security (HPM-GS), a system that uses advanced algorithms and machine learning techniques to analyze gaming data in real time and detect trends and anomalies that could indicate security breaches. With HPM-GS, gaming platforms can keep their users safe and secure by anticipating and proactively addressing security threats. Several areas of gaming security can benefit from HPM-GS, including user authentication, cheat detection, fraud prevention, and incident response. Incorporating HPM-GS into existing security frameworks allows gaming platforms to strengthen their defenses and reduce risks efficiently, enhancing user experience and platform reliability. Extensive simulation studies assess the effectiveness of HPM-GS in gaming security: its performance metrics, such as detection accuracy, false positive rate, and response time, are evaluated on real-world datasets and simulated attack scenarios. The simulation findings show that HPM-GS is an effective solution for protecting gaming environments from cyber-attacks. HPM-GS is a proactive, elastic method for managing and securing gaming application data. The purpose of this research is to highlight the potential of HPM-GS to improve the security posture of online gaming platforms and to give players a safer and more pleasant gaming experience. This is accomplished by addressing the significance of HPM-GS, potential difficulties, proposed techniques, implementations, and simulation analysis.
In this paper we present a new parallel Iterative Deepening A* (IDA*) algorithm for combinatorial optimization: the imprecise IDA* algorithm. The algorithm employs inadmissible heuristics, yet it can obtain approximate solutions that are arbitrarily close to the optimal ones. In addition, it achieves a linear speedup. We report an experimental study on two computationally difficult problems, the quadratic assignment problem and the generalized assignment problem; our algorithm is effective on instances of these problems of reasonable size.
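The abstract does not reproduce the algorithm; as a hedged illustration of the underlying idea, a serial IDA* driven by an inflated (hence inadmissible) heuristic w·h still returns solutions whose cost is at most w times optimal, approaching optimal as w → 1. A minimal sketch follows; the graph `GRAPH`, heuristic table `H`, and weight are illustrative assumptions, and the parallelization is omitted.

```python
# Minimal sketch: IDA* with an inadmissible (weighted) heuristic.
# With f(n) = g(n) + W*h(n) and h an admissible lower bound, the returned
# solution cost is at most W times optimal -- arbitrarily close as W -> 1.
import math

GRAPH = {  # node -> [(neighbor, edge_cost), ...]; illustrative only
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('G', 5)],
    'B': [('G', 1)],
    'G': [],
}
H = {'S': 3, 'A': 2, 'B': 1, 'G': 0}  # admissible lower bounds

def ida_star(start, goal, weight=1.2):
    def search(path, g, bound):
        node = path[-1]
        f = g + weight * H[node]       # inflated heuristic => inadmissible
        if f > bound:
            return f, None             # cut off; report candidate next bound
        if node == goal:
            return g, list(path)
        nxt = math.inf
        for nb, cost in GRAPH[node]:
            if nb in path:             # avoid cycles on the current path
                continue
            path.append(nb)
            t, sol = search(path, g + cost, bound)
            path.pop()
            if sol is not None:
                return t, sol
            nxt = min(nxt, t)
        return nxt, None

    bound = weight * H[start]
    while True:                        # iterative deepening on the f-bound
        t, sol = search([start], 0, bound)
        if sol is not None:
            return t, sol              # cost <= weight * optimal
        if t == math.inf:
            return math.inf, None
        bound = t

print(ida_star('S', 'G'))  # -> (4, ['S', 'A', 'B', 'G'])
```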
We analyze the heuristic proposed by Jung [3] for partitioning a 3D phantom into homogeneous cuboids. We show that the 2D version of this heuristic generates the minimum number of rectangles when partitioning non-degenerate 2D phantoms. However, it does not generate optimal partitions of degenerate 2D phantoms; in fact, it may generate more than three times as many rectangles as necessary. The 3D heuristic of [3] does not generate optimal partitionings of simple 3D phantoms either, and we show that slicing partitions of simple 3D phantoms are optimal. Our analysis suggests a new heuristic for 3D phantoms, which generated one-third as many cuboids as the heuristic of [3] when partitioning a 3D CT-scan phantom, and about one-fourth as many on randomly generated 3D phantoms.
The problem of nonpreemptively scheduling a set of n independent jobs with identical release times so as to minimize the number of late jobs is considered. It is known that the problem can be solved in O(n log n) time, by an algorithm due to Moore, for a single processor, and that it becomes NP-hard for two or more identical processors, even if all jobs have identical due dates. In this paper we give a fast heuristic, based on Moore’s algorithm, for the multiprocessor case. Like Moore’s algorithm, our heuristic admits an O(n log n) implementation. It is shown that the performance ratio of the heuristic is 4/3 for two identical processors, where the performance ratio is defined as the least upper bound of the ratio of the number of on-time jobs in an optimal schedule to that in the schedule generated by the heuristic.
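For reference, the single-processor building block can be sketched as follows. This is the classical Moore–Hodgson procedure, not the paper's multiprocessor heuristic, and the job data are illustrative.

```python
# Moore-Hodgson: maximize the number of on-time jobs on one machine,
# i.e. minimize the number of late jobs. Runs in O(n log n).
import heapq

def moore_hodgson(jobs):
    """jobs: list of (processing_time, due_date). Returns max #on-time jobs."""
    heap, t = [], 0
    for p, d in sorted(jobs, key=lambda j: j[1]):  # earliest-due-date order
        heapq.heappush(heap, -p)                   # max-heap via negation
        t += p
        if t > d:                      # current job would finish late:
            t += heapq.heappop(heap)   # drop the longest scheduled job
    return len(heap)

print(moore_hodgson([(2, 3), (2, 5), (3, 6), (4, 8)]))  # -> 3 on-time jobs
```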
This paper presents a variant of Vogel's approximation method (VAM) for transportation problems. The importance of determining efficient solutions for large transportation problems is borne out by many practical problems in industry, the military, etc. With this motivation, a few variants of VAM incorporating the total opportunity cost (TOC) concept were investigated to obtain fast and efficient solutions. Computational experiments were carried out to evaluate these variants. The quality of the solutions indicates that the basic version of VAM coupled with total opportunity cost (called VAM–TOC) yields a very efficient initial solution: in these experiments, the VAM–TOC approach yielded the optimal solution about 20% of the time on average, and a solution very close to optimal (within 0.5% of optimality) about 80% of the time. The CPU time required for the tested problem instances was very small (on average, less than 10 s on a 200 MHz Pentium machine with 64 MB RAM).
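A minimal sketch of the idea follows: classical VAM penalties applied to a TOC matrix, with the final cost evaluated against the original costs. The exact TOC construction shown (row opportunity cost plus column opportunity cost per cell) is our reading of the concept, not necessarily the paper's formulation, and the data are illustrative.

```python
import numpy as np

def toc_matrix(c):
    # Total opportunity cost: row opportunity costs + column opportunity costs
    return (c - c.min(axis=1, keepdims=True)) + (c - c.min(axis=0, keepdims=True))

def penalty(line):
    # VAM penalty of a row/column: gap between its two smallest live costs
    finite = np.sort(line[np.isfinite(line)])
    if len(finite) >= 2:
        return finite[1] - finite[0]
    return finite[0] if len(finite) == 1 else -1.0

def vam(cost, supply, demand):
    c = cost.astype(float).copy()
    s, d = supply.astype(float).copy(), demand.astype(float).copy()
    alloc = np.zeros_like(c)
    while s.sum() > 1e-9:
        row_pen = np.array([penalty(c[i, :]) for i in range(c.shape[0])])
        col_pen = np.array([penalty(c[:, j]) for j in range(c.shape[1])])
        if row_pen.max() >= col_pen.max():
            i = int(row_pen.argmax()); j = int(np.argmin(c[i, :]))
        else:
            j = int(col_pen.argmax()); i = int(np.argmin(c[:, j]))
        q = min(s[i], d[j])                 # ship as much as possible
        alloc[i, j] = q; s[i] -= q; d[j] -= q
        if s[i] <= 1e-9: c[i, :] = np.inf   # retire exhausted row
        if d[j] <= 1e-9: c[:, j] = np.inf   # retire exhausted column
    return alloc

cost = np.array([[20, 22, 17, 4], [24, 37, 9, 7], [32, 37, 20, 15]])
supply = np.array([120, 70, 50]); demand = np.array([60, 40, 30, 110])
alloc = vam(toc_matrix(cost.astype(float)), supply, demand)  # VAM on TOC matrix
print(alloc, (alloc * cost).sum())          # evaluate with the original costs
```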
In this paper, we propose an assignment-based local search method for solving vehicle routing problems. This method is a multi-route improvement algorithm that operates on several routes at a time. To evaluate its performance, we carry out extensive computational experiments on a set of benchmark problems. The results show that the proposed method, when coupled with metaheuristics such as simulated annealing, is comparable with other efficient heuristic methods proposed in the literature.
The Vehicle Routing Problem with Backhauling deals with supplying finished goods from a depot to a number of delivery points and picking up returnable items to bring back to the depot, using a fleet of trucks. Traditionally, the objective has been to determine truck routes that minimize the total number of trucks and/or the total distance traveled or total route cost. Most papers in the literature deal with problems in which the linehaul customers (having a demand for finished goods) and the backhaul customers (having items to be returned to the depot) are distinct, and a customer may be visited by at most one truck, limiting the demand and returns at a location to the capacity of the truck. In this paper, we allow the linehaul and backhaul customers to be the same, leading to simultaneous delivery and pickup at a customer location, and we place no restriction on the quantity demanded at (or to be returned from) a customer location. A customer may therefore be visited by more than one truck, and more than once by the same truck. We develop a Mixed Integer Linear Programming (MILP) formulation of the problem and a route construction heuristic. The heuristic averaged 80 ms over the 110 problems tested, and in 78 of them the heuristic costs were either equal to the optimal costs or at most equal to the upper bounds on the optimal costs obtained after running the optimization package for 30 min. Optimal solutions were obtained for 28 problems in an average time of 295 ms; the heuristic matched the optimal solutions for 22 of these problems in an average time of 71 ms.
In the single machine scheduling problem with job delivery to minimize makespan, jobs are processed on a single machine and delivered by a capacitated vehicle to their respective customers. We first consider the special case with a single customer, that is, all jobs have the same transportation time. Chang and Lee (2004) proved that this case is strongly NP-hard. They also provided a heuristic with the worst-case performance ratio 5/3, and pointed out that no heuristic can have a worst-case performance ratio less than 3/2 unless P = NP. In this paper, we provide a new heuristic which has the best possible worst-case performance ratio 3/2. We also consider an extended version in which the jobs have non-identical transportation times and the transportation time of a delivery batch is defined as the maximum transportation time of the jobs contained in it. We provide a heuristic with the worst-case performance ratio 2 for the extended version, and show that this bound is tight.
Shelf space allocation to products greatly impacts the profitability of a retail store. In this paper, we consider a retail shelf-space allocation problem in which a retailer wishes to allocate the available space of different shelves to a large number of products, considering direct space elasticity in each product's demand. Because of the NP-hard nature of this problem, there is great interest in developing efficient heuristics. We propose a dynamic programming heuristic (DPH) that obtains near-optimal solutions in reasonable time. Empirically, DPH obtained near-optimal solutions for randomly generated problem instances with sizes (products, shelves) ranging from (100, 30) to (200, 50) within a few seconds of CPU time. The performance of DPH is benchmarked against an existing local search heuristic (LSH); DPH takes substantially less CPU time and attains solutions close to those obtained by LSH. Thus, DPH has great potential to solve problems of realistic size within reasonable time. The proposed DPH is also applied to a case study of an existing retail store.
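The abstract does not spell out the DPH itself; to show the flavor of dynamic programming on this kind of problem, here is a textbook resource-allocation DP for a single shelf with S discrete space units, under a standard space-elastic demand model d_i(s) = α_i·s^β_i. The α, β, and margin values are illustrative assumptions, and the paper's multi-shelf heuristic is not reproduced.

```python
# Hedged sketch: single-shelf allocation DP under space-elastic demand.
# profit_i(s) = margin_i * alpha_i * s**beta_i with 0 < beta_i < 1.

def allocate(S, products):
    # products: list of (alpha, beta, margin)
    def profit(i, s):
        a, b, m = products[i]
        return m * a * s**b if s > 0 else 0.0

    n = len(products)
    best = [[0.0] * (S + 1) for _ in range(n + 1)]  # best[i][s]: first i products, s units
    choice = [[0] * (S + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for s in range(S + 1):
            for k in range(s + 1):                  # space units given to product i
                v = best[i - 1][s - k] + profit(i - 1, k)
                if v > best[i][s]:
                    best[i][s], choice[i][s] = v, k
    alloc, s = [0] * n, S                           # backtrack the allocation
    for i in range(n, 0, -1):
        alloc[i - 1] = choice[i][s]
        s -= choice[i][s]
    return best[n][S], alloc

print(allocate(10, [(5.0, 0.6, 1.0), (3.0, 0.8, 1.5), (8.0, 0.4, 0.7)]))
```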
The n-job, m-machine permutation flow shop scheduling problem with proportional deterioration is considered, in which all machines process the jobs in the same order, i.e., a permutation schedule. Proportional deterioration means that the actual processing time of a job is an increasing function proportional to a linear function of its starting time. The objective is to minimize the makespan, i.e., the maximum completion time. When certain dominance relationships among the m−1 machines are satisfied, we show that some special cases of the problem are polynomially solvable. For the general case, we propose a heuristic algorithm and report computational experiments.
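The abstract does not display the processing-time model; in the deterioration-scheduling literature, "proportional to a linear function of time" is commonly written as follows (our reading, not necessarily the paper's exact notation):

```latex
% Proportional(-linear) deterioration: job J_j, started on machine M_i
% at time t, has actual processing time
\[
  p_{ij}(t) \;=\; b_{ij}\,(a + b\,t), \qquad a \ge 0,\; b > 0,
\]
% where b_{ij} is the basic deterioration rate of J_j on M_i, and the
% objective is the makespan C_{\max} = \max_j C_j.
```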
In this paper, we consider the scheduling of high-multiplicity jobs on parallel multi-purpose machines with setup times and machine availability times, with the objective of minimizing makespan. High multiplicity means that the jobs are partitioned into several groups and all jobs within a group are identical. Whenever processing switches from a job of one group to a job of another group, a setup time is needed. Multi-purpose machines imply that each job can only be processed by a specific subset of the machines, called its processing set. A mixed integer program is formulated for this NP-hard problem, and a heuristic is proposed to solve it. Lower bounds are developed to evaluate the heuristic algorithm. Extensive numerical computations show that the heuristic generates solutions with makespan within 2% above the lower bounds on average, and outperforms CPLEX 12.6 on large-scale and complex problems.
Among carbon regulation policy schemes, cap-and-trade has received the most attention because of its efficiency and flexibility. Two primary challenges with the cap-and-trade scheme are determining the right cap and the carbon trading price in the carbon market. This paper presents a bi-level model to investigate these two challenges in a cap-and-trade scheme formed between multiple supply chains and the government. At the first level, the government minimizes the cap in such a way that the costs of the supply chains do not rise too much. At the second level, the supply chains minimize their costs subject to their cap and trade the dedicated allowances. An exact method and a heuristic method are developed to solve the bi-level model. Computational results on a set of randomly generated instances show the effectiveness of the presented heuristic. Sensitivity analysis demonstrates that the government should choose appropriate caps to balance costs and environmental benefits.
In this paper, we consider two single-machine scheduling problems with deteriorating jobs and controllable processing times, where a job's processing time is a function of its starting time and of the resource allocated to it. For a convex resource function, a bicriteria analysis is provided, where the first (respectively, second) criterion is to minimize the total weighted completion time (respectively, the total resource consumption cost). The first problem is to minimize the weighted sum of the total weighted completion time and the total resource consumption cost. The second problem is to minimize the total weighted completion time subject to a bound on the total resource consumption cost. Both problems are NP-hard; we propose heuristic algorithms and a branch-and-bound algorithm to solve them.
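The abstract does not show the functional forms; one common way this combination is modeled in the literature is sketched below. This is our reading under stated assumptions, not the paper's exact model.

```latex
% A widely used convex resource model combined with deterioration:
% job J_j started at time t with resource amount u_j has processing time
\[
  p_j(t, u_j) \;=\; b_j\,t \left(\frac{w_j}{u_j}\right)^{k}, \qquad k > 0,
\]
% with workload w_j and deterioration rate b_j. The two problems are then
\[
  \min \ \lambda \sum_j \omega_j C_j + (1-\lambda) \sum_j v_j u_j
  \qquad\text{and}\qquad
  \min \ \sum_j \omega_j C_j \ \ \text{s.t.} \ \sum_j v_j u_j \le U .
\]
```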
For an image consisting of wire-like patterns, skeletonization (thinning) is often necessary as a first step towards feature extraction. A serious problem, however, is that the intersections of lines (X-crossings) are elongated when a thinning algorithm is applied to the image; that is, X-crossings are usually difficult to preserve through thinning. In this paper, we present a non-iterative line thinning method that preserves the X-crossings of lines in the image. The skeleton is formed by the mid-points of a run-length encoding of the patterns. Line intersection areas are identified via a histogram analysis of run lengths, and intersections are detected at locations where sequences of runs merge or split.
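A hedged illustration of the mid-point idea follows: a row-wise run-length pass that keeps the mid-point of each foreground run. The histogram analysis of run lengths and the merge/split detection from the paper are omitted, and the binary image is an illustrative assumption.

```python
# Row-wise run-length skeleton: for each horizontal run of foreground
# pixels, keep its mid-point.

def runs(row):
    """Yield (start, end) of maximal runs of 1s in a 0/1 sequence."""
    start = None
    for x, v in enumerate(list(row) + [0]):  # sentinel closes a trailing run
        if v and start is None:
            start = x
        elif not v and start is not None:
            yield start, x - 1
            start = None

def midpoint_skeleton(img):
    """img: list of 0/1 rows. Returns set of (row, col) skeleton pixels."""
    skel = set()
    for y, row in enumerate(img):
        for s, e in runs(row):
            skel.add((y, (s + e) // 2))      # mid-point of the run
    return skel

img = [
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
]
print(sorted(midpoint_skeleton(img)))  # [(0, 3), (1, 3)]
```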
This paper presents the design of a new Digital Variable Gain Amplifier (DVGA) cell. The proposed circuit, based on a transconductance (gm) amplifier, is analyzed and designed for a cognitive radio receiver. The proposed variable-gain amplifier (VGA) consists of a digital control block and an auxiliary pair to retain a constant current density, and offers a gain-independent bandwidth (BW). A novel cell structure is designed for high gain, high BW, low power consumption, and low noise figure (NF). A heuristic method is used to optimize the performance of the proposed circuit for high gain, low noise, and low power consumption. The circuit is implemented and simulated using a device-level description of the TSMC 0.18 μm CMOS process. Simulation results show that the DVGA can provide a gain variation range of 54 dB (from 54 dB to 0 dB) with a 3 dB BW of more than 110 MHz. The circuit consumes a maximum power of 0.65 mW from a 1.8 V supply.
After years of experience in object-oriented design, software engineers have accumulated a great deal of knowledge about the design and construction of object-oriented systems: important contributions to this field include principles, heuristics, lessons learned, bad smells, refactorings, and so on, with resultant major improvements in software development. However, this large body of knowledge is still not well organized, its terminology is ambiguous, and it is very difficult to make practical use of the contributions. We therefore believe it is important to define an ontology to structure and unify design knowledge, since a good understanding of the experience derived from practical work is critical for software engineers. This ontology could be used to improve communication between software engineers, interoperability among designs, design reusability, design knowledge searching and specification, software maintenance, knowledge acquisition, etc. In the ontology we incorporate knowledge specific to both domain and technology. Such an organized body of knowledge could also be used for registering and documenting design rationale.
Providing a counterexample for the refutation of a property is an essential feature of model checking, if not the most important one. However, generating counterexamples in stochastic model checking requires a dedicated algorithm; it usually costs too much time and memory, and sometimes the counterexample cannot be found. Worse, generating the smallest counterexample in stochastic model checking has been proved NP-complete, and it is unlikely to be efficiently approximable. Although a few heuristic methods have been applied to counterexample construction, it is usually difficult to determine the heuristic function, which is critical for counterexample generation. In this paper, we present a particle swarm optimization (PSO)-based approach to generating counterexamples for stochastic model checking. We define the diagnostic sub-graph as the counterexample, and extend the PSO algorithm with heuristics (HPSO) to generate counterexamples. The approach adopts an indirect path-coding scheme to expand the scope of the search space, and employs a heuristic operator to generate more effective paths. The validity of our approach is illustrated by case studies in a prototype tool. The experiments show that the HPSO algorithm significantly outperforms the existing algorithm for counterexample generation in stochastic model checking.
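For orientation, the baseline PSO update that HPSO builds on can be sketched as follows. This is plain continuous PSO on a toy objective; the paper's indirect path coding and heuristic operator are not reproduced, and the objective and hyper-parameters are illustrative assumptions.

```python
# Plain particle swarm optimization: each particle is pulled toward its
# personal best and the swarm's global best.
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])   # cognitive pull
                           + c2 * r2 * (G[d] - X[i][d]))     # social pull
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return gbest, G

# Toy objective: sphere function, minimum 0 at the origin.
print(pso(lambda x: sum(v * v for v in x), dim=3))
```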
Counterexample-guided abstraction refinement (CEGAR) is an extremely successful methodology for combating the state-space explosion problem in model checking. The state-space explosion problem is even more serious in stochastic model checking, and applying CEGAR there effectively remains a challenging problem. In this paper, we formalize the problem of applying CEGAR in stochastic model checking and propose a novel CEGAR framework for it. In our framework, the abstract model is represented by a quotient probabilistic automaton obtained by making a set of variables or latches invisible, which can distinguish more degrees of abstraction for each variable. The counterexample is described by a diagnostic sub-model. Counterexample validation is performed on diagnostic loop paths, and a directed explicit state-space search algorithm is used to find them. Sample learning, particle swarm optimization (PSO), and several effective heuristics are integrated to refine the abstract model, guided by invalid counterexamples. A prototype tool implements the framework, and its feasibility and efficiency are shown on several large cases.
Combinatorial auctions allow bidders to bid on combinations of items, leading to more efficient allocations, but determining the winners in such auctions is NP-complete. In this work, a simple yet effective heuristic algorithm based on Lagrangian relaxation is presented. Extensive computational experiments using standard benchmark data (CATS), as well as newly generated, more realistic test sets, show that the heuristic provides optimal solutions for most test cases and is within 1% of the optimum for the rest, within very short times. Experiments comparing against CPLEX 8.0, the fastest current algorithm, show that the heuristic provides equally good or better solutions, often requiring less than 1% of the time required by CPLEX 8.0.
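The paper's algorithm is not reproduced here; as a hedged sketch of the general technique, relax the "each item sold at most once" constraints with multipliers, solve the now-trivial relaxed problem bid by bid, repair the relaxed solution into a feasible allocation, and update the multipliers by subgradient steps. The bid data and step-size schedule below are illustrative assumptions.

```python
# Lagrangian-relaxation heuristic for winner determination (sketch).
# WDP: max sum(price_b * x_b)  s.t.  each item in at most one accepted bid.
# Relaxing the item constraints with multipliers lam[i], the relaxed
# problem decomposes: accept b iff price_b - sum(lam[i] for i in b) > 0.

def lagrangian_wdp(bids, items, iters=200):
    # bids: list of (frozenset_of_items, price)
    lam = {i: 0.0 for i in items}
    best_value, best_sol = 0.0, []
    for t in range(1, iters + 1):
        # 1) solve the relaxed problem bid-by-bid
        reduced = [(p - sum(lam[i] for i in s), s, p) for s, p in bids]
        chosen = [(rp, s, p) for rp, s, p in reduced if rp > 0]
        # 2) repair into a feasible allocation: greedy by reduced profit
        used, feas, value = set(), [], 0.0
        for rp, s, p in sorted(chosen, key=lambda c: -c[0]):
            if not (s & used):
                used |= s; feas.append((s, p)); value += p
        if value > best_value:
            best_value, best_sol = value, feas
        # 3) subgradient step on the item constraints
        step = 1.0 / t
        for i in items:
            g = sum(1 for rp, s, p in chosen if i in s) - 1
            lam[i] = max(0.0, lam[i] + step * g)
    return best_value, best_sol

bids = [(frozenset({'a', 'b'}), 6.0), (frozenset({'b', 'c'}), 5.0),
        (frozenset({'c'}), 3.0), (frozenset({'a'}), 2.0)]
print(lagrangian_wdp(bids, {'a', 'b', 'c'}))  # -> (9.0, [({a,b}, 6.0), ({c}, 3.0)])
```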
This paper addresses stability issues in the incremental induction of decision trees. Stability problems arise when an induction algorithm must revise a decision tree very often, and oscillations between similar concepts decrease learning speed. We review a heuristic that solves this problem and then employ asymptotic analysis to approximate the basic parameters related to estimating the computational effort of incremental decision-tree learning. We use these approximations to simplify the heuristic, deliver insight into its amortizing behavior, and argue how the approximations can also speed up its execution and enhance its applicability, providing experimental evidence to support these claims.