Against the backdrop of the digital age, the openness, equality and interactivity of the Internet economy have injected new vitality into China's traditional industries. The application of big data technology, especially in information integration and analysis, has become a key force in promoting the sustainable and healthy development of the national economy. This study focuses on the "Internet +" environment, discusses the impact of the aging of community workers on home care services, and proposes an optimization scheme based on a heuristic algorithm. The ant colony heuristic, inspired by the foraging behavior of ants in nature, optimizes route selection by simulating an ant colony that favors paths with a high concentration of pheromones, and shows outstanding application potential in the field of home care. The accuracy of the event detection algorithm is directly related to the performance of the load decomposition algorithm, and the change-point detection algorithm can effectively identify changes in the probability distribution of time-series data, providing important input for unsupervised clustering. Advanced computational methods, including the hidden Markov model (HMM) and swarm intelligence optimization algorithms, are used in this research. By comparing different swarm intelligence algorithms, we find that the standard Gray Wolf optimization (SGWO) model outperforms the basic Gray Wolf optimization (BGWO) algorithm and the improved Gray Wolf optimization (DGWO) algorithm in terms of stability and output quality. The SGWO model significantly improves the efficiency of the load decomposition algorithm, which has been verified in the application of a smart elderly care service platform. The platform not only supports the operation of related technologies and information products but also realizes the seamless integration of information among the various participants in elderly care services.
In addition, a factorial hidden Markov model, whose latent factors can be selectively activated, effectively monitors equipment status in the Internet of Things environment, provides real-time monitoring of user consumption behavior and fault information, and further enhances the quality and efficiency of smart elderly care services.
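The pheromone-reinforcement mechanism the abstract alludes to can be illustrated with a minimal sketch, assuming a toy instance in which ants repeatedly pick one of several alternative routes and shorter routes receive proportionally more pheromone; the route costs and parameters below are invented for illustration, not taken from the paper:

```python
import random

def aco_shortest_path(dist, n_ants=20, n_iters=50, evap=0.5, q=1.0, seed=0):
    """Toy ant colony optimization over alternative routes: each ant picks a
    route with probability proportional to its pheromone level; shorter
    routes receive more pheromone, reinforcing themselves over time."""
    rng = random.Random(seed)
    tau = [1.0] * len(dist)                      # pheromone per route
    for _ in range(n_iters):
        counts = [0] * len(dist)
        for _ in range(n_ants):
            r, acc = rng.random() * sum(tau), 0.0
            for i, t in enumerate(tau):
                acc += t
                if r <= acc:
                    counts[i] += 1
                    break
        # evaporate, then deposit inversely proportional to route length
        tau = [(1 - evap) * t + counts[i] * q / dist[i]
               for i, t in enumerate(tau)]
    return max(range(len(tau)), key=lambda i: tau[i])

# three alternative routes with lengths 5, 3 and 8; the shortest (index 1)
# should end up with the strongest pheromone trail
print(aco_shortest_path([5.0, 3.0, 8.0]))  # 1
```

The positive feedback loop (choice probability proportional to pheromone, deposit inversely proportional to cost) is what concentrates traffic on the short route.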
In real-world terminals, containers are divided into multiple groups according to reservation time. The container retrieval order between different groups is known, while the retrieval order within the same group is unknown. Previous studies consider priorities between groups but seldom address priorities within the same group, let alone uncertain intra-group priorities. This paper studies the container pre-marshaling problem with uncertain intra-group retrievals (CPMP-UIR). Since the retrieval order of intra-group containers is uncertain, containers may still need to be relocated during retrieval even after pre-marshaling. CPMP-UIR conducts pre-marshaling so as to ensure no relocations between groups and a minimum expected number of overstows during retrieval. In this paper, we first give a formula for calculating the expected number of overstows in a layout; the value obtained is a lower bound on the expected number of relocations. Second, we develop an algorithm called the Expected Overstow-based Heuristic (EOH) for solving CPMP-UIR. In addition, a sufficiently hard dataset is generated to evaluate the performance of EOH. Finally, our numerical experiments show that the expected number of overstows during the retrieval phase is reduced dramatically after pre-marshaling.
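The paper's exact formula is not reproduced in the abstract; the following sketch computes an illustrative pairwise proxy under the assumption that groups are retrieved in increasing label order and intra-group retrieval orders are uniformly random, so a same-group pair stacked in the same column is blocking with probability 1/2:

```python
def expected_blocking_pairs(stacks):
    """stacks: lists of group labels, bottom to top.  Groups are retrieved
    in increasing label order; the order within a group is uniformly random.
    A pair (lower, upper) in one stack blocks when the upper container
    departs later than the lower one: certain if the upper container's
    group label is larger, probability 1/2 if the labels are equal."""
    exp = 0.0
    for stack in stacks:
        for lo in range(len(stack)):
            for hi in range(lo + 1, len(stack)):
                if stack[hi] > stack[lo]:
                    exp += 1.0
                elif stack[hi] == stack[lo]:
                    exp += 0.5
    return exp

# group 2 sits on group 1 (certain block) and two group-3 containers share
# a stack (block with probability 1/2)
print(expected_blocking_pairs([[1, 2], [3, 3]]))  # 1.5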
Phishing is the criminal effort to steal sensitive information such as account details, passwords, usernames, and credit and debit card details for malicious use, and it is arguably the most common cybercrime today. The online payment and webmail sectors are particularly affected by phishing attacks. Attackers continually devise new techniques that allow them to steal personal information from selected victims with ease. However, numerous anti-phishing techniques are used to detect phishing attacks, including blacklists, visual similarity, heuristic detection, Deep Learning (DL), and Machine Learning (ML) techniques. ML techniques are more efficient at detecting phishing attacks, and they also rectify the drawbacks of existing approaches. This paper provides a detailed review of various phishing techniques, encompassing phishing mediums, phishing vectors, and numerous technical approaches. Recent research on detecting phishing websites using heuristic, visual-similarity, DL, and ML models is also analyzed. ML-based classifiers examined in this review include Neural Networks (NNs), Convolutional Neural Networks (CNNs), Deep Neural Networks (DNNs), Support Vector Machines (SVMs), fuzzy logic methods, Long Short-Term Memory (LSTM) networks, Random Forest (RF), Decision Tree (DT), and AdaBoost-Extra Tree (AET) classifiers.
In this paper, a heuristic mapping approach which maps parallel programs, described by precedence graphs, to MIMD architectures, described by system graphs, is presented. The complete execution time of a parallel program is used as a measure, and the concept of critical edges is utilized as the heuristic to guide the search for a better initial assignment and subsequent refinement. An important feature is the use of a termination condition of the refinement process. This is based on deriving a lower bound on the total execution time of the mapped program. When this has been reached, no further refinement steps are necessary. The algorithms have been implemented and applied to the mapping of random problem graphs to various system topologies, including hypercubes, meshes, and random graphs. The results show reductions in execution times of the mapped programs of up to 77 percent over random mapping.
The gravitational search algorithm (GSA) is an eminent heuristic algorithm inspired by the laws of gravity and motion. It possesses an independent physical model in which mass agents are guided by gravitational force to achieve rapid convergence. Although the GSA has proven efficient for science and engineering problems, the mass agents can be trapped in premature convergence because the masses become heavy in later iterations. Premature convergence impedes the agents' further exploration of the search space for a better solution. Here, the ant miner plus (AMP) variant of the ant colony optimization (ACO) algorithm is utilized to avoid trapping agents in local optima. The AMP algorithm extends the exploration ability of the GSA by using pheromone-updating rules driven by the best ants and a problem-dependent heuristic function. The AMP variant adheres to the attributes of the ACO algorithm and is also a decision-making variant that determines the problem solution more efficiently by constructing a directed acyclic graph, considering class-specific heuristic values, and including weight parameters for the pheromone and heuristic values. In this research, a hybridization of the GSA and AMP (GSAMP) algorithms is presented and utilized for the decision-making application of fingerprint recognition. Fingerprint recognition is conducted for complete as well as latent fingerprints, the latter being poor-quality partial fingerprints mostly acquired from crime scenes by law enforcement agencies. Experiments are performed on the complete fingerprint dataset of FVC2004 and the latent fingerprint dataset of NIST SD27, using the proposed GSAMP approach and the individual Ant Miner (AM) and AMP algorithms. The experimental evaluation indicates the superiority of the proposed approach compared to the other methods.
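A minimal sketch of the underlying GSA mechanics (not the GSAMP hybrid): agents with better fitness receive larger masses and gravitationally attract the rest, while the gravitational constant decays over iterations to shift from exploration to exploitation. The test function and all parameters below are illustrative assumptions:

```python
import math
import random

def gsa_minimize(f, dim, bounds, n_agents=20, n_iters=100,
                 g0=100.0, alpha=20.0, seed=1):
    """Minimal gravitational search algorithm: fitter agents get larger
    masses and pull the others toward them; the gravitational constant
    G decays over iterations to shift from exploration to exploitation."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    v = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")
    for t in range(n_iters):
        fit = [f(xi) for xi in x]
        fb, fw = min(fit), max(fit)
        if fb < best_f:
            best_f, best_x = fb, list(x[fit.index(fb)])
        # normalized masses: best agent -> 1, worst agent -> 0
        m = [1.0 if fw == fb else (fw - fi) / (fw - fb) for fi in fit]
        M = [mi / sum(m) for mi in m]
        G = g0 * math.exp(-alpha * t / n_iters)
        for i in range(n_agents):
            for d in range(dim):
                acc = 0.0
                for j in range(n_agents):
                    if j != i:
                        R = math.dist(x[i], x[j])
                        acc += rng.random() * G * M[j] * (x[j][d] - x[i][d]) / (R + 1e-9)
                v[i][d] = rng.random() * v[i][d] + acc
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
    return best_x, best_f

# sphere function: global minimum 0 at the origin
bx, bf = gsa_minimize(lambda p: sum(c * c for c in p), dim=2, bounds=(-5.0, 5.0))
print(bf)
```

The "heavy masses in later iterations" problem the abstract describes is visible here: once masses concentrate on a few agents, the force terms all point toward the same region, which is exactly what the AMP hybridization is meant to counteract.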
The constrained rectangle-packing problem is the problem of packing a subset of rectangles into a larger rectangular container so as to maximize the layout value. It has many industrial applications, such as shipping and wood and glass cutting. Many algorithms have been proposed to solve it, including simulated annealing, genetic algorithms and other heuristics. In this paper a new heuristic algorithm is proposed based on two strategies: a rectangle-selecting strategy and a rectangle-packing strategy. We have applied the algorithm to 21 smaller, 630 larger and other zero-waste instances. The computational results demonstrate that the overall performance of the algorithm is satisfactory and that it is efficient for solving the constrained rectangle-packing problem.
Scheduling track lines at a marshalling station, where the objective is to maximize the weighted number of trains assigned to the track lines, can be modeled as an interval scheduling problem: each job has a fixed starting and finishing time and can only be carried out by a given subset of machines. This scheduling problem is formulated as an integer program, which is NP-complete when the numbers of machines and jobs are not fixed, and the computational effort to solve large-scale test problems is prohibitively large. Heuristic algorithms (HAs) based on decomposition of the original problem have been developed; their benefits lie in both conceptual simplicity and computational efficiency. A genetic algorithm (GA) for the scheduling problem is also proposed. Computational experiments at low and high machine utilization rates are carried out to compare the performance of the proposed algorithms with CPLEX. The results show that the HAs and GA perform well in most conditions; in particular, on small-scale problems the average percentage deviation of HA2 from the optimal solutions found by CPLEX is at most 3.5%. Our methodologies are also capable of producing improved solutions to large-scale problems with reasonable computing resources.
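The interval scheduling model with machine eligibility can be illustrated with a simple weight-greedy heuristic (a sketch of the problem structure, not one of the paper's HAs): each job has a fixed interval, a weight, and a list of eligible machines, and is accepted only onto a machine where it overlaps no previously accepted job:

```python
def greedy_interval_schedule(jobs, machines):
    """Simple heuristic for weighted interval scheduling with machine
    eligibility: jobs are (start, finish, weight, eligible_machines).
    Heavier jobs are tried first; a job is accepted on the first eligible
    machine whose accepted jobs it does not overlap."""
    accepted = {m: [] for m in machines}
    total = 0
    for s, f, w, elig in sorted(jobs, key=lambda j: -j[2]):
        for m in elig:
            if all(f <= s2 or s >= f2 for s2, f2 in accepted[m]):
                accepted[m].append((s, f))
                total += w
                break
    return total, accepted

jobs = [
    (0, 4, 5, [1]),      # heaviest; only machine 1 is eligible
    (2, 6, 3, [1, 2]),   # overlaps the first job, so it goes to machine 2
    (5, 8, 2, [1]),      # fits after the first job on machine 1
]
total, plan = greedy_interval_schedule(jobs, [1, 2])
print(total)  # 10: all three jobs are accepted
```

A greedy of this kind is not optimal in general (the problem is NP-complete with eligibility constraints), which is why the paper resorts to decomposition heuristics, a GA, and CPLEX.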
Job learning and deterioration coexist in many realistic machine-job scheduling situations. However, in the literature, machine scheduling models with job learning and/or deteriorating effects have been built from specific types of functions, which constrains their applicability in practice. This paper introduces a new single-machine scheduling model in which the actual processing time of a job is a general function of its starting time and scheduled position, a broad generalization of certain existing models. For three objectives of the single-machine scheduling problem (total weighted completion time, discounted total weighted completion time, and maximum lateness), this paper presents approximation results based on worst-case bound analysis relative to the optimal algorithm. The results demonstrate that under the proposed model, minimizing objectives such as the makespan, the sum of the kth power of completion times, and total lateness is polynomially solvable. Moreover, under some feasibility conditions on the scheduling parameters, minimizing the total weighted completion time, discounted total weighted completion time, maximum lateness, and total tardiness is also polynomially solvable, and solutions are provided.
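Under this model, completion times on the single machine follow directly once the general effect function is fixed. The sketch below assumes a hypothetical effect function combining position-based learning with time-dependent deterioration, purely for illustration; the paper's model leaves this function general:

```python
def completion_times(base, f, order):
    """Single machine under the general model: the job in position r that
    starts at time t has actual processing time base[j] * f(t, r).
    The effect function f here is a hypothetical example, not the paper's."""
    t, out = 0.0, []
    for r, j in enumerate(order, start=1):
        t += base[j] * f(t, r)
        out.append(t)
    return out

# hypothetical effect: position-based learning (r ** -0.2) combined with
# linear time-dependent deterioration (1 + 0.05 * t)
effect = lambda t, r: (r ** -0.2) * (1 + 0.05 * t)
print(completion_times([4.0, 2.0, 3.0], effect, [0, 1, 2]))
```

Classical models are special cases: f(t, r) = r**a gives pure positional learning, f(t, r) = 1 + b*t gives pure linear deterioration.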
In this paper, we consider a three-machine permutation flow shop scheduling problem with shortening job processing times, with the objective of minimizing the makespan. Shortening job processing times means that a job's processing time is a nonincreasing function of its execution start time. Optimal solutions are obtained for some special cases. For the general case, several dominance properties and two lower bounds are developed to construct a branch-and-bound (B&B) algorithm. Furthermore, we propose a heuristic algorithm to overcome the inefficiency of the branch-and-bound algorithm.
The multi-vehicle covering tour problem (m-CTP) is defined on a graph G=(V∪W,E), where V is a set of vertices that can be visited and W is a set of vertices that must be covered but cannot be visited. The objective of the m-CTP is to obtain a minimum-total-cost set of tours over a subset of V such that every v∈W is covered, using up to m vehicles. In this paper, we first generalize the original m-CTP by adding a realistic constraint, and then propose an algorithm for the generalized m-CTP based on a column generation approach. Computational experiments show that our algorithm performs well and outperforms existing algorithms.
In this paper, we investigate the well-known permutation flow shop (PFS) scheduling problem with a particular objective: minimizing the total idle energy consumption of the machines. The problem addresses the energy wasted by machine idling, where the idle energy consumption is the product of the idle time and the power level of each machine. Since the problem is NP-hard, theoretical results are given for several basic cases. For the two-machine case, we prove that an optimal schedule can be found by a relaxed Johnson's algorithm in O(n²) time. For cases with multiple machines (no fewer than three), we propose a novel NEH-based heuristic algorithm to obtain an approximate energy-saving schedule. The heuristic algorithms are validated by comparison with NEH on a typical PFS problem, and a case study in tire manufacturing shows an energy consumption reduction of approximately 5% from applying the energy-saving scheduling and the proposed algorithms.
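The abstract does not specify the relaxed variant, but classic Johnson's rule, on which it builds, can be sketched together with the two-machine makespan recursion:

```python
def johnson_order(jobs):
    """Johnson's rule for the two-machine flow shop (minimizes makespan):
    jobs are (name, p1, p2); those with p1 <= p2 come first in increasing
    p1, the rest follow in decreasing p2."""
    first = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    last = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return first + last

def makespan(seq):
    """Two-machine flow shop completion-time recursion."""
    c1 = c2 = 0
    for _, p1, p2 in seq:
        c1 += p1                   # machine 1 processes back to back
        c2 = max(c2, c1) + p2      # machine 2 waits for machine 1's output
    return c2

jobs = [("A", 3, 6), ("B", 5, 2), ("C", 1, 2)]
seq = johnson_order(jobs)
print([j[0] for j in seq], makespan(seq))  # ['C', 'A', 'B'] 12
```

Minimizing makespan and minimizing idle energy are different objectives, which is precisely why the paper needs a relaxed variant rather than Johnson's rule as-is.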
This work investigates an unrelated parallel machine scheduling problem in a shared manufacturing environment. Motivated by practical production complexity, five job- and machine-related factors (job splitting, setup time, learning effect, processing cost, and machine eligibility constraints) are integrated into the problem. Parallel machines with uniform speed but non-identical processing capabilities are shared on a sharing-service platform, and jobs of different types can only be processed by machines with matching eligibilities. The platform pays a processing cost for using any machine to process the jobs. To balance the processing cost paid against customer satisfaction, we minimize the weighted sum of the total processing cost and the total completion time of jobs. We establish a mixed integer linear programming model and provide a lower bound by relaxing the machine eligibility constraint. The CPLEX solver is employed to generate optimal solutions for small-scale instances. For large-scale instances, we propose an efficient heuristic algorithm. Experimental results demonstrate that, across various instance settings, the proposed algorithm consistently produces near-optimal solutions. We further present several managerial insights for the shared manufacturing platform.
The emergence of computation-intensive automotive applications poses significant challenges to the computation capacity of automotive electronic systems; thus vehicular edge computing (VEC) is introduced as a new computing paradigm in the Internet of Vehicles (IoV) to improve its data processing capability. However, as the computation capacity of VEC servers is limited, efficient task offloading algorithms are needed. This paper first proposes a multi-task offloading model and a corresponding task response time analysis method; then both a mixed-integer linear programming (MILP)-based algorithm and a simulated annealing-based heuristic algorithm are proposed to minimize the task response time. Compared with a baseline algorithm, the MILP-based offloading algorithm reduces the average task response time by 91.45%, and the heuristic offloading algorithm reduces it by 70%.
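The simulated annealing side can be sketched under an assumed toy model (the paper's response-time model is not reproduced in the abstract): tasks are assigned to servers, each server processes its tasks sequentially, and the objective is the maximum server load:

```python
import math
import random

def sa_offload(task_times, n_servers, n_iters=5000, t0=5.0, cool=0.999, seed=0):
    """Simulated annealing over task-to-server assignments; the cost is the
    makespan (largest total load on any server) under the toy model that a
    server runs its tasks sequentially."""
    rng = random.Random(seed)

    def cost(a):
        load = [0.0] * n_servers
        for t, s in zip(task_times, a):
            load[s] += t
        return max(load)

    cur = [rng.randrange(n_servers) for _ in task_times]
    cur_c = cost(cur)
    best, best_c = list(cur), cur_c
    temp = t0
    for _ in range(n_iters):
        cand = list(cur)
        cand[rng.randrange(len(cand))] = rng.randrange(n_servers)  # move one task
        c = cost(cand)
        # accept improvements (and ties); accept worse moves with Boltzmann prob.
        if c <= cur_c or rng.random() < math.exp((cur_c - c) / temp):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = list(cand), c
        temp *= cool
    return best, best_c

tasks = [4.0, 3.0, 3.0, 2.0, 2.0, 2.0]   # total work 16 on 2 servers
_, c = sa_offload(tasks, 2)
print(c)  # 8.0: a perfectly balanced split exists
```

Swapping in a realistic response-time model (transmission delay plus queueing at the VEC server) only changes the `cost` function; the annealing loop is unchanged.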
Path generation means generating a path or a set of paths such that the generated path meets specified properties or constraints. To our knowledge, generating a path whose performance evaluation value lies within a given interval has received scant attention. This paper formulates the path generation problem as an optimization problem by designing a suitable fitness function, adapts the Markov decision process with a reward model into a weighted digraph by eliminating multiple edges and non-goal dead-end nodes, constructs the path using a priority-based indirect coding scheme, and finally modifies the bat algorithm with a heuristic to solve the optimization problem. Simulation experiments were carried out for different objective functions, population sizes, numbers of nodes, and interval ranges. Experimental results demonstrate the effectiveness and superiority of the proposed algorithm.
Power concerns for individual clients with an off-grid electricity connection are studied in a Smart Hybrid Home (SHH) system. The creation of autonomous hybrid electricity systems has been regarded as a way to improve energy independence, and the primary objective is efficient energy management. Detailed modeling is proposed in which the solar energy component is regarded as the main source. A multi-agent energy management system (MA-EMS) is presented in this article; it also includes power storage devices (fuel cells/capacitors) for safe power delivery. The technical performance of the decentralised solution is in line with the existing central solution, while offering financial and operational improvements for the implementation and operation of the autonomous method. Furthermore, an IT management system is created and described to guarantee that the system functions correctly using a multi-agent structure. The proposed electricity management system is meant to recognize multi-agent jobs and handle probable situations based on energy attributes and requirements. The findings, obtained in MATLAB/Simulink using an operational database drawn from different climatic services, show that the suggested method meets the aims of the smart electricity management method. By implementing the proposed technique, the power consumption is increased to 410 kW and the efficiency rate improves to 88.4%.
The introduction of the spaced-seed idea in the filtration step of sequence comparison by Ma, Tromp and Li (2002) has greatly increased the sensitivity of homology search without compromising its speed. In this paper, we survey some recent work on spaced seeds, with emphasis on their probabilistic and computational aspects.
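A spaced seed is a pattern of match positions ('1') and don't-care positions ('0'); a window is a filtration hit when the two sequences agree at every match position. The original PatternHunter work used a weight-11, length-18 seed; the toy seed below is for illustration only:

```python
def spaced_seed_hits(seed, a, b):
    """Report window start positions where sequences a and b agree at every
    '1' (match) position of the spaced seed; '0' positions are don't-cares."""
    care = [i for i, ch in enumerate(seed) if ch == "1"]
    span = len(seed)
    n = min(len(a), len(b))
    return [p for p in range(n - span + 1)
            if all(a[p + i] == b[p + i] for i in care)]

# toy weight-3, span-5 seed: only the 1st, 3rd and 5th window positions
# must match, so mismatches at the don't-care positions are tolerated
print(spaced_seed_hits("10101", "ACGTACG", "AGGAACG"))  # [0, 2]
```

The sensitivity gain comes from the don't-care positions: unlike a contiguous seed, overlapping windows of a spaced seed share fewer required positions, so the hit events are less correlated.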
Agility is a key factor in industry for handling continuous market changes. Companies must re-organize their activities to be agile and competitive in such a dynamic environment. In particular, production planning and control tools are very important for optimizing the responsiveness of the manufacturing process to sudden changes in customer demand. In this paper, an object-oriented software architecture is developed that allows the optimal line organization to be determined once a set of parts to be produced has been ordered. An optimization module based on a simulated annealing algorithm is interfaced with the object-oriented architecture to build a framework that can select the optimal schedule, buffer capacities, and conveyor speeds in a very short time. The main characteristic of the proposed system is its extremely high flexibility and reconfigurability with respect to sudden changes in customer orders, which are reflected in the mix to be produced. A test case demonstrates the potential of the proposed software architecture, and a discussion of the results shows how the system copes with lines characterized by different configurations of available resources.
Haplotypes provide significant information in many research fields, including molecular biology and medical therapy. However, haplotyping is much more difficult than genotyping when using only biological techniques. With the development of sequencing technologies, it has become possible to obtain haplotypes by combining sequence fragments. The haplotype reconstruction problem for a diploid individual has received considerable attention in recent years: it assembles the two haplotypes of a chromosome from a collection of fragments drawn from both. Fragment errors significantly increase the difficulty of the problem, which has been shown to be NP-hard. In this paper, a fast and accurate algorithm, named FAHR, is proposed for haplotyping a single diploid individual. FAHR reconstructs the SNP sites of a pair of haplotypes one after another: the SNP fragments covering a given site are partitioned into two groups according to their alleles at that site, and the SNP values of the pair of haplotypes are determined by the group containing more fragments. Experimental comparisons were conducted among the FAHR, Fast Hare, and DGS algorithms using the haplotypes on chromosome 1 of 60 individuals in the CEPH samples released by the International HapMap Project. Results under different parameter settings indicate that the reconstruction rate of FAHR is higher, and its running time shorter, than those of Fast Hare and DGS. Moreover, FAHR remains efficient even when reconstructing long haplotypes and is very practical for realistic applications.
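The per-site majority step described above can be sketched in simplified form. Note that this toy version omits FAHR's handling of fragment errors and the linkage between adjacent sites, and simply applies the majority rule at each site independently:

```python
def majority_haplotypes(fragments, n_sites):
    """At each SNP site, split the fragments covering it into two groups by
    allele (0 or 1); the larger group's allele goes to one haplotype and
    the other haplotype takes the complement.
    fragments: list of dicts mapping site index -> allele."""
    h1, h2 = [], []
    for s in range(n_sites):
        zeros = sum(1 for fr in fragments if fr.get(s) == 0)
        ones = sum(1 for fr in fragments if fr.get(s) == 1)
        a = 0 if zeros >= ones else 1
        h1.append(a)
        h2.append(1 - a)
    return h1, h2

frags = [{0: 0, 1: 1}, {0: 0, 1: 1, 2: 0}, {1: 0, 2: 1}, {2: 1}]
print(majority_haplotypes(frags, 3))  # ([0, 1, 1], [1, 0, 0])
```

Taking the majority group at each site is what makes the method robust to a minority of erroneous fragments covering that site.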
To solve complex function optimization problems, the ecological balance dynamics-based optimization (EBDO) algorithm is proposed, based on Lotka–Volterra ecological balance dynamics. The algorithm assumes an ecosystem with three populations: producers, consumers, and decomposers. Producers are mainly plants; consumers are mainly animals that feed on the producers; decomposers break down the remains of consumers and return nutrients to the producers. According to the relationships among these populations, a consumer-producer operator, a producer-decomposer operator, a decomposer-consumer operator, and a growth operator are constructed. The population growth of producers, consumers, and decomposers corresponds to candidate solutions moving from one location in the search space to another. The algorithm has strong search ability and global convergence, and it provides a way to solve complex optimization problems.
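The Lotka–Volterra dynamics EBDO draws on can be shown directly (the EBDO operators themselves are not specified in the abstract); below is a forward-Euler simulation of the classic two-species system, started at its interior equilibrium, where both derivatives vanish:

```python
def lotka_volterra(x0, y0, a, b, c, d, dt=0.001, steps=10000):
    """Forward-Euler simulation of the two-species Lotka-Volterra system:
    dx/dt = x * (a - b*y)   (producer/prey)
    dy/dt = y * (d*x - c)   (consumer/predator)."""
    x, y = x0, y0
    for _ in range(steps):
        dx = x * (a - b * y)
        dy = y * (d * x - c)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# started exactly at the interior equilibrium (c/d, a/b) = (1, 1), both
# derivatives vanish and the populations stay put
print(lotka_volterra(1.0, 1.0, a=1.0, b=1.0, c=1.0, d=1.0))  # (1.0, 1.0)
```

Away from the equilibrium the populations oscillate around it, which is the "ecological balance" that the EBDO operators mimic in the search space.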
This paper proposes a tolerance-based rule-base system for a single-crane scheduling problem in a flexible circuit-board electroplating line. The objective is to minimize the completion time without producing defective products. The algorithm determines the crane and job schedules in a real-time environment with dynamically changing job demand and random processing times. Two rule bases are used in the algorithm: the proposed rule base utilizes the tolerance of processing times to prevent delaying jobs' entry times, while a basic rule base that adopts only the shortest processing time (SPT) rule serves as a baseline for comparison. The simulation results show the performance tradeoffs between the two rule bases; the tolerance-based rule base results in better throughput.