Remaining activity sequence prediction (i.e. activity suffix prediction) aims to recommend the most likely future behavior of ongoing process instances (i.e. traces), enabling process managers to allocate resources rationally and detect process deviations in advance. Recently, neural network techniques have found promising applications in activity suffix prediction by training a next-activity prediction model and applying it iteratively to generate the whole suffix. However, iterative prediction accumulates the deviation introduced at each step, and the result also lacks interpretability. In this paper, we propose a novel method that predicts activity suffixes for ongoing traces from the perspectives of control flow and data flow: process discovery and trace replay techniques are employed to simulate trace execution under real conditions, and Long Short-Term Memory (LSTM) networks are applied to characterize the correlation between executed information and future execution. Sequence matching between historical prefix traces and ongoing traces is then performed on this information to select the best-matched (i.e. most similar) activity suffix for the ongoing process instance. Experiments on real-life datasets demonstrate that the proposed method outperforms existing methods.
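As context for the iterative prediction scheme criticized above, the following is a minimal sketch of how a trained next-activity model is typically applied step by step to generate a suffix; the names `predict_next`, `END` and `MAX_LEN` are illustrative assumptions, not part of the cited work.

```python
END = "<end>"   # hypothetical end-of-case marker
MAX_LEN = 50    # hypothetical safety bound on suffix length

def generate_suffix(prefix, predict_next):
    """Iteratively extend a prefix trace with the most likely next activity.

    `predict_next` is any callable (e.g. a wrapper around an LSTM model) that
    maps a list of executed activities to the predicted next activity. Each
    call conditions on previously predicted activities, which is why
    prediction errors accumulate along the suffix.
    """
    suffix = []
    trace = list(prefix)
    for _ in range(MAX_LEN):
        nxt = predict_next(trace)
        if nxt == END:
            break
        suffix.append(nxt)
        trace.append(nxt)
    return suffix
```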
Process discovery algorithms typically aim at discovering process models from event logs that best describe the recorded behavior. Often, the quality of a process discovery algorithm is measured by quantifying to what extent the resulting model can reproduce the behavior in the log, i.e. replay fitness. At the same time, there are other measures that compare a model with recorded behavior in terms of the precision of the model and the extent to which the model generalizes the behavior in the log. Furthermore, many measures exist to express the complexity of a model irrespective of the log.
In this paper, we first discuss several quality dimensions related to process discovery. We further show that existing process discovery algorithms typically consider at most two out of the four main quality dimensions: replay fitness, precision, generalization and simplicity. Moreover, existing approaches cannot steer the discovery process based on user-defined weights for the four quality dimensions.
This paper presents the ETM algorithm, which allows the user to seamlessly steer the discovery process based on preferences with respect to the four quality dimensions. We show that all dimensions are important for process discovery. However, it only makes sense to consider precision, generalization and simplicity once the replay fitness is acceptable.
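As a rough illustration of steering discovery with user-defined weights, the sketch below combines the four quality dimensions into a single score that a search-based discovery algorithm could optimize. It is a simplified stand-in, not the ETM implementation; the weight values and the assumption that each dimension is normalized to [0, 1] are illustrative.

```python
def weighted_quality(replay_fitness, precision, generalization, simplicity,
                     weights=(10.0, 1.0, 1.0, 1.0)):
    """Combine the four process-model quality dimensions into one objective.

    A comparatively large weight on replay fitness reflects the observation
    above that precision, generalization and simplicity are only worth
    considering once replay fitness is acceptable.
    """
    wf, wp, wg, ws = weights
    total = wf + wp + wg + ws
    return (wf * replay_fitness + wp * precision
            + wg * generalization + ws * simplicity) / total

# Example: a model that fits the log well but is rather imprecise.
score = weighted_quality(replay_fitness=0.95, precision=0.60,
                         generalization=0.80, simplicity=0.90)
```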
Process discovery algorithms aim to capture process models from event logs. These algorithms have been designed for logs in which the events that belong to the same case are related to each other, and to that case, by means of a unique case identifier. However, in service-oriented systems, these case identifiers are rarely stored beyond request-response pairs, which makes it hard to relate events that belong to the same case. This is known as the correlation challenge. This paper addresses the correlation challenge by introducing a technique, called the correlation miner, that facilitates discovery of business process models when events are not associated with a case identifier. It extends previous work on the correlation miner by not only enabling discovery of the process model, but also by detecting which events belong to the same case. Experiments performed on both synthetic and real-world event logs show the applicability of the correlation miner. The resulting technique enables us to observe a service-oriented system and determine, with high accuracy, which request-response pairs sent by different communicating parties are related to each other.
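To make the correlation challenge concrete, here is a toy illustration (not the correlation miner algorithm itself): events carry no case identifier, so request-response pairs must be related through other information. A hypothetical shared attribute `order_ref` together with timestamps stands in for the kinds of correlation conditions the miner has to discover automatically.

```python
from collections import defaultdict

# Hypothetical events from a service-oriented system, without case identifiers.
events = [
    {"activity": "request",  "order_ref": "A1", "ts": 1},
    {"activity": "request",  "order_ref": "B7", "ts": 2},
    {"activity": "response", "order_ref": "A1", "ts": 3},
    {"activity": "response", "order_ref": "B7", "ts": 5},
]

# Group events into cases by the correlating attribute, ordered by timestamp.
cases = defaultdict(list)
for ev in sorted(events, key=lambda e: e["ts"]):
    cases[ev["order_ref"]].append(ev["activity"])

print(dict(cases))  # {'A1': ['request', 'response'], 'B7': ['request', 'response']}
```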
One of the most valuable assets of an organization is its organizational data. The analysis and mining of this potential hidden treasure can yield significant added value for the organization. Process mining is an emerging area that can help organizations understand the status quo, check for compliance and plan improvements to their processes. The aim of process mining is to extract knowledge from the event logs of today's organizational information systems. Process mining comprises three main types: discovering process models from event logs, conformance checking and organizational mining. In this paper, we briefly introduce process mining and review some of its most important techniques. We also investigate some applications of process mining in industry and present some of the most important challenges faced in this area.
Customer requirements and specifications are becoming increasingly complex, resulting in more complicated production processes to meet them. With this complexity come anomalies and deviations in those processes. At the same time, a new generation of technology can handle process complexity and discover unusual executions of large and complex processes by tracing the data they generate and transforming it into insights and actions. Using data in industry has therefore become inevitable, giving it a fundamental role in improving the efficiency and effectiveness of any organization. However, it is not sufficient to store and analyze data; one must also be able to link it to operational processes, pose the right questions and develop a deep understanding of end-to-end processes, which may ultimately accelerate every aspect of detecting abnormal process executions and determining the process parameters responsible for quality fluctuations. Anomaly detection in manufacturing is thus a serious concern: any divergence in a process may lead to quality degradation in manufactured products, energy wastage and system unreliability. This paper proposes an approach for anomaly detection in electroplating processes that combines the ordering relationships between process steps with boosted decision tree classifiers (the XGBoost system), using Kernel Principal Component Analysis for dimensionality reduction, which proves effective in handling nonlinear phenomena through a Gaussian kernel with a self-tuning procedure. The approach achieves good accuracy while retaining enough generalization to cope with data size and complexity challenges, thereby improving detection rate and accuracy. The approach was validated on a dataset of electroplating executions from 2021. The classified anomaly events produced by our approach can be used, for instance, as candidates for a generalized anomaly detection framework in electroplating.
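A minimal sketch of the kind of pipeline described above, assuming scikit-learn's `KernelPCA` with a Gaussian (RBF) kernel for nonlinear dimensionality reduction followed by an XGBoost classifier. The parameter values, the fixed `gamma` (in place of the paper's self-tuning procedure) and the commented-out training call are illustrative assumptions, not the paper's actual configuration.

```python
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier

def build_model(n_components=10, gamma=0.1):
    """Kernel PCA feature extraction followed by a boosted-tree anomaly classifier."""
    return Pipeline([
        # Gaussian (RBF) kernel PCA; gamma is fixed here for simplicity.
        ("kpca", KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)),
        # Gradient-boosted decision trees for the anomaly / normal classification.
        ("clf", XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")),
    ])

# Hypothetical usage, assuming X holds numeric features derived from
# electroplating executions and y marks anomalous (1) vs. normal (0) runs:
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
# model = build_model().fit(X_tr, y_tr)
# print("accuracy:", model.score(X_te, y_te))
```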