The reputation of lightweight software development processes such as Agile and Lean is damaged by practitioners who claim benefits that such processes do not actually deliver.
Teams that want to demonstrate their seriousness could benefit from matching their processes to the CMMI model, a model recognized by industry and public administration. CMMI stands for Capability Maturity Model Integration; it provides a reference model, based on best practices, for improving and evaluating processes according to their maturity.
On the other hand, particularly in a lightweight software development process, the costs of a CMMI appraisal are hard to justify since its advantages are not directly related to the creation of value for the customer.
This paper presents Jidoka4CMMI, a tool that, once a CMMI appraisal has been conducted, allows the assessment criteria to be documented in the form of executable test cases. The test cases, and thus the CMMI appraisal, can be repeated at any time without additional cost.
The use of Jidoka4CMMI increases the benefits of conducting a CMMI appraisal. We hope that this encourages practitioners using lightweight software development processes to assess their processes using a CMMI model.
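The abstract does not specify the format Jidoka4CMMI uses for its executable test cases; purely as an illustration, the following sketch shows what an automated appraisal check could look like, assuming a hypothetical traceability criterion (every commit message must reference a requirement identifier) and hypothetical commit data:

```python
import re
import unittest

# Hypothetical commit records; in practice these would be read from the
# version control system of the project under appraisal.
COMMITS = [
    {"id": "a1f9", "message": "REQ-102: add input validation"},
    {"id": "b7c3", "message": "REQ-117: fix rounding error"},
]

REQ_REF = re.compile(r"\bREQ-\d+\b")

class TraceabilityAppraisalCheck(unittest.TestCase):
    """Executable stand-in for a manually assessed appraisal criterion:
    every change must be traceable to a requirement."""

    def test_every_commit_references_a_requirement(self):
        for commit in COMMITS:
            self.assertRegex(commit["message"], REQ_REF,
                             f"commit {commit['id']} lacks a requirement reference")

if __name__ == "__main__":
    unittest.main()
```

Running such checks in a build pipeline is what makes the documented criteria repeatable at no extra cost.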
Process mining mainly focuses on discovering control-flow models, checking conformance and analyzing bottlenecks. Its scope is extended to further perspectives such as time, data and resources by connecting the events in the event logs to the discovered process model. These perspectives are not isolated; they are all related to each other. Each perspective is addressed by a dedicated technique, and these techniques may need to consume one another's results over a sequence of process mining analyses. As a result, a holistic process model is created by attaching the related attributes of the event logs to the backbone (control flow) of the model. Representing this holistic model, and keeping what each perspective produces in a secure and immutable way while the multiple perspectives are applied, therefore becomes important. In this study, a BPMN-extended Data Model is proposed to bring together the models produced by multi-perspective process mining, and a tool is developed that stores this data model as an asset in a private blockchain built with Hyperledger Fabric. The practical relevance and validity of the approach are shown in case studies using real-life data from two different domains.
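The exact schema of the BPMN-extended Data Model is not given in the abstract; the sketch below only illustrates, under assumed field names, how a control-flow backbone annotated with time, resource and data perspectives could be serialized and hashed before being stored as a tamper-evident asset on a ledger such as Hyperledger Fabric (the actual ledger submission is omitted):

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict

@dataclass
class HolisticProcessModel:
    """Control-flow backbone annotated with further perspectives,
    in the spirit of the BPMN-extended data model described above."""
    activities: list            # nodes of the control-flow backbone
    flows: list                 # (source, target) arcs
    time_annotations: dict = field(default_factory=dict)      # e.g. mean durations
    resource_annotations: dict = field(default_factory=dict)  # e.g. executing roles
    data_annotations: dict = field(default_factory=dict)      # e.g. attached case attributes

def as_ledger_asset(model: HolisticProcessModel, model_id: str) -> dict:
    """Serialize the model deterministically and attach a content hash,
    so the asset written to the ledger is tamper-evident."""
    payload = json.dumps(asdict(model), sort_keys=True)
    return {
        "id": model_id,
        "payload": payload,
        "sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
    }
```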
Event logs often record the execution of business process instances. Detecting traces in event logs that do not comply with access control policies, such as role-based access control (RBAC) policies, is essential to ensuring system security. Process mining has been used extensively for security analysis in recent years; however, pattern-based approaches for designing and analyzing RBAC policies in the context of business processes through process mining are notably absent. In this paper, we present a systematic framework for checking the conformance of the RBAC implemented in the event logs of business processes with the RBAC policies specified in domain knowledge. To represent the RBAC policies derived from domain knowledge, we employ an RBAC domain-specific language (DSL) combined with our RBAC-driven object constraint language (OCL) invariant patterns built from the various types of RBAC constraints. The RBAC implemented in an event log is represented as snapshots within our framework, and the snapshots are validated against the RBAC policies to detect RBAC conformance issues. The proposed framework is evaluated on two business process logs: a simulated log and the real-world event log “BPI Challenge 2017”.
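The paper's RBAC DSL and OCL invariant patterns are not reproduced here; as a simplified stand-in, the sketch below checks one permitted-role constraint and one separation-of-duty constraint against a hypothetical log snapshot:

```python
from collections import defaultdict

# A "snapshot" of the RBAC state observed in an event log:
# which user acted under which role, per case and activity.
events = [
    {"case": "c1", "activity": "submit_loan",  "user": "ann", "role": "clerk"},
    {"case": "c1", "activity": "approve_loan", "user": "ann", "role": "manager"},
    {"case": "c2", "activity": "submit_loan",  "user": "bob", "role": "clerk"},
    {"case": "c2", "activity": "approve_loan", "user": "eve", "role": "manager"},
]

# Policies from domain knowledge: permitted roles per activity,
# plus a separation-of-duty constraint between two activities.
PERMITTED = {"submit_loan": {"clerk"}, "approve_loan": {"manager"}}
SEPARATION_OF_DUTY = [("submit_loan", "approve_loan")]

def check_snapshot(events):
    violations = []
    performers = defaultdict(dict)   # case -> activity -> user
    for e in events:
        if e["role"] not in PERMITTED.get(e["activity"], set()):
            violations.append(f"{e['case']}: role {e['role']} not permitted for {e['activity']}")
        performers[e["case"]][e["activity"]] = e["user"]
    for case, acts in performers.items():
        for a, b in SEPARATION_OF_DUTY:
            if a in acts and b in acts and acts[a] == acts[b]:
                violations.append(f"{case}: {acts[a]} performed both {a} and {b}")
    return violations

print(check_snapshot(events))   # flags case c1: ann performed both activities
```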
Currently, most research in process mining focuses on discovering a workflow model from an entire log. In practice, process designers may already have a partially built model, and their prior knowledge is valuable input to process mining. Moreover, the large volume of log data makes process mining time-consuming, so it is preferable to mine incrementally. Only a few methods can mine a model incrementally, and they have limitations such as the inability to handle loops or intolerance to noise. Loop mining itself is a challenging problem in process mining because repeatedly executed tasks complicate the search for task precedence. This paper studies the problem of handling loops in process mining and proposes an improved incremental process mining method that supports loops. Concluding experiments show the feasibility and validity of the proposed method.
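The proposed method itself is not detailed in the abstract; the following sketch only illustrates the general incremental idea, assuming a heuristics-style footprint in which direct-succession counts are updated trace by trace and self-loops are recorded separately so that repetition does not distort the precedence relation:

```python
from collections import Counter

class IncrementalFootprint:
    """Incrementally maintained direct-succession statistics.
    New traces update the counts without reprocessing earlier ones;
    self-loops are tracked separately so repeated tasks do not
    distort the precedence relation."""

    def __init__(self):
        self.succession = Counter()   # (a, b) -> how often b directly follows a
        self.self_loops = Counter()   # a -> how often a directly follows itself

    def add_trace(self, trace):
        for a, b in zip(trace, trace[1:]):
            if a == b:
                self.self_loops[a] += 1
            else:
                self.succession[(a, b)] += 1

    def precedes(self, a, b, noise_threshold=1):
        """a is considered a predecessor of b if the observed support
        exceeds a small threshold (a crude way to tolerate noise)."""
        return (self.succession[(a, b)] > noise_threshold
                and self.succession[(a, b)] > self.succession[(b, a)])

fp = IncrementalFootprint()
fp.add_trace(["register", "check", "check", "decide"])   # existing log
fp.add_trace(["register", "check", "decide"])            # new trace arrives later
```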
Process Discovery techniques, which extract graph-like models from large process logs, are a valuable means of grasping a summarized view of the behavior of real business processes. When augmented with statistics on process performance (e.g., processing times), such models help study the evolution of process performance across different processing steps, and possibly detect bottlenecks and worst practices. However, when the analyzed process exhibits complex and heterogeneous behaviors, these techniques fail to yield good-quality models in terms of readability, accuracy and generality. In particular, the presence of deviant traces may lead to cumbersome models and misleading performance statistics. Current noise/outlier filtering solutions can alleviate this problem and help discover a better model for “normal” process executions, but they provide no insight into the deviant ones; difficult and expensive analyses are then usually needed to extract interpretable and sufficiently general patterns for deviant behaviors. The performance-oriented discovery approach proposed here aims to recognize and describe both a normal execution scenario and deviant ones for the analyzed process, by inducing different sub-models: (i) a collection of readable clustering rules (conjunctive patterns over trace attributes) defining the deviance scenarios; (ii) a performance model M0 for the “normal” traces that do not fall into any deviant scenario; and (iii) for each discovered deviance scenario, a performance model and a “difference” model emphasizing how its behavior departs from the “normal” execution scenario. Technically, these models are discovered by a conceptual clustering method embedded in an iterative optimization scheme: the current version of M0 is replaced with the model extracted from the newly found normality cluster whenever the latter is more accurate than M0, while the clustering procedure greedily finds groups of traces that maximally deviate from M0. Tests on real-life logs confirmed the validity of this approach, its capability to find good performance models, and its support for the analysis of deviant process instances.
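The abstract outlines the iterative optimization scheme; the sketch below renders that control loop in Python, with `discover_performance_model`, `accuracy` and `find_max_deviating_cluster` as hypothetical placeholders for the paper's model-induction, accuracy and conceptual-clustering steps:

```python
def iterative_deviance_mining(traces, discover_performance_model,
                              accuracy, find_max_deviating_cluster,
                              max_iterations=10):
    """Sketch of the iterative scheme described above: keep a model M0 of
    "normal" behaviour, greedily split off the cluster of traces that
    deviates most from it, and replace M0 whenever the model of the
    remaining ("normality") traces is more accurate."""
    normal = list(traces)
    m0 = discover_performance_model(normal)
    deviance_scenarios = []
    for _ in range(max_iterations):
        rule, deviant = find_max_deviating_cluster(normal, m0)
        if not deviant:
            break
        remaining = [t for t in normal if t not in deviant]
        candidate = discover_performance_model(remaining)
        if accuracy(candidate, remaining) > accuracy(m0, remaining):
            m0 = candidate
        deviance_scenarios.append((rule, discover_performance_model(deviant)))
        normal = remaining
    return m0, deviance_scenarios
```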
In recent years, a new generation of adaptive Process-Aware Information Systems (PAIS) has emerged that enables dynamic process changes at runtime while preserving PAIS robustness and consistency. Such adaptive PAIS allow authorized users to add new process activities, delete existing activities, or change pre-defined activity sequences during runtime. Both this runtime flexibility and build-time process configuration lead to a large number of process variants derived from the same process model but differing slightly in structure due to the applied changes. Generally, process variants are expensive to configure and difficult to maintain. This paper presents selected results from our MinAdept project. In particular, we provide a clustering algorithm that fosters learning from past process changes by mining a collection of process variants. As the mining result, we obtain a process model whose average distance to the process variant models is minimal. Adopting this process model as the reference model in the PAIS decreases the need for future process configuration and adaptation. We have validated our clustering algorithm by means of a case study as well as comprehensive simulations. Altogether, our vision is to enable full process lifecycle support in adaptive PAIS.
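As a much simplified illustration of the idea of a reference model with minimal average distance to the variants, the sketch below uses plain edit distance over activity sequences (not the structural change distance of the actual work) and picks the candidate closest, on average, to all variants:

```python
def edit_distance(a, b):
    """Levenshtein distance between two activity sequences, a crude
    stand-in for the change distance between process models."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def pick_reference_model(candidates, variants):
    """Choose the candidate whose average distance to all variants is minimal."""
    return min(candidates,
               key=lambda c: sum(edit_distance(c, v) for v in variants) / len(variants))

variants = [["A", "B", "C"], ["A", "C"], ["A", "B", "B", "C"]]
print(pick_reference_model(variants, variants))   # the variant closest to all others
```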
Process discovery algorithms typically aim at discovering process models from event logs that best describe the recorded behavior. Often, the quality of a process discovery algorithm is measured by quantifying to what extent the resulting model can reproduce the behavior in the log, i.e. replay fitness. At the same time, there are other measures that compare a model with recorded behavior in terms of the precision of the model and the extent to which the model generalizes the behavior in the log. Furthermore, many measures exist to express the complexity of a model irrespective of the log.
In this paper, we first discuss several quality dimensions related to process discovery. We further show that existing process discovery algorithms typically consider at most two out of the four main quality dimensions: replay fitness, precision, generalization and simplicity. Moreover, existing approaches cannot steer the discovery process based on user-defined weights for the four quality dimensions.
This paper presents the ETM algorithm, which allows the user to seamlessly steer the discovery process based on preferences with respect to the four quality dimensions. We show that all dimensions are important for process discovery. However, it only makes sense to consider precision, generalization and simplicity if the replay fitness is acceptable.
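The concrete measures behind the four dimensions cannot be reconstructed from the abstract; the sketch below only shows how user-defined weights could steer a search objective, with the dimension scores supplied as hypothetical callables, and it reflects the remark that precision, generalization and simplicity matter only once replay fitness is acceptable:

```python
def weighted_quality(model, log, measures, weights, fitness_floor=0.8):
    """Combine the four quality dimensions into one steering objective.
    `measures` maps dimension name -> callable(model, log) returning a
    score in [0, 1]; `weights` holds the user-defined weights."""
    total = sum(weights.values())
    replay = measures["replay_fitness"](model, log)
    if replay < fitness_floor:
        # other dimensions are ignored until replay fitness is acceptable
        return weights["replay_fitness"] * replay / total
    return sum(weights[d] * measures[d](model, log)
               for d in ("replay_fitness", "precision", "generalization", "simplicity")) / total
```

In an evolutionary search such an objective would rank candidate models each generation, so shifting the weights shifts which trade-off the discovered model favours.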
Artifact-centric modeling is an approach for capturing business processes in terms of so-called business artifacts — key entities driving a company's operations and whose lifecycles and interactions define an overall business process. This approach has been shown to be especially suitable in the context of processes where one-to-many or many-to-many relations exist between the entities involved in the process. As a contribution towards building up a body of methods to support artifact-centric modeling, this article presents a method for automated discovery of artifact-centric process models starting from logs consisting of flat collections of event records. We decompose the problem in such a way that a wide range of existing (non-artifact-centric) automated process discovery methods can be reused in a flexible manner. The presented methods are implemented as a package for ProM, a generic open-source framework for process mining. The methods have been applied to reverse-engineer an artifact-centric process model starting from logs of a real-life business process.
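The decomposition is the key reuse point; the sketch below illustrates it under assumed field names (not the ProM package's actual schema), splitting a flat event log into per-artifact sub-logs and handing each to any conventional discovery function:

```python
from collections import defaultdict

def split_by_artifact(flat_log, artifact_type_key="artifact_type",
                      artifact_id_key="artifact_id"):
    """Group flat event records into one sub-log per artifact type,
    where each sub-log maps an artifact instance to its events."""
    sublogs = defaultdict(lambda: defaultdict(list))
    for event in flat_log:
        sublogs[event[artifact_type_key]][event[artifact_id_key]].append(event)
    return sublogs

def discover_artifact_lifecycles(flat_log, discovery_fn):
    """Reuse any conventional (non-artifact-centric) discovery method,
    passed in as `discovery_fn`, on each artifact's sub-log."""
    return {artifact_type: discovery_fn(list(instances.values()))
            for artifact_type, instances in split_by_artifact(flat_log).items()}
```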
Process discovery algorithms aim to capture process models from event logs. These algorithms have been designed for logs in which the events that belong to the same case are related to each other — and to that case — by means of a unique case identifier. However, in service-oriented systems, these case identifiers are rarely stored beyond request-response pairs, which makes it hard to relate events that belong to the same case. This is known as the correlation challenge. This paper addresses the correlation challenge by introducing a technique, called the correlation miner, that facilitates discovery of business process models when events are not associated with a case identifier. It extends previous work on the correlation miner, by not only enabling the discovery of the process model, but also detecting which events belong to the same case. Experiments performed on both synthetic and real-world event logs show the applicability of the correlation miner. The resulting technique enables us to observe a service-oriented system and determine — with high accuracy — which request-response pairs sent by different communicating parties are related to each other.
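The correlation miner itself is not reducible to a few lines; the following deliberately naive sketch only illustrates the correlation problem it addresses, pairing each response with the earliest pending request for the same operation:

```python
from collections import defaultdict, deque

def correlate_request_response(events):
    """Pair each response with the earliest pending request for the same
    operation, processing events in timestamp order. An illustration of
    the correlation problem, not the correlation miner's actual technique."""
    pending = defaultdict(deque)   # operation -> queue of unmatched requests
    pairs = []
    for e in sorted(events, key=lambda e: e["time"]):
        if e["kind"] == "request":
            pending[e["operation"]].append(e)
        elif pending[e["operation"]]:
            pairs.append((pending[e["operation"]].popleft(), e))
    return pairs

events = [
    {"time": 1, "kind": "request",  "operation": "getQuote"},
    {"time": 2, "kind": "request",  "operation": "getQuote"},
    {"time": 3, "kind": "response", "operation": "getQuote"},
    {"time": 4, "kind": "response", "operation": "getQuote"},
]
print(len(correlate_request_response(events)))   # 2 correlated pairs
```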
The cloud computing market has grown continually in recent years and is becoming a new business opportunity for private and public organisations. The diffusion of multi-tenant distributed systems accessible through clouds gives rise to cross-organisational environments, increasing organisational efficiency, promoting business dynamism and reducing costs. Despite these advantages, this new business model confronts researchers and practitioners with new critical issues. First, multi-tenant distributed systems need new techniques to improve the traditional distribution of resource management across the different tenants. Second, new approaches to process analysis and monitoring are needed, since cross-organisational environments allow various organisations to execute the same process in different variants. Information about how each process variant is characterised can therefore be collected by the system and stored as process logs. The usefulness of such logs is twofold: they can be analysed with process mining techniques to understand and improve the business processes, and they can be used to achieve better resource management and scalability. This paper proposes a cloud computing multi-tenancy architecture to support cross-organisational process executions and improve the distribution of resource management. Moreover, the approach supports the systematic extraction/composition of distributed data from the system event logs, which are assumed to carry information about each process variant. To this aim, the approach also integrates an online process mining technique for the runtime extraction of business rules from event logs. Declarative processes are used to represent the process variants running on the analysed infrastructure, as they are particularly suited to representing business processes in contexts characterised by low predictability and high variability. We also present a case study in which the proposed architecture is implemented and applied to the execution of a real-life process of selling products online.
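Declarative (DECLARE-style) constraints are mentioned as the representation for process variants; as an illustration only, the sketch below checks one such constraint, response(a, b), online over a running event stream, with all names chosen for the example:

```python
from collections import defaultdict

class OnlineResponseConstraint:
    """Online check of the DECLARE-style constraint response(a, b):
    every occurrence of activity `a` in a case must eventually be
    followed by `b`. Open obligations are tracked per case so the
    check works on a running event stream."""

    def __init__(self, a, b):
        self.a, self.b = a, b
        self.open = defaultdict(int)   # case -> pending occurrences of a

    def observe(self, case, activity):
        if activity == self.a:
            self.open[case] += 1
        elif activity == self.b:
            self.open[case] = 0          # one b resolves all pending a's

    def violations_at_case_end(self, case):
        """Number of `a` events never followed by `b` when the case closes."""
        return self.open.pop(case, 0)

c = OnlineResponseConstraint("add_item", "confirm_order")
c.observe("c1", "add_item"); c.observe("c1", "confirm_order")
print(c.violations_at_case_end("c1"))   # 0 -> constraint satisfied for this case
```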
To improve quality and safety and to contain costs, hospitals should optimize their processes. Therefore, they need to have a full understanding and visibility of their operations and supply chain flows. In this chapter, we combined a lean management approach using value stream mapping with a data-driven approach using process mining, applied to the surgery processes of a private hospital in Switzerland. As the sterilization process was identified as a major bottleneck, we developed a digital twin and artificial intelligence algorithms to optimize the hospital operations and supply chain management. As a result, the hospital increased the quality of its work environment, improved visibility into its processes, and developed a stronger communication channel with its teams. In addition, it could better plan schedules, which allowed the hospital to limit overtime of the operating room staff. Finally, it optimized costs and saved time in the overall sterilization process.
Web service composition provides a way to build value-added services and web applications by integrating and composing existing web services. A composite web service is essentially a process in a loosely coupled service-oriented architecture, and performance analysis requires a workflow representation of the underlying process. This paper describes a method to discover such underlying processes from historical execution logs. The workflow logs contain temporal information recording the start and end times of activities. Based on a probabilistic model, the algorithm can discover sequential, parallel, exclusive-choice and iterative structures. Examples are given to illustrate the algorithm. Although discussed in terms of process mining for a composite web service, the method can also be used to mine other workflow applications.
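The paper's probabilistic model is not given in the abstract; the sketch below merely illustrates how start and end times can separate parallel from sequential structure, classifying two activities by how often their execution intervals overlap across traces (the threshold is illustrative):

```python
def relation(traces, x, y, overlap_threshold=0.5):
    """Classify activities x and y as 'parallel' or 'sequential' from the
    fraction of traces in which their [start, end] intervals overlap.
    Each trace maps an activity to its (start, end) times."""
    both = [t for t in traces if x in t and y in t]
    if not both:
        return "unrelated"
    overlaps = sum(1 for t in both
                   if t[x][0] < t[y][1] and t[y][0] < t[x][1])
    return "parallel" if overlaps / len(both) >= overlap_threshold else "sequential"

traces = [
    {"pay": (0, 4), "ship": (2, 6)},   # intervals overlap -> concurrent execution
    {"pay": (0, 3), "ship": (5, 9)},   # disjoint intervals -> ordered execution
]
print(relation(traces, "pay", "ship"))
```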