Patient-specific quality assurance (QA) for Volumetric Modulated Arc Therapy (VMAT) plans is routinely performed in the clinic. However, it is labor-intensive and time-consuming for medical physicists. QA prediction models can address these shortcomings and improve efficiency. Current approaches mainly focus on single cancer types and single-modality data, which limits their applicability in clinical practice. To assess the accuracy of QA results for VMAT plans, this paper presents a new model that learns complementary features from multi-modal data to predict the gamma passing rate (GPR). Based on the characteristics of VMAT plans, a feature-data fusion approach is designed to fuse imaging and non-imaging information in the model. In this study, 690 VMAT plans encompassing more than ten diseases are collected. The model accurately predicts GPR for most VMAT plans at all three gamma criteria: 2%/2 mm, 3%/2 mm and 3%/3 mm. The mean absolute errors between the predicted and measured GPR are 2.17%, 1.16% and 0.71%, respectively, and the maximum deviations are 3.46%, 4.60% and 8.56%, respectively. The proposed model is effective, and the features of the two modalities significantly influence QA results.
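As a concrete illustration of the evaluation reported above, the sketch below computes the mean absolute error and maximum deviation between predicted and measured GPR values for one gamma criterion. The function name and the example GPR values are illustrative assumptions, not taken from the study.

```python
# Minimal sketch of the accuracy metrics reported above; `predicted` and
# `measured` are per-plan GPR values (percent) for one gamma criterion.
# All names and numbers here are illustrative, not from the study.
import numpy as np

def gpr_error_metrics(predicted: np.ndarray, measured: np.ndarray):
    """Return (mean absolute error, maximum absolute deviation), in percent."""
    abs_dev = np.abs(predicted - measured)
    return float(abs_dev.mean()), float(abs_dev.max())

# Hypothetical GPR values for three plans at the 3%/3 mm criterion:
predicted = np.array([98.2, 99.1, 97.5])
measured = np.array([98.9, 98.6, 98.1])
mae, max_dev = gpr_error_metrics(predicted, measured)
print(f"MAE = {mae:.2f}%, max deviation = {max_dev:.2f}%")
```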
This study used a new type of detector, the CROSS II (Liverage Biomedical Inc., Taiwan), to perform a beam quality assurance (QA) procedure on a Sumitomo (Sumitomo Heavy Industries, Inc., Japan) pencil beam linear scanning proton therapy machine. The CROSS II can monitor the proton pristine Bragg peak range, beam width, beam size, beam position, and scanning speed. All the data presented here were collected over a time span of more than one year. The accuracy of the QA program could be verified if all QA items were measured stably and within the programmed tolerances. Our results showed that the proton range remained within the ±2 mm tolerance (with the majority of measurements within ±0.5 mm), spot size within ±2 mm, spot position within 1.5 mm, and scanning speed within ±2%. We found that the CROSS II detector is highly precise, stable, and efficient. Our proton therapy system was also shown to be in an accurate and reliable condition according to our QA results.
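To make the tolerance scheme concrete, here is a minimal sketch of how such per-session QA checks could be encoded; the tolerance values mirror the abstract, while the measurement values and report format are assumptions for illustration.

```python
# Hedged sketch of a tolerance check for the QA items named above; the
# measurement-loading step and report format are illustrative assumptions.
RANGE_TOL_MM = 2.0      # proton range: +/-2 mm
SPOT_SIZE_TOL_MM = 2.0  # spot size: +/-2 mm
SPOT_POS_TOL_MM = 1.5   # spot position: 1.5 mm
SPEED_TOL_PCT = 2.0     # scanning speed: +/-2%

def within_tolerance(deviation: float, tolerance: float) -> bool:
    """A QA item passes if its absolute deviation from baseline is in tolerance."""
    return abs(deviation) <= tolerance

# Example: deviations measured in one QA session (hypothetical values).
session = {"range_mm": 0.4, "spot_size_mm": 1.1, "spot_pos_mm": 0.6, "speed_pct": 0.8}
report = {
    "range": within_tolerance(session["range_mm"], RANGE_TOL_MM),
    "spot size": within_tolerance(session["spot_size_mm"], SPOT_SIZE_TOL_MM),
    "spot position": within_tolerance(session["spot_pos_mm"], SPOT_POS_TOL_MM),
    "scanning speed": within_tolerance(session["speed_pct"], SPEED_TOL_PCT),
}
print(report)  # all True -> the session passes
```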
The growing complexity of systems-on-chip (SoCs) and rapidly decreasing time-to-market have pushed the design abstraction to the electronic system level in order to increase design productivity. SystemC is a widely used electronic system level modeling language that enables quick prototyping and early verification in the SoC design process. The functional correctness of SystemC designs is often one of the greatest concerns in the SoC design process, since undetected design errors may propagate to low-level implementations or even final silicon products, where they are costly to fix. However, SystemC verification is a challenging task due to complex language features such as object-oriented constructs, hardware-oriented data types and concurrency. A variety of approaches have been proposed for SystemC verification over the past two decades. This work systematically investigates the state-of-the-art SystemC verification approaches by discussing their methodologies, advantages, and limitations, as well as presenting a comparison among them.
Embedded software controls the functions of mechanical and physical devices through dedicated digital signal processors and computers. Nowadays, heterogeneous and collaborative embedded software systems are widely adopted to engage the physical world. To make such software highly reliable, efficient and flexible, component-based development can be employed for complex embedded systems, especially those based on object-oriented (OO) approaches. In this paper, we introduce a component-based embedded software framework and its key features. We propose a quality assurance (QA) model for component-based embedded software development, which covers both component QA and system QA as well as their interactions. Furthermore, we propose a generic quality assessment environment for component-based embedded systems, ComPARE, which can be used to assess real-life off-the-shelf components and to evaluate and validate the models selected for their evaluation. The overall component-based embedded system can then be composed and analyzed seamlessly.
Model-checking-based verification techniques play an important role in the quality assurance of concurrent systems. The lack of formal semantics in existing formalisms for describing multi-agent models, combined with the complexity of multi-agent systems, is a source of several problems during their development process. The Maude language, based on rewriting logic, offers a rich notation supporting formal specification and implementation of concurrent systems. In addition to its modeling capacity, the Maude environment integrates a model checker based on Linear Temporal Logic (LTL) for the verification of distributed systems. In this paper, we present a formal and generic framework (DIMA-Maude) supporting the formal description and verification of DIMA multi-agent models.
Failure Mode and Effects Analysis (FMEA) documents single failures of a system by identifying the failure modes and the causes and effects of each potential failure mode on system service, and by defining appropriate detection procedures and corrective actions. When extended by the Criticality Analysis (CA) procedure for classifying failure modes, it is known as Failure Mode, Effects and Criticality Analysis (FMECA). This paper presents a literature review of FME(C)A, covering the following aspects: description and review of the basic principles of FME(C)A, types, enhancements of the method, automation and available computer codes, combination with other techniques, and specific applications. We conclude with a discussion of various issues raised as a result of the review.
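Since the criticality extension is what distinguishes FMECA from FMEA, a short sketch may help: failure modes are scored and ranked by a risk priority number (RPN = severity × occurrence × detection). The 1-10 scales and the example failure modes below are illustrative assumptions, not drawn from any particular standard.

```python
# Illustrative sketch of the criticality ranking step of FMECA.
# Scales and example failure modes are assumptions for exposition.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number = severity * occurrence * detection."""
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("seal leak", severity=7, occurrence=3, detection=4),
    FailureMode("sensor drift", severity=4, occurrence=6, detection=7),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")  # highest RPN gets corrective action first
```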
Reforms in the Life Science Program at the National University of Singapore.
This article is about product or service quality in clinical trials.
Microfluidic droplets formed in emulsions are used in a variety of analytical techniques and hold great potential for future scientific and commercial applications. Our experiments present a microdroplet generation and consistency monitoring system with laser optics excitation and detection. We also demonstrate the detection of cancer cells encapsulated within aqueous microdroplets in continuous oil phase flow. The custom setup analyzes each droplet with sub-millisecond signal resolution and single photon accuracy, and is compatible with process-monitoring methods. To demonstrate the consistency of microdroplet generation over time, we measure and examine the mean frequency of aqueous plug-shaped droplet (microplug) formation in oil phase, as well as the mean length and interval between consecutive droplets. Two-channel optical monitoring allows for the simultaneous and independent inspection of both microdroplet generation and identification of green fluorescent protein-labeled cancer cells within the droplets. A precise, quantitative approach as utilized in these experiments may be helpful in the development of microfluidic concepts that require exacting reproducibility and would benefit from automated consistency monitoring techniques.
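A hedged sketch of the kind of signal analysis described above: given a 1-D photodetector trace in which each droplet appears as a plateau above a threshold, the droplet lengths, inter-droplet intervals and mean generation frequency can be extracted as follows. The threshold-crossing approach and parameter names are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch: droplet statistics from a thresholded photodetector trace.
# Assumes the trace contains at least one complete droplet.
import numpy as np

def droplet_statistics(signal: np.ndarray, fs: float, threshold: float):
    """Return mean droplet transit time [s], mean start-to-start interval [s],
    and mean generation frequency [Hz]."""
    above = (signal > threshold).astype(int)
    starts = np.where(np.diff(above) == 1)[0]
    ends = np.where(np.diff(above) == -1)[0]
    if len(ends) and len(starts) and ends[0] < starts[0]:
        ends = ends[1:]  # drop a droplet already in progress at t = 0
    n = min(len(starts), len(ends))
    lengths = (ends[:n] - starts[:n]) / fs   # droplet transit times
    intervals = np.diff(starts[:n]) / fs     # spacing between droplet starts
    mean_interval = intervals.mean() if len(intervals) else float("nan")
    frequency = 1.0 / mean_interval if len(intervals) else float("nan")
    return lengths.mean(), mean_interval, frequency
```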
Verification procedures for patient-specific quality assurance (QA) in advanced radiotherapy are laborious and time-consuming. Moreover, it has been shown that these procedures cannot detect certain inaccuracies in particularly complex cases involving tissue inhomogeneity and highly modulated plans. Secondary dose calculation verification of radiotherapy plans is an important aspect of patient-specific QA. Suitably optimized software, RadCalc equipped with a 3D Monte Carlo (MC) module, was used to dosimetrically verify radiotherapy treatment plans in which the measured dose distributions can be inaccurate due to the TPS dose calculation algorithm and/or treatment unit delivery uncertainties. MC models were built using specific commissioning measurements, and the Additional Radiation to Light Field Offset parameter (the Dosimetric Leaf Gap parameter) was then tuned to achieve the best agreement with phantom patient-specific QA measurements. The results showed good agreement between the TPS and the simulations. RadCalc MC also allows better estimation of plan doses in lung cancer patients and detection of possible inaccuracies due to tissue inhomogeneity, which cannot be assessed with a homogeneous phantom.
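The tuning step described above can be pictured as a simple parameter sweep, as in the hedged sketch below; `simulate_dose` is a hypothetical placeholder for the RadCalc MC recalculation, and mean absolute dose difference stands in for the clinical agreement metric.

```python
# Hedged sketch of Dosimetric Leaf Gap (DLG) tuning: sweep candidate values
# and keep the one giving best agreement with phantom QA measurements.
# `simulate_dose` is a hypothetical stand-in for the MC recalculation.
import numpy as np

def tune_dlg(dlg_candidates, measured_dose, simulate_dose):
    """Return the DLG value minimizing mean |simulated - measured| dose."""
    best_dlg, best_err = None, np.inf
    for dlg in dlg_candidates:
        err = float(np.abs(simulate_dose(dlg) - measured_dose).mean())
        if err < best_err:
            best_dlg, best_err = dlg, err
    return best_dlg, best_err
```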
This paper explores the challenges faced by a leading tractor manufacturer in India after losing its market position and investigates the impact of implementing customer-centric Total Quality Management (TQM) initiatives to regain it. XYZ Ltd. adopted TQM as a business tool in 2008 to counter external and internal business challenges. Improving customer satisfaction was one of the major targets, with the aim of becoming more competitive. Its TQM journey proved extremely successful, and as a result the company received the Deming Award in 2012. The tractors it manufactures have held the top position in the Indian market since 2011. The customer satisfaction index has improved from 76 to 106, the sales satisfaction index has increased to 105, and market share has grown by 12.6%. The case study presented in this paper may give TQM practitioners insight for promoting similar approaches in their organizations to enhance customer satisfaction.
This review emphasizes the growing need for automated inspection in metal fabrication processes, driven by increasingly complex designs. The study explores various defect detection algorithms and evaluates their effectiveness in enhancing the accuracy and reliability of the inspection process. Machine vision plays a crucial role in this context, contributing significantly to the precision of inspection in metal fabrication; its ability to handle complex tasks ensures a thorough assessment of manufactured components. The paper also examines the use of digital image correlation (DIC) as a key tool in quality assurance for metal-fabricated products. This technique provides detailed insights, enabling a thorough understanding of structural integrity and defect identification. By integrating insights on automated inspection through defect detection algorithms, machine vision and DIC, this review aims to advance quality assurance methodologies in the ever-evolving field of metal fabrication.
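To make the DIC idea concrete, the following sketch shows the core operation: locating a subset of the reference image in the deformed image by maximizing normalized cross-correlation, which yields its in-plane displacement. This brute-force integer-pixel search is for exposition only; practical DIC codes add subpixel interpolation and optimized solvers.

```python
# Illustrative sketch of the core DIC operation: template matching by
# normalized cross-correlation (NCC). Parameters are illustrative.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def track_subset(ref, deformed, top, left, size, search=10):
    """Find the (dy, dx) displacement of a size x size subset via NCC search."""
    template = ref[top:top + size, left:left + size]
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > deformed.shape[0] or x + size > deformed.shape[1]:
                continue  # candidate window falls outside the image
            score = ncc(template, deformed[y:y + size, x:x + size])
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best
```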
The scope of this paper is to describe the existing problems with respect to the lack of a harmonised assessment approach for air quality, and to provide general recommendations and priorities for an assessment approach at the European scale. Given that the approach to assessment may diverge among European countries and the players involved, the recommendations are general and aim at providing the overall framework on the basis of which specific assessment approaches could be worked out. The starting point for the paper is a synopsis of the characteristics of air quality assessments, as well as of the obstacles to a harmonised assessment approach at the European scale. The assessment of air quality is then described in relation to sources as well as to effects. Finally, the assessment goals are identified, and the link between air quality assessment and integrated assessment is discussed.
The goal of intensity-modulated radiation therapy (IMRT) is to deliver a uniform dose to the tumor with minimal margins around the target, in order to increase local control of the disease while reducing secondary effects. The research performed in this work has shown the potential usefulness of the Fricke-gel dosimeter as a quality assurance (QA) tool to verify IMRT treatments produced by inverse treatment planning. First, the 3D integrating Fricke-gel dosimeter was successfully compared to an accepted dosimetric tool. It was then used to measure relative 3D dose distributions of simple treatment plans with multiple square or rectangular fields and specific inverse-planned IMRT treatment plans. By combining the CT anatomical information and the plan contours with the gel-measured data, it was possible to display the contours on the measured dose and the measured isodose lines on the CT, in addition to measuring dose-volume histograms (DVH) for the plans. This demonstrated the usefulness of the gel dosimeter as a QA tool for IMRT and inverse planning.
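For readers unfamiliar with the DVH computation mentioned above, a minimal sketch follows, assuming `dose` is a 3-D dose grid and `mask` is a boolean array selecting the voxels of one contoured structure; the bin width is an illustrative choice.

```python
# Minimal sketch of a cumulative dose-volume histogram (DVH).
# Assumes `mask` selects at least one voxel; bin width is illustrative.
import numpy as np

def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, bin_gy: float = 0.1):
    """Return (dose bin edges [Gy], % of structure volume receiving >= that dose)."""
    d = dose[mask]                                   # doses of the structure's voxels
    bins = np.arange(0.0, d.max() + bin_gy, bin_gy)  # dose levels to evaluate
    volume_pct = np.array([(d >= b).mean() * 100.0 for b in bins])
    return bins, volume_pct
```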
As a consequence of the permanently increasing complexity of modeling and simulation (M&S) applications, there is a clear demand for controlling and demonstrating the quality of a model and its applications by introducing adequate quality measures, techniques, and tools. Since quality assurance efforts such as model documentation and verification and validation (V&V) are time-consuming and costly, a reasonable balance between quality and efficiency should be achieved in all practical applications. In the context of an integrated quality assurance strategy, several quality- and efficiency-related concepts have been developed and combined, including a model documentation guideline, a generalized V&V concept (the V&V Triangle), graph-based V&V, and a multistage tailoring process. This article describes the proposed concepts and their applications in two selected simulation development projects. While the primary objective of the first project was the introduction and application of the documentation guideline and the tailoring concept, the second project focused more on applying the V&V Triangle and graph-based V&V. Based on the findings of the case studies, some relevant lessons learned are summarized, which serve as input for both theoretical concept refinement and practical applications.
During the last few decades, assisted reproductive technologies (ARTs) have flourished rapidly and been accompanied by a set of advanced procedures such as intracytoplasmic sperm injection (ICSI), electronic witnessing, digital monitoring through embryoscope time-lapse systems, consistent decision-making algorithms with advanced statistical models and preimplantation genetic testing (PGT). In routine practice, manual procedures have been used in IVF (in vitro fertilization) laboratories worldwide, but automation and artificial intelligence (AI) systems are promising techniques for quality assurance that reduce the burden on the working staff of the embryology laboratory. In addition, these systems are equipped with powerful mathematical tools that minimize technician variability in the IVF lab and efficiently generate data on impaired gametes and embryos. The principal challenge of selecting a single sperm out of roughly 10⁸ gametes can be addressed by incorporating machine learning algorithms coupled with advanced data processing capabilities. Along the same lines, the emergence of closed embryo culture systems (CECSs) in human embryology has enabled accurate morphokinetic evaluation of rapid cell divisions and the identification of normal and abnormal hallmarks of embryo viability. In particular, these CECSs are guided by the latest time-lapse microscopy (TLM) facilities to continuously monitor embryo development kinetics without removing embryos from controlled and stable incubator conditions. In conclusion, AI-driven models can reduce technical variability in sample handling and remove the burden of the most subjective, tedious and/or monotonous aspects of the IVF lab. Furthermore, these systems can also highlight environmental stressors that could hamper embryo developmental competence. In a broader sense, AI-based approaches are more accurate, precise and rapid in predicting embryo quality noninvasively.
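As an illustration of the morphokinetic modeling described above, the sketch below trains a simple classifier on time-lapse division timings (e.g., t2, t3, t5, in hours post-insemination) to score embryo viability, assuming scikit-learn is available. The feature set, the toy data and the choice of logistic regression are assumptions for exposition, not any validated clinical model.

```python
# Hedged sketch: scoring embryo viability from morphokinetic timings.
# Feature names and training data are hypothetical, for exposition only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows are embryos, columns are (t2, t3, t5).
X = np.array([[25.1, 36.8, 50.2],
              [27.4, 40.1, 55.9],
              [24.6, 35.9, 49.0],
              [29.8, 44.0, 60.3]])
y = np.array([1, 0, 1, 0])  # 1 = developed into a good-quality blastocyst

model = LogisticRegression().fit(X, y)
new_embryo = np.array([[25.5, 37.2, 51.0]])
print(model.predict_proba(new_embryo)[0, 1])  # viability score in [0, 1]
```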
The provision of central resources of human stem cell lines will be an important element in enabling progress in stem cell research and the development of safe and effective cell therapies. These resource centres, commonly referred to as "stem cell banks", could promote advances in the field of stem cell research, by providing access to well-characterised and quality controlled seed stocks of human stem cell lines that have been checked for appropriate ethical provenance. Such banks can also deliver benefits for the field through establishing international collaboration and standardisation between the developing national stem cell banking centres. This chapter will review the key issues that centres banking and distributing stem cells must address to support the research community in the development of exciting cell therapies for the future.
Developing software and systems in a way that entails plannable project execution and predictable product quality requires the use of quantitative data for project control. In the context of software development, few techniques exist for supporting on-line monitoring, interpretation, and visualization of project data. This is caused particularly by the often insufficient use of such engineering principles as experience-based planning and plan-based execution in the software development domain. However, effective software project control requires integrated tool support for capturing, managing, analyzing, and storing data. In addition, advanced control approaches aim at providing purpose- and role-oriented information to all involved parties (e.g. project manager, quality assurer) during the execution of a project. This chapter introduces the concept of a so-called Software Project Control Center (SPCC), sketches a control-oriented software development model, and gives a representative overview of existing tool-based software control approaches from academia and practice. Finally, the different approaches are classified and compared with respect to a characterization schema that reflects important requirements from the viewpoint of practitioners.
An essential and inherent part of any managerial process is monitoring and feedback for all the organization's activities. Every organization needs to know whether it is acting effectively and whether its activities are accepted by their recipients as intended. The handling of complaints is designed to prevent the recurrence of similar incidents in the future and to improve the organization's performance. Furthermore, the handling of complaints from the public is important in tempering the bitter feelings and sense of helplessness of the citizen vis-à-vis bureaucratic systems. One of the various tools available to managers for obtaining this much-needed feedback on the organization's activities is complaints from the public. The mechanism for handling complaints from the public and responding to them is generally headed by an ombudsman. Managing the information received from complaints and transforming it into knowledge effectively requires the database to be complete, up-to-date, versatile and, most importantly, available, accessible, and practical. For this purpose, a computerized system is essential that is both user-friendly and interfaces with the demographic and other databases already existing in the hospital. This type of information system should also assist in the ongoing administrative management of complaint handling by the ombudsman. In this chapter, we examine the importance of the ombudsman in public and business organizations in general and in health organizations in particular. The findings presented here are based on a survey of the literature, on a study we conducted among the ombudsmen and directors of all 26 general hospitals in Israel, and on the authors' cumulative experience in management, complaint handling and auditing health systems, as described in case studies. These findings illustrate how a computerized database of public complaints can be exploited to improve various organizational activities, including upgrading the quality of service provided to patients in hospitals.
Quality in academic libraries is a multi-dimensional construct. Quality management and quality assurance are part of measuring performance excellence. Libraries are services: to improve service quality, stakeholders' needs and expectations should be monitored and measured, and shortfalls should be identified and addressed.
Some basic principles are common to all measurements, but quality metrics must focus on the unique nature of academic library services and the factors that could affect their quality. Measuring quality covers the resources, resource delivery, the service environment, the management and staff, and the different stakeholders. In the digital environment, the academic library also moves from collection to connection, with new demands and performance indicators. Both quantitative and qualitative measurements are required to evaluate the overall performance of the library. The ultimate goal of measurement is improving the "fitness for purpose" of the library.