The main aim of the AMCTM 2008 conference, reinforced by the establishment of IMEKO TC21, was to provide a central opportunity for the worldwide metrology and testing community to engage with applied mathematicians, statisticians and software engineers working in the relevant fields.
This review volume consists of reviewed papers prepared on the basis of the oral and poster presentations of the conference participants. It covers the general themes of advanced statistical modeling (e.g. uncertainty evaluation, experimental design, optimization, data analysis and applications, multiple measurands, correlation), metrology software (e.g. engineering aspects, requirements or specification, risk assessment, software development, software examination, software tools for data analysis, visualization, experiment control, best practice, standards), numerical methods (e.g. numerical data analysis, numerical simulations, inverse problems, uncertainty evaluation of numerical algorithms, applications), data fusion techniques, and the design and analysis of inter-laboratory comparisons.
https://doi.org/10.1142/9789812839527_fmatter
https://doi.org/10.1142/9789812839527_0001
Sensitivity analysis is an important part of metrology, particularly for the evaluation of measurement uncertainties. It enables the metrologist to gain a better knowledge of the measurement procedure and to improve it. A tool for sensitivity analysis is developed in the Guide to the Expression of Uncertainty in Measurement (GUM) [1]. Supplement 1 to the GUM [2], which deals with the Monte Carlo method (MCM), provides a similar sensitivity index known as “One At a Time” (OAT). Other sensitivity indices have been developed but have not yet been used in metrology. In this paper, we put forward four indices and compare them by means of metrological applications. We particularly focus on the Sobol indices [3], based on the evaluation of conditional variances. In order to compare the performance of these indices, we have chosen two examples that differ from a mathematical point of view. The first example is the mass calibration example of Supplement 1 to the GUM ([2], §9.3). It highlights the relevance of the Sobol indices for estimating interaction effects. The second example is based on the Ishigami function, a non-monotonic function ([3], §2.9.3). It leads to the conclusion that, when the model is non-monotonic, indices based on partial derivatives and on standardized rank regression coefficients (SRRC) misrepresent the relative importance of the input quantities.
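As an illustration of the variance-based approach highlighted in this abstract, the following is a minimal sketch of Monte Carlo (pick-freeze) estimation of first-order Sobol indices for the Ishigami function; the sample size, random seed and the Saltelli-type estimator are choices made for this example rather than details taken from the chapter.

```python
# Hedged sketch: Monte Carlo (pick-freeze) estimation of first-order Sobol
# indices for the Ishigami test function; illustrative only.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # "freeze" all inputs except x_i
    fABi = ishigami(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var     # first-order index (Saltelli-type estimator)
    print(f"S_{i+1} ≈ {S_i:.3f}")
```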
https://doi.org/10.1142/9789812839527_0002
The goal of this article is to propose a criterion based on an adequate real distribution. To do so, the criterion must be able to accommodate any type of distribution, including asymmetric distributions, which are characteristic of the PDF shapes left by some manufacturing processes. To meet this objective, two tasks must be carried out. First, a criterion has to be constructed using a statistical approach. Second, the behaviour of the likelihood function for different kinds of PDF shape must be studied in order to define the most appropriate form of this function.
https://doi.org/10.1142/9789812839527_0003
This article presents a variant of a covariance-calculation scheme that leads to more precise measurement results and is based on the least-squares method, criteria for the uniformity of several correlation coefficients, and an “expert-statistical” method. Use of these methods has made it possible to increase the reliability of the results of measurements of alternating-current voltage.
https://doi.org/10.1142/9789812839527_0004
The central aim of this study is to improve the accuracy of multivariate calibration curves employing polynomials and multilayer perceptron neural networks. The proposed method considers a reference (or principal) function represented by a polynomial and a deviate function represented by a multilayer perceptron artificial neural network (MLP). It has been observed that the MLP reduces the data spread around the fitted calibration curve (CC).
https://doi.org/10.1142/9789812839527_0005
Standard test measures can be calibrated using the gravimetric or the volumetric method, according to the accuracy needed. This paper presents and compares the evaluation of the data and of the uncertainty for both the gravimetric and the volumetric method. The model for calculating the measurement uncertainty and the methodology for evaluating the uncertainty components are described for both methods. The cases where two reference volume measures are used to determine the volume of a single instrument in the volumetric method are also described.
https://doi.org/10.1142/9789812839527_0006
A novel concept for linear dimension measurements by means of AFM is described. The main idea of this concept is the simultaneous analysis of simulation data obtained with a “virtual atomic force microscope” and of the experimental results. The physical model of the AFM, which is the key element of the concept, is described in this paper.
https://doi.org/10.1142/9789812839527_0007
The design and manufacture of free-form surfaces are now current practice in industry, so the problem of assessing the conformity of parts with complex geometry is increasingly pressing. In this work, a mathematical procedure for modelling and inspecting complex surfaces in the measurement process is presented, including the correction of geometrical deviations within the manufacturing process. The method is based on the Iterative Closest Point (ICP) algorithm for the alignment stage between the nominal surface and the measured points. The finite element method is used for geometric compensation of the STL model, which is generally obtained by triangulation of the nominal model using computer-aided design (CAD) software. An industrial application concerning a rapid prototyping process, used for manufacturing real parts from an STL file, is presented.
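For readers unfamiliar with the alignment stage mentioned above, the sketch below shows one plain ICP iteration loop (nearest neighbours plus an SVD-based rigid registration); it is a generic illustration under assumed point-set inputs, not the authors' implementation.

```python
# Hedged sketch: a plain ICP loop aligning measured points to a nominal point set.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def icp(measured, nominal, n_iter=30):
    tree = cKDTree(nominal)
    P = measured.copy()
    for _ in range(n_iter):
        _, idx = tree.query(P)                    # closest nominal points
        R, t = best_rigid_transform(P, nominal[idx])
        P = P @ R.T + t                           # apply incremental rigid motion
    return P                                      # aligned measured points
```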
https://doi.org/10.1142/9789812839527_0008
A simulation of a lock-in amplifier, based on the Monte Carlo method, has been developed to allow metrologists to understand how uncertainties propagate through a mathematical model of the instrument. Examples of its use are presented. The software is available from the NPL web site.
https://doi.org/10.1142/9789812839527_0009
Diverse approaches to measurement uncertainty have been proposed as alternatives to well-established statistical methods. Some are founded on deductive inference systems that dismiss one law or another of classical logic and propose treating uncertainty in non-classical terms, e.g., with fuzzy sets or belief measures. Intuitionistic logic, a deductive inference system in which the law of excluded middle need not hold, is a case study: since the logic of uncertainty is essentially inductive, a major point at issue is the viability of an intuitionistic probability calculus. An objective of this paper is to establish the relevance of such a non-classical context to quantitative treatments of measurement uncertainty.
https://doi.org/10.1142/9789812839527_0010
In this paper we compare ISO 5725 and the GUM from a statistician's point of view. In particular, we report some considerations on the relevant role of the interactions among the variables in the model equation, we emphasize the importance of applying standard ANOVA techniques when the measurand is measurable, and we give some warnings concerning the application of the expanded uncertainty when few variables are involved.
https://doi.org/10.1142/9789812839527_0011
Key comparisons aim to demonstrate agreement, within claimed uncertainties, of measurements made by National Metrology Institutes (NMIs). Chi-squared-like tests of this agreement can be strengthened by increasing the number of independent measurements by aggregation: grouping measurement comparisons into larger sets where an averaged test of expected agreement is still of practical interest.
Measurements can have their agreement evaluated relative to reference values; or relative to each other to test unmediated agreement. One might average over artefacts, or over measurement ranges, or over techniques of measurement, or over all the comparisons run by a particular consultative committee (CC), or over all CCs. One might want to average over agreement for measurements from a particular NMI, or average over all participants from a particular region, or average over all participants in the key comparison data base. We show how the relative weights of subgroups of measurements should be chosen to balance the influence of the different subgroups in the aggregation irrespective of the number of measurements in the subgroup.
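The chapter's subgroup-weighted aggregation is not reproduced here, but the following minimal sketch shows the kind of chi-squared consistency check being aggregated: laboratory results are compared with a weighted-mean reference value; the numerical values are invented.

```python
# Hedged sketch: a generic chi-squared consistency check of results against
# the weighted-mean reference value; illustrative data only.
import numpy as np
from scipy.stats import chi2

x = np.array([10.01, 10.05, 9.98, 10.03])     # NMI results (illustrative)
u = np.array([0.02, 0.03, 0.02, 0.04])        # stated standard uncertainties

w = 1.0 / u**2
ref = np.sum(w * x) / np.sum(w)               # weighted-mean reference value
chi2_obs = np.sum((x - ref)**2 / u**2)
p_value = chi2.sf(chi2_obs, df=len(x) - 1)    # agreement within claimed uncertainties?
print(f"chi2 = {chi2_obs:.2f}, p = {p_value:.3f}")
```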
https://doi.org/10.1142/9789812839527_0012
In this paper, a multiple-camera measurement system is presented for the pose (position and orientation) estimation of a robot end-effector for assembly. First, a multiple-camera geometric model is introduced for the 3D pose estimation of the end-effector and the identification of the intrinsic and extrinsic parameters of the cameras. Second, an uncertainty propagation model is derived from the geometric model; it enables the number of cameras and their relative positions to be chosen, and finally the uncertainty of the pose estimate to be evaluated. Experimental results are presented in the last section of the paper.
https://doi.org/10.1142/9789812839527_0013
Sensors employed for dynamic measurements often show a non-perfect dynamic behaviour. By taking the instantaneous output of the sensor as an estimate of the time-dependent value of the measurand, the so-called dynamic error emerges. We assume that a linear time-invariant system is appropriate to model the dynamic input-output behaviour of the sensor, and we propose digital filtering for the compensation of the dynamic error. A procedure for the design of the digital compensation filter is described, and the evaluation of the uncertainty associated with the application of this filter is considered. Finally, the benefit of the dynamic compensation and proposed uncertainty treatment is illustrated in terms of simulations for a particular relevant dynamic model.
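One common way to realize such a compensation filter, sketched below under assumed sensor parameters (resonance frequency, damping, sampling rate), is a least-squares FIR fit to the delayed inverse frequency response of a second-order sensor model; it illustrates the idea only and is not the design procedure of the chapter.

```python
# Hedged sketch: least-squares FIR compensation filter for a second-order
# sensor model; all parameter values are invented for the illustration.
import numpy as np
from scipy import signal

fs = 10_000.0                         # sampling frequency in Hz (assumed)
f0, delta = 800.0, 0.05               # sensor resonance and damping (assumed)
w0 = 2 * np.pi * f0
sensor = signal.TransferFunction([w0**2], [1.0, 2 * delta * w0, w0**2])

f = np.linspace(10, 2_000, 400)                   # design band in Hz
_, H = signal.freqresp(sensor, 2 * np.pi * f)     # sensor frequency response

# Fit FIR taps g so that sum_k g_k e^{-j w k} ~ e^{-j w delay} / H(w) on the band
n_taps, delay = 31, 15
w_dig = 2 * np.pi * f / fs                        # digital frequencies (rad/sample)
E = np.exp(-1j * np.outer(w_dig, np.arange(n_taps)))
target = np.exp(-1j * w_dig * delay) / H          # delayed inverse response
A = np.vstack([E.real, E.imag])
b = np.concatenate([target.real, target.imag])
g, *_ = np.linalg.lstsq(A, b, rcond=None)

# g can be applied with scipy.signal.lfilter(g, 1, y_sampled); the compensated
# output is delayed by `delay` samples.
```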
https://doi.org/10.1142/9789812839527_0014
This communication focuses on the Portuguese experience in the field of metrological requirements for refractometers, namely the analysis of variance with a nested experimental design applied to the liquid standards used in metrological control.
https://doi.org/10.1142/9789812839527_0015
Proficiency testing (PT), as described by ISO Guide 43, “is the use of interlaboratory comparisons (ILCs) for purpose of the determination of laboratory testing or measurement performance… ”. NMIs traditionally organize ILCs for the national accreditation bodies (NABs), providing the travelling standards and the reference value(s) and, at the end, performing the statistical analysis of the laboratory results. The goal of this article is to discuss the existing approaches to the evaluation of calibration laboratory ILCs and to propose a basis for the validation of the laboratories' measurement capabilities.
https://doi.org/10.1142/9789812839527_0016
Many data analysis problems in metrology involve fitting a model to measurement data. Least squares methods arise in both the parameter estimation and Bayesian contexts in the case where the data vector is associated with a Gaussian distribution. For nonlinear models, both classical and Bayesian formulations involve statistical distributions that, in general, cannot be completely specified in analytical form. In this paper, we discuss how approximation methods and simulation techniques, including Markov chain Monte Carlo, can be applied in both a classical and Bayesian setting to provide parameter estimates and associated uncertainty matrices. While in the case of linear models, classical and Bayesian methods lead to the same type of calculations, in the nonlinear case the differences in the approaches are more apparent and sometimes quite significant.
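As a toy illustration of the simulation techniques mentioned above, the following sketch runs a random-walk Metropolis sampler for a nonlinear model y = a·exp(−b·t) with Gaussian noise and a flat prior, returning parameter estimates and an associated uncertainty (covariance) matrix; the model, data and tuning constants are invented for the example.

```python
# Hedged sketch: random-walk Metropolis sampling for a nonlinear model fit.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 30)
true, sigma = np.array([2.0, 0.7]), 0.05
y = true[0] * np.exp(-true[1] * t) + rng.normal(0, sigma, t.size)

def log_post(theta):
    a, b = theta
    r = y - a * np.exp(-b * t)
    return -0.5 * np.sum(r**2) / sigma**2          # flat prior, known sigma

theta, lp = np.array([1.0, 1.0]), None
lp = log_post(theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.02, 2)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis acceptance
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5_000:])                # discard burn-in
print("estimates:", samples.mean(axis=0))
print("uncertainty matrix:\n", np.cov(samples, rowvar=False))
```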
https://doi.org/10.1142/9789812839527_0017
In this paper, recommendations on the parameters for measuring a circle with a CMM (coordinate measuring machine) are presented, together with a method for estimating the uncertainty of the measured circle. The influence of the parameters of the measured part, such as the type of form deviation and the dimensional tolerances, is discussed, as well as the influence of the measurement method, in particular the number of probing points, the CMM permissible error and the type of fitted element. Simulations and experimental measurements showed that using the minimal number of probing points is not recommended; on the other hand, in touch-trigger measurement, a very large number of points does not increase the accuracy of the measurement. As a result, recommendations on the number of probing points and on the CMM accuracy are presented.
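A minimal sketch of one possible fitting element, the algebraic least-squares (Kasa) circle fit to probing points, is given below; the probing data, form deviation and noise levels are simulated for illustration and the chapter's own simulation procedure is not reproduced.

```python
# Hedged sketch: algebraic least-squares (Kasa) circle fit to probing points.
import numpy as np

def fit_circle(x, y):
    # Solve  x^2 + y^2 = 2*a*x + 2*b*y + c  in the least-squares sense
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r                     # centre (a, b) and radius r

# Illustrative data: 12 probing points on a circle with a lobed form deviation
rng = np.random.default_rng(3)
phi = np.linspace(0, 2 * np.pi, 12, endpoint=False)
r_true = 25.0 + 0.002 * np.sin(3 * phi)
x = 5.0 + r_true * np.cos(phi) + rng.normal(0, 0.001, phi.size)
y = -2.0 + r_true * np.sin(phi) + rng.normal(0, 0.001, phi.size)
print(fit_circle(x, y))
```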
https://doi.org/10.1142/9789812839527_0018
A new pressure controller has been designed and developed at the Italian Istituto Nazionale di Ricerca Metrologica (INRiM). It allows pressure regulation at the 1 ppm level by means of a gas-controlled heat pipe, which realizes the liquid–vapour phase transition of its working fluid. Thanks to the thermodynamic relation p(T) linking the pressure p and the temperature T of the working fluid, the pressure control algorithm can use a thermometer as a pressure sensor, thereby reaching better pressure control than that obtainable with commercial pressure gauges. Dedicated electronic circuits and software have been developed at INRiM for the complete control of the electronics, I/O interfacing routines, PID and auto-calibrating control functions, data acquisition modules and “real-time” control. The paper reports the structure of the software, its main features and the capabilities of the whole system.
https://doi.org/10.1142/9789812839527_0019
The problem of improving the adequacy of measurement models is discussed. The main practical situations that require improving a model are identified and described. The peculiarities of the model-improvement algorithms for these cases are outlined and compared. It is proposed that the measurement result be extended by including characteristics of the model inadequacy.
https://doi.org/10.1142/9789812839527_0020
The aim of this article is to outline the purpose and rationale of a new guide for measurement systems software. Progress on constructing the guide is described. The work is being done in collaboration with the Physikalisch-Technische Bundesanstalt (PTB) and the National Physical Laboratory (NPL).
https://doi.org/10.1142/9789812839527_0021
The ISO 11929 series of standards for characteristic limits of ionising radiation measurements is to be replaced by a single standard, the current draft of which is ISO/DIS 11929:2008. Scrutiny of the draft reveals that it relies on an arbitrary extension of the parameter space and of the likelihood function into the unphysical region, which is not in line with ordinary Bayesian statistics, and that some of the equations it contains are unfavourable.
https://doi.org/10.1142/9789812839527_0022
With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line-space structures in order to determine geometric parameters such as side-wall angles, heights, and top and bottom widths, and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. The uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data. We compare the results obtained from a Monte Carlo procedure with the estimates obtained from the approximate covariance matrix of the profile parameters close to the optimal solution, and apply the methods to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
https://doi.org/10.1142/9789812839527_0023
In this paper a Kalman-filter-type algorithm for estimating the calibration characteristic of measuring instruments is presented. The approach gives a rigorous derivation of a recursive algorithm for estimating the parameters of the calibration curve, with the errors in reproducing the inputs taken into account. A new approach is proposed for generating a stopping rule in the calibration-characteristic identification problem. The proposed stopping rule defines when to halt the estimation of the coefficients and makes it possible to determine the number of measurements sufficient to attain the required accuracy of the calculated estimates. As an example, the problem of calibrating a differential pressure gauge using standard pressure-setting devices (piston gauges) is examined.
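The sketch below shows a recursive (Kalman-filter-type) least-squares update of straight-line calibration coefficients, processed one calibration point at a time, with a comment indicating where a stopping rule could act; the calibration data and the assumed reading uncertainty are invented, and the chapter's treatment of input-reproduction errors is not included.

```python
# Hedged sketch: recursive estimation of calibration-line coefficients y = b0 + b1*x.
import numpy as np

theta = np.zeros(2)                # initial coefficient estimates
P = np.eye(2) * 1e6                # large initial covariance (vague prior)
sigma_y = 0.05                     # assumed standard uncertainty of the readings

x_cal = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # reference inputs
y_cal = np.array([0.02, 2.08, 4.03, 6.11, 8.05, 10.12])  # instrument readings

for xk, yk in zip(x_cal, y_cal):
    h = np.array([1.0, xk])                      # observation vector
    S = h @ P @ h + sigma_y**2                   # innovation variance
    K = P @ h / S                                # gain
    theta = theta + K * (yk - h @ theta)         # coefficient update
    P = P - np.outer(K, h @ P)                   # covariance update
    # A stopping rule could halt here once np.sqrt(np.diag(P)) meets the
    # required accuracy of the coefficient estimates.
print(theta, np.sqrt(np.diag(P)))
```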
https://doi.org/10.1142/9789812839527_0024
A procedure for the optimal selection of sample input signals via the D-optimality criterion, to obtain the best calibration characteristic of a measuring apparatus, is proposed. As an example, the problem of optimal selection of standard pressure setters when calibrating a differential pressure gauge is solved.
https://doi.org/10.1142/9789812839527_0025
An uncertain complex number is an entity that encapsulates information about an estimate of a complex quantity. Simple algorithms can propagate uncertain complex numbers through an arbitrary sequence of data-processing steps, providing a flexible tool for uncertainty calculations supported by software. The technique has important applications in modern instrumentation systems. Some radio-frequency measurements are discussed.
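As a rough illustration of the idea, the sketch below represents an uncertain complex number by a value plus a 2×2 real/imaginary covariance matrix and propagates the uncertainty through a product via the Jacobian; it is not the library or algorithm described in the chapter, and independence of the two operands is assumed.

```python
# Hedged sketch: uncertainty propagation through the product of two
# "uncertain complex numbers" (value + 2x2 real/imaginary covariance).
import numpy as np

def multiply(a, Va, b, Vb):
    """Product of independent uncertain complex numbers (a, Va) and (b, Vb)."""
    z = a * b
    # Jacobian of (Re z, Im z) with respect to (Re a, Im a, Re b, Im b)
    J = np.array([[b.real, -b.imag, a.real, -a.imag],
                  [b.imag,  b.real, a.imag,  a.real]])
    V = np.zeros((4, 4))
    V[:2, :2], V[2:, 2:] = Va, Vb                 # independence assumed
    return z, J @ V @ J.T

a, Va = 1.0 + 0.5j, np.diag([1e-4, 2e-4])
b, Vb = 0.8 - 0.2j, np.diag([4e-4, 1e-4])
z, Vz = multiply(a, Va, b, Vb)
print(z, Vz)
```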
https://doi.org/10.1142/9789812839527_0026
Evaluation of measurement uncertainty is essential for all kinds of calibration procedures. The widely accepted approach may be applied directly in static or stationary measurements but gives little support for transient conditions. To remedy this situation, the sensitivities to the uncertainties of the input variables, which are normally constant, are here generalized to be time-dependent and are derived by means of digital filtering. Arbitrarily varying measurands are allowed, but the system must be linear and time-invariant. The approach is illustrated for a common accelerometer system.
https://doi.org/10.1142/9789812839527_0027
In many cases the overall measurement uncertainty of a chemical measurement result is largely determined by the uncertainty components generated during the sample preparation process. Extra components are caused by inhomogeneity and instability of the samples to be measured or under test. The measurement uncertainty is further determined by the possible application of recovery procedures and by the consequences of commutability problems. In several cases the measurement result is method dependent. In order to obtain comparable measurement results it is essential that the measurand be defined very precisely. Metrological traceability to the SI is not (yet) possible in all cases. Procedures for the calculation of measurement uncertainty in chemical measurements as carried out by the National Metrology Institutes are still under consideration. Likewise, the uncertainty of the reference value of an inter-laboratory comparison, for example a Key Comparison, is still debated, as is the determination of the uncertainty claimed with a Calibration and Measurement Capability (Best Measurement Capability in the scope of an accreditation) of a measurement or testing laboratory. So far, measurement uncertainty related to sampling is left aside. An overview is given of achievements and challenges with regard to measurement uncertainty evaluation in chemical measurement institutes.
https://doi.org/10.1142/9789812839527_0028
A process is well understood when all critical sources of variability are identified and explained, variability is managed by the process, and product quality attributes can be accurately and reliably predicted over the design space. Quality by Design (QbD) is a systematic approach to development that begins with predefined objectives, emphasizes product and process understanding and sets up process control based on sound science and quality risk management. The FDA and ICH have recently started promoting QbD in an attempt to curb rising development costs and regulatory barriers to innovation and creativity. QbD is partially based on the application of multivariate statistical methods and a statistical Design of Experiments strategy to the development of both analytical methods and pharmaceutical formulations. In this paper, we review the basics of QbD and emphasize their impact on the development of analytical measurement methods in the innovative, generic and biosimilar pharmaceutical industry.
https://doi.org/10.1142/9789812839527_0029
Since the Guide to the Expression of Uncertainty in Measurement (GUM) was published in 1993, it has changed the evaluation of physical and chemical measurements. Nowadays almost all high-level measurements include a detailed evaluation of uncertainty. This allows the scientific community to take the next step and evaluate the uncertainty of derived evaluations such as parameter fittings. The evaluation of the uncertainty of complicated parameters, such as the results of non-linear fitting procedures, can be carried out in two steps. The first step is a sensitivity analysis of the evaluation algorithm and a test of the mutual independence of the parameters. If the fitting algorithm is sufficiently robust, a linear model is derived from the fitting algorithm, which is then used in a second step to evaluate the uncertainty of the fitting parameters. This paper discusses the sensitivity analysis in detail, with emphasis on possibilities for checking robustness and linearity. An efficient method based on covering arrays is presented to test for hidden couplings between the input parameters of the evaluation model.
https://doi.org/10.1142/9789812839527_0030
High-precision calibration of electromechanical transducers using dynamic loads is a field of rapidly growing demand in industry. In contrast, the number of validated methods, or even written standards, is still close to negligible considering the number of measurands concerned. For this reason, the Physikalisch-Technische Bundesanstalt (PTB) has put increasing effort into the field of dynamic measurement of mechanical quantities over the past several years. The dynamic measurands covered so far by this R&D initiative include acceleration, force, pressure and torque. New dynamic calibration facilities have been developed, applying either periodic or shock excitations and using either primary or secondary calibration methods. The working principles of these calibration devices are described, as well as the different approaches to the dynamic characterization of transducers, such as step response, impulse response or harmonic response. From the scientific point of view, the harmonization of different dynamic calibration results is an essential prerequisite for future standardization work. Despite the different technical realizations, the mathematical procedures and the problems of dynamic characterization of transducers, as well as of the dissemination of the dynamic units, are generally similar. These challenges are now approached by methods of model-based calibration, which seem to give very promising results.
https://doi.org/10.1142/9789812839527_0031
In the recently issued Supplement 1 to the GUM a Monte Carlo method is proposed for the evaluation of measurement uncertainty. The method is based on Bayes' theorem and the principle of maximum entropy (PME). These two tools are used to assign probability density functions (PDFs) to the input quantities entering a measurement model, which are then combined using the rules of probability calculus to find the PDF associated with the output quantity. The expectation of the latter PDF will usually be taken as the best estimate of the output quantity, and the square root of its variance will be taken as the associated standard uncertainty. Furthermore, coverage intervals having specified probabilities of containing the value of the output quantity can be calculated. The method is general, in the sense that it is not restricted to linear models. In this paper, some details on the application of Bayes' theorem and of the PME in assigning PDFs to the input and output quantities are reviewed.
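A minimal sketch of the propagation of distributions described above is given below for an illustrative measurement model; the model and the PDFs assigned to the inputs are assumptions made for the example.

```python
# Hedged sketch: Monte Carlo propagation of distributions in the spirit of
# GUM Supplement 1, for an illustrative model Y = X1*X2/(1 + X3).
import numpy as np

rng = np.random.default_rng(42)
M = 10**6                                        # number of Monte Carlo trials
x1 = rng.normal(10.0, 0.1, M)                    # Gaussian input (assumed)
x2 = rng.uniform(0.95, 1.05, M)                  # rectangular input (assumed)
x3 = rng.normal(0.02, 0.005, M)                  # Gaussian input (assumed)

y = x1 * x2 / (1.0 + x3)                         # propagate through the model

estimate = y.mean()                              # best estimate of Y
u = y.std(ddof=1)                                # standard uncertainty
low, high = np.percentile(y, [2.5, 97.5])        # 95 % coverage interval
print(estimate, u, (low, high))
```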
https://doi.org/10.1142/9789812839527_0032
International standards provide, with the Geometrical Product Specification (GPS), tools for the geometric specification and verification of mechanical parts based on tolerance zones (TZ). A TZ represents a spatial domain in which the real surface must be included; this domain defines the set of geometries that are functionally equivalent. In a part design, numerous TZs are constrained relative to a reference system. The latter is an ideal geometry, called a datum, associated with real surfaces, called datum features. A datum can be simple, if it is related to one datum feature, or form a datum-system reference if it is related to two or three real surfaces. In the latter case, there is a hierarchy in the fitting process between the datum system and the set of datum features; hence a primary, a secondary and a tertiary datum should be considered. It is now commonly accepted that measurements are tainted with uncertainties. These uncertainties in the datum-feature estimates are propagated to the datum-system reference and then to all TZ positions. Thus, considering the lever-arm effect between the datum and the TZ, uncertainties in TZ positions can quickly become of the same order of magnitude as the maximal admissible defect specified for the product geometry. This paper presents a method for estimating the uncertainties of a datum-system reference.
https://doi.org/10.1142/9789812839527_0033
This paper deals with a fuzzy/possibility representation of measurement uncertainty that often arises in physical domains. The construction of the possibility distribution is based on the stacking up of probability coverage intervals issued from the probability density. When the probability density is unknown but different amounts of information such as symmetry, unimodality, support or standard deviation are available, corresponding possibility distributions dominating probability coverage intervals can be built. They are compared to the coverage intervals obtained by using the maximum entropy principle.
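For the case where the probability density is known, the construction by stacking of coverage intervals can be sketched as follows for a Gaussian variable; this is an illustration of the transformation only, not of the dominating distributions built from partial knowledge in the chapter.

```python
# Hedged sketch: possibility distribution obtained by stacking the symmetric
# probability coverage intervals of a Gaussian density.
import numpy as np
from scipy.stats import norm

m, s = 0.0, 1.0
x = np.linspace(m - 4 * s, m + 4 * s, 401)
coverage = 2 * norm.cdf(np.abs(x - m) / s) - 1   # probability of the interval ending at x
pi = 1.0 - coverage                              # possibility distribution
print(pi.max(), pi[0], pi[-1])                   # ~1 at the mode, ~0 in the tails
```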
https://doi.org/10.1142/9789812839527_0034
The study proposes to find the most efficient way to increase the intracellular trehalose content by suspending beer yeast in trehalose solutions of different concentrations, at different thermostating temperatures and for different contact times, taking into account that this technique allows the passive transfer of exogenous trehalose into the cells both for a newly propagated cell population and for cells resulting from an industrial inoculum.
https://doi.org/10.1142/9789812839527_0035
MUSE (Measurement Uncertainty Simulation and Evaluation) is a software tool for the calculation of measurement uncertainty based on the Monte Carlo method (MCM) as recommended by the first supplement to the GUM (GS1). It is shown here how a simple example can be implemented in the application language of MUSE. An overview of the possible parameters that can be set is given.
https://doi.org/10.1142/9789812839527_0036
A novel method for the detection of lines in digital images used in the line scale calibration process is described. Line detection is one of the most important problems in line scale calibration. The line images are recorded by a moving microscope and a CCD camera. The method is based on the centre of the line rather than on its edges, and applies a Gabor filter to each row of the digital image. Furthermore, based on robust statistics, outlier points due to imperfections in the mark on the scale are ignored.
https://doi.org/10.1142/9789812839527_0037
At present, several international written standards and guides are available dealing with the expression of uncertainty in measurement, calibration and testing, and with the definition of the basic concepts, the related nomenclature and the basic methods for data analysis. An inspection and comparison of these texts, which were prepared over a time span of more than twenty years, shows that, to different degrees, they are not all consistent with each other, probably mainly because the discussion, and the agreed solutions, advanced over that time span, as can be observed for most of them. The paper attempts to identify what appear to be the more serious inconsistencies, which in the author's view should be removed in the revision process that has started for several of these documents.
https://doi.org/10.1142/9789812839527_0038
A traditional approach in conformity assessment to minimizing the risks of incorrect decision-making associated with measurement uncertainty is to apply ‘guard-bands’ which narrow the zone of acceptance by some fraction of the measurement uncertainty. Criticism of the guard-band approach has included increased costs for the supplier in meeting the narrower acceptance zone as well as the fact that the new acceptance limits will in time become new de facto specification limits. The present work applies an optimized uncertainty methodology to analyse the consequences of guard-banding in terms of an economic decision theory approach. The methodology is of general applicability but is illustrated in the present work in the simple case of pre-packaged goods priced linearly with the amount of content. Optimum strategies for the supplier are illustrated in terms of minimizing production and testing costs, while at the same time maintaining satisfactory levels of customer satisfaction.
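A minimal sketch of the guard-band setting is given below: a Monte Carlo estimate of consumer and producer risks for a one-sided, guard-banded acceptance rule; the content limit, production spread, measurement uncertainty and guard-band factor are invented and no cost optimization is performed.

```python
# Hedged sketch: Monte Carlo estimate of consumer and producer risk for a
# guard-banded acceptance rule on pre-packaged goods; illustrative values only.
import numpy as np

rng = np.random.default_rng(7)
N = 10**6
LSL = 500.0                                  # lower specification limit (g)
true_content = rng.normal(502.0, 1.5, N)     # production spread (assumed)
u_meas = 0.8                                 # standard measurement uncertainty
measured = true_content + rng.normal(0.0, u_meas, N)

g = 1.64 * u_meas                            # one-sided guard band
accept = measured >= LSL + g                 # narrowed acceptance zone

consumer_risk = np.mean(accept & (true_content < LSL))    # bad item accepted
producer_risk = np.mean(~accept & (true_content >= LSL))  # good item rejected
print(consumer_risk, producer_risk)
```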
https://doi.org/10.1142/9789812839527_0039
Since the advent of the Guide to the Expression of Uncertainty in Measurement (GUM) in 1995, which laid down the principles of uncertainty evaluation, numerous projects have been carried out to develop alternative practical methods that are easier to implement, namely when it is impossible to model the measurement process for technical or economic reasons.
In this paper the author presents the recent evolution of measurement uncertainty evaluation methods with the GUM supplements and several alternative methods for laboratories. The evaluation of measurement uncertainty can be presented along two axes, based on intra-laboratory and interlaboratory approaches.
The intra-laboratory approach includes “the modelling approach” (application of the procedure described in Chapter 8 of the GUM, the GUM uncertainty framework) and “the single-laboratory validation approach”.
The interlaboratory approaches are based on collaborative studies and are respectively named the “interlaboratory validation approach” and the “proficiency testing approach”. The first uses the statistical methods described in ISO 5725, Accuracy (trueness and precision) of measurement methods and results, and in the recent ISO Technical Specification 21748, Guidance for the use of repeatability, reproducibility and trueness estimates in uncertainty estimation. The second approach will certainly play an important role in the near future. A majority of testing laboratories are involved in proficiency testing schemes and are interested in the possibility of using the results of these exercises, and the data accumulated over time, to evaluate their uncertainty. The most important application area of these approaches certainly concerns medical laboratories, which have recently been encouraged to evaluate the uncertainty of their analyses, either by regulation or on a voluntary basis in the accreditation process.
https://doi.org/10.1142/9789812839527_0040
In metrology laboratories, the transfer of traceability from primary standards (fixed points) to standard platinum resistance thermometers (SPRTs) is often based on in-house methodologies with the fixed points defined by the International Temperature Scale, ITS-90. This process, however, is not simple, since different calibration functions apply to different temperature sub-intervals. The process has two stages: (1) estimation of the calibration parameters of the SPRT, and (2) the use of those calibration parameter estimates to obtain temperatures from measured values of electrical resistance. A relevant question is whether to use the above two stages or an approach that combines them, taking into account the correlation associated with the calibration parameter estimates. In such cases, a further uncertainty component (a covariance) should be evaluated and included in the uncertainty budget. An example of application is given regarding the calibration and use of 25 Ω SPRTs in a metrology laboratory, considering two overlapping temperature sub-intervals. The comparison of the two approaches as regards the evaluation of uncertainty is the main concern of this paper.
https://doi.org/10.1142/9789812839527_0041
Measurement of characteristics related to human perception is discussed. A historical background is provided to explain how this subject has been critical for the entire science of measurement. Some current research activities are then surveyed and future research needs are addressed, including instrumentation, methods and models. Theoretical aspects are discussed and a new criterion for assessing measurability, which may be particularly useful for perceptual characteristics, is proposed.
https://doi.org/10.1142/9789812839527_0042
This paper proposes a new method for an efficient lossless compression of computer tomography (CT) images. For this, we adapted a delta encoding scheme from PNG images to CT data by applying linear filter operations to the CT scans. After applying improved delta filters, we performed both run-length and entropy encoding using the deflate method. For the evaluation of the proposed methods, we performed CT scans of objects and compressed the resulting point clouds. As a result, we achieved an additional reduction of the data size compared to standard compression methods by up to 13.1 percent.
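The delta-plus-deflate idea can be sketched as follows on a synthetic 16-bit slice, using a PNG-like horizontal difference filter followed by zlib (deflate) compression; the data, filter choice and compression level are assumptions for the illustration, not the authors' pipeline.

```python
# Hedged sketch: horizontal delta filtering of image rows followed by deflate
# (zlib) compression, compared with deflate alone; synthetic data only.
import numpy as np
import zlib

rng = np.random.default_rng(0)
# Synthetic 16-bit "CT slice": smooth structure plus noise
x, y = np.meshgrid(np.linspace(-1, 1, 512), np.linspace(-1, 1, 512))
slice_ = (20_000 * np.exp(-(x**2 + y**2) * 4) + rng.normal(0, 30, x.shape)).astype(np.int16)

raw = slice_.tobytes()
delta = np.diff(slice_, axis=1, prepend=slice_[:, :1])   # PNG-like "Sub" filter
filtered = delta.astype(np.int16).tobytes()

print("deflate only    :", len(zlib.compress(raw, 9)))
print("delta + deflate :", len(zlib.compress(filtered, 9)))
```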
https://doi.org/10.1142/9789812839527_0043
The declaration and specification of a geometrical part are formulated in the language of geometric algebra, a unified mathematical language based on Clifford algebra and developed by D. Hestenes since 1966. Some of the modern applications of Clifford geometric algebra are computer vision, CAD systems, robotics and screw theory, automated theorem proving, symbolic algebra and numerical algorithms. First, we recall as briefly as possible the TTRS theory and the thirteen geometrical constraints necessary and sufficient to specify any technical dimensions of a mechanical part, together with their “softgage” compatible with the ISO standard. Then we give a description of geometric algebra. Finally, we show how geometric algebra allows us to write down the thirteen constraints as algebraic expressions that are not only coordinate-free but also much simpler, and that take the chirality of the objects into account.
https://doi.org/10.1142/9789812839527_0044
Modelling of a measurement is an indispensable prerequisite of modern uncertainty evaluation. Both the ISO GUM [1] and the Supplement to the GUM [2] require the knowledge about the measurement process to be expressed by a so-called model equation, which represents the mathematical relationship between the relevant parameters, the influence quantities, the indication and the measurand(s). Nevertheless, both documents are confined to lumped-parameter systems in the steady state. Since dynamic measuring systems are gaining more and more importance, modern uncertainty evaluation must develop appropriate modelling approaches for dealing with dynamic measurements. This paper describes, by way of example, a possible modelling approach for dynamic measurements that utilizes discretized state-space forms.
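A minimal sketch of such a discretized state-space model is given below for a second-order measuring system with invented parameter values; it only illustrates the modelling form, not the uncertainty treatment.

```python
# Hedged sketch: a second-order measuring system in continuous state-space
# form, discretized and simulated for a step input; parameters are assumed.
import numpy as np
from scipy.signal import cont2discrete

w0, delta = 2 * np.pi * 500.0, 0.1        # resonance and damping (assumed)
A = np.array([[0.0, 1.0], [-w0**2, -2 * delta * w0]])
B = np.array([[0.0], [w0**2]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

dt = 1.0 / 20_000.0                        # sampling interval (assumed)
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt)

# Simulate the discretized system for a unit step input u_k = 1
x = np.zeros((2, 1))
y = []
for _ in range(200):
    y.append((Cd @ x).item())
    x = Ad @ x + Bd * 1.0
print(y[:5])
```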
https://doi.org/10.1142/9789812839527_0045
This paper describes a general approach, based on Bayesian decision analysis, to determining decision rules in the presence of uncertain knowledge taking into account the likely decision costs. It shows how the approach can be applied to limit-of-detection problems for a number of measurement models, including those associated with repeated measurements. The approach is extended to cover the case where the measurand reflects background, naturally-occurring levels of a quantity.
https://doi.org/10.1142/9789812839527_0046
The validity of expanded uncertainties evaluated using the method based on the “law of propagation of uncertainty” (LPU, GUM method) and the “propagation of distributions using the Monte Carlo method” (MCM, GUM Supplement 1 draft method) was investigated using model data generated from random numbers belonging to the normal distribution. It was found that the expanded uncertainty evaluated by the MCM can be a significant overestimate when t-distributions are adopted as the input distributions.
https://doi.org/10.1142/9789812839527_0047
Statistical analysis of key comparison and inter-laboratory experiments is required to produce an estimate of the measurand called a reference value and further, measures of equivalence of the participating laboratories. Methods of estimation of the reference value have been proposed that rest on the idea of finding a so-called consistent subset of laboratories, that is, eliminating outlying participants. In this paper we propose an alternative statistical model, one that accommodates all of the participants' data and incorporates the dispersion among the laboratories into the total uncertainty of the various estimates. This model recognizes the fact that the dispersion of values between laboratories often is substantially larger than the measurement uncertainties provided by the participating laboratories. We illustrate the method on data from key comparison CCL-K1.
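One standard way to incorporate between-laboratory dispersion into the reference value, in the spirit of (though not necessarily identical to) the model proposed in the chapter, is a DerSimonian-Laird random-effects estimate, sketched below with invented laboratory data.

```python
# Hedged sketch: DerSimonian-Laird random-effects estimate of a reference value
# that inflates laboratory uncertainties by a between-laboratory variance.
import numpy as np

x = np.array([9.98, 10.07, 10.02, 10.15, 9.95])   # lab results (illustrative)
u = np.array([0.03, 0.05, 0.04, 0.06, 0.03])      # stated standard uncertainties

w = 1.0 / u**2
xw = np.sum(w * x) / np.sum(w)
Q = np.sum(w * (x - xw)**2)                       # heterogeneity statistic
k = len(x)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1.0 / (u**2 + tau2)                      # inflated weights
ref = np.sum(w_star * x) / np.sum(w_star)
u_ref = 1.0 / np.sqrt(np.sum(w_star))
print(f"tau^2 = {tau2:.5f}, reference value = {ref:.4f} +/- {u_ref:.4f}")
```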
https://doi.org/10.1142/9789812839527_0048
Tolerance verification makes it possible to check product conformity and to verify the assumptions made by the designer. For conformity assessment, the uncertainty associated with the values of the measurands must be known. In ISO TS 17450 part 2, the notion of uncertainty is generalized to specification and verification; uncertainty is divided into correlation uncertainty, specification uncertainty and measurement uncertainty. Correlation uncertainty characterizes the fact that the intended functionality and the controlled characteristics may not be perfectly correlated. We therefore propose a new specified characteristic, based on the statistical tolerancing approach, that is directly related to the design intent: the probability distribution of the maximum range of the transmission error (the transmission error being the main source of vibratory and acoustic nuisance), and the evaluation of this characteristic from a 3D acquisition by Monte Carlo simulation and tooth contact analysis. Moreover, the measurement uncertainty of the evaluation of this characteristic is itself estimated by Monte Carlo simulation.
https://doi.org/10.1142/9789812839527_0049
The output of a Monte Carlo analysis is a large set of numbers that can be used to approximate the probability distribution of the relevant random variable. This article describes how this set can be summarized to enable the variable to function as an input to a further uncertainty calculation, in particular a further Monte Carlo calculation. The basic technique presented involves approximating the inverse distribution function of the variable, which is the quantile function, by a function with a simple form. This makes the subsequent generation of random samples for Monte Carlo calculations straightforward.
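The basic technique can be sketched as follows: tabulate the empirical quantile function of the Monte Carlo output on a probability grid and reuse it, via interpolation, to draw samples in a subsequent calculation; the gamma-distributed stand-in output and the grid size are assumptions for the illustration, and the paper's particular simple functional form is not reproduced.

```python
# Hedged sketch: summarizing Monte Carlo output by its quantile function and
# resampling from that summary for a further Monte Carlo calculation.
import numpy as np

rng = np.random.default_rng(1)
mc_output = rng.gamma(shape=3.0, scale=2.0, size=10**6)   # stand-in MC results

# Tabulate the empirical quantile (inverse distribution) function on a grid
p_grid = np.linspace(0.001, 0.999, 200)
q_grid = np.quantile(mc_output, p_grid)

def sample_from_summary(n):
    """Draw samples by evaluating the interpolated quantile function at U(0,1)."""
    u = rng.uniform(p_grid[0], p_grid[-1], n)
    return np.interp(u, p_grid, q_grid)

new_samples = sample_from_summary(10**5)
print(mc_output.mean(), new_samples.mean())      # the summary preserves the distribution
```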
https://doi.org/10.1142/9789812839527_0050
The ‘principle of maximum entropy’ has been advocated for choosing a probability distribution to represent individual unknowns such as systematic deviations. Supporting claims like ‘the maximum-entropy distribution is minimally committal’ are indefensible in this context. The idea that entropy measures information only has meaning with sequences of categorical random variables.
https://doi.org/10.1142/9789812839527_0051
Modern laboratory analysis systems often involve parallel components to enhance throughput. As a consequence these components need to be designed and controlled to allow for a homogeneous performance. In our case antigen-antibody-reactions take place on small measurement chips, which are processed at several units of an analysis system. Signals are measured on the two dimensional chip surface. In order to optimize the measurement process it is necessary to compare the outcome of the separate processing units and to identify the units of equal performance.
Clustering, the unsupervised partitioning of data into groups, is a task with broad application in many disciplines. We compare some cluster algorithms in the aforementioned situation. The algorithms that we analyze and compare are classical multivariate ones like K-means or hierarchical clustering, as well as a more recent one emerging from functional data analysis. The latter method is based on nonparametric functional principal component analysis. Furthermore a clustering method based on the coefficients of a parametric linear model fit is taken into consideration.
https://doi.org/10.1142/9789812839527_0052
The dynamic behaviour of accelerometers can be described in terms of a second-order model. The parameters of this model may be estimated from a measurement of the frequency response of the accelerometer as obtained in a dynamic calibration measurement. The impact of correlations of the measured frequency response on the results obtained from a subsequent analysis of this data is studied. Possible correlations are considered which arise from correlations of magnitude and phase measurements at the same frequency as well as from a common systematic influence. By using data from a dynamic calibration measurement it is shown that the considered correlations can have a large influence on the results and that they should be taken into account.
https://doi.org/10.1142/9789812839527_0053
This article discusses the GLS method, which estimates the calibration line taking into account the unequal variances and covariances associated with the responses and the standard concentrations. As these two variables are not pairwise correlated, we use a simplified GLS estimator.
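For the diagonal-covariance case mentioned above, the GLS estimator reduces to weighted least squares, as in the following sketch with invented calibration data.

```python
# Hedged sketch: GLS (here, weighted least-squares) fit of a straight
# calibration line with unequal response variances; illustrative data only.
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # standard concentrations
resp = np.array([0.02, 1.05, 1.98, 4.10, 7.85])     # instrument responses
u_resp = np.array([0.02, 0.03, 0.04, 0.06, 0.10])   # response standard uncertainties

X = np.column_stack([np.ones_like(conc), conc])
V_inv = np.diag(1.0 / u_resp**2)                    # inverse covariance (diagonal)

cov_beta = np.linalg.inv(X.T @ V_inv @ X)           # covariance of the estimates
beta = cov_beta @ X.T @ V_inv @ resp                # GLS estimator
print("intercept, slope:", beta)
print("uncertainty matrix:\n", cov_beta)
```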
https://doi.org/10.1142/9789812839527_0054
To prove the reliability of nanoscale measurements, a new approach is proposed that assumes a joint analysis of the experimental image and its analogue produced by simulation in a “virtual microscope”. Uncertainties of the “measured” parameters are evaluated using cross-correlation dependences between the parameters describing the specimen shape and the parameters of the simulated image.
https://doi.org/10.1142/9789812839527_0055
Precision free-form surfaces are widely used in advanced optical and mechanical devices. In order to evaluate the form quality of a free-form surface, the measurement data must be fitted to the design template and the relative deviation between them evaluated. A common approach is to minimize the sum of squared differences in the z direction; its solution is not robust enough and may be biased by outliers. This paper presents a fitting algorithm which employs the sum of orthogonal distances to evaluate the goodness of fit. The orthogonal projection points are updated simultaneously with the shape and motion parameters. Additionally, the l1 norm is adopted to improve the robustness of the solution. Monte Carlo simulation demonstrated that the bias in the intrinsic characteristics fitted by this method is much smaller than with traditional algebraic fitting, whereas the fitted motion parameters show no distinct difference.
https://doi.org/10.1142/9789812839527_0056
In metrology, the uncertainty of the mean of repeated measurements is often calculated as the sample standard deviation of the measurements divided by the square root of the sample size. When the measurements are autocorrelated, in particular when they come from a stationary process, a recent paper in Metrologia (Zhang 2006) provided an approach to calculate the corresponding uncertainty. However, when the measurements come from a nonstationary process, how to assess their uncertainty remains unresolved. The Allan variance, or two-sample variance, has been used for more than three decades as a substitute for the classical variance to characterize the stability of clocks or frequency standards when the underlying process is a 1/f noise process. Recently, from the point of view of the time domain, a paper in Metrologia (Zhang 2008) studied the Allan variance and its properties for stationary processes, random walk, and long-memory processes such as the fractional difference processes. This paper discusses the use of the Allan variance as an alternative measure of uncertainty for measurements from time series models. The results show that the Allan variance is a better measure of the process variation than the classical variance for the random walk and the nonstationary fractional difference processes, including 1/f noise.
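A minimal sketch of the Allan (two-sample) variance for a simulated nonstationary random walk, contrasted with the classical variance, is given below; the data and averaging factors are chosen only for illustration.

```python
# Hedged sketch: non-overlapping Allan (two-sample) variance of a simulated
# random-walk series, compared with the classical variance.
import numpy as np

def allan_variance(y, m):
    """Two-sample (Allan) variance for averaging factor m."""
    n = len(y) // m
    means = y[:n * m].reshape(n, m).mean(axis=1)     # block averages of length m
    return 0.5 * np.mean(np.diff(means)**2)

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 1, 10_000))              # nonstationary random walk

print("classical variance:", y.var(ddof=1))
for m in (1, 10, 100):
    print(f"Allan variance, m={m}:", allan_variance(y, m))
```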
https://doi.org/10.1142/9789812839527_bmatter