This volume collects refereed contributions based on the presentations made at the Sixth Workshop on Advanced Mathematical and Computational Tools in Metrology, held at the Istituto di Metrologia “G. Colonnetti” (IMGC), Torino, Italy, in September 2003. The workshop provides a forum for metrologists, mathematicians and software engineers that encourages a more effective synthesis of skills, capabilities and resources, and promotes collaboration in the context of EU programmes, EUROMET and EA projects, and MRA requirements. The volume contains articles by an important, worldwide group of metrologists and mathematicians involved in measurement science and, together with the five previous volumes in this series, constitutes an authoritative source for the mathematical, statistical and software tools necessary to modern metrology.
https://doi.org/10.1142/9789812702647_fmatter
Foreword.
Contents.
https://doi.org/10.1142/9789812702647_0001
A new kind of artefact, based on a modification of the hexapod machine’s well-known structure, has been introduced by Antunes, S. D. et al. in [1] in order to determine the global errors of coordinate measuring machines. Here we present results from the validation of the technique, using a self-calibration method and modelling the reference value for calibration on the basis of laser trilateration.
https://doi.org/10.1142/9789812702647_0002
The problem of assigning uncertainty-like statements to qualitative test results remains largely unsolved, although the volume of not purely quantitative testing and analysis, and its economic impact, are immense. A pragmatic approach is developed for the assessment of measurement uncertainty for procedures where a complex characteristic (e.g. of a material) changes discontinuously in dependence on one or more recordable, continuous quantitative parameters, and the change of the characteristic is assessed by judgement (which may be instrument-assisted). The principles of the approach are discussed, and an application example is given.
https://doi.org/10.1142/9789812702647_0003
Coherent anomalies can affect the quality of digitized images. In this paper a method, originally developed for their removal in images from old movies, is proposed in the context of a metrological application in nanotechnology, where accurate thickness measurements of special coatings are obtained after image processing. The method constructs a piecewise spline approximation to restore the corrupted data in the low-pass filtered version of the digitized image obtained by a wavelet decomposition. An invisible reconstruction of the damaged area is obtained, since the type of the morphology in suitable domains is preserved. Simple statistical tests are able to automatically recognize the morphology type and to construct the appropriate approximation. The calibration study that has been performed to identify the test thresholds is described in the paper. It uses simulated images with different types of noise and of scratches. The benefit of the method, adopted to locally preprocess the metrological image before acquiring the measurements, is finally discussed.
https://doi.org/10.1142/9789812702647_0004
Least squares methods provide a flexible and efficient approach for analyzing metrology data. Given a model, measurement data values and their associated uncertainty matrix, it is possible to define a least squares analysis method that gives the measurement data values the appropriate ‘degree of belief’ as specified by the uncertainty matrix. Least squares methods also provide, through χ2 values and related concepts, a measure of the conformity of the model, data and input uncertainty matrix with each other. If there is conformity, we have confidence in the parameter estimates and their associated uncertainty matrix. If there is nonconformity, we seek methods of modifying the input information so that conformity can be achieved. For example, a linear response may be replaced by a quadratic response, data that has been incorrectly recorded can be replaced by improved values, or the input uncertainty matrix can be adjusted. In this paper, we look at a number of approaches to achieving conformity in which the main element to be adjusted is the input uncertainty matrix. These approaches include the well-known Birge procedure. In particular, we consider the natural extensions of least squares methods to maximum likelihood methods and show how these more general approaches can provide a flexible route to achieving conformity. This work was undertaken as part of the Software Support for Metrology and Quantum Metrology programmes, funded by the United Kingdom Department of Trade and Industry.
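As a hedged illustration of the kind of adjustment discussed above, the following Python sketch applies the well-known Birge procedure to a weighted mean: the input uncertainties are rescaled by a common factor so that the observed chi-squared matches its expected value. The data, the assumption of uncorrelated inputs and the function name are illustrative only; the paper itself treats full uncertainty matrices and maximum likelihood extensions.

```python
import numpy as np

def birge_adjusted_mean(x, u):
    """Weighted mean of values x with standard uncertainties u,
    with the input uncertainties rescaled by the Birge ratio
    so that chi-squared equals its degrees of freedom."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    w = 1.0 / u**2
    xbar = np.sum(w * x) / np.sum(w)          # weighted mean
    u_xbar = 1.0 / np.sqrt(np.sum(w))         # its standard uncertainty
    chi2 = np.sum(((x - xbar) / u) ** 2)      # observed chi-squared
    dof = len(x) - 1
    r_B = np.sqrt(chi2 / dof)                 # Birge ratio
    if r_B > 1.0:                             # nonconformity: inflate uncertainties
        u_xbar *= r_B
    return xbar, u_xbar, r_B

# hypothetical example: five measurements of the same quantity
print(birge_adjusted_mean([10.1, 10.3, 9.8, 10.6, 10.2], [0.1, 0.1, 0.1, 0.1, 0.1]))
```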
https://doi.org/10.1142/9789812702647_0005
During the analysis of natural gas by gas chromatography, the indicated response is affected systematically by varying environmental and instrumental conditions. Calibration data dispersion is attributed not only to random effects, but also to systematic measurement effects. The accuracy of the measurement results and consequently the usability of a calibration model are accordingly reduced. The model consists of a series of calibration curves, one for each component of the gas measured. When the systematic uncertainty component dominates, a correlated response behaviour between corresponding points on which the set of curves depends is observed. A model-based least-squares method is introduced that compensates for these effects. It incorporates correction parameters to account for the systematic uncertainty components, thus eliminating the correlation effects and reducing the calibration model uncertainty. Results are presented for calibration data modelled by straight-line functions. Generalizations are indicated.
https://doi.org/10.1142/9789812702647_0006
Consider approximating a set of discretely defined values f1, f2,…, fm, say at x = x1, x2,…, xm, with a chosen approximating form. Given prior knowledge that noise is present and that some values might be outliers, a standard least squares approach based on an ℓ2 norm of the approximation error ε may well provide poor estimates. We instead consider a least squares approach based on a modified measure of the approximation error, involving a constant c to be fixed. Given a prior estimate of the likely standard deviation of the noise in the data, it is possible to determine a value of c such that the estimator behaves like a robust estimator when outliers are present but like a least squares estimator otherwise. We describe algorithms for computing the parameter estimates based on an iteratively weighted linear least squares scheme, the Gauss-Newton algorithm for nonlinear least squares problems and the Newton algorithm for function minimization. We illustrate their behaviour on approximation with polynomial and radial basis functions and in an application in co-ordinate metrology.
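Since the abstract does not reproduce the modified measure, the sketch below illustrates only the general iteratively weighted linear least squares scheme, using Huber-type weights with tuning constant c as a stand-in for the paper's measure; all names and data are hypothetical.

```python
import numpy as np

def irls_polyfit(x, y, degree, c, n_iter=50):
    """Iteratively reweighted least squares polynomial fit.
    Huber-type weights with tuning constant c stand in for the modified
    error measure of the paper: residuals smaller than c get full weight,
    larger ones are down-weighted, so the fit behaves like least squares
    on clean data and like a robust estimator when outliers are present."""
    A = np.vander(x, degree + 1)              # design matrix
    w = np.ones_like(y, dtype=float)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        r = y - A @ coef                      # residuals at current estimate
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))   # Huber weights
    return coef

# hypothetical data with one outlier; c chosen from a prior noise estimate
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2 * x + 1 + rng.normal(0, 0.05, x.size)
y[7] += 1.0                                   # outlier
print(irls_polyfit(x, y, degree=1, c=0.15))   # approx [2, 1]: slope, intercept
```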
https://doi.org/10.1142/9789812702647_0007
This paper deals with a method for the calibration of optical sensors on CMMs that relies on the measurement of specific artefacts. The calibration process consists of establishing the global transfer function between the 2D data given in the 2D space of the sensor and the 3D coordinates of points in the CMM space. The identification of the transfer function parameters requires the measurement of geometrical artefacts. In the paper, we suggest a specific artefact, the facet sphere. Algorithms for parameter identification are thus developed, based on the extraction of interest points belonging to the artefact. An estimation of dispersions associated with the method highlights the effect of some configuration parameters of the measuring system such as the position in the 2D space.
https://doi.org/10.1142/9789812702647_0008
Statistical methodology, rooted in the mathematical branch of probability theory and powerfully helped today by the alliance of computing science and other fields of mathematics, gives ways of “making the data talk”. The place of statistical methods in metrology and testing is presented. Emphasis is put on some difficulties encountered when applying statistical techniques in metrology: the importance of the traditional mathematical assumptions underlying many statistical techniques, the difference between a confidence interval and other intervals (coverage interval, …), and the difficulties caused by the differences in terminology and concepts used by metrologists and statisticians. Finally, information is given about standardization organizations involved in metrology and statistics.
https://doi.org/10.1142/9789812702647_0009
In this paper the possibility of expressing the final result of a (any) measurement by a probability distribution over the set, discrete or continuous, of the possible values of the measurand is considered.
After a brief review of the motivation, state of the art and perspective of this option, the related software tools are discussed, with reference to the package UNCERT developed at the authors’ laboratory.
The software implements an approach based on the direct calculation of probability distributions, for a wide class of measurement models. It is arranged in a hierarchical and modular structure, which greatly enhances validation.
The variables involved are considered as inherently discrete, and the related quantisation and truncation effects are carefully studied and kept under control.
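As a minimal illustration of the direct calculation of probability distributions for discretised quantities (not the UNCERT package itself), the sketch below obtains the distribution of a sum of two independent inputs by convolving their probability mass functions on a common grid; all values are invented.

```python
import numpy as np

def pmf_of_sum(a0, p_a, b0, p_b, step):
    """Direct calculation of the distribution of X = A + B for two
    independent quantities discretised on uniform grids of spacing `step`
    starting at a0 and b0: the PMF of the sum is the convolution of the
    two input PMFs."""
    p = np.convolve(p_a, p_b)                 # PMF of the sum
    x = a0 + b0 + step * np.arange(p.size)    # corresponding value grid
    return x, p / p.sum()                     # renormalise against round-off

# hypothetical inputs: a rectangular PMF and a triangular PMF, step 0.01
step = 0.01
a0, p_a = 9.95, np.full(11, 1 / 11)                               # uniform on [9.95, 10.05]
b0, p_b = -0.03, np.convolve(np.full(4, 0.25), np.full(4, 0.25))  # triangular on [-0.03, 0.03]
x, p = pmf_of_sum(a0, p_a, b0, p_b, step)
mean = (x * p).sum()
print(mean, np.sqrt((p * (x - mean) ** 2).sum()))                 # expectation and std. deviation
```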
https://doi.org/10.1142/9789812702647_0010
Information on possible values of a quantity can be gained in various ways and be expressed by a probability density function (PDF) for this quantity. Its expectation value is then taken as the best estimate of the value and its standard deviation as the uncertainty associated with that value. A coverage interval can also be computed from that PDF. Information given by a small number n of values obtained from repeated measurements requires special treatment. The Guide to the Expression of Uncertainty in Measurement recommends in this case the t-distribution approach, which is justified if one knows that the PDF for the measured quantity is Gaussian. The bootstrap approach could be an alternative. It does not require any information on the PDF and, based on the plug-in principle, can be used to estimate the reliability of any estimator. This paper studies the feasibility of the bootstrap approach for a small number of repeated measurements. Emphasis is placed on methods for a systematic comparison of the t-distribution and bootstrap approaches. To support this comparison, a fast algorithm has been developed for computing the total bootstrap and the total median.
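Purely for orientation, the sketch below contrasts the GUM t-distribution interval with a plain percentile bootstrap of the mean for a small sample. The paper's “total bootstrap” enumerates all resamples exhaustively, which this Monte Carlo version does not attempt; the data are invented.

```python
import numpy as np
from scipy import stats

def t_interval(x, level=0.95):
    """Coverage interval for the mean based on the t-distribution (GUM approach)."""
    n = len(x)
    m, s = np.mean(x), np.std(x, ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.5 + level / 2, df=n - 1)
    return m - t * s, m + t * s

def bootstrap_interval(x, level=0.95, n_boot=10000, seed=1):
    """Percentile bootstrap interval for the mean (plug-in principle),
    using Monte Carlo resampling rather than exhaustive enumeration."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    means = rng.choice(x, size=(n_boot, x.size), replace=True).mean(axis=1)
    lo, hi = np.percentile(means, [50 * (1 - level), 50 * (1 + level)])
    return lo, hi

# hypothetical repeated measurements, n = 5
x = [9.98, 10.02, 10.01, 9.97, 10.03]
print(t_interval(x))
print(bootstrap_interval(x))
```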
https://doi.org/10.1142/9789812702647_0011
We consider the problem of approximating a continuous real function known on a set of points situated on a family of (straight) lines or curves on a plane domain. The most interesting case occurs when the lines or curves are parallel. More generally, it is allowed that some points (possibly all) do not lie exactly on the lines or curves but close to them, or that the lines or curves are not parallel in a strict sense but only roughly parallel. The scheme we propose approximates the data by means of either an interpolation operator or a near-interpolation operator, both based on radial basis functions. These operators enjoy, in particular, two interesting properties: a subdivision technique and a recurrence relation. First, the recurrence relation is applied on each line or curve, so obtaining a set of approximated curves on the considered surface. This can be done simultaneously on all the lines or curves by means of parallel computation. Second, the obtained approximations of the surface curves are composed together by using the subdivision technique. The procedure gives, in general, satisfactory approximations to continuous surfaces, possibly with steep gradients.
https://doi.org/10.1142/9789812702647_0012
Coordinate measuring machines (CMMs) are now widely used to qualify industrial parts. Nevertheless, current CMM software is usually restricted to the determination of mean values, both for the characterization of individual surfaces and for the determination of geometrical errors. However, in accordance with quality standards, the uncertainty of each measurement should also be stated. At the last CIRP seminar, a new nonlinear least squares method was proposed for that purpose, to define the error bars of the parameters estimated for each measured surface. These values are deduced from the gap between the measured coordinates and the associated optimized surface. The present work extends this approach to the propagation of such uncertainties to the determination of ISO 1101 tolerances (dimensions and geometrical errors). To illustrate this approach, a specification was inspected on a real industrial part, with respect to the ISO 1101 standard. For this industrial application, different measurement procedures were proposed and carried out. The uncertainties of the estimated geometrical errors were then evaluated, showing the influence of the experimental method on the reliability of the measurement. This example thus demonstrates the need to optimize the inspection process. To conclude, different aspects of the inspection are discussed to improve the verification of ISO 1101 specifications.
https://doi.org/10.1142/9789812702647_0013
The in-use uncertainty of an instrument is investigated according to the possible measurement procedures, namely: as a comparator or as a standard. These two alternatives are embedded in a unique model and the uncertainty components due to instrument calibration and noise are discussed. Special attention is given to the case of calibration by fitting with a straight line.
https://doi.org/10.1142/9789812702647_0014
This paper describes automatic differentiation techniques and their use in metrology. Many models in metrology are nonlinear and the analysis of data using these models requires the calculation of the derivatives of the functions involved. While the rules for differentiation are straightforward to understand, their implementation by hand is often time consuming and error prone. The complexity of some problems makes the use of hand coded derivatives completely unrealistic. Automatic differentiation (AD) is a term used to describe numerical techniques for computing the derivatives of a function of one or more variables. The use of AD techniques potentially allows a much more efficient and accurate way of obtaining derivatives in an automated manner regardless of the problem complication. In this paper, we describe a number of these techniques and discuss their advantages and disadvantages.
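As an illustration of one such technique (not necessarily the ones compared in the paper), the sketch below implements minimal forward-mode automatic differentiation with dual numbers; the class and the example function are hypothetical.

```python
import math

class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers:
    each value carries its derivative with respect to one chosen input,
    and the chain rule is applied operation by operation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def dsin(x):                       # sin extended to dual numbers
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# derivative of f(x) = x*sin(x) + 2x at x = 1.2, without hand-coding f'
x = Dual(1.2, 1.0)                 # seed derivative dx/dx = 1
f = x * dsin(x) + 2 * x
print(f.val, f.der)                # f(1.2) and f'(1.2) = sin(1.2) + 1.2*cos(1.2) + 2
```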
https://doi.org/10.1142/9789812702647_0015
The methods of statistical hypothesis testing are widely used for data analysis in metrology. As a rule they are based on the normal distribution, the t-distribution, the χ2-distribution and the F-distribution. The criterion for hypothesis testing is usually characterized only by the level of significance; a characteristic such as the power of the criterion is hardly used in metrological practice. The paper discusses the use of the corresponding non-central distributions for the evaluation of the power of the criterion, as well as the actual level of significance, in the presence of systematic biases in the measurement results. Examples of testing the measurand value, the difference between measurement results, the consistency of the data and the model of relations are considered.
https://doi.org/10.1142/9789812702647_0016
We present a new software product (DFM Calibration of Weights) developed at Danish Fundamental Metrology (DFM), which has been implemented in mass measurements. This product is used for designing the mass measurements, acquiring measurement data, data analysis, and automatic generation of calibration certificates. Here we will focus on the data analysis, which employs a general method of least squares to calculate mass estimates and the corresponding uncertainties. This method provides a major simplification in the uncertainty calculations in mass measurements, and it allows a complete analysis of all measurement data in a single step. In addition, we present some of the techniques used for validation of the new method of analysis.
https://doi.org/10.1142/9789812702647_0017
The paper describes a simple approach to software design in which the ‘Law of propagation of uncertainty’ is used to obtain measurement results that include a statement of uncertainty, as described in the Guide to the Expression of Uncertainty in Measurement (ISO, Geneva, 1995). The technique can be used directly for measurement uncertainty calculations, but is of particular interest when applied to the design of instrumentation systems. It supports modularity and extensibility, which are key requirements of modern instrumentation, without imposing an additional performance burden. The technique automates the evaluation and propagation of components of uncertainty in an arbitrary network of modular measurement components.
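The sketch below is a minimal illustration of the general idea, not the authors' design: an “uncertain number” type that carries per-source components of uncertainty and combines them by the first-order law of propagation of uncertainty, so that modules can be composed without hand-coded uncertainty budgets. All class and variable names are hypothetical.

```python
import math

class UReal:
    """A real value with components of uncertainty, propagated automatically
    through arithmetic by the first-order law of propagation of uncertainty.
    Keeping one component per source allows modular composition."""
    def __init__(self, value, components):
        self.value = value                     # estimate
        self.components = dict(components)     # source -> sensitivity * u(source)
    @property
    def u(self):                               # combined standard uncertainty
        return math.sqrt(sum(c * c for c in self.components.values()))
    def __add__(self, other):
        comps = dict(self.components)
        for k, c in other.components.items():
            comps[k] = comps.get(k, 0.0) + c
        return UReal(self.value + other.value, comps)
    def __mul__(self, other):
        comps = {k: c * other.value for k, c in self.components.items()}
        for k, c in other.components.items():
            comps[k] = comps.get(k, 0.0) + c * self.value
        return UReal(self.value * other.value, comps)

# hypothetical modules: a voltage and a current sensor feeding a power calculation
V = UReal(12.00, {"V-sensor": 0.05})
I = UReal(1.50, {"I-sensor": 0.02})
P = V * I
print(P.value, P.u, P.components)
```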
https://doi.org/10.1142/9789812702647_0018
The identification of the phase transition is required in thermometry in order to realize the defining fixed points of the International Temperature Scale of 1990 (ITS-90).
This paper proposes the use of statistical hypothesis-testing methodologies for phase transition identification by simply monitoring the temperature behaviour. Only the pure structural change model is taken into consideration, that is, the model in which all components of the parameter vector are allowed to change together.
A statistical test for a single known change point is briefly presented in the paper. The procedure is extended to the more interesting application of the detection of a single unknown change point. A sliding window algorithm is proposed for the on-line detection of a possible change point.
The method has been applied to identify the triple points of the four gases realizing the cryogenic range of the ITS-90. The pure structural change model has proved to be an innovative tool for a reproducible and reliable phase transition identification.
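A minimal sketch of the idea, assuming a straight-line temperature model and a Chow-type F test for a pure structural change at the mid-point of a sliding window; the window length, significance level and the model itself are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def chow_stat(t, y, k):
    """F statistic for a pure structural change at index k in the model
    y = a + b*t + noise: both coefficients may change together after k."""
    def rss(tt, yy):
        A = np.column_stack([np.ones_like(tt), tt])
        res = yy - A @ np.linalg.lstsq(A, yy, rcond=None)[0]
        return res @ res
    n, p = len(y), 2
    rss_pooled = rss(t, y)
    rss_split = rss(t[:k], y[:k]) + rss(t[k:], y[k:])
    return ((rss_pooled - rss_split) / p) / (rss_split / (n - 2 * p))

def detect_change(t, y, window=40, alpha=0.01):
    """Slide a window over the series and flag the first window whose
    mid-point Chow statistic exceeds the F critical value."""
    for start in range(0, len(y) - window):
        k = window // 2
        F = chow_stat(t[start:start + window], y[start:start + window], k)
        if F > stats.f.ppf(1 - alpha, 2, window - 4):
            return start + k                   # estimated change-point index
    return None

# hypothetical plateau followed by a drift (e.g. end of a triple-point plateau)
rng = np.random.default_rng(0)
t = np.arange(300.0)
y = np.where(t < 200, 24.5561, 24.5561 + 0.002 * (t - 200)) + rng.normal(0, 1e-4, t.size)
print(detect_change(t, y))
```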
https://doi.org/10.1142/9789812702647_0019
A measurement process has imperfections that give rise to uncertainty in each measurement result. Statistical tools provide an assessment of the uncertainties associated with the results only if all the relevant quantities involved in the process are interpreted or regarded as random variables. In other terms, all the sources of uncertainty are characterized by probability distribution functions, the form of which is assumed either to be known from measurements or, if unknown, conjectured. Entropy is an information measure associated with the probability distribution of any random variable, so it plays an important role in metrological activity.
In this paper the authors introduce two basic entropy optimization principles, Jaynes's principle of maximum entropy and Kullback's principle of minimum cross-entropy (minimum directed divergence), and discuss methods to approach the optimal solution of those entropic forms in some specific measurement models.
https://doi.org/10.1142/9789812702647_0020
A model for atomic clock errors is given by a stochastic differential equation. The probability that the clock error does not exceed a limit of permissible error is also studied by means of the survival probability.
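A commonly used two-state form of such a model (the abstract does not reproduce the equation, so this is given only as an indicative example) is

```latex
\begin{aligned}
  \mathrm{d}X_1(t) &= \bigl(X_2(t) + \mu_1\bigr)\,\mathrm{d}t + \sigma_1\,\mathrm{d}W_1(t),\\
  \mathrm{d}X_2(t) &= \mu_2\,\mathrm{d}t + \sigma_2\,\mathrm{d}W_2(t),
\end{aligned}
```

where X1 is the clock phase (time) deviation, X2 the random frequency deviation, μ1 and μ2 deterministic drift terms, and W1, W2 independent Wiener processes; the survival probability is then the probability that |X1(t)| has remained within the limit of permissible error up to time t.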
https://doi.org/10.1142/9789812702647_0021
In metrology, measurements on the same standard are usually repeated several times in each laboratory. Repeated measurement is also the idea behind inter-comparisons, in different laboratories, of standards of a physical or chemical quantity. Often it is not obvious whether these results can be considered as repeated measurements, so that treating the data statistically as if they were repeated data can lead to misleading results. The paper reviews the use of two classes of methods that keep track of the fact that the data are collected in series: a) those considering a class of regression models able to accommodate both the commonality of all series and the specificity of each series; b) those using a mixture probability model for describing the pooled statistical distribution when the data series are provided in a form representing their statistical variability. Some problems related to the uncertainty estimate of the latter are also introduced.
https://doi.org/10.1142/9789812702647_0022
This paper presents a new homotopic algorithm for solving Elementwise-Weighted Total-Least-Squares (EW-TLS) problems. For this class of problems the assumption of identical variances of data errors, typical of classical TLS problems, is removed, but the solution is not available in a closed form. The proposed iterative algorithm minimizes an ad hoc parametric weighted Frobenius norm of errors. The gradual increase of the continuation parameter from 0 to 1 allows one to overcome the crucial choice of the starting point. Some numerical examples show the capabilities of this algorithm in solving EW-TLS problems.
https://doi.org/10.1142/9789812702647_0023
Measurement comparison data sets are generally summarized using a simple statistical reference value calculated from the pool of the participants’ results. This reference value can become the standard against which the performance of the participating laboratories is judged. Consideration of the comparison data sets, particularly with regard to the consequences and implications of such data pooling, can allow informed decisions regarding the appropriateness of choosing a simple statistical reference value. Recent key comparison results drawn from the BIPM database are examined to illustrate the nature of the problem and the utility of a simple approach to creating pooled data distributions. We show how to use detailed analysis when arguing in favour of a key comparison reference value (KCRV), or when deciding that a KCRV is not warranted for the particular data sets obtained experimentally.
https://doi.org/10.1142/9789812702647_0024
The forthcoming Supplement 1 to the GUM: Numerical methods for the propagation of distributions proposes the use of Monte Carlo simulation for uncertainty evaluation. Here we apply a modified implementation of the proposed Monte Carlo Simulation to construct intervals of confidence for a complex-valued measurand. In particular, we analyze the so-called three-voltage method for impedance calibration, which relates complex-valued impedances to voltage moduli measurements. We compare and discuss the results obtained with those given by application of Bootstrap Resampling on the same model.
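For orientation, the sketch below shows the generic propagation-of-distributions recipe of the draft Supplement applied to a hypothetical complex-valued model Z = R + jωL, summarizing the real and imaginary parts separately; it is not the three-voltage model of the paper, and all values are invented.

```python
import numpy as np

def coverage_interval(samples, coverage=0.95):
    """Probabilistically symmetric coverage interval from sample percentiles."""
    return np.percentile(samples, [50 * (1 - coverage), 50 * (1 + coverage)])

def propagate(n=200_000, seed=0):
    """Monte Carlo propagation of distributions (GUM Supplement 1 style)
    for a hypothetical complex-valued measurand Z = R + j*omega*L;
    the three-voltage model of the paper would replace the model line."""
    rng = np.random.default_rng(seed)
    R = rng.normal(100.0, 0.05, n)        # resistance / ohm, Gaussian PDF
    L = rng.normal(10e-3, 2e-5, n)        # inductance / H, Gaussian PDF
    omega = 2 * np.pi * 1e3               # angular frequency / rad s^-1
    Z = R + 1j * omega * L                # measurement model
    for part, v in (("Re", Z.real), ("Im", Z.imag)):
        print(part, v.mean(), v.std(ddof=1), coverage_interval(v))

propagate()
```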
https://doi.org/10.1142/9789812702647_0025
In this paper we focus on the main problem of quantum state tomography: the reconstructed density matrices often are not physical because of experimental noise. We propose a method to avoid this problem using Bayesian statistical theory.
https://doi.org/10.1142/9789812702647_0026
Based on the orthodox theory of single electronics, a simulation of a tunnel junction is performed, aiming to investigate whether quasiparticle events are predicted to transfer fractional charge. The corresponding output of the software package MOSES (Monte-Carlo Single-Electronics Simulator) is discussed.
https://doi.org/10.1142/9789812702647_0027
To comply with point 5.4.5 of the ISO/IEC 17025 standard, laboratories “shall validate non-standard methods, laboratory designed/developed methods, standard methods used outside their intended scope, and amplifications and modifications of standard methods to confirm that the methods are fit for the intended use” [1]. This requirement of the standard is new, and laboratories are evaluating approaches to the validation process within their quality systems. A procedure for the validation of calibration methods is proposed and an example of validation of measurement results is described.
https://doi.org/10.1142/9789812702647_0028
Various least squares methods have been compared with respect to straight-line fitting to data sets with errors in both variables, to check the benefit of using the most appropriate method for dealing with heteroscedastic data, the element-wise total least squares (EW-TLS). It is found that EW-TLS always gives the correct estimate; weighted least squares can sometimes also be a good approximation, but this cannot be guessed a priori.
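A hedged sketch of an element-wise errors-in-variables straight-line fit using scipy.odr (orthogonal distance regression with per-point standard deviations), which for this model minimizes essentially the same criterion as EW-TLS; the data are simulated and purely illustrative.

```python
import numpy as np
from scipy import odr

# hypothetical heteroscedastic data: individual uncertainties sx_i, sy_i
rng = np.random.default_rng(0)
x_true = np.linspace(0, 10, 15)
sx = rng.uniform(0.02, 0.2, x_true.size)
sy = rng.uniform(0.05, 0.5, x_true.size)
x = x_true + rng.normal(0, sx)
y = 3.0 * x_true + 1.0 + rng.normal(0, sy)

# straight-line fit weighting each point by its own variances,
# i.e. the element-wise errors-in-variables criterion for this model
model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(x, y, sx=sx, sy=sy)
fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
print(fit.beta, fit.sd_beta)    # slope/intercept estimates and standard errors
```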
https://doi.org/10.1142/9789812702647_0029
Noise often plays a role in roughness measurements. Noise may bias a measured parameter, since it makes the parameter deviate away from zero. In this paper we propose a method to correct for the noise bias in the surface parameter Sq. By considering the decrease in Sq as an average over multiple measurements is taken, an unbiased value for Sq is estimated by extrapolating to an infinite number of measurements. It is shown that, using this method with only two measurements, the true measurand is approached better than by averaging tens of measurements. The principle is extended to obtain a complete ‘noise-corrected’ surface by considering the power spectrum and the change of each Fourier component with averaging. Combining the two methods and considering the statistical significance of each Fourier component enables a further reduction. Examples and simulations are shown for the calibration of a roughness drive axis and for surface measurements.
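A minimal sketch of the Sq bias correction, assuming additive, independent, zero-mean noise so that the Sq² of an N-fold average equals the true Sq² plus σ²/N; two repeated topographies then suffice to extrapolate to N → ∞. Function names and the simulated data are illustrative only.

```python
import numpy as np

def sq(z):
    """RMS surface roughness Sq of a (mean-subtracted) topography."""
    z = z - z.mean()
    return np.sqrt(np.mean(z**2))

def sq_noise_corrected(z1, z2):
    """Estimate the noise-free Sq from two repeated topographies z1, z2,
    assuming additive independent noise: averaging halves the noise
    variance, so Sq^2(avg of 2) = Sq_true^2 + sigma^2/2 while
    Sq^2(single) = Sq_true^2 + sigma^2; extrapolating to an infinite
    number of averages gives Sq_true^2 = 2*Sq^2(avg) - mean single Sq^2."""
    sq_single_sq = 0.5 * (sq(z1) ** 2 + sq(z2) ** 2)   # mean single-measurement Sq^2
    sq_avg_sq = sq(0.5 * (z1 + z2)) ** 2               # Sq^2 of the two-fold average
    return np.sqrt(max(2.0 * sq_avg_sq - sq_single_sq, 0.0))

# hypothetical check with simulated data (true Sq of sin profile ~ 0.707)
rng = np.random.default_rng(1)
surface = np.sin(np.linspace(0, 20, 1000))
z1 = surface + rng.normal(0, 0.3, surface.size)
z2 = surface + rng.normal(0, 0.3, surface.size)
print(sq(z1), sq_noise_corrected(z1, z2))
```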
https://doi.org/10.1142/9789812702647_0030
The main purpose of the paper is to present the uncertainty evaluation of standard platinum resistance thermometers (SPRTs). Three methods of calculating the expanded uncertainty are presented and the results are compared.
https://doi.org/10.1142/9789812702647_0031
The main purpose of the paper is to show how easily and clearly a measurement result can be accompanied by a coverage interval, calculated at any desired confidence level, in a virtual instrument, provided an appropriate procedure is implemented in the instrument or the instrument is equipped with an additional processor that handles the measured data as a series of values and calculates a coverage interval covering the true value at a stated level of confidence.
https://doi.org/10.1142/9789812702647_0032
Linear programming (LP) techniques and interior-point methods (IPMs) have been used to solve ℓ1 approximation problems. The advantage of the IPMs is that they can reach the vicinity of the optimum very quickly regardless of the size of the problem, but numerical difficulties arise when the current solution approaches the optimum. On the other hand, the number of iterations needed for an ℓ1 approximation problem by LP techniques is proportional to the dimension of the problem. However, these LP methods are finite algorithms and do not suffer from the difficulties that the IPMs endure. It would be beneficial to combine the merits of both methods to achieve computational efficiency and accuracy. In this paper, we propose an algorithm which applies the IPMs to obtain a near-best solution and fine-tunes the result with one of the simplex methods. The savings in terms of number of iterations can be substantial.
https://doi.org/10.1142/9789812702647_0033
The Callendar–Van Dusen (CVD) equations of IEC 751 for industrial platinum resistance thermometers (IPRTs) do not follow the International Temperature Scale of 1990 (ITS-90) within the required accuracy. Indeed, there is a demand for calibrations of IPRTs with uncertainties of around 10 mK for temperatures from 0 °C to 250 °C and better than 0.1 K between 250 °C and about 500 °C, while the use of the CVD equations does not allow an interpolation uncertainty better than ± 0.2 K over a range larger than 0 - 250 °C. To solve this problem, two new reference equations, one below 0 °C and the other above 0 °C, are proposed to be used instead of the CVD equations as the reference for IPRTs. These equations are of higher order than the CVD equations, and their lower-order coefficients are equal to the constants A, B and C of the CVD equations. The use of these new reference equations allows an interpolation accuracy at the millikelvin level, while limiting the number of calibration points in every range to five.
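For reference, the CVD interpolation equations standardized in IEC 751, whose constants A, B and C are retained as the lower-order coefficients of the proposed reference equations, are

```latex
R(t) = \begin{cases}
  R_0\left[1 + A\,t + B\,t^{2} + C\,(t - 100\,^{\circ}\mathrm{C})\,t^{3}\right], & -200\,^{\circ}\mathrm{C} \le t < 0\,^{\circ}\mathrm{C},\\
  R_0\left(1 + A\,t + B\,t^{2}\right), & 0\,^{\circ}\mathrm{C} \le t \le 850\,^{\circ}\mathrm{C},
\end{cases}
```

where R0 is the resistance at 0 °C.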
https://doi.org/10.1142/9789812702647_0034
At the Intermediate Temperature Laboratory of the Istituto di Metrologia “G. Colonnetti” (IMGC), all operations concerning the calibration of standard platinum resistance thermometers (SPRTs) at the ITS-90 fixed points are now completely automated. The software is based upon Visual Basic modules, forms, macros, I/O connections and dialogs with the main Microsoft Office applications. These modules and other “.exe” files are also used for research purposes, as they can be run separately as virtual consoles for data acquisition, pre-processing and post-processing. Several blocks call on each other, from data acquisition to the certificate printout; those blocks can serve as useful tools for accredited calibration laboratories. Statistics, input data evaluation, cross-checks, ITS-90 requirements and equations, and the fixed-point and thermometer databases are in continuous dialog until the end of the calibration procedure. All the data, from both the calibration and the research activities, are automatically saved every week.
https://doi.org/10.1142/9789812702647_0035
A new off-line gain stabilisation method is applied to high-resolution alpha-particle spectrometry of 235U. The software package SHIFTER automatically identifies and quantifies gain shift for intermediate spectra or even individual data in list mode files. By reversing the gain shift before combining all data into one sum spectrum, one can optimise the overall resolution. This automatic procedure is very useful with high-resolution spectrometry at low count rate, as a compensation for gain drift during the long measurement.
https://doi.org/10.1142/9789812702647_0036
In recent years, a large number of programs have been equipped with ANOVA (analysis of variance) functions. The expression for the expectation of variance in ANOVA must be calculated in order to evaluate each standard uncertainty. However, modern software does not yet offer the functionality to calculate this expression. In this study, the expectations of variance in ANOVA were formulated, and a new program was developed that calculates the expression for the expectation of variance in typical and specific experimental designs and displays the symbolic expectation of each variance.
https://doi.org/10.1142/9789812702647_0037
Passive sonar records the sound radiated by a target. A signal corresponding to a vessel is called a frequency track. A single noise source typically produces several frequency tracks, which collectively form a harmonic set. Often, there are several detectable harmonic sets, and so a key problem in passive sonar is to group the frequency tracks into their corresponding harmonic sets. This paper describes a novel method of identifying the different harmonic sets using a combination of data fitting and template matching.
https://doi.org/10.1142/9789812702647_0038
We describe a method for the rapid calculation of an interval covering approximately a specified proportion of the distribution of a function of independent random variables. Its speed will make it particularly valuable in inverse problems. The first four moments of the output variable are obtained by combining the moments of the input variables according to the function. A Pearson distribution is fitted to these moments to give 95% intervals that are accurate in practice.
https://doi.org/10.1142/9789812702647_0039
A short course on uncertainty evaluation was held in association with the international conference on Advanced Mathematical and Computational Tools for Metrology (AMCTM 2003). The objectives of the course are stated and a brief description given of the lessons learned.
https://doi.org/10.1142/9789812702647_0040
https://doi.org/10.1142/9789812702647_bmatter
Author Index.