Advances in metrology depend on improvements in scientific and technical knowledge and in instrumentation quality, as well as on better use of advanced mathematical tools and development of new ones. In this volume, scientists from both the mathematical and the metrological fields exchange their experiences. Industrial sectors, such as instrumentation and software, will benefit from this exchange, since metrology has a high impact on the overall quality of industrial products, and applied mathematics is becoming more and more important in industrial processes.
This book is of interest to people in universities, research centers and industries who are involved in measurements and need advanced mathematical tools to solve their problems, and also to those developing such mathematical tools.
https://doi.org/10.1142/9789812811684_fmatter
https://doi.org/10.1142/9789812811684_0001
B-splines can be used to approximate data in a variety of metrology applications. In general, the approximation is carried out so as to minimize a criterion defined in terms of the least-squares norm. However, often this choice of norm is inappropriate, and a measure such as the ℓ1 or minimax norm should be adopted. In this paper, we introduce the topic of B-splines, and we discuss the approximation of data in the ℓ1, least-squares and minimax norms. In particular, we consider how the specific properties of B-splines can be exploited to allow an approximant to be developed in an efficient manner, and we describe a new algorithm for ℓ1 B-spline approximation.
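As a minimal sketch of the least-squares case only, the following Python fragment fits a cubic B-spline to illustrative data using SciPy; the data, knots and noise level are invented for illustration, and the ℓ1 and minimax cases require the specialised algorithms discussed in the paper rather than this routine.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Illustrative data: noisy samples of a smooth calibration-type curve.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

# Interior knots of the cubic B-spline basis (the choice is problem dependent).
knots = np.linspace(1.0, 9.0, 9)

# Least-squares B-spline approximant: minimises the sum of squared residuals.
spline = LSQUnivariateSpline(x, y, knots, k=3)
print("RMS residual:", np.sqrt(np.mean((spline(x) - y) ** 2)))
```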
https://doi.org/10.1142/9789812811684_0002
In recent years we have witnessed the development of a new class of high-precision machine tools: hexapod machines. We have tried to use the hexapod machine as an artefact for CMM calibration. However, given their cost and their resolution, hexapods are not the best solution for a transportable artefact. By modifying the hexapod structure, we have developed two different artefacts: one for local CMM calibration and another for global calibration.
Local calibration allows the determination of the transfer function characterizing the sensor displacement of the CMM. It is based on the measurement of a rigid artefact of known geometry, derived from the hexapod geometry, which allows the errors of displacement of the CMM sensor to be determined.
The artefact for global calibration uses a self-calibration method, based on measurements from three miniature laser interferometers that track the position of a sphere in the volume of the CMM.
The paper describes the patented artefacts for local and global calibration, the mathematical problems arising from the self-calibration of the global artefact, and the method for interpreting the measurement results of the artefact used to measure the local errors.
https://doi.org/10.1142/9789812811684_0003
A method is presented for detecting and resolving intrinsic non-identifiability of error models. Here, a non-identifiability is called intrinsic when no sampling strategy exists that results in a non-singular computational problem. Detecting these cases saves time when designing a sampling strategy. The method is limited to linear algebraic models and is based on the projection of the model equation gradient onto a suitable functional basis. A null-space analysis is then carried out, which leads to suitable constraints to add to the model in order to resolve the non-identifiability. A complex case study in the field of coordinate metrology is presented and worked out. The results are fully consistent with those obtained in previous work carried out without a systematic approach, and extend the conclusions to a wider class of cases. A possible extension of the method to non-linear models is briefly discussed.
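For a linear model the intrinsic redundancies show up as a (numerical) null space of the design matrix built over the whole admissible sampling domain. The sketch below illustrates this kind of null-space check with an invented model and tolerance, not those of the paper's case study.

```python
import numpy as np

# Illustrative linear error model with an intrinsic redundancy:
# the first two parameters always enter through their sum, whatever the sampling.
def design_matrix(x):
    return np.column_stack([x, x, x**2, np.ones_like(x)])

x = np.linspace(-1.0, 1.0, 500)          # dense sampling of the whole domain
J = design_matrix(x)

# Null-space analysis: right singular vectors with (near-)zero singular values
# give the parameter combinations that no sampling strategy can separate.
U, s, Vt = np.linalg.svd(J, full_matrices=False)
tol = s.max() * 1e-10
null_space = Vt[s < tol]
print("numerically singular parameter directions:\n", null_space)
```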
https://doi.org/10.1142/9789812811684_0004
We describe a general methodology for testing the numerical accuracy of scientific software used in metrology. The basis of the approach is the design and use of reference data sets and corresponding results to undertake black-box testing of the software under investigation. The application of the methodology is illustrated by presenting the results of testing particular in-built functions provided by two proprietary software packages.
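A rough illustration of the idea, under assumptions of our own: a reference data set is constructed so that the exact result is known a priori (here a shifted data set for the sample standard deviation, which is shift-invariant), the tested function is treated as a black box, and the accuracy is summarised as a number of correct significant digits.

```python
import numpy as np

def accuracy_digits(test_result, reference_result):
    """Approximate number of correct significant figures of a tested result."""
    rel_err = abs(test_result - reference_result) / abs(reference_result)
    return np.inf if rel_err == 0 else -np.log10(rel_err)

# Illustrative black-box test: sample standard deviation on a reference data
# set whose exact result is known by construction (shift invariance).
data = np.array([1e9 + d for d in (0.1, 0.2, 0.3, 0.4, 0.5)])
reference = np.std([0.1, 0.2, 0.3, 0.4, 0.5], ddof=1)   # exact, unshifted truth
tested = np.std(data, ddof=1)                           # function under test
print("correct significant digits:", accuracy_digits(tested, reference))
```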
https://doi.org/10.1142/9789812811684_0005
In this paper we report a theoretical approach for the evaluation of the distortion effects on photon statistics measured by non-ideal devices with dead time. In particular, we present a general method for the reconstruction of the distribution function of photon arrival times, by evaluating the probability of detecting the n-th photon at a given time in the presence of either extending or non-extending dead time.
https://doi.org/10.1142/9789812811684_0006
The DCOM architecture is conceived for creating software objects that are compatible with, and can be integrated into, the Internet infrastructure (ActiveX modules). This architecture has very interesting properties for the measurement community: it allows the development of virtual instrument consoles that can be integrated with more complex programs or with existing commercial packages. In this framework, Microsoft Excel becomes an efficient and flexible datalogger.
The principle is based on object-oriented programming and takes advantage of the possibility of exporting essential commands and properties from a piece of software that is itself complex and specialized (in this case, the virtual console of an instrument) to a client program that can be used directly with only a few simple instructions.
In our particular case, virtual consoles have been created for the ASL F18 resistance bridge, for the MI 6010B resistance bridge and for various digital multimeters, to be used within Microsoft® Excel for simple and immediate data acquisition or, within more complex structures, for calibration by comparison or calibration at the fixed points.
The "shell" programs can freely use any of the instruments connected through the virtual consoles. In these more complex structures, the calibration program and the virtual consoles have been merged so that the parameters of both the general experiment and the instrument (the resistance bridge) are set from within the same window, while the virtual console itself can still be called for more detailed settings. A general outline of an ActiveX module is given, together with a description of the calibration program.
https://doi.org/10.1142/9789812811684_0007
This report presents the outcomes of the discussion of the concept of measurement uncertainty in Russia, expresses the authors' opinion on the problem, and demonstrates ways of applying the Guide to the Expression of Uncertainty in Measurement (GUM) in Russia. The concept of the equivalence of measurement standards, its place in the worldwide traceability system of measurements, the tasks of key comparisons and quantitative estimators of the degrees of equivalence of measurement standards are considered. Common problems concerning the procedure for performing measurements in key comparisons are discussed.
https://doi.org/10.1142/9789812811684_0008
We introduce a computational tool allowing interactive bootstrap calculations to be carried out in the Internet environment. The tool, written in Java, can be embedded as an applet in a Web page, and hence can be used, among other things, to produce live teaching materials ready to be published on the net. It allows bootstrap accuracy measures to be estimated for location statistics, variance and correlation. A short presentation of the Java technology is given, with a few warnings concerning the security issues involved in running the applet.
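A minimal, non-interactive sketch of the same kind of calculation (here in Python rather than Java, with an invented sample and resample count): resampling with replacement and recomputing the statistic gives its bootstrap standard error and a percentile interval.

```python
import numpy as np

rng = np.random.default_rng(1)
sample = np.array([9.98, 10.02, 10.01, 9.97, 10.05, 10.00, 9.99, 10.03])

# Nonparametric bootstrap: resample with replacement, recompute the statistic.
B = 5000
boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(B)
])

print("bootstrap standard error of the median:", boot_medians.std(ddof=1))
print("percentile 95 % interval:", np.percentile(boot_medians, [2.5, 97.5]))
```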
https://doi.org/10.1142/9789812811684_0009
This paper generalises the Total Least Squares (TLS) problem to cases in which the variances of the data and of the observations are assumed to be different. Besides TLS, this formulation comprises the ordinary (OLS) and data (DLS) least squares problems when a finite value is specified for a parameter ζ that depends on the ratio between the data variance and the observation variance. Numerical iterative techniques for finding the minimum are shown to be very efficient when the parameter ζ is suitably varied along the search path in order to stay in the convergence domain and improve the speed. Some numerical examples show the capabilities of the algorithm in dealing with large-scale problems.
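The paper's exact formulation is not reproduced here; the sketch below uses one common way of writing such a family, the objective f(x) = ||Ax − b||² / (ζ + ||x||²), which gives TLS for ζ = 1, DLS for ζ = 0 and, after rescaling by ζ, tends to OLS as ζ grows large. The parameter ζ is then varied along the search path, as the abstract describes; data and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(100)

def objective(x, zeta):
    r = A @ x - b
    return (r @ r) / (zeta + x @ x)        # zeta = 1: TLS-like, zeta = 0: DLS-like

x0 = np.linalg.lstsq(A, b, rcond=None)[0]  # OLS solution as starting point
for zeta in (1e3, 10.0, 1.0, 0.1):         # vary zeta gradually along the path
    x0 = minimize(objective, x0, args=(zeta,), method="BFGS").x
print("estimate for the final value of zeta:", x0)
```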
https://doi.org/10.1142/9789812811684_0010
A close connection can be established between the subband coding schemes used in signal processing and the multiresolution analyses generated by refinable functions and wavelets. The main advantage of the filter approach is that filters can be designed without requiring the existence of a corresponding MRA. Here, an efficient algorithm is presented for obtaining filter systems satisfying the perfect reconstruction condition. A class of filters with relevant properties is then considered, and experimental results in image compression are shown.
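As a small illustration of what the perfect reconstruction condition requires, the sketch below checks the two classical two-channel conditions (alias cancellation and distortion-free transfer up to a delay) for the elementary Haar filter pair; the filter systems designed in the paper are of course more elaborate.

```python
import numpy as np

# Two-channel Haar filter bank, coefficients read as polynomials in z^{-1}.
h0 = np.array([1.0, 1.0]) / np.sqrt(2)    # analysis low-pass
h1 = np.array([1.0, -1.0]) / np.sqrt(2)   # analysis high-pass
f0 = np.array([1.0, 1.0]) / np.sqrt(2)    # synthesis low-pass
f1 = np.array([-1.0, 1.0]) / np.sqrt(2)   # synthesis high-pass

def modulate(h):
    """Coefficients of H(-z): multiply the k-th tap by (-1)^k."""
    return h * (-1.0) ** np.arange(h.size)

# Alias cancellation:        H0(-z)F0(z) + H1(-z)F1(z) = 0
alias = np.polymul(modulate(h0), f0) + np.polymul(modulate(h1), f1)
# Distortion-free transfer:  H0(z)F0(z) + H1(z)F1(z) = 2 z^{-d}
transfer = np.polymul(h0, f0) + np.polymul(h1, f1)

print("alias term   :", np.round(alias, 12))      # expected: all zeros
print("transfer term:", np.round(transfer, 12))   # expected: [0, 2, 0], i.e. 2 z^{-1}
```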
https://doi.org/10.1142/9789812811684_0011
Laboratories worldwide use the Guide to the Expression of Uncertainty in Measurement (GUM) (ISO, 1995) in providing the uncertainty associated with their measurement results. There are circumstances, though, where the direct applicability and usefulness of the GUM are limited. Such instances arise when the linearisation of the measurement model is inadequate, or when the Welch-Satterthwaite approximation for the effective number of degrees of freedom is poor.
This paper advocates an approach to uncertainty evaluation that avoids these limitations. It is based on the determination of the probability distribution of the values that might be attributed to the measurand. From this probability distribution an interval can be determined that contains the value of the measurand with a specified probability (a coverage interval). This distribution can be obtained by analytical or numerical methods. The former have relatively limited applicability, but the latter are very general, and their implementation using Monte Carlo Simulation (MCS) is addressed. The effectiveness of this tool, relative to other numerical methods, increases with the number of model inputs.
MCS is consistent with the more general considerations of the GUM, a point that is essential in the context of its being adopted within quality management systems.
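A minimal sketch of the MCS procedure for a toy measurement model Y = X1·X2 with two input quantities; the distributions, values and number of trials are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 10**6                                   # number of Monte Carlo trials

# Illustrative state-of-knowledge distributions for the input quantities.
x1 = rng.normal(10.0, 0.2, M)               # Gaussian input
x2 = rng.uniform(0.95, 1.05, M)             # rectangular (uniform) input

# Propagate the distributions through the measurement model Y = X1 * X2.
y = x1 * x2

estimate = np.mean(y)
std_uncertainty = np.std(y, ddof=1)
coverage_interval = np.percentile(y, [2.5, 97.5])    # 95 % coverage interval
print(estimate, std_uncertainty, coverage_interval)
```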
https://doi.org/10.1142/9789812811684_0012
The total median is an appealing estimator of the population mean from a sample because it retains the robustness property of the median but has a smaller mean-squared error. It is computed as a weighted combination of the sample data, using weights that are intermediate between those for the mean and the median. Although the weights are related to the total bootstrap of the median, a compact formula is presented which permits the weights to be evaluated without actually performing the bootstrap. The variance of the total median and coverage intervals for the total median can readily be obtained using the bootstrap.
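The compact weight formula itself is not reproduced here. As a rough cross-check, the total median can be approximated by Monte Carlo, since it is the expected value of the median over bootstrap resamples; the sample and resample count below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
sample = np.array([10.1, 9.9, 10.0, 10.3, 9.8, 10.2, 15.0])   # one outlier

# Monte Carlo approximation of the total median: mean of the medians of
# bootstrap resamples (the exact version instead weights the order statistics).
B = 200_000
boot_medians = np.median(
    rng.choice(sample, size=(B, sample.size), replace=True), axis=1
)
total_median = boot_medians.mean()

print("mean        :", sample.mean())
print("median      :", np.median(sample))
print("total median:", total_median)
```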
https://doi.org/10.1142/9789812811684_0013
The need to treat collectively data obtained from a number of different sources arises in a wide range of disciplines. Within the field of metrology, the science of measurement, it is often necessary to fuse together data which are known to contain differing levels of error. Often the intention is to collect the data in some structured way; however, external influences can force the abscissae away from the intended path and produce data that lie along a number of curved paths in the xy-plane. In this paper we present an algorithm which overcomes the need to treat curved paths (or lines) of data as being generally scattered, and allows the semi-structured form of the abscissae to be exploited. The method presented here has two key features. Firstly, we relate the data collected along each curved path to a local approximation, enabling any source errors to be considered separately. We show how a set of secant lines can be defined from the locations of the centres chosen for each local approximation to provide a representation of the global spread of the abscissae. Secondly, we present a method for fast evaluation of the global approximation by creating a new set of 'approximated' data values lying at scattered locations on parallel lines, where the separable properties of the Gaussian function can be utilised to produce an efficient tensor product approximation. We further suggest how greater efficiency can be achieved by identifying opportunities for parallel processing.
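The efficiency gain from separability comes from the identity exp(−((x−cx)² + (y−cy)²)/(2σ²)) = exp(−(x−cx)²/(2σ²)) · exp(−(y−cy)²/(2σ²)), so a Gaussian basis function can be evaluated on an m × n grid as an outer product of two vectors rather than point by point. A small sketch with an invented grid and centre:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400)          # grid abscissae, direction 1
y = np.linspace(0.0, 1.0, 300)          # grid abscissae, direction 2
cx, cy, sigma = 0.4, 0.7, 0.05          # centre and width of one basis function

# Separable evaluation: two 1-D exponentials and one outer product.
gx = np.exp(-((x - cx) ** 2) / (2 * sigma**2))
gy = np.exp(-((y - cy) ** 2) / (2 * sigma**2))
G_separable = np.outer(gx, gy)          # 400 x 300 values from only 700 exponentials

# Direct (non-separable) evaluation for comparison.
X, Y = np.meshgrid(x, y, indexing="ij")
G_direct = np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma**2))
print("max difference:", np.abs(G_separable - G_direct).max())
```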
https://doi.org/10.1142/9789812811684_0014
A relational database is described that supports calibration and standard conservation in the Dutch standards laboratory for temperature and humidity calibrations. The tables in the database are grouped to support various instrument types. The relational structure and sets of integrity rules guarantee an optimal and reliable database. Several applications share the database. Access to the database is controlled and protected by the use of objects.
https://doi.org/10.1142/9789812811684_0015
Quantitative magnetic resonance spectroscopy can be used to determine metabolite concentrations of the human brain from in-vivo measurements. The concentrations are obtained by application of nonlinear least-squares methods using physical models. The estimation is complicated due to unavoidable background signals contained in the measured spectra. Many different methods of analysis have been proposed and it is increasingly required to compare these methods in the light of the difficulties inherent in magnetic resonance spectroscopy.
This paper presents a set of test problems which are constructed on the basis of measured and simulated magnetic resonance spectroscopy data. The problems contain measured background signals, and they allow an exact quantitative assessment of results obtained by a quantification method. Current quantification methods are briefly outlined and two of them are compared by applying them to the set of test problems.
https://doi.org/10.1142/9789812811684_0016
The Mutual Recognition Arrangement (MRA) signed in October 1999 by the countries of the Metre Convention has raised interesting problems for the National Metrology Institutes (NMIs) and for the scientific metrology groups dealing with equivalence statements and reference values. Despite the different characteristics of the various quantities, there are common statistical problems arising from the results of the comparisons. The equivalence obtained by a country within a region has traceability recognition consequences for its NMI as the head of the hierarchical chain of standards. An interlaboratory comparison is an experiment that should be designed so that a statistically significant confidence level can be assigned to the results. Some problems are raised; solutions will be found through interdisciplinary work between metrologists, mathematicians and statisticians.
https://doi.org/10.1142/9789812811684_0017
Self-calibration is an important, practical technique in metrology. It refers to measurement experiments in which the determination of the parameters a of primary interest depends on knowledge of additional parameters b associated with the system. In a standard approach, additional experiments are performed to provide estimates of b, which are then used as prior calibration information in subsequent experiments to determine a. In a self-calibration experiment, the aim is to determine both sets of parameters simultaneously, making best use of all the measurement data. In this paper, we discuss a general framework for self-calibration, illustrated with a number of traditional examples, including angle measurement.
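A very small generic illustration of the idea, under assumptions of our own (not the paper's framework): each observation is modelled as the sum of an instrument parameter a_i and an artefact parameter b_j, and both sets are estimated simultaneously from the complete data set, with one closure-type constraint removing the indeterminate common offset.

```python
import numpy as np

rng = np.random.default_rng(5)
na, nb = 4, 6                              # instrument and artefact parameters
a_true = rng.normal(0.0, 0.1, na)
a_true -= a_true.mean()                    # make the closure constraint hold exactly
b_true = rng.normal(5.0, 1.0, nb)

# Every instrument setting i measures every artefact feature j.
rows, y = [], []
for i in range(na):
    for j in range(nb):
        row = np.zeros(na + nb)
        row[i], row[na + j] = 1.0, 1.0
        rows.append(row)
        y.append(a_true[i] + b_true[j] + rng.normal(0.0, 0.01))

# One constraint (sum of instrument parameters = 0) removes the common offset.
rows.append(np.concatenate([np.ones(na), np.zeros(nb)]))
y.append(0.0)

theta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
print("estimated a:", theta[:na])
print("estimated b:", theta[na:])
```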
https://doi.org/10.1142/9789812811684_0018
Many metrology systems involve more than one sensor, and the analysis of the data produced by these systems has to take into account the characteristics of the data arising from the different sensors. For well-characterised systems in which the behaviour of the sensors is known a priori, appropriate methods for estimating the parameters of the system from measurement data can be derived according to maximum likelihood principles. For systems subject to unknown or unpredictable variations, estimation methods that can adapt to these variations are required. In this paper, we show how a class of such methods can be defined. We also discuss how these methods can be applied in the validation of multi-sensor systems.
https://doi.org/10.1142/9789812811684_0019
A common concern in metrology is the characterisation of the systematic departure from ideal behaviour of an instrument. In recent years, a generic approach has been developed for addressing this type of problem. It involves a) a mathematical model of the nominal behaviour, b) a model of the departure from nominal and c) measurement data {xi} incorporating calibration information. A mathematical model of the instrument is developed and the relationship of the model to the measurement data encapsulated in a numerical simulation of the measurement set-up. This simulation system can be used to examine firstly, the system identifiability, i.e., the extent to which all the model parameters can be determined from a particular measurement strategy, and secondly, the system effectiveness, providing an estimate of how well the parameters are determined relative to uncertainties in the data. Although the construction of the numerical simulation may appear a significant overhead, in practice all its components are required for subsequent analysis of the experimental data.
As an example of this approach, we consider the geometrical characterisation of a flexible arm co-ordinate measuring machine (FCMM) from repeated measurements of calibrated artefacts. The FCMM consists of a robotic arm with a number of links connected by revolute joints, with angle encoders at each joint and a probe head mounted at the end of the last link. The mathematical model accounts for its non-perfect geometry, e.g., the rotation axes of the joints are only nominally orthogonal to the connected links. Through numerical simulations using the model of the FCMM, it has been possible to examine system identifiability and effectiveness. The analysis shows that the system is fully identifiable with the exception of four parameters that can be related to defining a frame of reference and associated symmetries. These degrees of freedom can be eliminated by constraining the model in a natural way. The effectiveness of measurement strategies using the constrained model can be examined by looking at the standard uncertainty of the distance between probe locations.
https://doi.org/10.1142/9789812811684_0020
The estimation of the covariance matrix of individual standards by means of comparison measurements is evaluated in the case of atomic clocks. Since the problem is underdetermined, two different objective functions for estimating the individual instability are introduced and examined from the viewpoint of their analytical and numerical properties. Applications to simulated and real clock data show the capabilities of the two proposed methods.
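For three clocks the classical "three-cornered hat" gives a closed-form starting point: with s_ij² the variance of the comparison between clocks i and j, the individual variance of clock 1 is σ1² = (s12² + s13² − s23²)/2, and similarly by permutation. The sketch below uses simulated white-noise clocks only; the paper's objective-function methods address the general case (including possible negative variance estimates).

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100_000
sigma_true = np.array([1.0, 2.0, 0.5])               # individual instabilities
clock = [s * rng.standard_normal(N) for s in sigma_true]

# Only pairwise comparisons (differences) are observable.
s12 = np.var(clock[0] - clock[1], ddof=1)
s13 = np.var(clock[0] - clock[2], ddof=1)
s23 = np.var(clock[1] - clock[2], ddof=1)

# Three-cornered-hat estimates of the individual variances.
var1 = 0.5 * (s12 + s13 - s23)
var2 = 0.5 * (s12 + s23 - s13)
var3 = 0.5 * (s13 + s23 - s12)
print("estimated sigmas:", np.sqrt([var1, var2, var3]))
print("true sigmas     :", sigma_true)
```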
https://doi.org/10.1142/9789812811684_0021
The problem of model adequacy with respect to a measurement is discussed as an iterative procedure of model improvement. The analysis of a single step of the model extension procedure is discussed in more detail. Principles of model extension are considered, including regularity, interpretability, constructiveness and effectiveness.
Two general aspects of model improvement are discussed: increasing the dimension of the model and the corresponding transformation of the model structure.
Two main formal approaches to the analysis of model extension are outlined. The first is based on functional models of the measurement problem (with appropriate metrics); the second is founded on the testing of statistical hypotheses connected with the main aspects of model adequacy.
https://doi.org/10.1142/9789812811684_0022
The need for databases in metrology is manifold, for instance, for obtaining information on materials and substances, for the storage of measurement results, and for the development of new types of measurement standards.
From the database point of view, the requirements to be met by databases in metrology are rather heterogeneous and non-systematic. Many requirements are already satisfied by current standard database technology. Ongoing developments such as Internet-based access to databases and enhanced security measures are of importance for metrological applications.
The growing need for database support in technical research and development has led to additional requirements, for instance the extension of data modeling capabilities.
Requirements like the support of measurement data archives and the management of new types of standard references are directly related to metrology.
There are manifold approaches to database solutions which meet at least part of the requirements. Four different approaches are outlined. The first, Internet-based remote data entry, is remarkable in that well-known technologies are combined to form an acceptable solution. The second and third approaches both refer to extensions of data modeling techniques; they have different backgrounds and offer different advantages. Finally, the fourth approach deals with the archiving problem, in particular with the integration of operational databases and archives. Here, research still has to be undertaken.
https://doi.org/10.1142/9789812811684_0023
In the processing of data series, such as the resistance R versus temperature T calibrations of the several thousand thermometers needed for the new LHC accelerator at CERN, automatic methods are necessary for determining the quality of the acquired data and the degree of uniformity of the characteristics of these thermometers, which are of the semiconducting type. In addition, it must be determined whether the calibration uncertainties comply with the specifications over the wide temperature range 1.6–300 K.
Advantage has been taken of the fact that these thermometers represent a population with limited variability to apply a Least Squares Method with Fixed Effect. This allows the data of all the thermometers to be fitted together, by taking into account the individuality of each thermometer in the model as a deviation from one of them taken as reference.
The paper shows this method applied to different stages of the data processing: first, for efficient compensation of the thermal drift occurring during acquisition, robust against the occurrence of outliers; second, for the detection of clusters of thermometers with inherently different characteristics; and finally, for the optimisation of the calibration-point distribution.
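A much simplified sketch of fixed-effect least squares for a family of similar sensors: all thermometers share one common characteristic curve, and every thermometer other than the reference one contributes a small individual deviation term estimated in the same fit. The model, basis and data below are invented and are not the LHC thermometer model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_th, n_pts = 5, 40
T = np.linspace(2.0, 300.0, n_pts)
logT = np.log(T)

# Common characteristic plus small individual offsets (thermometer 0 = reference).
common = 3.0 - 1.2 * logT + 0.05 * logT**2
offsets = np.concatenate([[0.0], rng.normal(0.0, 0.02, n_th - 1)])

rows, y = [], []
for k in range(n_th):
    for p, lt in enumerate(logT):
        basis = [1.0, lt, lt**2]                                 # common-curve basis
        dev = [1.0 if k == j else 0.0 for j in range(1, n_th)]   # fixed effects
        rows.append(basis + dev)
        y.append(common[p] + offsets[k] + rng.normal(0.0, 0.005))

theta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
print("common-curve coefficients:", theta[:3])
print("individual deviations    :", theta[3:])
```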
https://doi.org/10.1142/9789812811684_0024
A flexible and accurate method for separating spindle error motion and workpiece roundness is presented. The method makes use of three or more displacement probes; angle measuring probes can also be used. The angular positions of the probes, as well as errors in sensor amplification, are determined directly from the measurement data and require no extra measurements. The method can be used for real-time runout measurements with nanometer accuracy.
https://doi.org/10.1142/9789812811684_0025
The work presented shows the typical pathway of translating a measurement problem into a piece of software, suitable for daily use. The application of independent methods throughout the process is discussed, as it forms the backbone of the validations necessary in each step of the development process.
https://doi.org/10.1142/9789812811684_0026
In this paper, we examine the role of prior probabilities in the analysis of measurement data. This leads us to a Bayesian approach, in which the interpretation of probability differs from the classical, frequentist view. Bayesian techniques allow greater scope for information to be incorporated into the analysis. In particular, they give us greater capabilities in data fusion.
We examine the case where a Bayesian approach enables us to incorporate a constraint (such as positivity) on a Gaussian random variable into the calculation of confidence intervals. This shows the ease with which constraints, and other forms of information, can be fused into experimental measurement data.
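A minimal sketch of the constrained-Gaussian case: if the state of knowledge about a quantity known to be non-negative is a Gaussian with estimate x̂ and standard uncertainty u, imposing the positivity constraint leads to a normal distribution truncated at zero, whose quantiles give the coverage interval. The numerical values below are illustrative only.

```python
from scipy.stats import norm, truncnorm

x_hat, u = 0.5, 1.0          # estimate close to the physical boundary at zero

# Unconstrained 95 % interval may extend into the forbidden (negative) region.
print("unconstrained:", norm.interval(0.95, loc=x_hat, scale=u))

# Positivity constraint: truncate the Gaussian at zero (a = (0 - x_hat)/u).
a, b = (0.0 - x_hat) / u, float("inf")
constrained = truncnorm(a, b, loc=x_hat, scale=u)
print("constrained  :", constrained.interval(0.95))
```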
https://doi.org/10.1142/9789812811684_0027
The ISO/BIPM Guide to the Expression of Uncertainty in Measurement (GUM) describes a method for evaluating the uncertainty associated with a measurement result. It is still an ongoing challenge to adapt the Guide to the different fields of metrology. In chemical analysis, results from different measurements must often be combined. This paper discusses cases where correlation can have an important influence on the uncertainty of the result.
A scheme is presented for the calculation of the correlation between results using their uncertainty budgets. Implementing a model that includes the correlation can significantly change the importance of some parameters. It also gives the analyst a better understanding of the major sources of uncertainty in the measurement process.
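A small sketch of this kind of calculation for two hypothetical results y1 and y2 whose uncertainty budgets share one common input (for instance the same calibration standard): the covariance is the sum, over the shared components, of the products of the sensitivity-weighted uncertainty contributions, and the correlation coefficient follows by normalisation. The budget numbers below are invented.

```python
import numpy as np

# Hypothetical uncertainty budgets: contribution of each input to each result,
# expressed as sensitivity coefficient times standard uncertainty (c_i * u(x_i)).
#                      shared std.  independent effects
contrib_y1 = np.array([0.030,       0.040, 0.000])
contrib_y2 = np.array([0.030,       0.000, 0.025])
shared     = np.array([True,        False, False])

u_y1 = np.sqrt(np.sum(contrib_y1**2))
u_y2 = np.sqrt(np.sum(contrib_y2**2))
cov = np.sum(contrib_y1[shared] * contrib_y2[shared])   # only common inputs
corr = cov / (u_y1 * u_y2)
print(f"u(y1) = {u_y1:.4f}, u(y2) = {u_y2:.4f}, r(y1, y2) = {corr:.3f}")
```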
https://doi.org/10.1142/9789812811684_0028
A set of international key comparisons provides the basis for the so-called Mutual Recognition Arrangement. Some examples of results for such key comparisons are given in this paper.
https://doi.org/10.1142/9789812811684_0029
One of the most popular methods for solving the ℓ1 data fitting problem is the Barrodale-Roberts (BR) simplex algorithm, which is based on linear programming (LP) techniques. Much of the reason for the popularity of the BR algorithm is that it exploits characteristics of the ℓ1 approximation in order to solve the problem in a more efficient manner than the general simplex approach. However, it is a simplex-based method, and so it is susceptible to numerical instabilities caused by the use of inappropriate pivots. The new method that we present here uses the highly efficient pivoting strategy of the BR algorithm. However, rather than using the complete simplex tableau, we reconstruct only the parts of the simplex tableau that are needed at each step. This allows the use of numerically stable updates to be made and avoids the unnecessary build-up of rounding errors. This new algorithm is particularly efficient when the observation matrix is large and sparse.
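The BR pivoting scheme itself is not sketched here, but the underlying LP formulation is simple to state: splitting each residual into non-negative parts u_i − v_i and minimising Σ(u_i + v_i) subject to Ax + u − v = b yields the ℓ1 solution. The sketch hands this LP to a general-purpose solver purely for illustration, with invented data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)
m, n = 60, 3
A = rng.standard_normal((m, n))
b = A @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(m)
b[::10] += 5.0                                  # a few gross outliers

# Variables z = [x, u, v];  minimise sum(u) + sum(v)  s.t.  A x + u - v = b.
c = np.concatenate([np.zeros(n), np.ones(2 * m)])
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
bounds = [(None, None)] * n + [(0, None)] * (2 * m)

res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")
print("L1 estimate :", res.x[:n])
print("LS estimate :", np.linalg.lstsq(A, b, rcond=None)[0])
```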
https://doi.org/10.1142/9789812811684_0030
Support vector machine (SVM) regression is a new tool for the approximation of a data set comprising two or more sources of data of the same type but with different noise levels. These types of data set occur frequently in metrology, e.g., when measurements from coordinate measuring machines (CMMs) of differing precisions are merged. More generally, these data sets are found in a variety of problems of "data fusion," the collective name given to situations in which we wish to combine data obtained from multiple and/or different sources. Therefore, SVM regression has potential applications in a number of metrology and data fusion problems. In this paper, we introduce the topic of SVM regression and discuss the generic types of data fusion problems that it can solve. We show that SVM regression in its most common form can be formulated mathematically as a quadratic programming problem. Furthermore, we consider how SVM regression can be extended to solve more general problems, as well as discussing when it is appropriate to use these general techniques.
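A small illustration of the simplest fusion case, assuming two sources measuring the same quantity with different noise levels; scikit-learn's SVR is used here purely as an off-the-shelf stand-in for the quadratic programming formulation described in the paper, with per-point weights reflecting the relative precision of the two sources.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)
x = np.linspace(0.0, 1.0, 120)[:, None]
truth = np.sin(2 * np.pi * x).ravel()

# Source 1: precise instrument; source 2: noisy instrument (same quantity).
noise = np.where(np.arange(x.size) % 2 == 0, 0.02, 0.2)
y = truth + noise * rng.standard_normal(x.size)

# Weight each observation by its (relative) precision, 1/variance.
weights = 1.0 / noise**2

model = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=10.0)
model.fit(x, y, sample_weight=weights)
print("RMS error of fused fit:", np.sqrt(np.mean((model.predict(x) - truth) ** 2)))
```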
https://doi.org/10.1142/9789812811684_0031
A new computerised system has been developed at IMGC to regulate the helium pressure in gas-controlled heat-pipes. Ad hoc software has been developed to reach a high level of automation in the whole acquisition and control process.
The required pressure sensitivity calls for time control well within a few milliseconds. This goal has been achieved by using two computers connected through a serial line. Both running a Windows operating system, the two PCs work in a master/slave configuration. The master PC (a 450 MHz Pentium III machine) drives three automatic bridges connected to the Platinum Resistance Thermometers (PRTs) and acquires all the temperature information needed inside the heat-pipes. All acquired data are used to determine the pressure value of the helium gas inside the heat-pipes. The slave PC (a 33 MHz 486 machine) receives this pressure set point from the master and starts all the required operations. This slave PC drives all the pressure valves and a d.c.-motor-driven bellows, and reads pressure data from the pressure gauges. This closed-cycle automated process is realised entirely via Visual Basic® 5 applications and projects.
Many modules and virtual consoles have been developed to let the whole system work without a human operator. At the same time, all control and acquisition processes can still be operated manually on the same virtual consoles (ActiveX-modules): one for each instrument and others for specialised actions, such as valve or bellows real time control and temperature or pressure data acquisition.
A general outline of the ActiveX® modules and VB projects is given together with a description of the acquisition and control cycle.
https://doi.org/10.1142/9789812811684_0032
The discrete wavelet transform (DWT) is a well-known standard tool in data communication. In that field, its role is usually reduced to data compression, resulting in a shortening of signal and image files. The capabilities of the wavelet transform are, however, considerably extended when advantage is taken of its intrinsic mathematical structure instead of using it in a black-box manner.
We applied the two-dimensional DWT to a specific problem of the assessment of the thickness of material layers. Such problems are of significance in the quality control of surface treatment, for instance. The measurement data are recorded by a scanning probe microscope which represents the different magnetic properties of the layers on a cross-section of the material examined. Although our problem does not appear to be an ideal field of application of wavelets, we successfully used DWT when handling some specific items of the assessment of cross-section images.
After briefly summarizing the principles of the wavelet transform, in particular those of the 2D multi-scale analysis, the paper deals with useful manipulations of the transformed data – the so-called wavelet coefficients. Comparing the image of the original data with the image reconstructed from the coefficients by the inverse transform shows the changes caused by such manipulations, even in cases where the inverse transform is not actually a necessary step in the data processing.
It turns out that the splitting into vertical, horizontal and diagonal detail coefficients achieved by the 2D DWT is a particularly useful tool for analysing anisotropic structures. Essential parts of the systematic horizontal disturbances due to the direction of the scanning lines of the measurement equipment are separated and suppressed in this way.
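A sketch of this sub-band manipulation using the PyWavelets package (an assumption on our part, not the software used by the authors): one level of 2-D DWT splits a synthetic test image into approximation and three detail sub-bands, and the detail sub-band dominated by the scan-line structure is attenuated before inverse transformation. Which of the three sub-bands that is depends on the scan direction and on the library's orientation convention, so the sketch simply picks the detail band with the largest energy.

```python
import numpy as np
import pywt

# Synthetic cross-section image: a smooth layer transition along the columns,
# plus a scan-line disturbance alternating from row to row.
image = np.outer(np.ones(128), np.tanh(np.linspace(-3.0, 3.0, 128)))
image += 0.3 * (-1.0) ** np.arange(128)[:, None]

# One level of 2-D DWT: approximation + (horizontal, vertical, diagonal) details.
cA, (cH, cV, cD) = pywt.dwt2(image, "db2")

# Suppress the detail sub-band carrying most energy (here, the stripe structure).
details = [cH, cV, cD]
idx = int(np.argmax([np.sum(d**2) for d in details]))
details[idx][:] = 0.0
cleaned = pywt.idwt2((cA, tuple(details)), "db2")

print("stripe contrast before:", np.var(image.mean(axis=1)))
print("stripe contrast after :", np.var(cleaned.mean(axis=1)))
```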
https://doi.org/10.1142/9789812811684_0033
Drilling cooling holes in turbine blades is a complex process in which errors can be introduced inadvertently at several stages of the process, and isolating specific errors can be difficult and time consuming. Some of the mathematical errors can be identified by simulating the process using the Coordinate Measuring Machine software to create a virtual drilling machine. The virtual machine can be used to test machining offset data for both virtual blades and real blades, which can help to pinpoint faults in several parts of the system.
https://doi.org/10.1142/9789812811684_0034
In the automatic acquisition of thermal data, a valid sample is generally made up of several instrumental readings. These readings are usually reduced to a single value by simple methods, such as averaging. To avoid ambiguous results, or a computation time that is too long for the experimental purposes, the data acquisition has to be performed automatically and should reproduce the physical property as independently as possible of outlier values. The paper introduces an algorithm, named sequence-analysis outlier rejection (SAOR), that takes into account the most usual problems affecting the measurand during acquisition, i.e. a non-linear drift and sequences of outliers due to noise peaks. The algorithm uses the ordering of the sequence and the "distances" between successive readings. The case of equispaced data is discussed. Results of tests performed for this case are reported, using simulated thermal data affected by sequences of outliers.
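A much simplified sketch of this kind of sequence-based screening (not the published SAOR algorithm): for equispaced readings, the jump from the last accepted reading is compared with a robust scale estimate of the successive differences, so that noise peaks and short outlier sequences are flagged while a slow drift is retained. The simulated readings are invented for illustration.

```python
import numpy as np

def flag_outliers(readings, k=6.0):
    """Flag readings whose jump from the last accepted reading is abnormal."""
    steps = np.abs(np.diff(readings))
    scale = np.median(steps)              # robust scale of successive differences
    flags = np.zeros(readings.size, dtype=bool)
    last_good = readings[0]
    for i in range(1, readings.size):
        if abs(readings[i] - last_good) > k * scale:
            flags[i] = True               # candidate member of an outlier sequence
        else:
            last_good = readings[i]
    return flags

# Simulated equispaced thermal readings: slow non-linear drift, small noise
# and a short sequence of outliers caused by a noise burst.
rng = np.random.default_rng(11)
t = np.arange(200.0)
readings = 100.0 + 0.002 * t + 5e-6 * t**2 + 0.0005 * rng.standard_normal(t.size)
readings[80:83] += 0.2
print("flagged readings:", np.where(flag_outliers(readings))[0])
```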
https://doi.org/10.1142/9789812811684_0035
Radioactivity comparisons are used to confirm the accuracy of measurement methods or to compare new methods. Since the signing of the Mutual Recognition Arrangement (MRA), there has been greater interest in a more critical analysis of the outcome of comparisons. In radioactivity, the unique features of every radionuclide (decay mode, decay scheme, energies and emission probabilities) render the task more demanding. However, the International Reference System (SIR) has allowed a rapid and continuing normalisation of the activity measurements carried out in National Metrology Institutes (NMIs) since 1976. It has been agreed internationally that this is the appropriate tool for assessing equivalence between NMIs. The MRA has increased the significance of the SIR and the interest shown in it by the participating laboratories. This paper presents methods for evaluating the equivalence between laboratories using the SIR results. Some examples are given.
https://doi.org/10.1142/9789812811684_0036
The UK's National Measurement System programme entitled "Software Support for Metrology" (SSfM) has nearly completed its three-year term. The plans for the follow-on programme, to run from April 2001 to March 2004, are well developed. This paper presents one of the more important achievements of the current SSfM programme – the METROS web site – which will become the principal delivery mechanism for the next SSfM programme. METROS is a metrology software web site that gives metrologists and software engineers access to a library of algorithms, and their software implementations, relevant to metrology applications. International collaboration is needed to ensure that METROS is both appropriately populated (with algorithms, software implementations, reference data sets and reference results, and guidance material) and widely promoted.
https://doi.org/10.1142/9789812811684_0037
We present an algorithm for fitting parametric curves to range data. Such data arise when measuring devices can only measure the distance to a measured point, rather than measuring the position of the point exactly. Fitting parametric curves to such data has applications in a variety of fields, including tracking and metrology. The algorithm presented has two main stages: the first is to determine the points measured, and the second is to use these points to fit a parametric curve. We describe the algorithm used and illustrate its use when applied to fitting a circle to range data. Extension to more complex parametric curves is discussed, and numerical results are displayed to show the usefulness of the technique.
https://doi.org/10.1142/9789812811684_0038
Analysis and review is underway to determine which calibration and measurement capabilities put forward by National Metrology Institutes will be included in the Appendix C database of the CIPM Mutual Recognition Arrangement (MRA). Thousands of decisions are required now, and this level of activity can be expected to continue for the foreseeable future. For those capabilities with a parallel entry in the Appendix B database of Key Comparison results, simple and defensible criteria for acceptance are available with the Quantifying Demonstrated Equivalence confidence interval formalism.
https://doi.org/10.1142/9789812811684_0039
A software toolkit which simplifies the task of generating tables of equivalence from Key Comparison data has been written in Visual Basic for use with Excel. Its several simple functions for performing routine statistical analysis of comparison data are discussed, and a more complex "bilateral equivalence matrix" generator macro is demonstrated. This macro can be customized to provide more in-depth analysis of the pairwise differences, and additional analysis within the framework of the Quantified Demonstrated Equivalence formalism is discussed and illustrated through the use of worked examples using real comparison data.
https://doi.org/10.1142/9789812811684_0040
The preparation and certification of matrix reference materials is revisited. The relationships between the preparation, homogeneity and stability testing, and the characterisation of reference materials are outlined in terms of measurement uncertainty. Based on metrological concepts, a statistical model is developed to describe the preparation and certification process, which leads to the property values and their uncertainties.
https://doi.org/10.1142/9789812811684_0041
If data fusion is interpreted as the combination of results from different sources, then the final step in the certification of a reference material is probably a typical example of this concept. At this final stage in the preparation and certification of a reference material, the data obtained during homogeneity and stability testing and the data from the characterisation must be brought together to obtain the property value and its expanded uncertainty. Some of the common problems encountered during this process are reviewed.
https://doi.org/10.1142/9789812811684_0042
When a measurement has to be traceable, the measurement uncertainty must be known. For measurements on coordinate measuring machines (CMMs), it can be difficult to calculate the measurement uncertainty. A method is needed that can deal with the complex error structure of a CMM and with complex measurement tasks. Among other things, this method has to deal adequately with the autocorrelation behaviour of the error structure. The method of surrogate data, applied in multiple dimensions, appears to be a suitable way to deal with this problem.
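One standard way of generating surrogate error data with (approximately) the same autocorrelation as an observed error trace is phase randomisation: keep the amplitude spectrum of the measured errors, randomise the Fourier phases and transform back. A minimal one-dimensional sketch with simulated data follows; the CMM application requires the multidimensional analogue.

```python
import numpy as np

def surrogate(errors, rng):
    """Surrogate series with (approximately) the same autocorrelation/spectrum."""
    spectrum = np.fft.rfft(errors - errors.mean())
    phases = rng.uniform(0.0, 2.0 * np.pi, spectrum.size)
    phases[0] = 0.0                      # keep the (zero) mean component real
    randomized = np.abs(spectrum) * np.exp(1j * phases)
    return np.fft.irfft(randomized, n=errors.size) + errors.mean()

rng = np.random.default_rng(12)
# Illustrative strongly autocorrelated error trace (AR(1) process).
e = np.zeros(1024)
for i in range(1, e.size):
    e[i] = 0.95 * e[i - 1] + rng.standard_normal()

s = surrogate(e, rng)
print("original  lag-1 autocorrelation:", np.corrcoef(e[:-1], e[1:])[0, 1])
print("surrogate lag-1 autocorrelation:", np.corrcoef(s[:-1], s[1:])[0, 1])
```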
https://doi.org/10.1142/9789812811684_0043
The expression of uncertainty in a measurement process is studied by using computationally intensive methods. The case of interest is represented by a multivariate model, and the level of confidence of a probability region is estimated by combining a Monte Carlo iteration with a bootstrap resampling algorithm. The numerical results obtained for an experimental model are presented and compared with those obtained by analytical procedures in circumstances where such a comparison has been possible.
https://doi.org/10.1142/9789812811684_0044
After the workshop held in Oxford in 1999, this Special Interest Group (SIG) met for a second time as a joint event of the AMCTM workshop. D. Richter (PTB, Germany) again chaired the SIG meeting. This time, databases and information systems were the focal issues. The aim was to present solutions and experience and to raise awareness of both the benefits and the risks of databases in metrology …
https://doi.org/10.1142/9789812811684_0045
https://doi.org/10.1142/9789812811684_0046
This paper contains an edited account of a round table discussion on interlaboratory comparisons that took place at AMCTM2000, the Advanced Mathematical Tools for Metrology conference held in Lisbon, Portugal, in May 2000. The discussion topics include the interpretation of the CIPM Mutual Recognition Arrangement, the impact of the Guide to the Expression of Uncertainty in Measurement, the determination of key comparison reference values, artefact selection, constrained quantities, and sequence data.
https://doi.org/10.1142/9789812811684_bmatter
Author Index