Machine vision technology has attracted strong interest among Finnish research organizations, resulting in many innovative products for industry. Despite this, end users were very skeptical about machine vision and its robustness in harsh industrial environments. Therefore the Technology Development Centre (TEKES), which funds technology-related research and development projects in universities and individual companies in Finland, decided to start a national technology program, “Machine Vision 1992–1996”.
Led by industry, the program boosts research in machine vision technology and seeks to put the research results to work in practical industrial applications. The emphasis is on nationally important, demanding applications. The program will create new business for machine vision producers and encourage the process and manufacturing industries to take advantage of this new technology. So far 60 companies and all the major universities and research centers in Finland are working on forty different projects. The key themes are Process Control, Robot Vision and Quality Control.
Since the design of an inspection system typically requires a great deal of application-dependent work, systematic methods and tools to assist in the design process could significantly reduce system development and installation time. With this in view, a step-by-step design procedure for image acquisition systems is suggested, consisting of: measurement of certain important optical parameters of the surfaces to be inspected; modelling of the measurements and of the imaging arrangement in a form that a computer can understand; simulation of the imaging process using optical analysis tools; and verification of the results with a pilot system. The procedure is exemplified by describing its application to the design of a steel sheet inspection system, and its capacity for optimising the detection of various defects is demonstrated. For comparison, measurements made on some other materials are shown and the implications discussed. The results of the simulation and of the pilot system for steel are compared, and the usefulness of the computer-based design method is evaluated.
Much research is currently under way on the processing of one- and two-camera imagery, possibly combined with other sensors and actuators, with a view to achieving attentive vision, i.e. selectively processing some parts of a scene, possibly at a different resolution. Attentive vision is in turn an element of active vision, in which the outcome of the image processing triggers changes in the image acquisition geometry and/or in the environment. Almost all of this research assumes classical imaging, scanning and conversion geometries, such as raster-based scanning and processing of several digitized outputs on separate image processing units.
A consortium of industrial companies, comprising Digital Equipment Europe, Thomson CSF and a few others, has taken a more radical view of this. To meet active vision requirements in industry, an intelligent camera is being designed and built, comprising three basic elements:
– a unique Thomson CSF CCD sensor architecture with random addressing
– the DEC Alpha 21064 275MHz processor chip, sharing the same internal data bus as the digital sensor output
– a generic library of basic image manipulation, control and image processing functions, executed directly in the sensor-internal bus-processor unit, so that only higher-level results or commands are exchanged with the processing environment.
Extensions to color imaging (with lower spatial resolution) and to stereo imaging are relatively straightforward. The basic sensor is 1024×1024 pixels with 2×10-bit addresses, and a 2.5 ms (400 frames/second) image data rate compatible with the Alpha bus and 64-bit addressing. For attentive vision, several connected fields of at most 40 000 pixels and at least 5×3 pixels can be read and addressed within each 2.5 ms image frame. Readout is nondestructive, and the 64-bit image processing addressing allows 8 full pixel readouts in a single word.
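As a rough illustration of the addressing scheme described above, the following sketch packs 8 pixel readouts into one 64-bit word and encodes a 2×10-bit random-access address. The bit layout and helper names are assumptions for illustration, not part of the actual camera design:

```python
def pack_pixels(pixels):
    """Pack up to 8 pixel values (0-255) into a single 64-bit integer.
    Pixel 0 occupies the low byte (an assumed layout)."""
    assert len(pixels) <= 8
    word = 0
    for i, p in enumerate(pixels):
        assert 0 <= p <= 255
        word |= p << (8 * i)
    return word

def unpack_pixels(word, n=8):
    """Recover n pixel values from a packed 64-bit word."""
    return [(word >> (8 * i)) & 0xFF for i in range(n)]

def encode_address(row, col):
    """Random-access address for a 1024x1024 sensor: two 10-bit fields."""
    assert 0 <= row < 1024 and 0 <= col < 1024
    return (row << 10) | col

pixels = [10, 20, 30, 40, 50, 60, 70, 80]
word = pack_pixels(pixels)
assert word < 2**64
assert unpack_pixels(word) == pixels
```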
The main difficulties have been identified as the access and reading delays, the signal levels, and dimensioning of some buffer arrays in the processor.
The commercial applications targeted initially will be in industrial inspection, traffic control and document imaging. In all of these fields, selective position-dependent processing takes place, followed by feature-dependent processing.
Very large savings are expected both in solution costs to end users and in development time, as well as major performance gains for the ultimate processes. The reader will appreciate that at this stage no further implementation details can be given.
Texture analysis has many areas of potential application in industry. The problem of determining composition of grain mixtures by texture analysis was recently studied by Kjell. He obtained promising results when using all nine Laws' 3×3 features simultaneously and an ordinary feature vector classifier. In this paper the performance of texture classification based on feature distributions in this problem is evaluated. The results obtained are compared to those obtained with a feature vector classifier. The use of distributions of gray level differences as texture measures is also considered.
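The two feature families mentioned above can be sketched briefly: the nine Laws' 3×3 masks arise as outer products of three 1-D kernels, and gray-level difference distributions are simply histograms of pixel differences at a fixed displacement. This is a minimal NumPy illustration, not the paper's actual implementation; function names and the displacement parameter are assumptions:

```python
import numpy as np

# Laws' 1-D kernels; their outer products give the nine 3x3 masks.
L3 = np.array([1, 2, 1])    # level
E3 = np.array([-1, 0, 1])   # edge
S3 = np.array([-1, 2, -1])  # spot
masks = [np.outer(a, b) for a in (L3, E3, S3) for b in (L3, E3, S3)]
assert len(masks) == 9

def filter2d(img, k):
    """Valid-mode 3x3 sliding-window correlation, plain NumPy."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def texture_features(img):
    """Mean absolute response to each Laws mask -> 9-D feature vector."""
    return np.array([np.mean(np.abs(filter2d(img, m))) for m in masks])

def gray_level_diff_hist(img, d=1, bins=16):
    """Normalized histogram of horizontal gray-level differences at displacement d."""
    img = img.astype(float)
    diffs = np.abs(img[:, d:] - img[:, :-d]).ravel()
    hist, _ = np.histogram(diffs, bins=bins, range=(0, 255))
    return hist / hist.sum()
```

A feature-vector classifier would operate on the 9-D `texture_features` output, while the distribution-based approach compares the histograms directly.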
The use of machine vision technology is being investigated at VTT for improving the colour quality and productivity of web offset printing. The visual inspection of colour quality is performed by a colour CCD camera which traverses the moving web under a stroboscopic light. The measuring locations and goal values for the colour register, the ink density and the grey balance are automatically determined from the PostScript™ description of the digital page. A set of criteria is used to find the most suitable spots for the measurements. In addition to providing data for on-line control, the page analysis estimates the zone-wise ink consumption of the printing plates as a basis for presetting the ink feed. Target colorimetric CIE values for grey balance and critical colours are determined from the image originals. The on-line measurement results and their deviations from the target values are displayed in an integrated manner. The paper gives test results for computation times, measurements of register error with and without test targets, and the colour measuring capabilities of the system. The results show that machine vision can be used for on-line inspection of colour print quality. This makes it possible to upgrade older printing presses to produce a colour quality that is competitive with more modern presses.
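Deviations from target CIE values of the kind mentioned above are commonly expressed as a colour difference; a minimal sketch using the CIE76 ΔE*ab formula is shown below (the formula choice and sample values are assumptions, not taken from the paper):

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two CIELAB triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical target and measured L*a*b* values for a critical colour.
target = (50.0, 10.0, -10.0)
measured = (51.0, 12.0, -9.0)
print(round(delta_e76(target, measured), 3))  # -> 2.449
```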
Optical coordinate measurement systems will benefit from developments similar to those taking place in robots, namely off-line programming capability and intelligent sensors. Research work towards these goals is reported here. An experimental system with CAD model-based measurement planning and vision control has been constructed and its feasibility demonstrated. The system comprises a measurement planning tool, a measurement robot with vision guidance and a means of visualizing and comparing the measured results with the design data. The measurement planning tool is based on a commercial CAD system and enables the use of existing CAD models of the objects to be measured as a basis for planning. It generates a measurement model file (MMF) containing instructions for controlling the measurement robot, which is an optical coordinate measurement device based on the laser rangefinder principle and supplemented with a vision system for guiding the measurement to planned target points. The measured coordinate values can be compared with design values either graphically or numerically. The performance of the experimental system was demonstrated and evaluated. In the demonstration case a measurement sequence was planned and saved in the form of a MMF, whereupon the measurement robot was able to execute the sequence reliably according to the MMF and measure the planned target points with vision guidance. The pointing repeatability of the vision guidance function was 0.019 mRad (standard deviation), which is equal to 0.19 mm at a distance of 10 meters, and the corresponding pointing accuracy was better than 0.04 mRad. The 3D measurement had an average repeatability of 0.3 mm (standard deviation), and the absolute accuracy of averaged measurement results was better than ±1 mm (for x, y and z) in 81% of the cases. The next phase of the work will include piloting the system in an industrial application.
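The conversion between angular repeatability and lateral error quoted above follows from the small-angle relation (lateral error ≈ angle in radians × distance); a one-line check, with the function name being an illustrative assumption:

```python
def pointing_error_mm(angle_mrad, distance_m):
    """Lateral pointing error in mm for an angular error (mrad) at a given range (m)."""
    return angle_mrad * 1e-3 * distance_m * 1e3  # mrad -> rad, then m -> mm

# The abstract's figures: 0.019 mrad repeatability -> 0.19 mm at 10 m.
assert abs(pointing_error_mm(0.019, 10.0) - 0.19) < 1e-9
```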
This paper presents the design and development of a real-time eye-in-hand stereovision system to aid robot guidance in a manufacturing environment. The stereo vision head comprises a novel camera arrangement with servo-vergence, focus, and aperture that continuously provides high-quality images to a dedicated image processing system and parallel processing array. The stereo head has four degrees of freedom but it relies on the robot end-effector for all remaining movement. This provides the robot with exploratory sensing abilities allowing it to undertake a wider variety of less constrained tasks. Unlike other stereo vision research heads, the overriding factor in the Surrey head has been a truly integrated engineering approach in an attempt to solve an extremely complex problem. The head is low cost, low weight, employs state-of-the-art motor technology, is highly controllable and occupies a small-sized envelope. Its intended applications include high-accuracy metrology, 3-D path following, object recognition and tracking, parts manipulation and component inspection for the manufacturing industry.
We have developed an algorithm for unsupervised adaptive classification based on a finite number of “prototype populations” with distinctly different feature distributions, each representing a typically different source population of the inspected products. Intermittently updated feature distributions of samples collected from the currently classified products are compared to the distributions of the pre-stored prototype populations, and the system accordingly switches to the most appropriate classifier. The goal of our approach is similar to the objectives of the previously proposed “Decision Directed” adaptive classification algorithms, but our solution is particularly suitable for automatic inspection and classification on a production line, where the inspected items may come from a finite number of distinctly different sources.
The recognition of prototype populations, as well as the classification task proper, may be implemented by conventional classifiers; however, neural networks (NNs) are advantageous in two respects. First, there is no need to develop separate mathematical models for each classifier, because the NN builds them automatically during the training stage. Second, the parallel structure of NNs has the potential for very fast real-time classification if implemented on dedicated parallel hardware. This is particularly important for high-speed automatic sorting on a production line.
The practical feasibility of the approach was demonstrated by two applied examples, in which two prototype populations of apples are recognized and sorted by size and color derived by machine vision. Three “Boltzmann-Perceptron Networks” (BPNs) were used: one to recognize the prototype populations, while the system switches between the other two to optimally classify apples into two size and color categories. It is shown that adaptive classification reduces misclassifications in comparison with non-adaptive classification.
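The switching step described above amounts to comparing the current sample's feature distribution with each stored prototype distribution and selecting the nearest one. A minimal sketch, assuming a chi-square histogram distance (the distance measure and names are assumptions, not the paper's method):

```python
import numpy as np

def chi2_distance(p, q, eps=1e-12):
    """Chi-square distance between two normalized histograms."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def select_classifier(sample_hist, prototype_hists):
    """Index of the prototype population whose feature distribution is closest."""
    dists = [chi2_distance(sample_hist, h) for h in prototype_hists]
    return int(np.argmin(dists))

# Two hypothetical prototype distributions and a freshly collected sample.
proto = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.2, 0.7])]
sample = np.array([0.6, 0.3, 0.1])
assert select_classifier(sample, proto) == 0  # switch to classifier 0
```

In operation, `sample_hist` would be re-estimated intermittently from the stream of inspected items, so the system tracks changes in the source population.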
Owing to their advantages, omni-directional mobile robots have found many applications, especially in robotic soccer competitions. However, the omni-directional navigation system, omni-vision system and kicking mechanism of such mobile robots have not previously been combined. This gives rise to the idea of a robot with no head direction: a comprehensive omni-directional mobile robot. Such a robot can respond more quickly, and it is capable of more sophisticated behaviors by using a multi-sensor data fusion algorithm for global localization. Despite recent advances, effective control and self-localization methods for omni-directional mobile robots remain important and challenging issues. For this purpose, we utilize sensor data fusion in the control system parameters, self-localization and world modeling. A vision-based self-localization system and the conventional odometry system are fused for robust self-localization. The methods have been tested on middle-size robots in many RoboCup competition fields. The localization algorithm includes filtering, sharing and integration of the data for the different types of objects recognized in the environment. The paper focuses on the omni-directional mechanism, the mechanical structure, the omni-vision sensor for object detection, robot path planning, and other subjects related to the mobile robot's software.
An automated system for measuring the alignment accuracy of an exposed photosensitive film resist circuit pattern on a metalized ceramic substrate is described. The system is robust and capable of handling low-contrast images with high noise levels and varying degrees of degradation of the circuit pattern. The technique presented involves estimating, with the aid of a calibrated vision system, the actual coordinates of two predefined salient features of the circuit pattern (component pads) and calculating the horizontal, vertical, and rotational deviation of the exposure mask. The vision algorithm that was implemented is detailed, and its development as an optimization problem, to satisfy the speed, accuracy, and hardware constraints of the system, is discussed. Measurements are accurate to the nearest 2.5 microns, and the processing time for each ceramic substrate is no more than sixty seconds using an IBM AT microcomputer…
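Recovering the mask deviation from two measured pad centres can be sketched as follows: rotation from the angle between the pad-to-pad vectors, and translation from the midpoint shift. This is a geometric illustration under that two-point assumption, not the paper's optimized algorithm:

```python
import math

def alignment_deviation(design, actual):
    """Horizontal, vertical and rotational deviation from two pad centres.

    design, actual: [(x1, y1), (x2, y2)] coordinates of the two pads.
    Returns (tx, ty, rotation_radians)."""
    (dx1, dy1), (dx2, dy2) = design
    (ax1, ay1), (ax2, ay2) = actual
    rot = (math.atan2(ay2 - ay1, ax2 - ax1)
           - math.atan2(dy2 - dy1, dx2 - dx1))
    tx = (ax1 + ax2) / 2 - (dx1 + dx2) / 2
    ty = (ay1 + ay2) / 2 - (dy1 + dy2) / 2
    return tx, ty, rot

# Pure translation: both pads shifted by (1, 2), no rotation.
tx, ty, rot = alignment_deviation([(0, 0), (10, 0)], [(1, 2), (11, 2)])
assert (tx, ty) == (1.0, 2.0) and abs(rot) < 1e-12
```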
This paper introduces a solution for an automatic labeling machine based on machine vision and WebService technology. The system adopts a PC+PLC control architecture. It obtains barcode information via the WebService in order to print the barcode, and carries out PCB feeding, barcode stripping, vacuum adsorption, servo positioning, barcode labeling and similar actions. The system recognizes the barcode and characters using machine vision technology, then uploads the data via the WebService, completing the whole automatic process.
The rapid development of computer vision techniques has brought new opportunities for manufacturing industries, accelerating the intelligence of manufacturing systems in terms of product quality assurance, automatic assembly, and industrial robot control. In the electronics manufacturing industry, intensive variability in component shapes and colors, background brightness, and visual contrast between components and background results in difficulties in printed circuit board image classification. In this paper, we apply computer vision techniques to detect diverse electronic components from their background images, which is a challenging problem in electronics manufacturing industries because there are multiple types of components mounted on the same printed circuit board. Specifically, a 13-layer convolutional neural network (ECON) is proposed to detect electronic components either of a single category or of diverse categories. The proposed network consists of five Convolution-MaxPooling blocks, followed by a flattened layer and two fully connected layers. An electronic component image dataset from a real manufacturing company is applied to compare the performance between ECON, Xception, VGG16, and VGG19. In this dataset, there are 11 categories of components as well as their background images. Results show that ECON has higher accuracy in both single-category and diverse component classification than the other networks.
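The layer structure described above (five Convolution-MaxPooling blocks, a flattened layer, two fully connected layers) can be traced with a shape calculation. The input size, filter counts and kernel sizes below are assumptions for illustration, not the actual ECON hyperparameters:

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    """Spatial size after a convolution (defaults give 'same' output)."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial size after max-pooling."""
    return (size - kernel) // stride + 1

size, channels = 224, 3                 # assumed input: 224x224 RGB
filters = [32, 64, 128, 256, 256]       # assumed filter counts per block
for f in filters:
    size = pool_out(conv_out(size))     # conv ('same') then 2x2 max-pool
    channels = f

flattened = size * size * channels      # input width of the first FC layer
print(size, flattened)  # -> 7 12544
```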