Welcome to I-Cube: South Africa's leading provider of License Plate Recognition, Facial Recognition & Image Analysis
TRENDS IN MACHINE VISION
The machine vision industry is progressing at a rapid rate. The following sections provide an overview of where the technology is today, and what we can expect in the near future.
Perhaps the largest trend in the industry is toward host-based processing on PCs running in a Windows environment. The ever-increasing power of standard microcomputers, combined with open architecture and standard development environments are proving sufficient for a large number of applications. This is resulting in a decline in applications using specialized, embedded vision processors.
On the camera front, analog RS-170 cameras are still the norm, but the use of digital, progressive-scan cameras is on the rise. High-resolution (megapixel) cameras are becoming more common as many frame grabbers and vision boards now support them. Other trends include capturing only a portion of the image from the detector to increase throughput, and the use of 10 and 12 bits of pixel data. Newer serial interfaces such as FireWire (IEEE-1394), capable of delivering real-time (30 Hz) full-frame image rates and more, are also expected to see increasing use.
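The partial-frame capture mentioned above can be illustrated with a minimal sketch. The function name and frame dimensions are illustrative only; 12-bit pixel data is assumed to be delivered in 16-bit words, as is typical for such cameras.

```python
import numpy as np

def capture_roi(frame, x, y, width, height):
    """Return a region of interest from a full frame.

    Reading out only the pixels of interest is how partial-frame
    capture raises throughput: less data to transfer and process.
    """
    return frame[y:y + height, x:x + width]

# A hypothetical 12-bit megapixel frame, stored in 16-bit words
# (values 0..4095), as a 12-bit digital camera would deliver it.
full_frame = np.zeros((1024, 1280), dtype=np.uint16)
full_frame[100:200, 300:400] = 4095  # a bright feature of interest

roi = capture_roi(full_frame, 280, 80, 160, 160)
print(roi.shape)  # (160, 160)
print(roi.max())  # 4095
```

Processing the 160×160 region instead of the full 1024×1280 frame cuts the per-image data volume by roughly a factor of 50.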
As for lighting, traditional sources such as fluorescents and xenon strobes are still commonplace, but LEDs are becoming the favored illumination source due to their long life and relatively low cost.
Communications options have also expanded for machine vision systems, which are often a part of a larger automation system. Some systems now support Ethernet, DeviceNet, and other factory-wide communication protocols.
Vision applications can now be developed faster than ever, thanks to open systems and industry-standard development environments such as Microsoft Visual C++ and Visual Basic. Complete vision libraries, available in .DLL and .OCX (ActiveX) formats, allow complex vision programs to be built into easy-to-use Windows applications. Reusable vision code, combined with rapid application development environments, permits fast prototyping and evaluation of new systems as well as substantially shortened development time.
New vision algorithms and improvements upon existing ones are also beginning to appear. One example is advanced pattern recognition for object identification and location, which is now far more accurate and robust than prior generations. Some new tools can now perform pattern finding on objects which vary arbitrarily in size and rotation compared to the trained pattern, and which have substantial amounts of image degradation. These new geometric-based pattern matching engines rely on object contours rather than object grayscale patterns. Another example is the increasing use of color image analysis to perform classification and inspection tasks that were previously impossible or very difficult using traditional grayscale methods.
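The point about colour analysis can be made concrete with a minimal nearest-mean-colour classifier. The class names and reference colours below are invented for illustration; a real system would learn them from sample images.

```python
import numpy as np

# Hypothetical reference colours (mean RGB per class), assumed to have
# been measured from sample images of each part type.
REFERENCE_COLOURS = {
    "red_cap":   np.array([200.0, 30.0, 30.0]),
    "green_cap": np.array([30.0, 180.0, 40.0]),
    "blue_cap":  np.array([25.0, 40.0, 190.0]),
}

def classify_by_colour(region):
    """Classify an RGB image region by its nearest reference colour.

    Distinguishing a red cap from a green one is exactly the kind of
    task that is awkward in greyscale, where both may map to nearly
    the same intensity.
    """
    mean_rgb = region.reshape(-1, 3).mean(axis=0)
    return min(REFERENCE_COLOURS,
               key=lambda name: np.linalg.norm(mean_rgb - REFERENCE_COLOURS[name]))

# A synthetic 10x10 mostly-green patch
patch = np.tile(np.array([35.0, 175.0, 45.0]), (10, 10, 1))
print(classify_by_colour(patch))  # green_cap
```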
Machine vision systems for industrial inspection can now generally be classified into three types:
- Smart camera systems
- GUI ("point and click") oriented systems
- Traditional, fully programmable systems
Smart camera systems are basic systems that process image data at the camera. They are often the minimum-cost solution but generally have the least amount of power and flexibility. They typically have little or no programmability but can reliably perform certain basic inspection tasks.
GUI systems are designed to be programmable using a point and click, graphical user interface. Some versions of these systems can generate or incorporate traditional programming code as well. They offer much of the functionality and power of the traditional systems, often with greatly shortened development cycles. In addition, they offer the advantage that system modifications can often be made by manufacturing engineers, operators, or other plant personnel.
Traditional vision systems are fully programmable in a standard language such as C, C++, or Visual Basic. Many of these systems, like their GUI counterparts, take advantage of the ActiveX programming methodology wherein vision tools are manipulated using properties, methods, and events. These systems generally offer the highest level of flexibility and power but development times are typically longer and require more experienced personnel. Traditional systems are typically used for more demanding applications and in OEM applications where lower unit cost is a critical factor.
Machine vision is a key technology for both industrial inspection and process control. Although many early machine vision installations were targeted primarily on inspection tasks, the benefits and reliability have led many companies to utilize the technology for process control as well.
The lower cost of many vision systems has opened up the market for basic inspection applications in many types of manufacturing lines. Such systems are being widely deployed for part and defect detection, general size and shape inspection, and even color discrimination.
High resolution cameras and higher-speed vision systems are leading the way to solving applications that were previously impossible, or which required multiple cameras and/or processor boards. As a result, both system accuracy and throughput are vastly improved, and fewer design tradeoffs need to be made in today's machine vision systems.
OVERVIEW OF MACHINE VISION
Machine vision is the use of optical non-contact sensing to automatically acquire and interpret images, in order to obtain information and/or control machines or processes. A typical machine vision system consists of one or more monochrome or color video cameras, lighting, vision hardware (frame grabber and/or processor board), vision software (image processing/analysis), and a computer system. Machine vision is widely deployed in industry to improve productivity and quality. Inspection systems can process both two-dimensional and three-dimensional images using grayscale or color image analysis. Common industrial uses of machine vision include assembly verification, defect detection, gauging, identification, alignment, robotic assembly and control, sorting/grading, OCR/OCV, and process control.
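The acquire-and-interpret pipeline described above can be sketched in miniature: threshold a greyscale image, then count the connected bright regions, which is the core of a basic presence-verification or counting inspection. This is a simplified sketch, not any particular vendor's implementation.

```python
import numpy as np
from collections import deque

def count_blobs(image, threshold):
    """Threshold a greyscale image and count connected bright regions
    (4-connectivity), as a basic counting inspection would."""
    binary = image > threshold
    seen = np.zeros_like(binary, dtype=bool)
    rows, cols = binary.shape
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] and not seen[r, c]:
                blobs += 1                      # new region found
                queue = deque([(r, c)])
                seen[r, c] = True
                while queue:                    # flood-fill the region
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return blobs

# Synthetic frame with two bright parts on a dark background
frame = np.zeros((50, 50), dtype=np.uint8)
frame[5:15, 5:15] = 255
frame[30:40, 30:45] = 255
print(count_blobs(frame, 128))  # 2
```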
Successful implementation of machine vision requires skill and knowledge in many different areas. Among the many disciplines involved in industrial machine vision technology are:
- Systems architecture design
- Image processing/image analysis
- Algorithms and software engineering
- Lighting, optics, and sensor technology
- Material handling
- Analog, digital, and video electronics
- Industrial and manufacturing engineering
- Quality control
In 1999, the North American machine vision market was estimated to be approximately $1.6 billion, including sales from manufacturers, system integrators and OEMs. Principal users of machine vision technology include the electronics, semiconductor, automotive, food, and pharmaceutical industries, but applications can be found in virtually every manufacturing industry.
I-Cube provides security and recognition systems across a range of industries, as the following case study illustrates:
Internal image processing system solutions boost flexibility
Anyone producing in large quantities for the motor-vehicle industry has to
ensure that their production processes run particularly smoothly and
reliably. The major global manufacturers of soot particulate filters for
the motor-vehicle industry now use In-Sight vision sensors for greater
efficiency on the production lines.
The decision at the Nienburg plant of Engelhard Technologie GmbH to rationalise the handling processes by using robots went hand-in-hand with the desire to control them precisely and reliably by means of image processing. Stäubli, the robot supplier, advised looking at the world's leading specialist in image processing, Cognex. The very first online feasibility study, held in November at Cognex, showed that a complete solution was achievable. The remaining image processing steps were solved independently by the Plant Maintenance department at the Nienburg plant. In December the entire image-processing-controlled system underwent initial start-up by the robot system integrator. Thanks to the efficient cooperation and the convenient In-Sight Explorer development environment, the pick-and-place system was up and running in less than four weeks.
Precision guarantees reliability
With this handling task, difficult conditions also had to be reliably mastered. Immediately after the furnace, the ceramic filters are forwarded automatically to the transport conveyor and then to the robot cell. The robot must be able to verify, for example, that a type 3 part has been gripped precisely and then placed on Line 2 for further processing. The soot filter product variants differ from each other only in minor details, yet they have to be handled absolutely reliably by the image processing system.
The In-Sight 5100 vision sensor used in the robot cell must reliably recognise each shape among the roughly 25 different filter variants and also determine its exact position. For the robot to grip reliably, the measuring accuracy must be ±1 mm. Because there are also filter variants of identical design but slightly different lengths, the image processing system must be capable of reliably distinguishing these parts on the conveyor by their differing heights; only then will the robot know which part has been picked up. The rotation angles of the mostly asymmetrically shaped parts must be reported to the robot as extremely precise data, because the subsequent reading of the 2D codes always has to take place at the same point for reasons of process reliability.
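The variant-identification step can be sketched as a dimensional lookup with the ±1 mm tolerance stated above. The variant names and dimensions below are invented for illustration; the real line distinguishes roughly 25 shapes.

```python
# Hypothetical variant table; lengths and heights are illustrative only.
VARIANTS = {
    "type_1": {"length_mm": 150.0, "height_mm": 80.0},
    "type_2": {"length_mm": 150.0, "height_mm": 95.0},  # same footprint, taller
    "type_3": {"length_mm": 210.0, "height_mm": 80.0},
}

TOLERANCE_MM = 1.0  # the ±1 mm accuracy the robot guidance requires

def identify_variant(measured_length, measured_height):
    """Match measured dimensions against the variant table within tolerance."""
    for name, dims in VARIANTS.items():
        if (abs(measured_length - dims["length_mm"]) <= TOLERANCE_MM
                and abs(measured_height - dims["height_mm"]) <= TOLERANCE_MM):
            return name
    return None  # unknown part: reject it rather than guess

print(identify_variant(150.4, 94.2))  # type_2
print(identify_variant(175.0, 80.0))  # None
```

Returning `None` for an out-of-tolerance part reflects the availability requirement: an unrecognised part must never be silently passed to the robot.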
A great advantage in implementing this demanding task was the extraordinarily reliable operation of the vision tools and algorithms integrated as a library into In-Sight. They are part of the powerful PatMax vision software from Cognex. In contrast to edge detection through grey-scale correlation, these tools use geometry-oriented object recognition. This operating principle delivered exceptional accuracy and reliability in recognising parts and proved to be of immense benefit. It was also vital in meeting the specified target of 99.9 % image processing system availability, which is indispensable for a particularly economical automated three-shift operation.
The selected ceramic filters are then placed by the robot onto the conveyor belt that forwards them to packaging. The accurate position data supplied by the In-Sight vision sensor directly controls the robot.
Fast, convenient configuration
Programming, configuring and organising the installation of the image processing system within the company sets up short communication paths. This speeds up project realisation, brings greater flexibility to project work and, last but not least, allows a significantly faster response in any ongoing process adaptation.
Thomas Wente of Engelhard Technologie GmbH, responsible for project execution, said: "On the one hand, we considered it to be very important to be able to execute projects quickly and reliably, yet on the other hand we also wanted to be able to immediately perform any necessary changes to the process ourselves."
On this note, additional control tasks in packing the soot particulate filters could also be mastered. Using an In-Sight 5100C colour vision sensor, it is possible to verify whether the green quality label is present on each filter and whether the cardboard box contains the prescribed number of parts. By continuously checking the number of packed parts and comparing it against the number actually produced, the amount of rejects/loss can be determined. With the support of Cognex employees at the production line itself, the control task was completely solved in only two days. Here too, the benefit of the very simple programming of the vision sensor through a spreadsheet operator interface was once again demonstrated.
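The rejects/loss calculation described above is simple bookkeeping, sketched here with invented numbers; the function name and figures are illustrative, not taken from the plant.

```python
def pack_report(parts_produced, boxes_packed, parts_per_box):
    """Compare parts produced with parts actually packed to derive
    the reject/loss count, as the packing check does."""
    parts_packed = boxes_packed * parts_per_box
    rejects = parts_produced - parts_packed
    reject_rate = rejects / parts_produced if parts_produced else 0.0
    return parts_packed, rejects, reject_rate

packed, rejects, rate = pack_report(parts_produced=1000,
                                    boxes_packed=124, parts_per_box=8)
print(packed, rejects, round(rate, 3))  # 992 8 0.008
```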
Right from the start, an additional expansion of the image processing system was planned for the production lines, which is why the In-Sight Explorer development environment was used from the outset. This proved advantageous in reducing the time and cost of setting up and programming the networked vision sensors. The user-friendly, Windows-based graphical working platform enabled a time-saving, application-specific solution for the complete vision tasks, covering all aspects of the implementation including the ongoing administration of the vision sensors and their communication with the production network.
The fact that the staff learnt very quickly how to use In-Sight Explorer resulted in a significant reduction in vision development time, while also providing a basis for functional reliability, system flexibility and integration into the company's production network. This proved to be an important aspect during the subsequent project extension: another In-Sight has been integrated into the existing robot cell, and two further vision sensors control the second robot cell which has now been installed.
Complex vision capabilities
In the past, the localisation of objects or edges in image processing systems was based on so-called grey-scale correlation. In this analysis procedure, the grey levels of the image pixels are compared against a reference pattern, from which the position is then calculated. The procedure soon reaches the limits of its capabilities when utmost precision is called for and when influences such as rotation, scaling or fluctuating illumination and contrast have to be mastered.
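The grey-scale correlation search described above can be sketched as a normalised cross-correlation sweep. This is a generic textbook formulation, not Cognex's implementation; note that the exhaustive sliding-window search is exactly the linear analysis that geometric methods avoid.

```python
import numpy as np

def ncc(patch, template):
    """Normalised cross-correlation between a patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom else 0.0

def match_template(image, template):
    """Slide the template over the image and return the best-scoring
    top-left position: the classic grey-scale correlation search."""
    th, tw = template.shape
    best_score, best_pos = -1.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = ncc(image[y:y + th, x:x + tw], template)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Synthetic test: paste the template into a noisy image at (12, 20)
rng = np.random.default_rng(0)
image = rng.normal(100, 5, (40, 50))
template = rng.normal(100, 30, (8, 8))
image[12:20, 20:28] = template

pos, score = match_template(image, template)
print(pos)  # (12, 20)
```

Because the template matches its pasted location exactly, the score there is 1.0; rotation, scaling or a change of illumination would degrade it, which is the limitation the text describes.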
In contrast to grey-scale correlation, the patented Cognex vision software PatMax® uses the basic geometric structures of objects in a three-stage procedure. First, the most important individual characteristics of an object, such as edges, dimensions, shapes, angles, arcs and shading, are isolated and identified. The spatial relationships between these central features of the trained image are then compared against the live image. By analysing the geometric data, the features and their spatial relationships, the object's position can be determined with maximum accuracy. Features such as outlines with low contrast can therefore be recognised far more reliably, accurately and quickly.
PatMax is able to recognise immediately, for example from a part outline, where it has to locate other characteristics, despite rotation, displacement or partial concealment. The entire captured image no longer needs to undergo a linear search, which simplifies feature detection and makes the vision system very fast, highly flexible and extremely reliable. PatMax achieves subpixel resolution and determines object angles reliably to 0.02 degrees. The vision tool is invariant to the object's position, orientation and change of scale, and by examining the outline and structure of the object pattern simultaneously, the effects of varying illumination and contrast are eliminated.
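One classical way to report a part's rotation angle, as a vision system must for the robot, is from the second-order central moments of the part's silhouette. This is a standard moments-based method shown for illustration, not the PatMax algorithm itself.

```python
import numpy as np

def orientation_deg(binary):
    """Orientation of a blob's major axis in degrees, computed from
    second-order central moments of the silhouette pixels."""
    ys, xs = np.nonzero(binary)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    return np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))

# A synthetic elongated part lying along the x axis...
part = np.zeros((60, 60), dtype=bool)
part[28:32, 10:50] = True
print(round(float(orientation_deg(part)), 1))  # 0.0

# ...and one lying along the diagonal
diag = np.eye(40, dtype=bool)
print(round(float(orientation_deg(diag)), 1))  # 45.0
```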
I-Cube. All rights reserved. Revised: February 18, 2008 .