Monday, March 31, 2008

That joke isn't funny anymore

Machine-vision-system integrators are more than consultants--they actually have to know something about integration


There’s an old joke about consultants that goes something like this: "Ask a consultant what time it is and they will borrow your watch and tell you the time." When I first heard the joke, I thought it was quite funny, especially since I have personally been asked questions about how to design machine-vision systems for various applications.

Unfortunately, while the joke may ring true to many considering deploying a machine-vision system, it is not as amusing when one considers the disparate disciplines of optics, illumination, image processing, computer science, and mechanical engineering needed to develop such an application. Indeed, this is the main reason that the development of these systems is so challenging and at the same time so frustrating.

Since very few universities and colleges bundle these subjects into a single degree, it is difficult for developers to hire technical staff. Instead, they must rely on pooled knowledge from those with experience in individual subjects.

At the outset, system development may seem easy. Light the product to be inspected, capture an image of it, and then trigger a reject mechanism should the part fail the inspection. When looked at from 30,000 ft, system development may seem trivial and—to those in management—inexpensive.

When examined from a microscopic level, however, the problem of designing a system becomes more complex. Just choosing a lens to image the subject may result in hours of NRE (nonrecurring engineering) time. Deciding on the type of lens required, optical mount, focal length, and resolution may appear easy, but, because of the lack of detailed specifications offered by many suppliers, an evaluation of a number of lenses may be required—a process that could take days.

Additional complications
This situation is further compounded by the fact that coupling a lens to several different cameras may result in very different images being obtained. As David Dechow, president of Aptúra Machine Vision Systems (Lansing, MI, USA; www.aptura.com) pointed out in our February Webcast, the different formats of imagers employed by camera vendors may result in varying levels of illumination rolloff or vignetting.

With the move to larger-format imagers, this problem is further exacerbated. Worse, if the digital camera you select does not have dead-pixel correction or flat-field correction, the resulting image may not be usable. As can be seen, simply selecting the correct optics and cameras is challenging. But system integrators face other tasks relating to lighting, choosing the correct software package and operating system, and how these are integrated into an industrial automation system controlled by PLCs. While college textbooks may help students understand the basic principles of all of these subjects, deploying machine-vision systems requires more. Luckily, most integrators are fully aware of this situation.
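As a rough illustration of why flat-field correction matters, the standard dark-frame/flat-frame normalization can be sketched in a few lines of NumPy. The sensor size and pixel values here are hypothetical, chosen so the vignetting pattern is easy to see:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Standard flat-field correction: subtract the dark frame, then
    divide by the dark-subtracted flat frame so that pixel-to-pixel
    gain variations and lens vignetting are normalized out."""
    gain = flat.astype(np.float64) - dark
    corrected = (raw.astype(np.float64) - dark) * gain.mean() / gain
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Hypothetical 4 x 4 sensor: the flat frame shows corner vignetting
dark = np.full((4, 4), 2.0)
flat = np.array([[60, 80, 80, 60],
                 [80, 100, 100, 80],
                 [80, 100, 100, 80],
                 [60, 80, 80, 60]], dtype=np.float64)

# A uniform gray scene viewed through the same vignetting
raw = (flat - dark) * 0.5 + dark
print(flat_field_correct(raw, dark, flat))  # uniform array of 39s
```

Without the division by the flat frame, the corners of the corrected image would read noticeably darker than the center even though the scene itself is uniform.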

For those considering deploying a machine-vision system, a visit to an engineering facility may be most valuable. If you are led into a conference room and given a sales pitch, beware! Instead, ask for a tour of the engineering department, where you should expect to see workbenches strewn with optics, lighting, cameras, and half-complete computers. If people there appear busy and frustrated, take this as a very positive sign.

Often, however, potential customers visiting these facilities arrive unprepared, handing the company management a few questions and a part that they would like automatically inspected. Hence, the integrator must probe more deeply into exactly what needs to be inspected, the nature of the production line, the type of lighting used in the facility, and the previously installed computer systems, essentially borrowing the potential customer’s “watch” to ascertain the time. In such situations, having your “watch borrowed” is obviously quite a good idea, since it will only lead to the development of a more effective and efficient vision system.

Tuesday, March 18, 2008

The Needs of the Many

Parallel-processor-based systems offer system developers a path to real-time, high-speed image capture and analysis--but only in certain applications

As many a Star Trek fan will tell you, one of the most memorable quotes from Star Trek: The Wrath of Khan (1982) is uttered by Mr. Spock. When asked whether Captain Kirk should assume command, Spock replies that “logic clearly dictates that the needs of the many outweigh the needs of the few.” This same concept was obviously on the minds of developers of the World Community Grid (WCG; www.worldcommunitygrid.org), an organization intent on creating the largest public computing grid to benefit humanity.

The idea of the network is very simple. Researchers sign up with highly parallel computational tasks such as x-ray crystallography and protein analysis. Performing this analysis requires large numbers of data sets to be processed, a job that can easily be distributed across a network of computers. Luckily, with more than 340,000 members and 840,000 processors networked online, the WCG is providing much of the computing power required.

But even with this number of distributed processors, the research tasks that need to be accomplished require an even larger number of computers. With this in mind, WCG developers are asking for donations—but not in the form of money. WCG wants to harness the power of your computer at home or at work to help speed this research. Basically, the idea is rather simple and resembles a peer-to-peer network.

To become a member of WCG, simply download a small program from the WCG Web site onto your computer. When your computer is idle, it requests data on a specific project from the WCG server. Your computer then performs computations on these data, sends the results back to the server, and asks the server for a new piece of work. Since each data set is only approximately 50 Mbytes, all of today’s PCs can easily handle the task.
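The fetch-compute-report cycle described above can be sketched as a simple client loop. Everything here is a hypothetical stand-in for the real WCG agent, which is built on far more robust infrastructure; `get_work()`, `submit()`, and the idle check are assumed interfaces, not the actual API:

```python
def grid_client_loop(server, is_idle, crunch, max_units=None):
    """Hypothetical volunteer-computing client: while the machine is
    idle, fetch a work unit, process it, report the result back, and
    ask the server for more."""
    done = 0
    while max_units is None or done < max_units:
        if not is_idle():
            continue                   # stay out of the user's way
        work = server.get_work()       # ~50-Mbyte data set in practice
        if work is None:
            break                      # no work currently available
        server.submit(crunch(work))    # report back, then loop again
        done += 1
    return done

class FakeServer:
    """Stand-in for the WCG server, used only for this sketch."""
    def __init__(self, units):
        self.units, self.results = list(units), []
    def get_work(self):
        return self.units.pop(0) if self.units else None
    def submit(self, result):
        self.results.append(result)

srv = FakeServer([1, 2, 3])
grid_client_loop(srv, is_idle=lambda: True, crunch=lambda x: x * x)
print(srv.results)  # [1, 4, 9]
```

The key property, as with the real grid, is that each work unit is independent, so any number of clients can run this loop concurrently without coordinating with one another.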

The software also allows you to configure your system to perform these tasks overnight or on weekends. To make this more interesting, you can set up your own “team”: get your friends and colleagues to join and accumulate “points” that, to be honest, are worth about as much as my frequent-flyer miles! So, instead of turning off your office computer when you leave for home, you can leave it on knowing that you are contributing to invaluable research on cancer, climate change, and human proteome folding.

Avoiding gridlock
While many research projects such as these lend themselves naturally to parallel distributed processing, so do many machine-vision and image-processing tasks. In stereo image processing, for example, two processors can be used to simultaneously process image data from two independent cameras.
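A minimal sketch of that split, using Python's multiprocessing module to hand each camera's frame to its own worker process. The gradient filter here is just a hypothetical stand-in for whatever per-camera preprocessing a real stereo system would run:

```python
from multiprocessing import Pool

import numpy as np

def preprocess(frame):
    """Per-camera work that can run on its own core: here, a simple
    horizontal-gradient computation standing in for real filtering."""
    return np.abs(np.diff(frame.astype(np.int32), axis=1))

if __name__ == "__main__":
    # Two hypothetical 640 x 480 frames, one per camera
    left = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    right = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

    with Pool(2) as pool:  # one worker per camera stream
        left_out, right_out = pool.map(preprocess, [left, right])

    # Downstream stereo matching would consume both results
    print(left_out.shape, right_out.shape)  # (480, 639) (480, 639)
```

Because the two streams are independent until the matching stage, the speedup from the second processor is close to ideal for this portion of the pipeline.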

Indeed, in this issue, Homer Liu of QuantaView (Tokyo, Japan) describes how two Intel Xeon processors have been used for this very task (see “Vision-based robot plays ping-pong against human opponent,” p. xx). With the advent of dual- and quad-core processors, this trend is likely to continue as software vendors rework their code to take advantage of parallel-processing concepts.

To achieve the optimum performance for parallel-processor-based systems, however, developers will need to closely match the I/O, processing, and storage capabilities of such systems. Today’s Camera Link-based systems, for example, can be used to transfer data from a single camera to a host-based frame grabber at rates of up to 850 Mbytes/s using 85-MHz transceivers.
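That 850-Mbyte/s figure is simple link arithmetic: an 85-MHz clock moving 10 bytes (80 bits) of pixel data per clock cycle, which I am assuming corresponds to the extended "full" Camera Link configuration. A quick back-of-envelope check:

```python
clock_hz = 85e6        # 85-MHz Camera Link transceivers
bytes_per_clock = 10   # 80 bits/clock (assumed extended-full config)

throughput = clock_hz * bytes_per_clock
print(throughput / 1e6, "Mbytes/s")  # 850.0 Mbytes/s

# At 1 byte/pixel, a 1024 x 1024 camera at this rate would deliver:
frames_per_s = throughput / (1024 * 1024)
print(round(frames_per_s), "frames/s")  # 811 frames/s
```

Sustaining hundreds of megapixel frames per second is exactly the regime where the processing and storage sides of the system, not the link, become the bottleneck.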

However, no single- or multiprocessor von Neumann-based CPU commercially available at present could possibly process individual images at this data rate, relegating such camera-based systems to applications in which image data are captured at high speed and played back later for analysis. Because of this, it is likely that for the foreseeable future, heterogeneous networks of distributed computers may remain useful only for large-scale algorithmic research projects such as those currently running on the World Community Grid.