Parallel-processor-based systems offer system developers a path to real-time, high-speed image capture and analysis--but only in certain applications.

As many a Star Trek fan will tell you, one of the most memorable quotes from Star Trek: The Wrath of Khan (1982) is uttered by Mr. Spock. When asked whether Captain Kirk should assume command, Spock replies that “logic clearly dictates that the needs of the many outweigh the needs of the few.” This same concept was obviously on the minds of the developers of the World Community Grid (WCG; www.worldcommunitygrid.org), an organization intent on creating the largest public computing grid to benefit humanity.
The idea of the network is very simple. Researchers sign up with highly parallel computational tasks such as x-ray crystallography and protein analysis. Performing this analysis requires large numbers of data sets to be processed, a job that can easily be accomplished on a distributed network of computers. Luckily, with more than 340,000 members and 840,000 processors networked online, the WCG is providing much of the computing power required.
But even with this number of distributed processors, the research tasks that need to be accomplished require an even larger number of computers. With this in mind, WCG developers are asking for donations—but not in the form of money. WCG wants to harness the power of your computer at home or at work to help speed this research. Basically, the idea is rather simple and resembles a peer-to-peer network.
To become a member of WCG, simply download a small program from the WCG Web site onto your computer. When your computer is idle, it requests data on a specific project from the WCG server. Your computer then performs computations on these data, sends the results back to the server, and asks the server for a new piece of work. Since each data set is only approximately 50 Mbytes, all of today’s PCs can easily handle the task.
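The fetch-compute-return cycle described above can be sketched in a few lines. This is a simplified illustration, not the actual WCG client; the server interface (`next_work_unit`, `submit_result`) and the idle check are hypothetical stand-ins.

```python
import time

WORK_UNIT_SIZE_MB = 50  # approximate size of one WCG data set

def run_client(server, compute, is_idle):
    """Loop while the host is idle: fetch a work unit, process it,
    and send the result back -- the cycle described above."""
    while True:
        if not is_idle():
            time.sleep(60)      # back off while the user needs the machine
            continue
        unit = server.next_work_unit()
        if unit is None:
            break               # no more work available
        result = compute(unit)
        server.submit_result(result)
```

In the real system the idle check and scheduling are handled by the downloaded client software; the sketch only shows the shape of the protocol.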
The software also allows you to configure your system so that it performs these tasks overnight or on weekends. To make this more interesting, you can set up your own “team”: get your friends and colleagues to join and accumulate “points” that, to be honest, are worth about as much as my frequent-flyer miles! So, instead of turning off your office computer when you leave for home, you can leave it on, knowing that you are contributing to invaluable research on cancer, climate change, and human proteome folding.
While many research projects such as these lend themselves naturally to parallel distributed processing, so do many machine-vision and image-processing tasks. In stereo image processing, for example, two processors can be used to simultaneously process image data from two independent cameras.
Indeed, in this issue, Homer Liu of QuantaView (Tokyo, Japan) describes how two Intel Xeon processors have been used for this very task (see “Vision-based robot plays ping-pong against human opponent,” p. xx). With the advent of dual- and quad-core processors, this trend is likely to continue as software vendors rework their code to take advantage of parallel-processing concepts.
To achieve the optimum performance for parallel-processor-based systems, however, developers will need to closely match the I/O, processing, and storage capabilities of such systems. Today’s Camera Link-based systems, for example, can be used to transfer data from a single camera to a host-based frame grabber at rates of up to 850 Mbytes/s using 85-MHz transceivers.
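As a sanity check on that figure: assuming the 80-bit (“Deca”) Camera Link configuration, which moves ten 8-bit taps per clock, the arithmetic works out as follows.

```python
CLOCK_HZ = 85_000_000    # 85-MHz Camera Link pixel clock
BYTES_PER_CLOCK = 10     # 80-bit ("Deca") configuration: ten 8-bit taps

bandwidth = CLOCK_HZ * BYTES_PER_CLOCK   # bytes per second
print(bandwidth // 1_000_000)            # -> 850 (Mbytes/s)
```

At this rate even a 1-Mbyte frame arrives in just over a millisecond, which frames the processing challenge discussed next.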
However, no commercially available single- or multiprocessor von Neumann-based CPU can at present process individual images at this data rate. This relegates such camera-based systems to high-speed image analysis in which image data are captured and only later played back for analysis. Because of this, it is likely that for the foreseeable future, heterogeneous networks of distributed computers will remain useful only for large-scale algorithmic research projects such as those currently running on the World Community Grid.