Years ago, our publishing company produced a magazine entitled Computer Graphics World. Its premise was rather simple. Editors wrote about a number of different graphics components -- display controllers, monitors, software -- that were used in a number of different markets ranging from architectural and mechanical CAD to 3-D graphics rendering for motion pictures and desktop publishing.
Back then, many companies produced competing hardware and software. As a result, numerous display-controller manufacturers and monitor vendors were only too willing to spend their advertising dollars to support the magazine.
One development, however, caused the downfall of a number of companies in this industry and, eventually, of the magazine itself. Although market consolidation played a part, technology itself played the larger role.
Years ago, only fixed-frequency monitors were available, and driving them at their specific horizontal and vertical frequencies required a specialized display controller. To meet this demand, companies developed display controllers that supported other companies’ frequency-specific monitors.
In the 1990s, with the development of multisync monitors and multisync-capable graphics circuitry on the PC motherboard, nearly every monitor manufactured could be supported without a specialized display controller. What happened after that is history.
Today, only gamers purchase dedicated graphics boards, while most industrial systems simply drive their displays from the PC itself. The graphics “market” is now primarily a software industry, replete with companies developing games, animation packages, and architectural, mechanical, and electrical CAD tools. While all of these packages together represent a large total market, the industries they serve are radically different.
Like the computer graphics market of the late 1980s, the machine-vision market is still evolving. It is only a matter of time before a combination of technological innovation and consolidation reduces the number of companies that currently participate. For this to happen, a technological change will first need to occur.
At present, thousands of cameras and hundreds of frame grabbers exist that use interfaces such as Camera Link, GigE, USB, and FireWire. Because only two or three of these standards are supported on most motherboards, system developers must purchase specialized frame grabbers to support the others. To process images as they are captured from digital cameras, many of these boards incorporate FPGAs that perform image-processing functions in a pipelined fashion before the images are stored in host PC memory.
Now imagine, however, a high-speed interface that can support a range of different camera clocks, is fully deterministic like Camera Link, offers the plug-and-play benefits of PC-based interfaces such as FireWire, and is incorporated onto the system’s motherboard. Couple this innovation with the power of multicore CPUs and an on-board graphics processor, and the need for multiple camera interfaces and for frame grabbers with on-board processing capability will rapidly diminish.
While the jury is still out on what form this camera interface will take, one thing is certain: when it is finally developed and accepted, it will herald the demise of a number of smaller companies currently offering camera and frame-grabber products. While this will be a boon for system developers, whose time to integrate machine-vision systems will rapidly diminish, it will also provide an opportunity for vendors of machine-vision software.
Rather than supporting multiple camera standards, drivers, and frame grabbers, these vendors will be able to concentrate on developing higher-level machine-vision algorithms that run rapidly on multicore hosts. History will repeat itself -- the only question is when. If the computer graphics industry is any benchmark, such advances are at least a decade away.