By the standards of the day, signal splitting demanded a great deal of memory and processing power. Early video wall processors were prohibitively expensive for most organizations.
“They used to be just for command-and-control rooms and specialty entertainment purposes,” says Brawn.
“One way around the signal splitting problem, when dealing with a permanently installed system, was to synchronize laserdiscs (LDs),” says Greenberg. “This process involved content specially prepared by a production house. One LD player per display would provide one part of the overall image. This method had remarkably good results, but only limited applications and no real-time image splitting.”
Content could not yet be moved or resized. Switching required rewiring or a patch panel.

Compared to projectors, video walls’ tiled arrays of digital displays can provide higher aggregate resolution, enabling clearer images for large audiences.
By the late 1980s, lower-cost charge-coupled device (CCD) memory and more affordable, powerful and flexible processors helped reduce the cost of managing video walls. And new programmable processors meant the medium could be used more creatively, exploiting ‘multi-image’ capabilities. The screens began to change, as well.
“Around 1989, they went to rear-projection CRTs, which were often called ‘cubes,’” says Greenberg. “Each of these involved a CRT projector mounted inside an enclosure with a rear-projection screen. Cubes helped make the gaps between displays negligible and increased brightness and contrast, making video walls a more viable presentation medium.”
Cubes were indeed a significant step forward, becoming the primary video wall component for years to come. The concept survives today, with Digital Light Processing (DLP), liquid crystal display (LCD) and light-emitting diode (LED) backlighting technologies supporting higher resolutions and crisper text and graphics than CRTs could deliver.
Meanwhile, advances in processors enabled computers with multiple graphics cards to manage multiple windows of content simultaneously. By the late 1990s, processors could accept both digital and analogue sources of content.
“It was the start of the modern video wall era,” says Greenberg. “Since then, costs have come down, capabilities have improved and the medium has become more effective in a wide variety of venues.”
Configuration options
In a matrix-switch video wall configuration, dedicated ‘switchboard’ hardware supports multiple input sources and output targets and can translate between different video formats.
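To make the routing idea concrete, here is a minimal Python sketch of the logic a matrix switch performs. The MatrixSwitch class, its method names and the source and display labels are invented for illustration and do not correspond to any vendor’s actual API.

# A minimal sketch of matrix-switch routing: any input can be mapped to
# any output and re-routed on the fly, replacing the old rewiring or
# patch-panel step. All names here are hypothetical.

class MatrixSwitch:
    def __init__(self, inputs, outputs):
        self.inputs = list(inputs)    # e.g. media players, PCs, camera feeds
        self.outputs = list(outputs)  # the wall's individual displays
        self.routes = {}              # output -> input currently shown

    def route(self, source, target):
        if source not in self.inputs or target not in self.outputs:
            raise ValueError("unknown source or target")
        self.routes[target] = source  # one crosspoint change, no rewiring

    def route_all(self, source):
        # Tile one source across every display for a full-wall image.
        for target in self.outputs:
            self.route(source, target)

switch = MatrixSwitch(["PC-1", "MediaPlayer"],
                      [f"Display-{n}" for n in range(1, 5)])
switch.route_all("MediaPlayer")     # one image across the whole wall
switch.route("PC-1", "Display-3")   # swap a single tile to another source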
Another option is a ‘daisy chain’ scaler setup, where one input signal is looped through multiple monitors, each of which picks up and scales one portion of the signal, such as one quadrant in a 2×2 wall. This is a simple setup, as the daisy-chain capability is usually built into the screens, but it allows only one input at a time.
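The arithmetic behind that split is straightforward, as the following sketch suggests. The tile_region function is hypothetical, assuming a simple rows-by-columns grid in which each display crops and enlarges its own tile of the one shared signal.

# Hypothetical sketch of daisy-chain cropping: every screen receives the
# same full signal and is configured to enlarge only its own tile.

def tile_region(rows, cols, position, src_w, src_h):
    """Return the (x, y, w, h) crop for the display at `position`,
    counted 0..rows*cols-1, left to right, top to bottom."""
    tile_w, tile_h = src_w // cols, src_h // rows
    row, col = divmod(position, cols)
    return (col * tile_w, row * tile_h, tile_w, tile_h)

# A 2x2 wall fed one 1920x1080 signal: each display shows one quadrant.
for pos in range(4):
    print(pos, tile_region(2, 2, pos, 1920, 1080))
# 0 (0, 0, 960, 540)    -> top-left quadrant
# 1 (960, 0, 960, 540)  -> top-right quadrant, and so on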

Today’s software and processors make it easier for video walls to switch between one full-size image and a variety of smaller images, like this project for Westman Communications Group in Brandon, Man. Photos courtesy Hiperwall
Video processors provide greater flexibility, supporting multiple simultaneous sources and displaying content at different sizes and placements, much as in a live TV studio.
“They could be used in shopping mall concourses, for example, partitioned to display ads, wayfinding information and other content,” says Michael Ferrer, sales and operations manager for NEC Display Solutions.
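At its core, a processor manages windows of content over one large canvas spanning the wall. The sketch below models that idea; the canvas size, the windows list and the windows_on_display helper are all made up for illustration, assuming a 2×2 wall of 1080p panels partitioned for ads and wayfinding along the lines of Ferrer’s example.

# Illustrative model of processor-style windowing: sources are placed
# anywhere on a single large canvas, and the processor works out what
# each physical display must render. Layout values are hypothetical.

WALL_W, WALL_H = 3840, 2160  # a 2x2 wall of 1080p panels

windows = [
    {"source": "ad-loop",    "x": 0,    "y": 0,    "w": 3840, "h": 1080},
    {"source": "wayfinding", "x": 0,    "y": 1080, "w": 1920, "h": 1080},
    {"source": "news-feed",  "x": 1920, "y": 1080, "w": 1920, "h": 1080},
]

def windows_on_display(disp_x, disp_y, disp_w, disp_h):
    """List the windows overlapping one display's region of the canvas,
    i.e. what the processor must composite for that output."""
    hits = []
    for win in windows:
        overlaps_x = win["x"] < disp_x + disp_w and disp_x < win["x"] + win["w"]
        overlaps_y = win["y"] < disp_y + disp_h and disp_y < win["y"] + win["h"]
        if overlaps_x and overlaps_y:
            hits.append(win["source"])
    return hits

print(windows_on_display(0, 0, 1920, 1080))     # ['ad-loop']
print(windows_on_display(0, 1080, 1920, 1080))  # ['wayfinding']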
Processor-based systems are still a more expensive option, however, and often require overprovisioning. So, to move beyond earlier systems’ limited scalability, an increasing number of video walls are now built as ‘distributed’ systems.
In this type of configuration, autonomous content sources and targets are all connected through a standard communications system, such as a local area network (LAN), which can then be controlled with desktop software. This arrangement can either connect ‘dumb’ screens to computers or, like other digital signage networks, use displays with built-in computers.
“Embedded computers in the displays make it much easier to deliver the content over Ethernet connections,” Greenberg says.
“There are many variants of open pluggable specification (OPS) modules on displays to drive content and minimize wiring issues,” says Ferrer. “You can also use wireless-fidelity (Wi-Fi) or cellular connectivity for each screen.”
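To suggest what the control plane of such a distributed wall might look like, here is a hedged Python sketch in which desktop software pushes a layout command to each networked display over HTTP. The JSON message shape, the /layout endpoint and the addresses are all invented for illustration; real products define their own protocols.

# Hypothetical control messages for a distributed video wall: each
# display (or its embedded player) is a node on the LAN, and the
# controller tells it which stream to show and which tile it owns.
# Endpoint, message format and addresses are invented for this sketch.

import json
from urllib.request import Request, urlopen

DISPLAYS = ["192.168.1.101", "192.168.1.102",
            "192.168.1.103", "192.168.1.104"]

def send_layout(host, source_url, region):
    # POST one JSON layout command to a single display node.
    body = json.dumps({"source": source_url, "region": region}).encode()
    req = Request(f"http://{host}/layout", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=5) as resp:
        return resp.status

# Assign each display one quadrant of a shared stream on a 2x2 wall.
# Adding a fifth display is just another address in the list.
for pos, host in enumerate(DISPLAYS):
    send_layout(host, "http://content-server/live.m3u8",
                {"grid": [2, 2], "position": pos})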
In any case, it is easy to add more displays as desired and there is less need for specialized hardware than in the past, reducing costs. Brawn Consulting estimates a distributed system can be built for around $73,000, compared to $90,000 for a comparable video processor-based system.