By Chaki Ng
Today, many public-facing organizations make a wealth of information about themselves available online, but provide little for visitors to interact with on-site. Recently, a group of researchers at the Massachusetts Institute of Technology (MIT) Media Lab designed and deployed an ‘intelligent’ signage system, referred to as Glass Infrastructure (GI), to enable groups of users to more easily interact with data and explore connections between people, projects and ideas.
The trend toward adding interactivity and computing to physical spaces stems in part from the ease with which the places people work, live and travel can be networked, along with the falling cost of the installed equipment. Building on these factors, MIT’s GI is intended as a new type of visitor information kiosk for thematically organized places like stores, museums and research labs, extending websites into physical space.
The project placed 30 touch-sensitive screens at strategic locations throughout MIT’s Media Lab complex. The system lets guests learn about the lab’s research; it also recognizes them by their radio-frequency identification (RFID) tags and, as they move through the building, actively suggests projects and people they may find interesting, offering background details and contextual information.
The kiosks are thus context-dependent, displaying information that relates not only to each screen’s location but also to the user(s) standing in front of it. It is a novel application of such techniques to open, public displays, and possibly the first.
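To give a concrete, if simplified, sense of how such context awareness might be put together, the sketch below combines a screen’s location with the interests recorded for detected RFID badges to rank candidate projects. The class names, fields and sample data are hypothetical; the Media Lab has not published this exact interface.

```python
# Hypothetical sketch of context-dependent content selection for a GI-style kiosk.
# Class names, fields and data are illustrative, not the Media Lab's actual API.
from dataclasses import dataclass, field

@dataclass
class Project:
    title: str
    location: str                          # part of the building the project "lives" in
    topics: set = field(default_factory=set)

@dataclass
class Visitor:
    rfid_tag: str
    interests: set                         # topics recorded for this visitor's badge

def rank_projects(projects, screen_location, visitors):
    """Score each project by proximity to this screen and overlap with visitor interests."""
    interests = set().union(*(v.interests for v in visitors)) if visitors else set()
    scored = []
    for p in projects:
        score = 0
        if p.location == screen_location:  # favor projects housed near this screen
            score += 2
        score += len(p.topics & interests) # favor projects matching the detected visitors
        scored.append((score, p.title))
    return [title for _, title in sorted(scored, reverse=True)]

projects = [
    Project("Affective Wearables", "3rd floor east", {"sensors", "health"}),
    Project("Tangible Interfaces", "atrium", {"interaction", "displays"}),
]
visitors = [Visitor("tag-042", {"displays"})]
print(rank_projects(projects, "atrium", visitors))  # 'Tangible Interfaces' ranks first
```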
Beyond making the lab’s own website more ‘tangible,’ the initial purpose of the GI project was to develop an open information technology (IT) framework that could be used anywhere. The system uses artificial intelligence (AI) and a text-understanding component to organize data thematically and update it automatically as new projects and connections arise, relying on the lab’s Project List Database (PLDB) as its source.
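One simple way to picture that kind of automatic, thematic organization is a routine that re-tags project records whenever they are added or edited. The field names and theme keywords below are assumptions for illustration, not the PLDB’s actual schema or the lab’s vocabulary.

```python
# Illustrative theme assignment for project records pulled from a PLDB-like source;
# the record fields and theme keywords are assumptions, not the real schema.
THEMES = {
    "health": {"medical", "patient", "wellness", "biometric"},
    "interfaces": {"display", "touch", "gesture", "screen"},
    "learning": {"education", "children", "classroom", "tutor"},
}

def assign_themes(description):
    """Return every theme whose keywords appear in the project description."""
    words = set(description.lower().split())
    return [theme for theme, keywords in THEMES.items() if words & keywords]

def refresh(records):
    """Re-run theme assignment whenever new or edited records arrive."""
    return {r["title"]: assign_themes(r["description"]) for r in records}

records = [
    {"title": "Gesture Wall", "description": "A large touch and gesture display"},
    {"title": "StoryKit", "description": "Tools for children and classroom learning"},
]
print(refresh(records))
# {'Gesture Wall': ['interfaces'], 'StoryKit': ['learning']}
```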

MIT Media Lab placed 30 kiosk screens, which respond both to touch and to RFID tags, throughout its complex.
Another goal was to harmonize the experience of navigating a large touch screen with one’s fingers with that of navigating a physical space on foot. This was inspired in part by the architecture of the new building the lab moved into in December 2009, which is full of glass and open spaces.
Intelligent content
The main elements of the GI user experience are persistent representations of the lab’s central research groups and their projects. When the user taps one project, the others already in view simply move aside into a new arrangement.
Within the Media Lab, each screen corresponds to the research groups located nearby and displays them as its default view. This gives users an entry point and ties projects to the physical space they inhabit. The user can then shift toward more or less detail and see how concepts overlap.
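A minimal sketch of that location-aware behavior, using invented screen identifiers and an invented mapping to nearby groups, could pair each screen with a default view and let the user step between levels of detail.

```python
# Minimal sketch of a location-aware default view with adjustable detail; the screen
# IDs, the screen-to-group mapping and the detail levels are invented for illustration.
NEARBY_GROUPS = {
    "screen-atrium-1": ["Tangible Media", "Fluid Interfaces"],
    "screen-3e-2": ["Affective Computing"],
}

DETAIL_LEVELS = ["group overview", "project list", "project detail"]

def default_view(screen_id):
    """Show the research groups housed near this screen at the coarsest level of detail."""
    return {"groups": NEARBY_GROUPS.get(screen_id, []), "level": 0}

def zoom(view, direction):
    """Step toward more (+1) or less (-1) detail, staying within the defined levels."""
    level = min(max(view["level"] + direction, 0), len(DETAIL_LEVELS) - 1)
    return {**view, "level": level}

view = default_view("screen-atrium-1")
view = zoom(view, +1)                      # the user taps through for more detail
print(view["groups"], "->", DETAIL_LEVELS[view["level"]])
# ['Tangible Media', 'Fluid Interfaces'] -> project list
```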
In addition to using the PLDB’s project descriptions and lists of affiliated researchers and groups as its sources of data, the network uses the Open Mind Common Sense (OMCS) platform and its associated inference toolkit, Divisi, to maintain a consistent base of background knowledge. OMCS can learn new words by discovering their relationships to existing concepts, without the system needing to be redesigned or rebuilt.
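Divisi’s real interface is not reproduced here, but the underlying approach, factorizing a concept-feature matrix so that an unfamiliar word can be placed near related concepts, can be sketched with a tiny, invented vocabulary.

```python
# Rough sketch of the idea behind OMCS/Divisi-style reasoning: concepts become rows of a
# concept-feature matrix, a truncated SVD gives each a low-dimensional vector, and a new
# word is related to existing concepts through the features it shares with them.
# The vocabulary, relations and matrix below are invented; this is not Divisi's actual API.
import numpy as np

concepts = ["kiosk", "screen", "museum", "coffee"]
features = ["IsA/display", "AtLocation/lobby", "UsedFor/information", "IsA/drink"]

# 1 where an OMCS-style assertion links a concept (row) to a feature (column).
M = np.array([
    [1, 1, 1, 0],   # kiosk
    [1, 0, 1, 0],   # screen
    [0, 1, 1, 0],   # museum
    [0, 0, 0, 1],   # coffee
], dtype=float)

U, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
concept_vecs = U[:, :k] * S[:k]            # low-rank concept embeddings

def embed_new_word(observed_features):
    """Project a previously unseen word into the same space via the features it was seen with."""
    row = np.array([1.0 if f in observed_features else 0.0 for f in features])
    return row @ Vt[:k].T

def most_similar(vec):
    """Return the existing concept whose embedding points in the closest direction."""
    sims = concept_vecs @ vec / (np.linalg.norm(concept_vecs, axis=1) * np.linalg.norm(vec) + 1e-9)
    return concepts[int(np.argmax(sims))]

# A new word seen only with "display" and "lobby" assertions ends up closest to "kiosk",
# even though no existing concept has exactly that pair of features.
print(most_similar(embed_new_word({"IsA/display", "AtLocation/lobby"})))
```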