Tangis In-Motion UI

Although the Tangis In-Motion UI was designed specifically for mobile use, it was generic in the sense that it supported a wide range of computing tasks, information formats, and interaction methods. It was intended to allow continuous interaction as a user moved from one situation to another, regardless of changes in the user's real-world task, location, social setting, physical environment, audio environment, and so on. For example, an airline mechanic might use speech recognition and a head-mounted display to access manuals while performing a maintenance task, occasionally switching to a high-resolution touch screen to view and manipulate schematics. Or a real estate agent might use an audio-only interface to get driving directions to a house, then switch to a flat-panel display upon arrival to view detailed information. The Tangis UI was designed to make human-computer interaction easier in a variety of situations by fully supporting a number of different interaction methods.

Lisa was responsible for UI design and iterative usability testing for this product. 

The following screenshots are from a vehicle-inspection demo application included with the Tangis In-Motion UI. Note that the screen design and colors were optimized for a 640x480 head-mounted display.

Above: The Tangis UI was designed to support a dialogue between the user and the computer, with the computer making prompts and offering choices. The prompt ("Select a task") is displayed as text at the top of the screen and optionally spoken using synthesized speech. The Primary (task) tab displays all valid responses to the current prompt ("Add vehicle info," "Add inspection info"). The user can make a response by 1) pointing and clicking, 2) scrolling the mouse wheel, or 3) speaking the phrase.

As soon as the user makes a selection, the UI presents the next prompt.
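The prompt-and-response dialogue described above can be sketched as a simple state-advance loop. This is a minimal illustration only, not the Tangis implementation; the `Prompt` class and `next_prompt` function are hypothetical names invented for this sketch.

```python
# Hypothetical sketch of a prompt/response dialogue model: the computer
# poses a prompt with a fixed set of valid responses, and advances to the
# next prompt as soon as the user selects one (by click, scroll, or speech).
from dataclasses import dataclass


@dataclass
class Prompt:
    text: str             # shown at the top of the screen, optionally spoken
    responses: list       # valid responses shown on the Primary (task) tab


def next_prompt(current: Prompt, response: str) -> Prompt:
    """Advance the dialogue when the user selects a valid response."""
    if response not in current.responses:
        raise ValueError(f"{response!r} is not a valid response here")
    # A real system would look up the next dialogue state; for illustration
    # we simply derive a follow-up prompt from the chosen response.
    return Prompt(text=f"{response}: select a field", responses=["Done"])


root = Prompt("Select a task", ["Add vehicle info", "Add inspection info"])
follow_up = next_prompt(root, "Add vehicle info")
```

The key design point is that every input modality (pointing, scrolling, speaking) ultimately produces the same response string, so the dialogue logic is independent of how the user answered.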

Above: The text along the bottom of the screen is the progress bar, which shows the user's previous responses and the current prompt.

Above: The user can input alphanumeric data using a keyboard or speech (via a phonetic alphabet).
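Spelling via a phonetic alphabet works by mapping each spoken code word to a single character. A minimal sketch, assuming a standard NATO-style word list (the actual word list Tangis used is not documented here; `spell` is an invented helper name):

```python
# Hypothetical illustration of alphanumeric entry by phonetic alphabet:
# each spoken word maps unambiguously to one character, which makes the
# words far easier for a recognizer to distinguish than bare letter names.
PHONETIC = {
    "alpha": "A", "bravo": "B", "charlie": "C", "delta": "D", "echo": "E",
    "foxtrot": "F", "golf": "G", "hotel": "H", "india": "I",
    # ... remaining letters omitted for brevity ...
    "zero": "0", "one": "1", "two": "2", "three": "3",
}


def spell(utterance: str) -> str:
    """Convert a spoken spelling like 'bravo charlie one' into 'BC1'."""
    return "".join(PHONETIC[word] for word in utterance.lower().split())
```

For example, `spell("alpha bravo one")` yields `"AB1"`.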

Above: Information entered by the user is displayed in the content area, which occupies the right-hand portion of the screen.

Above: The Global tab contains commands that are not direct responses to the current prompt but are nonetheless valid at any time. The user can switch to a different tab by speaking its name. (All speech-enabled commands are displayed in gold text.)

Above: The Media tab, when enabled, allows the user to adjust the view of the content area.
