Applied Research
U S WEST Advanced Technologies
4001 Discovery Dr.
Boulder, CO 80303 USA
+1 303 541 7028
The concept of multiple interacting devices is not new; however, in most cases the interaction is one-way and takes the form of a controlling device paired with a controlled device. Some examples include a television remote control in which input to the controller device causes the controlled device to change channels, modify volume, etc.; an electronic phone directory and dialer which emits tones that control the behavior of a telephone switch; or a stereo component system in which several independent, interacting devices send information to each other. Interestingly, Brown, Buxton, & Murtagh [2] observe that a multiple-device metaphor is used in a wide variety of computing applications which are not really multiple devices, e.g. the "Control Panel" for adjusting an application's or device's parameters.
In all of these cases, the particular devices that can interact are predetermined and the range of possible interactions is limited. We are interested in the design issues that arise when considering multiple computing devices, each of which is capable of carrying out a broad range of interactive applications, which might be combined in many different ways.
Our group has recently developed a prototype of a real estate information service that uses a personal digital assistant (PDA) and a television that is part of an interactive television (ITV) services network. A PDA is a hand-held, mobile computing device that has built-in infrared and serial communications capabilities and is designed to interact with other devices. PDAs are already used as remote controllers for televisions; however, the power of PDAs will be most evident when they go beyond the role of a controlling device and are used as a companion computing device. A likely future use for PDAs is to interact with cable television services and networked computer systems as such a companion device.
The primary goal of the PDA-ITV project was to investigate the design and use of this example of a multiple-device system. We hoped to support mobile users with the PDA, take advantage of the high-quality display capabilities of a television, and utilize the resources of a broadband information network. In this paper we describe the prototype and highlight several of the design challenges that arose.
Design constraints are more fluid in a multiple-device system for several reasons. First, the devices differ in their input and output characteristics, and different combinations of devices have novel input-output characteristics. Second, the user may accomplish different parts of a task using different devices, and task characteristics can change depending on the particular device combination that is available. Third, input and output are confounded: what is displayed on one device (e.g. a graphic on the PDA touchscreen) serves as the locus of input that controls another (e.g. the television).
We identified several general questions that arise in the context of designing for multiple-device systems; our initial answers to them are reflected in the design principles discussed later in the paper.
The prototype real estate information system was developed using data from interviews and participatory design sessions with potential users. The requirements gathering, design sessions, and development activities were all influenced by the fact that our system utilized more than one device.
The application architecture is illustrated in Figure 1. A user interacts with a PDA using the touchscreen. A PDA communicates with an ITV service through a television settop box. Communication between the PDA and the settop box uses wireless infrared technology. The settop box communicates with a server through a cable system. Communication is bi-directional between the device pairs. This enables complex user-system interactions. For example, a user can perform an action on the PDA which is received by the settop box, which then downloads data from the server, which can then be displayed on the television and/or transmitted back to the PDA.
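To make this flow concrete, the sketch below (written in present-day Python purely for illustration) traces one round trip through hypothetical classes and message names that are not taken from the prototype: a selection on the PDA is relayed by the settop box to the server, the resulting image is shown on the television, and the textual details are returned to the PDA.

```python
# Illustrative sketch of the bi-directional PDA / settop box / server flow.
# All class names and message fields are hypothetical.

class Server:
    """Head-end server reached over the cable network."""
    def __init__(self, listings):
        self.listings = listings          # house_id -> {"photo": ..., "details": ...}

    def fetch(self, house_id):
        return self.listings[house_id]


class Television:
    """High-resolution display driven by the settop box."""
    def show(self, image):
        print(f"TV displays: {image}")


class SettopBox:
    """Relays infrared messages from the PDA to the server and back."""
    def __init__(self, server, television):
        self.server = server
        self.television = television

    def handle(self, message):
        # A PDA selection triggers a download from the server; the image
        # goes to the television and the text goes back to the PDA.
        if message["type"] == "select_house":
            record = self.server.fetch(message["house_id"])
            self.television.show(record["photo"])
            return {"type": "house_details", "text": record["details"]}


class PDA:
    """Hand-held device; sends selections over infrared, shows text replies."""
    def __init__(self, settop_box):
        self.settop_box = settop_box

    def tap_house(self, house_id):
        reply = self.settop_box.handle({"type": "select_house", "house_id": house_id})
        print(f"PDA displays: {reply['text']}")


server = Server({7: {"photo": "front view of house 7", "details": "3 bed, 2 bath"}})
pda = PDA(SettopBox(server, Television()))
pda.tap_house(7)
```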
We refer to this situation as a dual-device system since only two devices are "visible" to the user: the PDA and the television. In the remainder of this paper we discuss only the behavior of these two devices.
Task Structure | Search for a set of homes | Serial access to homes | Individual home browsing
PDA Screens | (search phase not yet implemented) | House Selection Screen | House Information Screen
Table 1. The task structure derived from user studies, and PDA screens developed for two of the tasks.
We identified a task structure composed of three phases: searching for candidate homes, serially accessing homes for inspection, and opportunistically exploring the information available about a particular home (top row of Table 1). Although the phases must occur in this order, an earlier phase can always be returned to directly from a later phase. We assumed that our overall design task was to support this task structure.
Participants began by giving a general description of how they wanted the system to look and behave. Next they specified the design by creating a house shopping scenario. Participants created representations of their input controls and output displays on two pads of paper, one representing the television screen and the other the PDA. Successive pages showed changes in appearance.
All participants placed input controls on the PDA. They all utilized the touchscreen capabilities and created maps, icons, buttons, and other controls for the PDA. Even when menus were displayed on the television screen, a companion representation was used on the PDA for selection.
The participants clearly expected the information on the television to be coordinated with the information on the PDA. If a selection on the PDA resulted in the display of information about a particular house on the television, for example, the PDA touchscreen would also be updated. This is quite different from a traditional remote control, which does not change its characteristics in response to what it is controlling. We adopted several of the participants' ideas for coordinating the PDA and television in the prototype.
Our design of the prototype interface had three aims: 1) to support users' task goals, 2) to exploit users' major design ideas, and 3) to explore the design constraints of interacting devices. Participants suggested using the touchscreen to display informative graphics such as a map or floor plan. Direct manipulation of the graphic displayed on the touchscreen was used to control the television display. From this suggestion, we derived two of our design themes: 1) coordinated graphic displays on the PDA touchscreen and television screen, and 2) direct manipulation on the PDA touchscreen.
We decided that it was appropriate to have a separate PDA screen for each phase of the task analysis. The prototype supports the house selection and house information browsing components of the task analysis (Table 1 and Figure 2). We have not yet implemented the initial search phase. Consistent with the users' practice of moving between phases, navigation buttons allow users to go back to any previous screen.
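As an illustration only (the prototype's actual implementation is not described here), the following sketch models one PDA screen per task phase, with navigation that can jump directly back to any earlier phase; the phase identifiers are assumptions.

```python
# Sketch of one PDA screen per task phase, with back-navigation to any
# earlier phase. Phase names are illustrative, not the prototype's.

PHASES = ["search", "house_selection", "house_information"]  # ordered task phases

class PDANavigator:
    def __init__(self):
        self.current = PHASES[0]

    def advance(self):
        """Move to the next phase in task order."""
        index = PHASES.index(self.current)
        if index < len(PHASES) - 1:
            self.current = PHASES[index + 1]

    def go_back_to(self, phase):
        """Navigation buttons may jump directly to any earlier phase."""
        if PHASES.index(phase) <= PHASES.index(self.current):
            self.current = phase

nav = PDANavigator()
nav.advance()                 # search -> house_selection
nav.advance()                 # house_selection -> house_information
nav.go_back_to("search")      # jump straight back to the initial search
print(nav.current)            # -> "search"
```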
During the house inspection phase, users wanted a variety of types of information in several categories. They also wanted to control the display of information from the PDA. We supported this by creating buttons for six types of house information on the House Information PDA screen. The buttons can be selected in any order to browse the information.
In the following sections, the House Selection Screen and the House Information Screen are described in detail.
From the House Selection Screen, the user may return to the earlier search phase by pressing the "Search" button at the bottom of the screen. If the user wishes to see details about a house, he or she taps on the appropriate house on the PDA map.
The House Information Screen contains six house information buttons at the top and an information box in the center. The six information buttons provide access to various categories of information about the selected house. This information appears in the information box and may have accompanying information on the television screen.
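The behavior of this screen can be sketched as follows. The six category names and their contents are invented for illustration (only "Maps", "Floorplan", and "Realtor" are named elsewhere in the paper), and the code is not the prototype's: each button fills the PDA information box and, for some categories, also pushes companion material to the television.

```python
# Illustrative sketch of the House Information Screen's behavior.
# Category names and contents are assumptions, not the prototype's data.

HOUSE_INFO = {
    # category -> (PDA information-box text, optional television content)
    "Description": ("3 bed, 2 bath ranch", None),
    "Photos":      ("Exterior and interior photos", "photo of the house"),
    "Video":       ("Video walkthrough available", "video walkthrough"),
    "Maps":        ("Schematic neighborhood map", "detailed neighborhood map"),
    "Floorplan":   ("Interactive floorplan", "room-by-room view"),
    "Realtor":     ("Realtor contact information", None),
}

class Television:
    def show(self, content):
        print(f"TV displays: {content}")

class HouseInformationScreen:
    def __init__(self, television):
        self.television = television
        self.info_box = ""

    def press(self, button):
        text, tv_content = HOUSE_INFO[button]
        self.info_box = text                 # always update the PDA information box
        if tv_content is not None:           # some categories also use the television
            self.television.show(tv_content)

screen = HouseInformationScreen(Television())
screen.press("Video")     # info box text plus walkthrough on the television
screen.press("Realtor")   # PDA-only information
```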
If a television is not available, the floorplan presentation on the PDA is still useful as a stand-alone application. Users could navigate through a house and learn details about each room by taking the PDA with them on a real walkthrough, for example.
Without a television, the maps can be used to navigate to different locations or simply learn about the neighborhood. With the television, the user gains a visual feel for the neighborhood and can gather considerable information about each of the establishments.
The buyer could glance at the houses on the television and decide whether to begin a new search (if all of the houses looked like the wrong style, for example) or inspect some in detail. If a new search were desired, the "Search" button at the bottom of the PDA screen would be available.
The user may next wish to look at the details for a small number of candidate houses. By touching a house with the stylus, the user is taken directly to a description of the house on the House Information Screen and is shown a close-up of the house on the television (right portion of Figure 2). By moving back and forth between these screens, the buyer simulates browsing the advertising section of a newspaper or real estate booklet.
Once a few candidates have been selected for serious consideration, the buyer might next begin examining the maps (Figure 4) to see where the candidate houses are located. Houses far from neighborhood schools or in undesirable locations could be rejected. After narrowing the search to a very few houses, the buyer might take video walkthroughs of each one (Figure 3). Finally, the buyer would use the "Realtor" button to get contact information about the realtor.
On a visit to the location, the buyer and realtor could take the PDA with them and use the "Maps" screen to navigate to the locations of candidate houses. Once at a house, the "Floorplan" screen would be used to find a house's rooms.
This scenario shows that a prospective buyer could learn a considerable amount about a home and a neighborhood before visiting the new location. The buyer could come to the realtor with several pre-selections. The scenario also shows the use of the system both in a dual-device situation and as a mobile, stand-alone PDA application (using data downloaded through the cable television system). In addition to the advantages of current online real estate systems and interactive home finding services [1,9], the PDA-ITV system enables mobility, multi-user information sharing, greater presentation control, and the ability to use the service in the multiple contexts of house selection.
Information devices differ in their strengths. For example, a television is appropriate for pictures, videos, and audio output. PDAs are appropriate for display of text and some graphics such as simple maps or icons. An important part of our user requirements gathering was to capture information on which devices users might want to use for different information. We then placed graphical and schematic information on the PDA and detailed images and videos on the television (cf. [2]).
Value is added if applications on two (or more) devices are cooperative and complementary to each other. For example, a map on a PDA and pictures of houses on a television are useful in themselves. However, if the map on the PDA can be used to navigate through house pictures, then the combined application is more useful than either stand-alone application.
An analysis of materials currently used in real estate sales showed several types of information. The most basic information about real estate is textual, consisting of descriptions of houses' features and realtors' contact data. Pictures are very important and are typically the next step beyond text in house advertisements. Real estate offices also provide house pictures. Videos are also sometimes available. Later in the buying process, maps become important. Thus the nature of the information strongly biased where we wanted to place it. High resolution image information went on the television while textual and schematic information went on the PDA. In the case of the maps, an interactive schematic map with main streets only was displayed on the PDA, while a more detailed map or neighborhood scene was displayed on the television.
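A minimal sketch of this placement rule, assuming illustrative category names rather than the prototype's actual data model:

```python
# Hedged sketch of the placement rule: high-resolution imagery goes to the
# television, textual and schematic material to the PDA. Category names
# are illustrative only.

DEVICE_FOR = {
    "photo": "television",
    "video_walkthrough": "television",
    "detailed_map": "television",
    "description_text": "pda",
    "realtor_contact": "pda",
    "schematic_map": "pda",
    "floorplan": "pda",
}

def route(item_category):
    """Return which device should present a given kind of information."""
    return DEVICE_FOR.get(item_category, "pda")   # default to the hand-held

print(route("video_walkthrough"))   # -> "television"
print(route("schematic_map"))       # -> "pda"
```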
Users need different types of information for different tasks. For example, we discovered that real estate buyers wish to perform an initial search without much effort. They are happy to view pictures and read descriptions in one location, at home or in a realtor's office for example. Thus we could provide information relevant to this activity on the television. Map and neighborhood information, on the other hand, is desired once buyers begin traveling to homes. Thus, this information must be represented on the mobile PDA.
All devices should be dealing with the same task, and a device should never be left in an old state. For example, a user may view a house on the television screen but then begin to read the description for a new house on the PDA. If a picture of the old house remains on the television there is potential for confusion.
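One way to satisfy this rule, sketched here with hypothetical names rather than the prototype's mechanism, is to route every change of the current selection through a single shared session that updates both displays, so neither device is left showing a stale house.

```python
# Sketch of cross-device state coordination: every selection change is
# broadcast to both displays. Names are hypothetical.

class Display:
    def __init__(self, name):
        self.name = name
        self.showing = None

    def update(self, house_id):
        self.showing = house_id
        print(f"{self.name} now shows house {house_id}")

class SharedSession:
    """Single source of truth for the task state across devices."""
    def __init__(self, *displays):
        self.displays = displays

    def select_house(self, house_id):
        for display in self.displays:       # broadcast, never update just one device
            display.update(house_id)

session = SharedSession(Display("PDA"), Display("Television"))
session.select_house(12)   # reading about house 12 on the PDA...
session.select_house(31)   # ...switching houses refreshes the television as well
```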
Information about the same thing can be presented in more than one way. For example, information about a room can be conveyed in a picture, a text description, and a floorplan graphic. It is often desirable to display different representations simultaneously on different devices. Multiple representations allow systems to take advantage of the ability of users to manipulate one type of representation and see changes in another. For example, navigation through a house video is controlled from a schematic floorplan displayed on the PDA.
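The floorplan-to-video coupling can be sketched as follows; the room names and timecodes are invented for the example, and the prototype's actual mechanism may differ.

```python
# Illustrative coupling of representations: tapping a room on the PDA
# floorplan seeks the television's video walkthrough to that room's segment.

VIDEO_SEGMENTS = {          # room -> (start_seconds, end_seconds) in the walkthrough
    "kitchen": (0, 45),
    "living_room": (45, 120),
    "master_bedroom": (120, 180),
}

class WalkthroughVideo:
    def seek(self, seconds):
        print(f"Television video jumps to {seconds}s")

class FloorplanScreen:
    """Schematic floorplan on the PDA touchscreen."""
    def __init__(self, video):
        self.video = video

    def tap_room(self, room):
        start, _end = VIDEO_SEGMENTS[room]
        self.video.seek(start)              # one representation drives the other

floorplan = FloorplanScreen(WalkthroughVideo())
floorplan.tap_room("master_bedroom")        # -> video jumps to 120s
```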
All control components and text information were placed on the hand-held device. Enough information was included on the PDA to enable users to perform some aspects of all the subtasks. Information on the television enhanced the ability of users to do the tasks, but it was not required.
PDA-augmented systems could support many different kinds of activities. For example, it is easy to imagine that a user might want to contact a realtor through the PDA (a communications application), use the PDA to calculate mortgage payments or apply for a loan analysis (financial services applications), or get instructions for how to get to an address from a current location (a transportation application that might even use global positioning data). In fact, future computing applications will involve a plethora of specialized devices.
While our initial prototype focused on the home buyer as the primary user, several other potential users would be involved in the use and support of a real system. House sellers might be interested in listing their homes and would need to arrange for the appropriate data (e.g. video) to be gathered and published. Real estate agents would become involved both in using the system on behalf of their buyers and in arranging for sellers to publish their house information. Information providers would become involved in the storage, maintenance, and delivery of the material.
Multiple-device applications complicate the problem of matching the capabilities of devices to users' tasks [5,7]. They also greatly complicate any attempt to produce a taxonomy of devices [3,6] since their usage patterns change through time and vary depending on their combination (e.g. is exploring a floorplan on a stand-alone PDA different from using the same PDA application to control a video walkthrough?). There are many new challenges and issues for user interface designers when they begin to think about multiple device environments. Our goal in this paper was to begin exploring some of these issues in the context of a dual-device system and to identify some principles for multiple-device user interface design.
The authors would like to acknowledge the contributions of our colleagues: Mike King, Monica Marics, Michael Muller, Carrie Rudman, Patricia Somers, Lynn Streeter, and Scott Wolff; and a visiting colleague: Scott Hudson.
2. Brown, E., Buxton, W., & Murtagh, K. Windows on tablets as a means of achieving virtual input devices. Proceedings of INTERACT-90: Third IFIP Conference on Human-Computer Interaction, (1990), Amsterdam: Elsevier Science Publishers, 675-681.
3. Card, S., Mackinlay, J., & Robertson, G. A morphological analysis of the design space of input devices. ACM Transactions on Information Systems, 9, (1991), 99-122.
4. Fitzmaurice, G. Situated information spaces and spatially aware palmtop computers. Communications of the ACM, 36, 7, (July 1993), 39-49.
5. Jacob, R., Sibert, L., McFarlane, D., & Mullen M. Integrality and separability of input devices. ACM Transactions on Computer-Human Interaction, 1, (1994), 3-26.
6. Mackinlay, J., Card, S., & Robertson, G. A semantic analysis of the design space of input devices. Human-Computer Interaction, 5, (1990), 145-190.
7. Sears, A., & Shneiderman, B. High precision touchscreens: Design strategies and comparisons with a mouse. International Journal of Man-Machine Studies, 34, (1991), 593-613.
8. Weiser, M. Some computer science issues in ubiquitous computing. Communications of the ACM, 36, 7, (July 1993), 75-85.
9. Williamson, C., & Shneiderman, B. The Dynamic HomeFinder: Evaluating dynamic queries in a real-estate information exploration system. Proceedings of SIGIR Conference, ACM, (1992), 339-346.