Interactive Public Displays

Making Public Displays Interactive Everywhere
Sebastian Boring and Dominikus Baur ■ University of Calgary

A proposed tracking technology has led to several prototype applications that employ mobile devices to interact with large public displays. In turn, these prototypes have led to an overarching interaction concept that allows for public deployment regardless of the space's characteristics (for example, layout and technologies).

Large public displays are becoming increasingly common. Because they're digital, their visible content can be freely adjusted, and the underlying computing architecture enables practically unrestricted flexibility. Despite their capabilities, they haven't moved far beyond advertising posters: their content is controlled by either their owners and providers or the companies showing passive (but sometimes animated) ads on them. So, they're mostly noninteractive information broadcasters with severely limited possibilities.

Such displays could provide content that's more interactive. For example, they could lend display space "on the go" or engage and entertain passersby. However, their public nature and individual capabilities and characteristics, such as location and size, lead to various challenges. These challenges include different input capabilities and viewing distances.

One common, promising way to interact with large public displays is through mobile devices. Because these devices are readily at hand, they can become personal devices for manipulating the displays' content (for example, through multitouch). Most important, this approach allows for interaction at a distance—a situation that becomes more necessary as display sizes increase. To empower designers to develop ways to interact in a broad range of settings, we've developed a conceptual framework and technical implementation that rely solely on the public displays and users' mobile devices.

We based this framework on our previous prototypes,1–3 each of which explored one aspect of using mobile devices to interact with distant displays (see Figure 1). This framework will let us transform the existing implementations into a unified toolkit that allows rapid prototyping of such interactions. Instead of using (possibly) more accurate tracking technologies that must be preinstalled wherever the interaction should take place, our implementation allows for instantaneous deployment in a multitude of environments and scenarios without altering the surroundings.

Designing for Universal Use
We believe that, to allow for unified use in different environments, designers must take into account the available technologies and how people behave in those environments. For some practical tips related to these issues, see the sidebar.

The Available Technologies
Mobile-based interaction techniques for large displays4 usually require the system to know the spatial relationship between the devices and displays. Accomplishing this is easy when you use sophisticated tracking technologies5 in controlled environments. However, generalizing such interactions to most public spaces containing one or more large displays is cumbersome or even impossible. So, using external (from the public space's perspective, artificial) hardware might be sufficient to test a concept but often doesn't allow for true public use in various environments. The only technology a designer can assume to be preinstalled in the environment is the large display. However, as we mentioned earlier, different types of displays might have different capabilities.6

Figure 1. Projection as a central metaphor. Users aim their mobile device at a large public display. (a) If the device is showing its own content, it projects that content onto the display. (b) If the device is showing live video from its camera, it projects touch input onto the display.

Additionally, the displays' users have various mobile devices with differing capabilities. Designers usually can overcome this issue only when deploying a prototype in and for one setting: they already know the display's size and resolution, and they can distribute selected mobile devices to people who want to interact with the display.

How People Behave
When developing for one space, designers can predetermine the display's optimal viewing distance. However, in a situation involving multiple public spaces and displays, each space and display might have different optimal viewing distances and positions.7 This requires both identifying the display being used and deriving a mapping between mobile devices and the displays. In addition, people might move around the environment. For example, if people move in front of someone interacting with the display, that person might change location to continue the interaction. Also, people might be interrupted (for example, passersby might talk to them) and thus might break out of a sequential interaction process. Finally, several people might want to interact together if the display size allows for this. So, estimating the ever-changing nature of even one public space is difficult.8

A Universal Approach to Tracking Arbitrary Displays
To deal with the issues we just discussed, we first had to create a way to identify where a person (and his or her mobile device) is with respect to one or more displays in the environment.

Deploying Mobile Interaction on Large Displays

If you intend to deploy interactive systems and interaction techniques in several different spaces, you must take into account those spaces' unique characteristics. Here are two practical, basic tips.

Use Existing Hardware
Interaction techniques should use the hardware in the environment—in our case, large displays and mobile devices—to avoid the deployment of external hardware. So, identification of the spatial relationship between a mobile device and the displays should work without modifying any of the involved devices.

Consider Everyone
People are mobile; they'll move around and change their distance from or orientation to a display they're interacting with. They also might want to interact together and join or leave the display at any time. An interaction technique should work regardless of the physical factors and must be interruptible.

To allow for the broad mass of people to interact with our system, we used the mobile device's camera as the main tracking source because most mobile devices have one. Previous research has also looked at this but relied mostly on fiduciary markers.4 Although these markers provide accurate results, they're limited. They're either placed around a display, thus altering the physical space, or shown digitally on the display, thus consuming valuable screen space. The mobile camera must also see them at all times, limiting mobility as people interact with a display.9 Finally, they usually aren't aesthetically appealing and might alienate people.

Figure 2. Tracking based on a display’s content. The software calculates the image features of both the display and each video frame of a mobile device. It then matches these features (the red lines) to derive the spatial relationship between the display and device.

Real-Time Markerless Tracking
An alternative is to use the entire display as a visual marker (see Figure 2). Leigh Herbert and his colleagues first described this approach.10 Their system uses Speeded-Up Robust Features, an algorithm that tries to find a template image within a captured image.11 We use Fast Retina Keypoint (Freak), a similar algorithm that performs better, for real-time tracking.12 First, Freak extracts feature points from the template image (here, the display's content) and the captured image (here, a frame of the mobile device's live video). It then filters the points from both images and matches them by trying to find points that correspond well with respect to their features. Finally, it uses the matched points to calculate a transformation matrix from one image plane to the other. With this matrix, the system can transform every point (given in x and y pixel coordinates) from one image's coordinate system to the other. Because this approach is based solely on image features, the camera doesn't need to see the display fully in its viewfinder. Instead, a subregion is sufficient, provided that it contains enough features for successful matching. In addition, knowing the display's physical dimensions lets us estimate the device's distance from and orientation to it—giving us six degrees of freedom.
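To make the matching step concrete, here's a minimal sketch of this kind of markerless tracking in Python with OpenCV. It isn't the authors' implementation: it assumes the opencv-contrib build (which provides the FREAK descriptor), pairs FREAK with a FAST keypoint detector, and estimates a homography between a screenshot of the display's content and a camera frame.

```python
# Sketch only (not the authors' code): markerless display tracking via binary feature
# matching and a RANSAC-estimated homography. Requires opencv-contrib-python for FREAK.
import cv2
import numpy as np

detector = cv2.FastFeatureDetector_create()            # keypoint detector
freak = cv2.xfeatures2d.FREAK_create()                  # FREAK binary descriptor
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(image_gray):
    """Detect keypoints and compute FREAK descriptors for one grayscale image."""
    keypoints = detector.detect(image_gray, None)
    return freak.compute(image_gray, keypoints)          # returns (keypoints, descriptors)

def estimate_homography(screenshot_gray, frame_gray, min_matches=12):
    """Estimate the 3x3 matrix mapping display (screenshot) pixels to camera-frame pixels.
    Returns (homography, RANSAC inlier count), or (None, 0) if matching fails."""
    kp_s, des_s = describe(screenshot_gray)
    kp_f, des_f = describe(frame_gray)
    if des_s is None or des_f is None:
        return None, 0
    matches = matcher.match(des_s, des_f)
    if len(matches) < min_matches:                       # only a subregion must be visible,
        return None, 0                                   # but it needs enough features
    src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, (int(mask.sum()) if mask is not None else 0)
```

Given this matrix (and the display's known physical size, as noted above), the device's distance and orientation can then be estimated.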

We extended Herbert and his colleagues' approach with our Virtual Projection1 technique to allow for dynamic content. Instead of comparing each video frame's features with the display features taken at application launch, the display periodically captures its content (that is, takes a screenshot) and calculates the captured image's features. The intervals used for capturing new image content naturally depend on the content and its update rate. For example, showing a movie requires faster update rates than showing static images. However, we found that updating the image features between 10 and 20 times a second suffices for a variety of content.

Because feature detectors need high computational resources, the system works as follows. First, each connected mobile device streams its video (frame by frame) to the display, using a wireless network. Next, the display calculates the transformation matrix. Finally, using the matrix, the system notifies all mobile devices of their correct spatial relationship.

A problem that arises from transmitting video frames wirelessly to a display is that—owing to changing image features—a temporal mismatch likely exists between the current content on the display and the most recent camera frame. Even slight changes between the content and a frame might cause severe calculation errors in the transformation matrix and spatial relationship. To compensate for this delay, the display temporarily stores captured screenshots and corresponding features in a queue. Once a live video frame from a mobile device's camera arrives for processing, the display selects the screenshot closest to the received frame through time stamps of both images. To do so, the mobile device and display synchronize their local clocks periodically. Ultimately, this approach minimizes the temporal and visual offset between a live video frame and the content that was shown at the time of capture. The resulting system allows for real-time, markerless tracking of one display with continuously updated content.1
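The delay compensation can be sketched as a lookup over a timestamped queue of cached screenshots; the class below is illustrative only, and the queue length is an assumed parameter rather than a value from the article.

```python
# Sketch: select the cached screenshot captured closest in time to an incoming camera
# frame. Clock synchronization between device and display is assumed to happen elsewhere.
from collections import deque

class ScreenshotCache:
    def __init__(self, max_items=32):                    # assumed queue length
        self.items = deque(maxlen=max_items)              # (timestamp, features) pairs

    def push(self, timestamp, features):
        """Store the precomputed features of a freshly captured screenshot."""
        self.items.append((timestamp, features))

    def closest_to(self, frame_timestamp):
        """Return the features whose capture time best matches the frame's timestamp."""
        if not self.items:
            return None
        return min(self.items, key=lambda entry: abs(entry[0] - frame_timestamp))[1]
```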

Scaling to Multiple Displays
The tracking we just described assumes one target display. However, this often isn't the case in public settings (for example, several displays in a subway station); sending frames to a single display won't suffice. To overcome this, our system adds a server component to the architecture (in the same way as we described in previous research2). The server runs on either a dedicated machine or one of the computers driving a display. All the displays and mobile devices communicate through this server. The displays send their screenshots to the server, which then computes the image features. The server handles incoming frames, matches them with all known displays, and responds to the mobile clients.
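As a rough sketch of the server's matching step (message transport, registration, and threshold values are our assumptions, not the authors'), the server can hold each display's latest screenshot and answer an incoming frame with the best-matching display, reusing the estimate_homography helper from the earlier sketch:

```python
# Sketch: compare an incoming camera frame against every registered display's cached
# screenshot and answer with the best match, ranked by RANSAC inlier count.
def best_display_for_frame(frame_gray, screenshots, min_inliers=15):
    """screenshots: dict mapping display_id -> latest grayscale screenshot."""
    best_id, best_H, best_score = None, None, 0
    for display_id, shot in screenshots.items():
        H, inliers = estimate_homography(shot, frame_gray)   # helper from the earlier sketch
        if H is not None and inliers >= min_inliers and inliers > best_score:
            best_id, best_H, best_score = display_id, H, inliers
    return (best_id, best_H) if best_id is not None else None
```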

Although this approach works well for a handful of displays, it doesn't scale up for scenarios with many displays (for example, New York's Times Square). More connected displays require the server to compare more screenshots to incoming frames, which dramatically decreases the detection speed and thus harms real-time interaction. However, with the increasing accuracy of GPS as well as built-in compasses and accelerometers in mobile devices, we can minimize this processing load drastically. Each video frame also contains the device's
■ rough location in the space (through GPS),
■ direction (through the compass), and
■ local orientation (through the accelerometers).

You might argue that this alone suffices to track the device. However, it isn't accurate enough to achieve pixel-exact tracking. Nevertheless, this sensor data lets the system exclude displays that the mobile device definitely wasn't pointing at when it captured the live video frame. The server then has to compare the video frame to only a small number of displays.
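One plausible way to apply this sensor data, sketched below with assumed thresholds and a simplified display model (a single point location per display; this is not the authors' actual filter), is to discard any display that lies well outside the device's compass heading or beyond a plausible viewing distance before doing any image matching:

```python
# Sketch: prune candidate displays by the device's rough GPS position and compass
# heading before any (expensive) feature matching. Thresholds are assumptions.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Rough planar distance in meters; adequate at city-block scale."""
    meters_per_deg = 111_320.0
    dx = (lon2 - lon1) * meters_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * meters_per_deg
    return math.hypot(dx, dy)

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate compass bearing from the device to a display, in degrees."""
    d_lon = math.radians(lon2 - lon1)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    x = math.sin(d_lon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(d_lon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def candidate_displays(device, displays, max_angle=60.0, max_distance=200.0):
    """device: dict with 'lat', 'lon', 'heading'. displays: list of dicts with
    'id', 'lat', 'lon'. Returns the ids of displays the device could plausibly face."""
    result = []
    for d in displays:
        angle_off = abs((bearing_deg(device['lat'], device['lon'], d['lat'], d['lon'])
                         - device['heading'] + 180.0) % 360.0 - 180.0)
        if angle_off <= max_angle and \
           distance_m(device['lat'], device['lon'], d['lat'], d['lon']) <= max_distance:
            result.append(d['id'])
    return result
```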

Performance and Limitations
Our current implementation is for the iPhone 4, 4S, and 5 (with iOS 6 support) but potentially runs on any mobile device with a camera. In normal operation, the system uses 320 × 240-pixel images from the camera, which are sufficient for fast, accurate tracking. The frames can be scaled down if faster transmission over the air is required. With one display, our system achieves approximately 22 fps with a response time of roughly 50 ms on a machine with a 3.4-GHz Intel i7 processor. Naturally, these results decrease as the number of displays increases. However, as we described earlier, if mobile devices augment the image with positioning data, our system can keep the number of necessary comparisons relatively low. Furthermore, as stationary and mobile computers' processing speed increases and transmission mechanisms get faster, the processing times will further decrease.

We encounter the typical limitations related to optical tracking. First, a feature-based approach requires displays to contain content with rich features. Although this is in practice rarely an issue, uniformly colored backgrounds challenge our approach. Second, depending on a display's resolution, the rate for capturing screenshots might be slower than the display's update rate. For example, 15 screenshots per second isn't enough for a fast-changing video (25 fps). Third, current mobile-device cameras have wide-angle lenses, limiting the operational range. If a person stands too far from a display, the captured video frames might not contain enough features to allow for comparison. However, we can bypass this by transmitting the frames in full size, without scaling them down first. Nevertheless, future improvements to mobile devices should eliminate these limitations.

Applications of Our Tracking
Using our tracking algorithm, we built several prototype applications to better understand and investigate new interaction concepts for mobile devices and large displays. Here, we present two of the applications.

Virtual Projection
With Virtual Projection, shaking the mobile device opens the view manager, in which users create a view or switch to an existing one. Users can interact with the view directly on the device. When users aim the device at a display and hold their fingertip on the view, the application projects the view onto the display. The area covered by the mobile device constitutes the device's reference frame (viewing frustum). Releasing the finger fixes the projection at its current location. We created several examples that use different variations of this interaction sequence. We compared Virtual Projection to relative input techniques (touchpads) and absolute input techniques (minimaps) in a series of target alignment trials.1 It compared well (although it wasn't necessarily better).

The obvious use of Virtual Projection is to clone the mobile device's entire view. However, the ability to decouple a projection from the device allows for subsequent interactions on the device (which are then updated on the projection). For example, users can create a Post-It-like note, write on it, and project it on the display (see Figure 3a). At any given time, users can change the message; the change is then reflected on the display.

Figure 3. Projecting a mobile device’s content onto a large display. The display can show (a) exactly what’s on the mobile device’s screen or (b) an entire image while the device shows a portion of it.

Figure 4. The mobile device’s viewing frustum (reference frame) can serve as input to either (a) the mobile device in a “magic lens”-like fashion or (b) another projection already on the display. In the latter case, the user is applying a filter to a photograph on the display.

Sometimes, the mobile device's screen might show only part of an image. Virtual Projection can show the entire image, making the projection much larger than the view on the device (see Figure 3b). A thin white border on the display denotes the subregion visible on the device. When users navigate the image, Virtual Projection applies those changes to the projection.

The reference frame can also be used for input to the device's view. For example, to quickly select a subregion of a photograph on the display, users aim the device at the subregion and tap the device's viewfinder. The application then updates both views accordingly. Another example uses this input style in a continuous fashion. Users can "hover" their mobile device over a map on the display, which updates the device's view in real time (see Figure 4a).

Moving the device sideways changes the location; moving closer to or farther from the display zooms the view. Likewise, users can employ the reference frame to change the projection's appearance. For instance, users can apply filters to photographs on the display (see Figure 4b). Filters affect all projections that allow filtering. The filter settings (for example, grayscale or invert) are adjustable on the device's viewfinder. While the device is projecting the filter, moving the device changes the outcome accordingly in real time.

Touchable Facades
To investigate how our tracking system would fare in a true public setting, we applied our Touch Projector2 approach to the Ars Electronica Center in Linz, Austria.

Figure 5. Projecting input onto a large public display. (a) The media facade of the Ars Electronica Center in Linz, Austria, was the setting. (b) Using our Touchable Facades application, multiple people could simultaneously change the facade’s colors by touching the video feed on their mobile device’s viewfinder.

This building's facade hosts approximately 40,000 LEDs embedded in 1,087 addressable windows (see Figure 5a). Our Touchable Facades application let multiple people change the facade's color simultaneously. Users could aim at the facade and "touch" it in live video shown in their device's viewfinder (see Figure 5b).3

We presented Touchable Facades to a broad audience during the Ars Electronica Festival—a digital-arts festival held every September. Although we wanted to have many people interacting simultaneously, the restrictions of Apple's App Store in 2010 didn't allow for uploading applications that directly use the raw camera frames. So, we had to hand out three of our devices to passersby during the festival. No instructions were necessary; people learned how to use the application simply by observing others doing so.

Having multiple people interacting simultaneously presented challenges not present when testing with one person only. Most important, users were at different distances from the facade. Users farther from the facade had less accurate input because the facade was smaller in the mobile device's viewfinder. To adjust for this, we used Touch Projector's zoom feature. If the building's representation in the viewfinder was too small (the user was farther away than expected), the application zoomed in until the building fit in the live video. If the building was too large (the user was closer than expected), the application did nothing. That's because we didn't foresee any problems with higher accuracy. Nevertheless, for most users, this approach ensured a constant control-display ratio independent of their distance.
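The zoom adjustment can be thought of as scaling the live video until the facade's footprint in the viewfinder reaches a target fraction of the screen. The sketch below captures that idea; the function name, target fraction, and maximum zoom are illustrative assumptions rather than Touch Projector's actual parameters.

```python
# Sketch: pick a digital zoom factor so the facade fills roughly the same fraction of
# the viewfinder regardless of the user's distance. Names and thresholds are assumptions.
def zoom_factor(display_quad_px, viewfinder_size, target_fill=0.9, max_zoom=4.0):
    """display_quad_px: the display's four projected corners in viewfinder pixels.
    viewfinder_size: (width, height) of the viewfinder in pixels."""
    xs = [p[0] for p in display_quad_px]
    ys = [p[1] for p in display_quad_px]
    quad_w, quad_h = max(xs) - min(xs), max(ys) - min(ys)
    view_w, view_h = viewfinder_size
    if quad_w <= 0 or quad_h <= 0:
        return 1.0
    needed = min(target_fill * view_w / quad_w, target_fill * view_h / quad_h)
    # Zoom in only when the display appears too small; if it already fills the view
    # (the user is close), do nothing, as in the deployment described above.
    return max(1.0, min(needed, max_zoom))
```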

In addition, they could open a tool palette on demand. Although Touch Projector allowed for such controls on the facade, we placed them on the mobile device to keep the drawing canvas as large as possible.

The feedback we gathered was generally positive; interviewees confirmed our assumption that such a system is easy to learn and use. (A more detailed description of the evaluation results appears elsewhere.3) Several interviewees considered the ability to change the appearance of the facade (and thus partly of the city) in real time as a form of digital graffiti. However, some participants became frustrated. This was due mostly to the parallel nature of our application. One interviewee said, "It is good to interact in a parallel way if you know the person. But if you don't know the person, you are kind of fighting over pixels and space to draw. It's kind of annoying."3 On the other hand, two participants used such interaction to create a strobe-like effect by alternating the facade's fill color between black and white. So, our application supported both collaborative and competitive interactions, and the public deployment uncovered interesting aspects of the application.

Generalizing Interactions: Projection as Metaphor
Each of these applications employed similar interaction styles. To initialize interaction between a mobile device and a display and to identify the spatial relationship between the device and display, users simply point their device's camera at the display. Once the device has detected the display, the device covers the area on the display defined by the device's frustum.

Figure 6. The user’s movement can result in distortions caused by (a) nonperpendicular projections and (b) a pixel mismatch between the mobile device and the large display. We can remove distortions to best suit the application: (c) using only the rotation, scale, and translation components; (d) using only scale and translation; (e) using only translation; and (f) selecting only the target display to show content in a full-screen fashion.

As we mentioned earlier, users employ this reference frame to project either content or touch input. Three aspects turn this metaphor into a powerful interaction technique. First, everyone understands the concept of optical projection. Second, the applications behave as users expect because the reference frame is tightly coupled to the mobile device's motion. Finally, the applications work at arbitrary distances and orientations, although both influence the results.

The Mobile Device Defines the Interaction Area
The obvious application of this metaphor is to use the spatial relationship to mimic the behavior of optical projection (as in Virtual Projection). That is, the mobile device's contents are shown on the large display exactly within the region the mobile device covers. At the same time, the projection behaves predictably: moving the mobile device (and thus changing the covered area) changes the projected content's position accordingly.1 Also, just as with an optical projection, the distortion is mapped one-to-one, allowing for full control (rotation, scaling, and translation) of a projection.

Projection of touch input (as in Touchable Facades) also exploits the spatial relationship (and the transformation between the two displays). Showing the live video in the mobile device's viewfinder turns the device into a see-through display, which lets users see the target display when aiming the device at it. This see-through nature further preserves immediate feedback because users can observe their actions directly in the device's viewfinder. Now, users can "touch" content on the display by touching its representation in the viewfinder.2 Any touch on the device is projected onto the target display (see Figure 1). In both cases, the reference frame defines the interaction area—regardless of what's being projected onto the display.
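In terms of the tracking described earlier, projecting a touch (and computing the reference frame itself) reduces to transforming points between the two image planes with the estimated homography. Here's a minimal sketch with assumed variable names; it isn't taken from the authors' code.

```python
# Sketch: use the estimated homography to project a touch point onto the display and to
# compute the "reference frame" (the display region covered by the device's viewfinder).
import cv2
import numpy as np

def project_touch(touch_xy, H_display_to_frame):
    """touch_xy: (x, y) in camera-frame pixels; returns (x, y) in display pixels."""
    H_frame_to_display = np.linalg.inv(H_display_to_frame)
    point = np.float32([[touch_xy]])                     # shape (1, 1, 2), as OpenCV expects
    x, y = cv2.perspectiveTransform(point, H_frame_to_display)[0, 0]
    return float(x), float(y)

def reference_frame(viewfinder_size, H_display_to_frame):
    """Return the quadrilateral (in display pixels) covered by the device's viewfinder."""
    w, h = viewfinder_size
    corners = np.float32([[[0, 0]], [[w, 0]], [[w, h]], [[0, h]]])
    H_frame_to_display = np.linalg.inv(H_display_to_frame)
    return cv2.perspectiveTransform(corners, H_frame_to_display).reshape(-1, 2)
```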

Addressing User Mobility
User movement introduces certain complications that could interfere with the successful application of our projection metaphor.

Dealing with keystoning.
As users move, the normal vector (the vector pointing forward) of their mobile device's camera changes. When this vector isn't perpendicular to the target display, keystoning occurs.13 Keystoning makes a square or rectangular image look like a keystone shape—a trapezoid (see Figure 6a). For content projection, keystoning might result in heavily distorted output. For touch input, the larger the covered area is, the less accurate the touch input will be. This is because a pixel on the mobile device now translates to a larger physical area of the target display (see Figure 6b).

For content projection, we eliminate keystoning by considering only the projection's rotation, scale, and translation components (see Figure 6c). However, we can exclude more aspects of the transformation at will.

Figure 7. Manipulating the interaction area (reference frame). (a) Users can freely adjust the interaction area’s size. (b) They can also decouple the interaction area from the device.

Removing the rotation component creates projections whose scale depends on the mobile device's distance from the display but whose edges are always parallel to the display's boundaries (see Figure 6d). Removing the scale component by ignoring the mobile device's distance results in a fixed-size, rectangular projection whose position is the only controllable variable (see Figure 6e). Considering only the target display's identity renders the projection in a full-screen fashion (see Figure 6f). Which adjustment is appropriate depends on both the display type (for example, a wall display versus a tabletop) and the application.
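One way to realize these reduced variants is to extract translation, uniform scale, and rotation from where the reference frame lands on the display and rebuild only the components an application wants. The helper below is a sketch under that assumption; the names and the edge-based scale estimate are ours, not the authors'.

```python
# Sketch: reduce the full projective mapping to translation, uniform scale, and rotation
# taken from the projected reference frame, then keep only the wanted components
# (compare Figures 6c-6e). Helper names and the edge-based scale estimate are assumptions.
import math
import numpy as np

def similarity_components(frame_quad, device_size):
    """frame_quad: 4x2 array of reference-frame corners on the display, ordered
    top-left, top-right, bottom-right, bottom-left. device_size: (width, height) in pixels."""
    quad = np.asarray(frame_quad, dtype=float)
    tx, ty = quad.mean(axis=0)                           # translation: center of the quad
    top_edge = quad[1] - quad[0]
    rotation = math.atan2(top_edge[1], top_edge[0])      # angle of the top edge
    scale = np.linalg.norm(top_edge) / device_size[0]    # uniform scale from edge length
    return tx, ty, scale, rotation

def projected_rect(frame_quad, device_size, keep_scale=True, keep_rotation=True):
    """Return (center, size, angle_degrees) of a keystone-free projection."""
    tx, ty, scale, rotation = similarity_components(frame_quad, device_size)
    if not keep_scale:
        scale = 1.0                                       # ignore distance: fixed-size projection
    if not keep_rotation:
        rotation = 0.0                                    # keep edges parallel to the display
    width, height = device_size[0] * scale, device_size[1] * scale
    return (tx, ty), (width, height), math.degrees(rotation)
```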

For touch input, we can also use any of the approaches we just described. To preserve the absolute and direct mapping between a touch on the mobile device and the resulting touch event on the display, we must distort the live video image. To do so, we calculate the transformation between the covered area and the adjusted one. We use that transformation's inverse to distort the video image (that is, to provide visual rectification). Although this guarantees a constant control-display ratio on the device's viewfinder, it doesn't take into account the user's distance from the display. To deal with this, we used the zoom feature we mentioned earlier. After distorting the live video, the mobile device zooms the video image in accordance with the distance from the display. That is, the farther away a user is, the more the device zooms in, ensuring that moving a finger n pixels on the device translates to moving m pixels on the display. Being a part of the overall metaphor, zooming is also available for projecting content, which then allows adjusting the projection's size (see Figure 7a).

Temporarily decoupling the interaction area.
Another limitation of the original metaphor is the tight coupling between the mobile device and interaction area. This requires users to always aim at the target display. Natural hand tremors (slight, involuntary hand movements) worsen this by adding jitter, making it hard to keep the interaction area steady. To overcome this, we let users decouple the interaction area from the device. This lets them freely move the device without affecting the area's position or orientation (see Figure 7b). For content projection, the current projection stays at that position and can now be viewed without jitter. Touch input projection similarly benefits from this technique. Users can pause the live video, which results in a still image. However, the mobile device can no longer give immediate feedback—one of our approach's advantages. To address this, we overlay the paused video image with a computer-generated graphic representing the display's content in the interaction area. With continuous updating of this overlay, users perceive immediate feedback (and see the results of other users' interaction). Ultimately, this lets users select a region of the display for subsequent manipulation. Combined with automatic zooming (see Figure 7a), this technique allows for precise interaction regardless of the user's distance from the display.2

Our approach lets people aim at a display and start the interaction (by projecting touch or content) regardless of their distance from or orientation toward the display—much as if the display were directly at hand. However, we also discovered shortcomings related to our interaction concept. For example, because Touchable Facades provided no feedback about where other users were, users got frustrated when they interacted competitively. This problem arose because participants were too far from each other. We conclude that our interaction concept needs further adjustment to fully address multiperson interaction at a distance. We believe that—given our technology and the toolkit we're developing—future research can directly address this.

Acknowledgments

The Deutsche Forschungsgemeinschaft (DFG), the German State of Bavaria, the iCORE/NSERC/SMART Chair in Interactive Technologies, Alberta Innovates Technology Futures, the Natural Sciences and Engineering Research Council of Canada, and Smart Technologies funded this research. Steven Feiner helped us create Virtual Projection. Antonio Krüger and Michael Rohs provided input during the initial design phase of Touchable Facades; Alexander Wiethoff, Sven Gehring, and Johannes Schöning provided input during its development. Andreas Pramböck, Stefan Mittelböck, and Horst Hörtner (Ars Electronica Center) provided technical support during the preparation phase and festival; Patrick Baudisch, Sean Gustafson, and Andreas Butz collaborated with us on Touch Projector.

References
1. D. Baur, S. Boring, and S. Feiner, "Virtual Projection: Exploring Optical Projection as a Metaphor for Multidevice Interaction," Proc. 2012 SIGCHI Conf. Human Factors in Computing Systems (CHI 12), ACM, 2012, pp. 1693–1702.
2. S. Boring et al., "Touch Projector: Mobile Interaction through Video," Proc. 2010 SIGCHI Conf. Human Factors in Computing Systems (CHI 10), ACM, 2010, pp. 2287–2296.
3. S. Boring et al., "Multiuser Interaction on Media Facades through Live Video on Mobile Devices," Proc. 2011 SIGCHI Conf. Human Factors in Computing Systems (CHI 11), ACM, 2011, pp. 2721–2724.
4. R. Ballagas et al., "The Smart Phone: A Ubiquitous Input Device," IEEE Pervasive Computing, vol. 5, no. 1, 2006, pp. 70–77.
5. N. Marquardt et al., "The Proximity Toolkit: Prototyping Proxemic Interactions in Ubiquitous Computing Ecologies," Proc. 24th Ann. ACM Symp. User Interface Software and Technology (UIST 11), ACM, 2011, pp. 315–326.
6. P. Dalsgaard and K. Halskov, "Designing Urban Media Facades: Cases and Challenges," Proc. 2010 SIGCHI Conf. Human Factors in Computing Systems (CHI 10), ACM, 2010, pp. 2277–2286.

7. P.T. Fischer and E. Hornecker, "Urban HCI: Spatial Aspects in the Design of Shared Encounters for Media Facades," Proc. 2012 SIGCHI Conf. Human Factors in Computing Systems (CHI 12), ACM, 2012, pp. 307–316.
8. A. Wiethoff and S. Gehring, "Designing Interaction with Media Facades: A Case Study," Proc. 2012 Designing Interactive Systems Conf. (DIS 12), ACM, 2012, pp. 308–317.
9. N. Pears, P. Olivier, and D. Jackson, "Display Registration for Device Interaction—a Proof of Principle Prototype," Proc. 3rd Int'l Conf. Computer Vision Theory and Applications (VISAPP 08), vol. 1, Springer, 2008, pp. 446–451.
10. L. Herbert et al., "Mobile Device and Intelligent Display Interaction via Scale-Invariant Image Feature Matching," Proc. 1st Int'l Conf. Pervasive and Embedded Computing and Communication Systems (PECCS 11), SciTePress, 2011, pp. 207–214.
11. H. Bay et al., "Speeded-Up Robust Features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, 2008, pp. 346–359.
12. A. Alahi, R. Ortiz, and P. Vandergheynst, "Freak: Fast Retina Keypoint," Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR 12), IEEE CS, 2012, pp. 510–517.
13. R. Sukthankar, R.G. Stockton, and M.D. Mullin, "Automatic Keystone Correction for Camera-Assisted Presentation Interfaces," Advances in Multimodal Interfaces—ICMI 2000, LNCS 1948, Springer, 2000, pp. 607–614.

Sebastian Boring is an assistant professor in the University of Copenhagen's Human-Centered Computing Group. He previously was a postdoctoral fellow in the University of Calgary's Interactions Lab. His research focuses on novel techniques that extend interaction beyond a display's boundaries—particularly those of mobile devices. He's interested mostly in how to detach input from the display in question. Boring received a PhD in computer science from the University of Munich. He's a member of ACM. Contact him at [email protected] or [email protected].

Dominikus Baur is an independent researcher. He previously was a postdoctoral fellow at the University of Calgary's InnoVis (Innovations in Visualization Laboratory). Baur is interested in personal visualizations and the promise of making personal data useful to their creators. He also investigates the implications of bringing visualizations to touch-enabled devices and multidisplay environments. Baur received a PhD in media informatics from the University of Munich. He's a member of ACM. Contact him at dominikus.[email protected].
