3D User Interfaces

Navigation Tools for Viewing Augmented CAD Models

Pierre Fite Georgel, Pierre Schroeder, and Nassir Navab ■ Technische Universität München

During the production of an object, design modifications can occur because of, for example, corrections of design flaws, third-party components that must be replaced but are unavailable, or simply mistakes. So, undocumented discrepancies might exist between the CAD model and the final object. Although such discrepancies might not have consequences for the object's functionality, this gap between the virtual model and the real object could become problematic when the model is used for maintenance and repair.

One solution is to reengineer CAD models. However, this is expensive and overkill, considering that only a fraction of a model might require modification. Rather than correct a model, we document it. We attach images of the real object to the model. Using augmented-reality (AR) approaches,1 we align the images to the CAD coordinate system, thereby creating mixed views.2 These views show the real scene with the object in place and the model superimposed on it. This permits a visual inspection of the discrepancies between the model and the built object.

However, we can't navigate in a mixed view as freely as in a 3D model because the frustum on which the picture is textured can be translated only along the z-axis, and the viewpoint (defined as a 3D position, an orientation, and a field of view) can't be translated. Additionally, for navigation in this mixed world to stay consistent with existing CAD software, we need methods for browsing images. We want to avoid going through a list of images sorted by names or dates when we could more easily navigate by using geometric information about the viewpoints.

To solve these problems, we first developed a zoom-and-pan user interface for navigating within a mixed view. We then created tools that let users change direction and focus. These two tools allow navigation within a set of images using virtual 3D points, thus letting users intuitively access other mixed views.

Mixed views combine images of real-world objects with visualizations of CAD models of those objects. A zoom-and-pan user interface helps users navigate within these views. Other tools let users navigate within a set of registered images using virtual 3D points, thereby intuitively accessing other mixed views.

Interaction within a Mixed View

A mixed view is a bit different from a regular augmentation. A regular augmentation displays the image and a 3D model that has been moved to the camera coordinate system. In a mixed view, the image is textured on a 3D frustum that's fixed in the CAD coordinate system. Our VID system (Visual Inspection and Documentation; see the "Related Work in Mixing CAD Models and Real Images" sidebar) offers a different type of interaction in this mixed world. Users can add or remove 3D components. They can translate the frustum in z to visualize the model from the image's front or back. They can also modify the transparency of all objects (frustums, in the case of images, or models, in the case of CAD components), thus letting them obtain a good mix of virtuality and reality. Figure 1 shows the current VID interface.

Figure 1. The VID (Visual Inspection and Documentation) GUI: (a) the tree view for browsing the component list, (b) the 3D view, which displays mixed views, (c) the interaction plug-in used to manipulate the virtual camera and to browse the images in 3D, and (d) the list of thumbnails of the room's images.

Mixed-View Positioning

We register all the images stored in the database, or with the model, to the model's coordinate system. By registered, we mean that we know the camera's internal (focal length, skew, and so on) and external (position and orientation) parameters. K denotes the matrix of the internal parameters; for the external parameters, R denotes the rotational part and t the translational part. A 3D point X relates to an image point p of the camera as follows:

p ~ K(R X + t).

(For a basic explanation of the notation used in the article, see the "Notation" sidebar.) A virtual viewpoint has seven degrees of freedom that we must set: one for the field of view, three for the rotation, and three for the camera's center of projection. We define the center of projection in the CAD coordinate system as O = -R^T t. The image frustum (see Figure 2) is a quadrangle formed by four 3D corners C_j with j ∈ {1, ..., 4}. We define those corners as

C_j = a R^T K^{-1} c_j + O,    (1)

where a ∈ R^+ defines the frustum translation in z and c_j represents the corners in pixel coordinates.

Computing the virtual camera's field of view (φ_VC) from K is straightforward; Figure 2 represents this opening angle. To obtain a view centered on the image, we must rotate the virtual camera by R_VC. To form this matrix, we use the image frustum corners. We first define three unit vectors: V_x ~ C_2 - C_1 (the red, horizontal vector in Figure 2), V_z ~ O - (1/4) Σ_{i=1}^4 C_i (the green, out-of-the-plane vector), and V_y ~ V_z × V_x. We then form the rotation matrix

R_VC = [V_x V_y V_z].    (2)
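To make the positioning concrete, here is a minimal sketch of Equations 1 and 2 in Python with NumPy. It is our illustration, not the VID implementation; it assumes homogeneous pixel corners c_j = (u, v, 1) and the convention O = -R^T t used above.

```python
import numpy as np

def camera_center(R, t):
    # Center of projection in CAD coordinates for p ~ K(R X + t).
    return -R.T @ t

def frustum_corners(K, R, t, corners_px, a=1.0):
    # Equation 1: back-project the four pixel corners c_j at depth factor a.
    O = camera_center(R, t)
    K_inv = np.linalg.inv(K)
    return [a * (R.T @ (K_inv @ c)) + O for c in corners_px]

def virtual_camera_rotation(C, O):
    # Equation 2: R_VC = [Vx Vy Vz] built from the frustum corners.
    Vx = C[1] - C[0]                    # horizontal image direction (C2 - C1)
    Vz = O - np.mean(C, axis=0)         # from the frustum center toward the eye
    Vy = np.cross(Vz, Vx)
    Vx, Vy, Vz = (v / np.linalg.norm(v) for v in (Vx, Vy, Vz))
    return np.column_stack([Vx, Vy, Vz])
```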

Figure 2. Mixed-view positioning. (a) A virtual camera under a zoom-and-pan motion. When the virtual camera rotates to center on the focus, the field of view changes to fit the area of interest, and the center of projection stays unchanged to avoid parallax. (b) The resulting mixed view, with a gray transparent model focused on the white dashed area.

Zoom-and-Pan Interaction

To fully exploit modern high-resolution cameras in photo-based AR, we adapt the idea of zoom and pan from 2D user interfaces. Regular desktop screens offer a resolution of approximately 2 megapixels, whereas professional cameras offer 15 megapixels. Additionally, we can't expect to have the complete screen available for the augmentation because standard GUIs of CAD software are cluttered with different tools.

Typical 2D UIs for zoom and pan let users set a zoom factor and a center of interest (focus) materialized by an image point f. This describes the area of interest to display, which is defined by its four corners a_j. Using Equation 1, we can obtain A_j (or F, respectively) from the image points a_j (or f). We now redefine R_VC from Equation 2 using the 3D points A_j and F. We define V_x and V_y in the same manner. Only the last vector V_z changes, as follows: V_z ~ O - F. We set the field of view φ_VC the same way as before, using A_j instead of C_j.

To focus the mixed view on a particular area of interest, the user selects the area in a 2D thumbnail, making navigation in a mixed view intuitive. Additionally, the user can zoom out of the image to gain access to a more contextual view of the model surrounding the image. Figure 3 shows some results.
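A companion sketch (same assumptions as above, again not the authors' code) shows how a zoom-and-pan request could drive the virtual camera: the center of projection O is kept, and only R_VC and φ_VC change. Fitting φ_VC to the widest ray toward the A_j is one plausible reading; the article only states that it is set the same way as before, using A_j.

```python
import numpy as np

def zoom_and_pan_view(K, R, t, aoi_px, focus_px, a=1.0):
    # aoi_px: four pixel corners a_j (homogeneous); focus_px: the focus f.
    O = -R.T @ t
    K_inv = np.linalg.inv(K)
    lift = lambda p: a * (R.T @ (K_inv @ p)) + O   # Equation 1 applied to a_j and f
    A = [lift(p) for p in aoi_px]
    F = lift(focus_px)

    Vx = A[1] - A[0]
    Vz = O - F                                     # only V_z changes
    Vy = np.cross(Vz, Vx)
    R_vc = np.column_stack([v / np.linalg.norm(v) for v in (Vx, Vy, Vz)])

    # Field of view: twice the largest angle between the viewing direction
    # (-V_z, i.e., from O toward F) and the rays from O to the corners A_j.
    view_dir = -R_vc[:, 2]
    rays = [(Aj - O) / np.linalg.norm(Aj - O) for Aj in A]
    fov = 2.0 * max(np.arccos(np.clip(r @ view_dir, -1.0, 1.0)) for r in rays)
    return O, R_vc, fov
```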


Figure 3. The results of zoom-and-pan motion. The virtual camera, focused on the area marked by the rectangle in the thumbnail image, gives a detailed view of the area of interest while the thumbnail still provides access to contextual information. The virtual model is in red and gold; the dark blue is the background of the rendered scenes.

3D Navigation

To gain acceptance, methods for navigating augmented CAD models must be intuitive (that is, use simple interactions). Our proposed tools are intuitive yet fully exploit the geometric information that a set of mixed views offers.

Choosing the next viewpoint to visit in a given direction might sound like a trivial task. Unfortunately, it's much more complex if we want to provide natural navigation. From a mathematical viewpoint, we can clearly say whether one viewpoint is to the left or right of another because we know where the camera is in 3D. For example, in Figure 4, this approach would work perfectly for viewpoints 1 and 2. However, it doesn't take into account the direction in which the camera is looking. Imagine what happens if the user is at viewpoint 3 and chooses to move to the right on the basis of only the viewpoint positions. This would lead him or her to viewpoint 4, which isn't satisfactory because the user requested a right-side view of the scene, and viewpoint 4 shows the left side.


Figure 4. An example virtual scene in the VID (Visual Inspection and Documentation) system. The green-gold pipe is part of the CAD model that was augmented with pictures; the cones represent the locations of a subset of mixed views. Notice the geometric ambiguity. Viewpoint 1 is to the left of viewpoint 2, which can be extrapolated from the centers of projection. However, this doesn't work for viewpoints 3 and 4, where the relation is inverted; we must use the relative position between the scene and the views.


Related Work in Mixing CAD Models and Real Images

An immense amount of research has focused on mixing CAD models and real images and on interacting with them. Here we look at examples pertinent to our research discussed in the main article.

Augmented Reality and CAD

Augmented reality (AR) is generally applied to video streams or live streams,1 but many industrial projects use photo-based AR (augmentation based on a single still image) because it's easier to integrate in most workflows. Nassir Navab and his colleagues use AR to reengineer a CAD model and create an as-built model.2 Katharina Pentenrieder and her colleagues use AR in factory planning.3 They want to know whether a factory upgrade is feasible in reality; for example, they want to verify whether a car production line can handle a new model with different dimensions. Both of these projects look mainly at measurement problems, answering questions such as "How long is this pipe?" or "Does this car fit here?"

Mirko Appel and Nassir Navab have investigated augmented 2D models.4 They align technical drawings (2D models) and images to create coregistered orthographic and perspective images. These images mix floor maps and pictures, thus adding information from the map to the reality. However, the use of 2D models is fading away, and 3D CAD models are turning into a functional tool for managing and operating factories. The models are often linked to inventories and sensor information used for monitoring the plant. This is why we want to create scalable solutions to document 3D models. We've integrated our proposed tools into VID (Visual Inspection and Documentation),5 which we use to verify a model on the basis of images of the object.

VID: Industrial AR

VID is a project between the Technische Universität München (TUM), Siemens Corporate Technology, and Areva NP. It aims to create augmented CAD software that lets CAD engineers verify and document their models. VID is a client that connects to the database storing the CAD model. We extended the usual database, which contains only physical objects (that is, CAD components), to store images, image data, and viewpoints. VID also includes tools to register images to the model's coordinate frame.5,6

VID's user interface (see Figure 1 in the main article) is similar to that of any CAD software. On one side is a hierarchy tree that divides the object's architecture; for example, we divide a power plant into the categories of facility, building, level, and room. Each room lists all its components, which we divide into systems: civil, pipe, machine, and air conditioning. On the other side is a renderer that can display CAD components in real time. The user can navigate freely in this subwindow, as in any CAD visualization software.

Unlike classic CAD viewers, VID can display mixed views (see the main article). When the user activates a mixed view, VID restricts the virtual-camera motion. In the original version, when VID displayed a mixed view, the user could manipulate only the frustum position and the different objects' transparency. Users could also access images through a list of names or thumbnails and could sort it by date or name but not by location. So, we developed the tools we describe in the main article to address these challenges.

3D Navigation

Other researchers have worked on accessing images by location. In "Photo Tourism: Exploring Photo Collections in 3D," Noah Snavely and his colleagues offer a good example of image-based 3D navigation.7 From the Internet, they collected many images of a specific real-world scene. They registered the image set (that is, computed the relative positions between the cameras) using image features and structure from motion. (Structure from motion reconstructs an object's 3D structure by simultaneously estimating the captured scene's geometry and the camera geometry.) This provides a sparse reconstruction and camera viewpoints (3D position and orientation). Users can display the image frustum (representing the camera) and the sparse structure and then select a given frustum and navigate through the images.

Photo tourism (also called Photosynth) offers three navigation methods. First, object-based navigation lets users access all images that visualize a given object represented abstractly by the sparse reconstruction. Second, the user can select another frustum, and the virtual camera will move to the selected viewpoint. Third, starting from a source image, the user can move in six directions: zoom in, zoom out, left, right, up, and down. The system automatically selects the next image, using the sparse structure in the source image. However, a sparse reconstruction might not always be available or feasible, limiting this approach's usefulness. Furthermore, photo tourism has no tool for navigating (zoom and pan) within an image, forcing the user to interact directly in the 3D world, which isn't always intuitive. Snavely and his colleagues have introduced alternative navigation tools: scene-specific controls for orbits and panoramas.8 To determine the controls, they use reconstructed feature points and angles of views.

Another popular approach is the movie map, in which a multicamera system moves through an environment.9,10 In most movie map systems, the cameras are mounted on a car that travels along the street. These methods offer 3D navigation between images: from any view, the user can either go to a following position (where an image was acquired) or turn left or right (from the image's current position). Because these systems use a fixed setup to record the image sequences, the geometry between the views is perfectly known and stays constant over time. So, the system always behaves the same. Google Maps Street View uses such a system to provide a virtual view of the urban environment.

References
1. G. Schall, E. Mendez, and D. Schmalstieg, "Virtual Redlining for Civil Engineering in Real Environments," Proc. 2008 IEEE and ACM Int'l Symp. Mixed and Augmented Reality (ISMAR 08), IEEE CS Press, 2008, pp. 95–98.
2. N. Navab et al., "Cylicon: A Software Platform for the Creation and Update of Virtual Factories," Proc. 7th IEEE Int'l Conf. Emerging Technologies and Factory Automation (ETFA 99), IEEE Press, 1999, pp. 459–463.
3. K. Pentenrieder et al., "Augmented Reality-Based Factory Planning—an Application Tailored to Industrial Needs," Proc. 2007 IEEE and ACM Int'l Symp. Mixed and Augmented Reality (ISMAR 07), IEEE CS Press, 2007, pp. 1–9.
4. M. Appel and N. Navab, "Registration of Technical Drawings and Calibrated Images for Industrial Augmented Reality," Machine Vision Applications, vol. 13, no. 3, 2002, pp. 111–118.
5. P. Georgel et al., "An Industrial Augmented Reality Solution for Discrepancy Check," Proc. 2007 IEEE and ACM Int'l Symp. Mixed and Augmented Reality (ISMAR 07), IEEE CS Press, 2007, pp. 111–115.
6. P. Georgel et al., "How to Augment the Second Image? Recovery of the Translation Scale in Image to Image Registration," Proc. 2008 IEEE and ACM Int'l Symp. Mixed and Augmented Reality (ISMAR 08), IEEE CS Press, 2008, pp. 171–172.
7. N. Snavely, S. Seitz, and R. Szeliski, "Photo Tourism: Exploring Photo Collections in 3D," ACM Trans. Graphics, vol. 25, no. 3, 2006, pp. 835–846.
8. N. Snavely et al., "Finding Paths through the World's Photos," ACM Trans. Graphics, vol. 27, no. 3, 2008, article 15.
9. R. Mohl, "Cognitive Space in the Interactive Movie Map: An Investigation of Spatial Learning in Virtual Environments," PhD dissertation, Education and Media Technology, Massachusetts Inst. Technology, 1981.
10. M. Uyttendaele et al., "High-Quality Image-Based Interactive Exploration of Real-World Environments," IEEE Computer Graphics and Applications, vol. 24, no. 3, 2004, pp. 52–63.

The problem is that we don’t want the next viewpoint to be in a specific direction; we want the selected viewpoint to picture the scene in this specific direction. To consider both position and orientation, we use the local position of virtual 3D points.

Notation

In this article, we denote matrices with a bold uppercase letter (A), vectors with a bold lowercase letter (a), 3D points with a calligraphic uppercase letter (A), image points with an italic lowercase letter (a), triangles formed by, for example, the corners a, b, and c with △(a, b, c), squares formed by the corners a, b, c, and d with □(a, b, c, d), and equality up to a positive scale factor with ~.


Virtual 3D Points

First, we determine in which direction each camera is pointing. Using this line of sight, we approximate the focus depth (the maximum distance to all visible objects). Virtual 3D points represent this limit; they're virtual because they aren't directly linked to the real structure. For each camera i, we compute these points as the intersection of the (half) line of sight from image i starting at p,

L_i(p) = {X | X = a R_i^T K_i^{-1} p + O_i, a ∈ R^+},

and the 3D structure B. We define the points as X_i(p) = L_i(p) ∩ B.

Often, little information is available about the scene (a coarse reconstruction, some models, points used for calibration, camera positions, and so on), so we can't guarantee that L_i intersects the available 3D information. So, B must be a continuous, abstract representation of the scene. We decided it must be some outer boundary of the scene that we can easily compute from the available data (we cover this in more detail later). Figure 5 illustrates the idea of the virtual points lying on a bounding surface. Now that we know each camera's focus depth, we can classify the relative position between views.
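Below is a sketch of how a virtual point could be computed once B is the bounding ellipsoid described in the implementation section, written as (X - c)^T A (X - c) = 1. Variable names are ours and NumPy is assumed; the returned point is where the line of sight exits the boundary.

```python
import numpy as np

def virtual_point(K, R, t, p, ell_A, ell_c):
    # Line of sight L_i(p): X = O + s * d with s > 0.
    O = -R.T @ t
    d = R.T @ (np.linalg.inv(K) @ p)
    # Substitute into (X - c)^T A (X - c) = 1 and solve the quadratic in s.
    oc = O - ell_c
    qa = d @ ell_A @ d
    qb = 2.0 * (d @ ell_A @ oc)
    qc = oc @ ell_A @ oc - 1.0
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0:
        return None                                # the ray misses the boundary
    roots = [(-qb - np.sqrt(disc)) / (2 * qa), (-qb + np.sqrt(disc)) / (2 * qa)]
    positive = [s for s in roots if s > 0]
    if not positive:
        return None
    return O + max(positive) * d                   # farthest crossing = focus depth
```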

Figure 5. The top view of an augmented CAD model. The bounding surface encapsulates the model and the centers of projection. The virtual 3D points are the intersections between the lines of sight and the bounding surface; they capture the relation between the scene and the viewpoints.

Direction Classifier

We want to classify neighboring views into six clusters: left, right, up, down, zoom-in, and zoom-out. These clusters will be the directions available in the image-interaction GUI. The user can then decide in which direction to move the view. The classification is based on where the projections of the virtual 3D points fall in the source image.

We consider the source image s and its neighbors ns. We project all virtual points X_ns(p) into camera s. This returns the points p_s(X_ns(p)) in s; for simplicity, we write p_s(X_ns(p)) = p_s^ns(p). To summarize, p_s^ns(p) is the projection onto s of the virtual point issued from the point p defined in ns. To classify the relation between s and ns, we only have to analyze the position of the virtual points p_s^ns(p) relative to the area-of-interest corners a_j^s and the focus f_s by applying this test:

■ left: p_s^ns(f_ns) ∈ △(f_s, a_1^s, a_4^s),
■ right: p_s^ns(f_ns) ∈ △(f_s, a_2^s, a_3^s),
■ up: p_s^ns(f_ns) ∈ △(f_s, a_1^s, a_2^s),
■ down: p_s^ns(f_ns) ∈ △(f_s, a_3^s, a_4^s),
■ zoom-in: ∀j, p_s^ns(a_j^ns) ∈ □(a_1^s, a_2^s, a_3^s, a_4^s), and
■ zoom-out: ∀j, p_ns^s(a_j^s) ∈ □(a_1^ns, a_2^ns, a_3^ns, a_4^ns).

We set a_j^ns (or f_ns, respectively) to the corners (or the center) of the neighboring image ns. We can set a_j^s (or f_s) to the corners (or focus) of the area of interest of the source image s, selected with the zoom-and-pan GUI, thus letting users specify the next view request more precisely. To sort the neighboring frames, we first verify whether they're zoom-in or zoom-out; if they're neither, we verify whether they're left, right, up, or down. To get the next image in a specific direction (left, right, and so on), we pick the image whose p_s^ns(f_ns) is closest to f_s; for the zoom, we pick the image with the biggest projected area. We always verify that X_ns(p) is in front of s, so the proposed image is always looking in the same direction as the source.
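A sketch of this test in Python follows (our helper names, not the VID code). It assumes NumPy, the projection p ~ K(R X + t), and corners numbered a_1 (top left), a_2 (top right), a_3 (bottom right), a_4 (bottom left); the zoom cases reuse the quadrangle test on all four projected corner points.

```python
import numpy as np

def project(K, R, t, X):
    # p ~ K(R X + t), returned as inhomogeneous pixel coordinates.
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def in_triangle(p, a, b, c):
    # Barycentric point-in-triangle test in the image plane.
    M = np.column_stack([b - a, c - a])
    try:
        u, v = np.linalg.solve(M, p - a)
    except np.linalg.LinAlgError:
        return False
    return u >= 0 and v >= 0 and u + v <= 1

def in_quad(p, corners):
    # Convex quadrangle □(a1, a2, a3, a4) split into two triangles.
    a1, a2, a3, a4 = corners
    return in_triangle(p, a1, a2, a3) or in_triangle(p, a1, a3, a4)

def classify_direction(fs, corners_s, proj_focus_ns):
    # Left/right/up/down test on the projected virtual focus of neighbor ns.
    a1, a2, a3, a4 = corners_s            # a1 top left ... a4 bottom left (assumed)
    if in_triangle(proj_focus_ns, fs, a1, a4):
        return "left"
    if in_triangle(proj_focus_ns, fs, a2, a3):
        return "right"
    if in_triangle(proj_focus_ns, fs, a1, a2):
        return "up"
    if in_triangle(proj_focus_ns, fs, a3, a4):
        return "down"
    return None

def is_zoom_in(corners_s, proj_corners_ns):
    # Zoom-in: all projected virtual corner points of ns fall inside the source quad.
    return all(in_quad(q, corners_s) for q in proj_corners_ns)
```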

Implementation Details

To develop our software, we used standard .NET components and some components of our own; for example, the rendering engine is based on OpenGL. We merged the zoom-and-pan interaction and the 3D navigation into one reusable UI component (see Figure 1c), which is connected to the database and the rendering engine. It retrieves information about the local geometry of the scene and image data from the database and sends updates for the virtual camera and the image to be displayed to the rendering engine. All interactions are mouse controlled and transmitted in real time.

We tested our software on CAD models from industrial plants. These models consist of different systems, levels, and rooms. The 3D points used for registering the images are on a room's walls.2 An ellipsoid represents the room's outer bounds; it's a simple geometric structure that allows fast collision detection but still approximates the 3D scene's boundaries well. To compute the ellipsoid, we use Nima Moshtagh's Minimum Volume Enclosing Ellipsoid algorithm.3 This ellipsoid is the smallest one that contains the centers of projection and the 3D points used for registration. It could also incorporate models or sparse reconstructions. This representation deals naturally with open environments. Figure 6 shows the computed ellipsoids.

The choice of ellipsoids to represent the rooms' outer bounds worked well for this model because it was clustered room-wise. Because the rooms consist of sets of cuboids, they're well represented by ellipsoids.
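For reference, here is a compact reimplementation (ours, not the authors' code) of Khachiyan's algorithm in the form popularized by Moshtagh's report. It returns (A, c) with (x - c)^T A (x - c) ≤ 1 for every input point, which is the ellipsoid representation the ray-intersection sketch above assumes.

```python
import numpy as np

def mvee(points, tol=1e-4):
    # Minimum-volume enclosing ellipsoid of an (n, d) point set.
    P = np.asarray(points, dtype=float)
    n, d = P.shape
    Q = np.vstack([P.T, np.ones(n)])          # lift the points to dimension d + 1
    u = np.full(n, 1.0 / n)                   # uniform initial weights
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum("ij,ji->i", Q.T @ np.linalg.inv(X), Q)   # per-point scores
        j = int(np.argmax(M))
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = P.T @ u                               # ellipsoid center
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return A, c
```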

Figure 6. The results of the Minimum Volume Enclosing Ellipsoid algorithm, using 3D points (the red dots) that are available for registering the mixed views. Each image represents a different room in the building, showing how the ellipsoid fits the environment's outer boundary.

Experiments

To verify that our 3D navigation algorithm offers natural navigation, we conducted a study with 10 participants (two women and eight men, ages 24 to 32) and two sets of registered images. For each set, we asked the participants to select two images they felt were to the left (or right) of a designated source image. They had never visited the scene depicted in the images, and we presented the sets randomly. We measured the time each participant spent on each set.

The first set had nine images: the source, three images that the direction classifier considered to be to the left, three it considered to be to the right, and two outliers (images from the same scene but unrelated to any of the other images). We considered this set the simplest because we acquired it from a tripod that only rotated. The average time to perform the classification was 1 minute, 20 seconds. Figure 7 shows this experiment's results. It demonstrated that the algorithm handles such a scenario perfectly; the algorithm classified the images in agreement with the participants (0 percent misclassification).

The second set also consisted of nine images: the source, four images to the left, three images to the right, and one outlier. This set was more complex because the camera motions consisted of rotation and translation. Additionally, some views were extremely similar, sometimes making the decision more difficult. The average time to perform the classification was 3 minutes, 29 seconds. Figure 8 summarizes the results. The algorithm still agreed with 88 percent of the participants' decisions. Additionally, 50 percent of our algorithm's errors involved image D, which was extremely close to the source. However, the position the algorithm selected for this image still agreed with 66 percent of the participants, which shows that our algorithm works well even with ambiguous views.

Finally, Figure 9 shows the results of a request for a zoom-in, zoom-out motion. This mode of motion offers a good way to reach a specific part of the model.

Figure 7. Experimental results for the images in set 1 (image rotation). From eight images (labeled A–H), the participants selected two they felt were to the left of the source and two that were to its right. If our algorithm's decision agreed with that of a participant, we labeled it true; otherwise, we labeled it false. As the chart shows, the algorithm correctly classified all the images. Image B is clearly to the left because the door is to the extreme left of the source image and centered in B. Image D is clearly to the right because the door almost disappeared.

Figure 8. Experimental results for set 2 (image rotation and translation) around the source image. Image D is an example image positioned to the left; G is an example image positioned to the right. The algorithm mostly agreed with the participants. Most errors happened with image D, which is extremely close to the source but still to the left; see the pump that appeared on the left.

Figure 9. Results of a request for a zoom-in, zoom-out motion applied to the center image. The bottom right shows the outermost view; the top right shows the innermost view. The local geometric relation is visible in the 3D scene on the left; the viewpoints' directions don't need to be aligned.

Here we focused on applying discrepancy checking and AR visualization to the construction and commissioning of power plants. Other applications, such as offshore installations and large infrastructures (airports, shopping centers, and high-rises), could also take advantage of our techniques.

Future research could be twofold. On one hand, further photogrammetric and computer vision methods need to be developed to automatically quantify discrepancies and eventually rectify and update the existing CAD model. On the other hand, AR navigation and interaction tools need to be introduced that allow automatic labeling and intuitive navigation in a mixed-reality environment.

Acknowledgments

Areva NP and Siemens Corporate Technology partially funded this project. We thank Mirko Appel for continual discussion and support and are grateful to Stuart Holdstoc, who took the time to proofread the article. We also thank the IEEE CG&A staff for helping to finalize our manuscript.

References
1. N. Navab, "Industrial Augmented Reality (IAR): Challenges in Design and Commercialization of Killer Apps," Proc. 2003 IEEE and ACM Int'l Symp. Mixed and Augmented Reality (ISMAR 03), IEEE CS Press, 2003, pp. 2–6.
2. P. Georgel et al., "An Industrial Augmented Reality Solution for Discrepancy Check," Proc. 2007 IEEE and ACM Int'l Symp. Mixed and Augmented Reality (ISMAR 07), IEEE CS Press, 2007, pp. 111–115.
3. N. Moshtagh, Minimum Volume Enclosing Ellipsoids, tech. report, School of Eng. and Applied Science, Univ. Pennsylvania, 2005.

Pierre Fite Georgel is working on his PhD in computer science at Technische Universität München. His main research interests are industrial augmented reality and 3D computer vision. Georgel has a master's in computer vision from École normale supérieure de Cachan. He received the Areva NP innovation prize in 2007 for his work on augmented reality on construction sites. Contact him at [email protected].

Pierre Schroeder is working on his Diploma thesis at Université Blaise Pascal. His research interests are computer vision, artificial intelligence, and software engineering. Contact him at [email protected].

Nassir Navab is a full professor and the director of the Computer Aided Medical Procedures & Augmented Reality institute at Technische Universität München. He also has a secondary faculty appointment at TU München's Medical School. His research interests include medical augmented reality, computer-aided surgery, and medical-image registration. Navab has a PhD in computer science from INRIA and the University of Paris XI. He's on the Medical Image Computing and Computer Assisted Intervention Society's board of directors and on the IEEE Symposium on Mixed and Augmented Reality's steering committee. Navab received the Siemens Inventor of the Year Award in 2001 for his work in interventional imaging. Contact him at [email protected].
