Feature Article

Virtual Prototyping of Shoes

Tom Kühnert, Stephan Rusdorf, and Guido Brunnett ■ Chemnitz University of Technology

A proposed 3D user interface mimics a shoe designer's conventional work style. The methods and concepts developed for this interface are applicable to other areas—for example, designing car interiors or cockpits.

Industrial product design usually involves two phases: artistic design and technical design. Because artistic designers are concerned mainly with integrating aesthetics and functionality, design concepts usually must be modified to satisfy technical constraints. Whereas technical design commonly employs a digital prototype, artistic design might not require a digital model. Although today's modeling systems (for example, LightWave, Mudbox, or Rhinoceros) offer many possibilities to create virtual models, their interfaces differ too much from traditional approaches to gain great acceptance from artistic designers. So, early design phases often suffer from inefficient workflows because of the lack of digital models; for example, designers might need to recreate the artistic model several times.

Shoe design begins by selecting (or creating) an appropriate last. (For more on lasts, see the related sidebar.) Using a pencil, the artist draws design lines on the last. Because one last can be used for many designs, it has a paper or plastic cover on which the artist draws. For technical editing of the design, the design lines must be digitized. The usual approach is to separate the cover from the last, unfold it into the plane, and digitize the lines in 2D. Depending on the CAD/CAM system, the succeeding steps might consist of further processing this 2D data or performing a back-transformation of the design lines onto the virtual last. An alternative approach uses an optical system to digitize the design lines in 3D while the cover is still attached to the last. This method, however, is rare because the 3D digitization is more time-consuming and expensive than the 2D procedure.

We've designed a system for virtual prototyping of shoes with an interface that uses a tracked pen and a tracked last as interaction devices. To invoke an operation, the user touches the last with the pen's tip. To input design lines, we developed a sketching algorithm fast enough to process complex sketches in a few milliseconds. This method correctly interprets different drawing styles ranging from continuous drawing to oversketching. To control the functionality, users can choose from three pen-based interaction metaphors: tapping, moving, and tilting. Users can employ these pen movements to interact directly with design elements, to input values, or to make selections from a menu. We integrated the menu into the last to deliver haptic feedback. Compared to a virtual menu, this physical menu is easier to use and provides a less artificial feel.

Interaction Devices: Calibration and Collision Detection

If we had employed a force-feedback device, we could have created our interface without using a physical last. However, abandoning the physical last wasn't desirable because it serves as a prop for intuitive input and provides a close link to the traditional work style. By holding the last, users can intuitively change the virtual last's position and orientation. We refrained from equipping the pen with a button (or similar feature) to keep the pen interaction as simple as possible. Instead, users invoke all interactions through the three interaction metaphors.

We've implemented two prototype systems. The first uses the ART (Advanced Realtime Tracking) optical tracking system and a large back-projection display wall. For optical tracking, we attached markers to the pen and last. (Figure 1a shows the pen with the attached markers.) The second system is a single-person workplace that uses the Fastrak magnetic tracking system and a 24-inch stereoscopic display. If disturbing environmental factors (for example, spots for the optical system or metal for the magnetic system) are eliminated, both systems have sufficient precision (1 to 2 mm) for our tasks.

Lasts

A last is a physical object whose shape is an abstraction of the human foot. Lasts are used for both designing and producing shoes. Their shapes differ between shoe companies and are confidential because they represent the shoemaker's knowledge about a good fit of the shoe. Only a small portion of the last changes to accommodate fashion trends.

Calibration

The interaction metaphors require precise detection of the pen and last's location and time of contact. Precisely matching the interaction devices' local coordinate systems (LCSs) with their digital 3D models' LCSs is essential. Because manual calibration is time-consuming and insufficiently precise, we developed a semiautomatic method that distinguishes between the calibration of the pen and that of any other object. (The Fastrak magnetic tracking system comes with a calibrated pen, so we didn't need to calibrate that pen.)

The pen. When tracking starts, we associate the target (the pen) with an LCS S with the same orientation as the world coordinate system W. If we hold the pen vertically during this process, its symmetry axis is, with sufficient precision, parallel to an axis of S. However, a nonnegligible distance generally exists between the pen's tip and the origin O of S. To match O with the pen's tip, we use an optimization based on this criterion: the points are identical if a rotation of the pen around its tip causes no change in the position of O with respect to W.

Obtaining the necessary input data requires a measurement step. For this step, the user places the pen on a nonelastic skidproof surface and rotates the pen around its tip for about 30 seconds without any translation. This produces a point cloud with approximately 2,000 points. Every point $P_i$ represents the position of O with respect to W at a specified time. These points are approximately on a sphere's surface (see Figure 1b) because they have (up to measurement errors) the same distance to the pen's tip. The sphere's radius defines the pen calibration's current error. So, we search for the transformation T that minimizes the radius of the sphere created by the transformed points $P_i'$:

$$P_i' = R_i \cdot T \cdot P_i,$$

where $R_i$ denotes the measured rotation matrix corresponding to $P_i$.

Figure 1 shows the calibration steps. The measurement yields the spherical green point cloud in Figure 1b. In this case, the measured error is 53 mm. The optimization produces a new point cloud $P'$ with an error of 0.7 mm, shown in Figure 1c. We apply T to the target's LCS, resulting in a correct correspondence between the marker positions and the pen. Figure 1d shows the pen and its markers.
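This criterion translates into a small least-squares problem: if the tip offset t (expressed in the target's LCS) is correct, then $p_i + R_i t$ is the same world point for every sample i. The sketch below solves this stationarity condition directly rather than literally minimizing the fitted sphere's radius; it's our illustrative reconstruction under that assumption, not the authors' code, and all names are ours.

```python
import numpy as np

def calibrate_pen_tip(rotations, origins):
    """Pivot-style pen-tip calibration.

    rotations: (n, 3, 3) array of measured rotation matrices R_i.
    origins:   (n, 3) array of measured positions p_i of the target
               origin O with respect to W.
    Solves R_i @ t - tip = -p_i for all i in the least-squares sense,
    where t is the tip offset in the target's LCS and tip is the
    stationary world-space tip position.
    """
    n = len(origins)
    A = np.zeros((3 * n, 6))
    b = -np.asarray(origins, float).reshape(-1)
    for i, R in enumerate(rotations):
        A[3 * i:3 * i + 3, :3] = R           # unknown tip offset t
        A[3 * i:3 * i + 3, 3:] = -np.eye(3)  # unknown world tip position
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    t, tip = x[:3], x[3:]
    # RMS deviation of the per-sample tip estimates from the common tip;
    # this plays the role of the sphere radius in the text.
    error = np.linalg.norm(A @ x - b) / np.sqrt(n)
    return t, tip, error
```

Applying the resulting offset t to the target's LCS reproduces the correction step described above.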

Figure 1. The pen's calibration. (a) The pen with markers attached. (b) Measured samples (and markers). (c) Optimized samples. (d) The transformed markers. The correct transformation between the virtual and the real-world marker positions is required for matching the virtual and the real-world object.


Figure 2. Calibrating the shoe last. (a) The sampled point cloud. (b) The calibration result. Registration reduces the calibration error from 205.00 mm to 1.02 mm.

The last. This procedure also begins with a measurement step. We move the pen on the last and capture the pen's position and the last's position and orientation. The measurement time should be long enough to capture approximately 2,000 samples. Creating a regular sampling of the object isn't necessary; sampling points from the object's most important regions is sufficient. Clearly, those regions also must be present in the corresponding 3D model. We transform the sampled positions to the last's LCS. Figure 2a shows the sampled point cloud in the last's LCS. The calibration error (the average distance of the sampled points to the 3D model) is 205 mm.

For the registration, we use the iterative closest point (ICP) algorithm.1 The criterion for the alignment is to minimize the average distance between the sampled points and the last's vertices. We apply the resulting transformation to the target's LCS. To ensure that the algorithm finds the global optimum, manually precalibrating the object might be necessary. Figure 2b shows the calibration's results. After registration, the average error is 1.02 mm.

Although this error is much smaller than what manual calibration can achieve, we seek even higher precision. So, we modified the ICP algorithm. The standard method computes distances between the sample points and the vertices of the mesh representing the digital object (for example, the last). This computation can lead to large alignment errors, especially where only a few large triangles model the object. To overcome this problem, we compute distances between the point samples and the model faces. More precisely, we compute each sample point's nearest neighbor on the object's surface. The set of samples and the set of nearest neighbors then serve as input for the ICP algorithm. The algorithm must recompute the set of nearest neighbors at each step.


This approach results in reduced calibration errors, especially for sparsely modeled objects. For our last model, which is modeled in sufficient detail, the registration error decreased to 0.89 mm. Because calibration occurs only once for an interaction device, performance isn't a major concern. Still, the computations require only a few seconds for point sets in the range of 2,000 points and models with approximately 100,000 triangles.
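The modified ICP can be sketched compactly if "nearest neighbor on the surface" is approximated by a nearest neighbor among points densely presampled on the mesh faces (a production version would compute exact closest points on triangles). This is our reconstruction under that assumption; names and the iteration count are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation and translation (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp_to_surface(samples, surface_points, iters=50):
    """Align pen samples to the last.

    surface_points: points sampled densely on the mesh faces (not only
    the vertices), so nearest neighbors approximate point-to-face
    distances. Returns a 4x4 transform and the final mean error.
    """
    tree = cKDTree(surface_points)
    T = np.eye(4)
    pts = np.asarray(samples, float).copy()
    for _ in range(iters):
        # Recompute the nearest neighbors on the surface at every step.
        _, idx = tree.query(pts)
        R, t = best_rigid_transform(pts, surface_points[idx])
        pts = pts @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    err = np.linalg.norm(pts - surface_points[tree.query(pts)[1]], axis=1).mean()
    return T, err
```

Recomputing the correspondences inside the loop is exactly the per-step nearest-neighbor update described above.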

Collision Detection

To detect collisions, we compute the distance between the pen's tip and an associated contact point on the last's surface. If this distance is less than a prescribed tolerance ε, we assume a collision. We calculate the distance by the tip's orthogonal projection onto the surface. To improve this computation's performance, which is crucial because of the real-time constraints, we use a spatial subdivision of the last. Our experiments showed that the computation is fast enough to deal with objects with mesh sizes between 100,000 and 200,000 triangles.
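A minimal sketch of the ε-collision test, again using a KD-tree over presampled surface points as a stand-in for the paper's spatial subdivision; the ε value and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

class LastCollider:
    """Epsilon-proximity test against the last's surface.

    Approximates the tip's orthogonal projection onto the surface by the
    nearest point among samples taken on the surface.
    """
    def __init__(self, surface_points, eps=2.0):  # eps in mm
        self.points = np.asarray(surface_points, float)
        self.tree = cKDTree(self.points)
        self.eps = eps

    def contact(self, tip):
        dist, idx = self.tree.query(tip)
        # A collision is assumed when the tip is inside the eps-proximity.
        return self.points[idx] if dist < self.eps else None
```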

Creating Design Lines

Extracting a polygonal line from the sketched user input poses several challenges. The main problem resides in the various sketching techniques you can use to define the line. Such techniques include dotted, dashed, continuous, and oversketched lines (see Figure 3). Lines defined by several overlying strokes require a complex extraction of the shape the user most likely intended. During sketching, the user relies on the visual feedback of what he or she has already drawn. To allow real-time interaction, the sketching algorithm must provide its output in a few milliseconds.

Another problem is noise, which is due mainly to the tracking technology's restrictions. Typically, optical and magnetic tracking systems' positioning data is accurate up to one or two millimeters. In addition, erroneous values might occur if the environmental conditions disturb the system's proper functioning. The sketching algorithm should be robust enough to cope with occasional disturbances that are present for only short periods.

Figure 3. The (a) sketching process and (b) various drawing styles. Lines defined by several overlying strokes require a complex extraction of the shape the user most likely intended.

Sketch Recognition

During sketching, the system inserts each new pen position into a set D of data points. Because we want to support different sketching styles, we don't assume a spatial or chronological order of this input data. As output, we create a polygonal line $P = (P_0, P_1, \ldots, P_n)$ representing the sketch. To create this polyline, we continue its course iteratively into the barycenter of a restricted set of data points. So, the vertices $P_i$ of P usually aren't elements of D. With each new point inserted into D, we reapply the entire sketch interpretation algorithm, as we now describe.

The marching principle. We assume that the starting conditions $(P_0, \vec{V}_0)$ have already been determined. Each $\vec{V}_i$ is a normalized vector describing the direction from point $P_{i-1}$ to $P_i$. The subset $D'(P_i)$ of D comprises those data points in the viewing volume

$$S_\tau(P_i) \cap C(P_i, \vec{V}_i, \alpha),$$

where $S_\tau(P_i)$ is a sphere with the center $P_i$ and radius $\tau$, and $C(P_i, \vec{V}_i, \alpha)$ is a cone with the apex $P_i$, an axis in the direction $\vec{V}_i$, and the opening angle $\alpha$. The radius $\tau$ of $S_\tau(P_i)$ is the viewing distance. In each step of the iteration, we search for the next $P_{i+1}$ in $D'(P_i)$. For this search, we determine a new marching direction $\vec{V}_{i+1}$ by deflecting $\vec{V}_i$ toward the barycenter $M(D'(P_i))$ of the points in $D'(P_i)$:

$$\vec{V}_{i+1} = \frac{\vec{V}_i + M(D'(P_i)) - P_i}{\left\| \vec{V}_i + M(D'(P_i)) - P_i \right\|}.$$

We then create the next $P_{i+1}$ at the distance $\lambda$ from $P_i$ in the direction $\vec{V}_{i+1}$. The value of $\lambda$ limits the curvature of the sketch represented by P because the directional change in each iteration step is bounded. If $\lambda$ is too large, the points of P created in the marching might trail behind a curve segment that turns tightly. If it's too small, P might turn in front of the same curve segment. Consequently, we must adjust $\lambda$ to the maximum curvature that the algorithm will support. In our implementation, a $\lambda$ of 2.5 mm delivered the best results.

We can adjust the viewing volume's parameters $\tau$ and $\alpha$ in each step. They increase if $D'(P_i)$ contains too few points for a proper prediction of direction. The user-defined threshold k specifies the minimum number of points in the viewing volume. For the examples in this article, we used k = 1. If the viewing volume contains fewer than k points, we increase $\tau$ and $\alpha$ gradually. The marching terminates if the maximally extended viewing volume contains fewer than k points. The initial and the maximum values for the viewing range depend on the sketch's scale. In our application, the viewing distance is 1 cm initially and extends to 2 cm, whereas the opening angle is 70 degrees, extending to 120 degrees at maximum. We found that the algorithm performs best if we reset the viewing-volume parameters to their initial values in the next iteration step. A vertex of the polyline in which the marching terminates is an endpoint. Figure 4 illustrates the marching procedure.

Figure 4. The marching phase's four steps. Dark squares are elements of the set D of data points. White squares are elements of D′, the set of points in the viewing volume. Circles are elements of the polygonal line P. Arrows indicate the vectors $\vec{V}_i$; dotted lines indicate the viewing volume. The second step shows a case in which we must extend D′ because no elements of D would be in D′.
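The marching step maps almost directly to code. Below is a minimal sketch in millimeter units with λ = 2.5, τ growing from 10 to 20 mm, and α from 70 to 120 degrees as in the text; the growth factor used when extending the viewing volume is our assumption, and all names are ours.

```python
import numpy as np

def viewing_set(D, p, v, tau, alpha):
    """Points of D inside S_tau(p) intersected with the cone C(p, v, alpha)."""
    rel = D - p
    dist = np.linalg.norm(rel, axis=1)
    in_sphere = (dist < tau) & (dist > 1e-9)
    cos_to_axis = (rel @ v) / np.maximum(dist, 1e-9)
    in_cone = cos_to_axis > np.cos(alpha / 2.0)
    return D[in_sphere & in_cone]

def march(D, p0, v0, lam=2.5, tau0=10.0, tau_max=20.0,
          alpha0=np.radians(70), alpha_max=np.radians(120), k=1):
    P = [np.asarray(p0, float)]
    v = np.asarray(v0, float)
    v /= np.linalg.norm(v)
    while True:
        tau, alpha = tau0, alpha0            # reset per iteration step
        Dp = viewing_set(D, P[-1], v, tau, alpha)
        while len(Dp) < k and (tau < tau_max or alpha < alpha_max):
            tau = min(1.25 * tau, tau_max)   # extend the viewing volume
            alpha = min(1.25 * alpha, alpha_max)
            Dp = viewing_set(D, P[-1], v, tau, alpha)
        if len(Dp) < k:
            break                            # endpoint: marching terminates
        m = Dp.mean(axis=0)                  # barycenter M(D'(P_i))
        v = v + (m - P[-1])                  # deflect toward the barycenter
        v /= np.linalg.norm(v)
        P.append(P[-1] + lam * v)            # next vertex at distance lambda
    return np.asarray(P)
```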

Sketch extraction. The marching should start from an endpoint as defined in the previous section or from the nearest neighbor in D to an endpoint. (Using an endpoint instead of the nearest data point smoothes the input data, improving the sketch extraction's results.) Because many endpoints exist, the problem is selecting one that yields a sketch close to the curve the designer intended. Because of our algorithm's real-time constraints, the choice of an appropriate endpoint can be based only on simple heuristics.

Our method considers two criteria. The first is the stroke length. A stroke consists of all points sampled on the surface without leaving the last's ε-proximity; that is, whenever the user enters the last's ε-proximity, a new stroke starts. The stroke's length is the number of its points. The list of strokes is a data structure stored in addition to the unorganized set D. The second criterion is the sketch's weight n—that is, the number of points taken into account to compute a sketch during the marching. We compute n as

$$n = \sum_i \left| D'(P_i) \right|,$$

where |M| is the cardinality of a set M.

The stroke length gives some evidence of the attention the designer has given to a particular section of his or her sketch. So, we select the largest stroke and identify its midpoint $P_M$. Because the marching algorithm is fast, we can use the marching paradigm to determine an appropriate endpoint. We perform a marching on D starting from $P_M$ in both possible directions. Except for special cases (which we discuss in the next section), these two instances of the marching create two endpoints $P_{E1}$ and $P_{E2}$. To decide between these candidates, we again start two marching procedures, beginning with $P_{E1}$ and $P_{E2}$. We consider the resulting curve with the greater weight to be more in accordance with the designer's intent. So, that curve represents the current sketch extraction from the dataset.
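Given a march routine like the sketch above, extended so that it also returns the accumulated weight $n = \sum_i |D'(P_i)|$, the endpoint heuristic needs only a few lines. The initial direction estimate taken from the stroke's own endpoints is our assumption; the paper doesn't specify it.

```python
import numpy as np

def extract_sketch(D, strokes, march_fn):
    """Endpoint selection following the two-criteria heuristic.

    strokes:  list of index arrays into D, one per stroke.
    march_fn: march_fn(D, p0, v0) -> (polyline, weight), a marching
              routine that also accumulates n = sum_i |D'(P_i)|.
    """
    longest = max(strokes, key=len)        # criterion 1: stroke length
    mid = D[longest[len(longest) // 2]]    # midpoint P_M of that stroke
    d = D[longest[-1]] - D[longest[0]]     # rough stroke direction (assumed)
    d = d / np.linalg.norm(d)
    # March from P_M in both directions to obtain the endpoint candidates.
    end1, _ = march_fn(D, mid, d)
    end2, _ = march_fn(D, mid, -d)
    # From each candidate endpoint, march over the whole sketch and keep
    # the result with the greater weight (criterion 2).
    runs = [march_fn(D, end1[-1], -d), march_fn(D, end2[-1], d)]
    return max(runs, key=lambda r: r[1])[0]
```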


Corners and loops. To allow a sharp turn (a corner) during a sketch, we must consider some special cases. So, all elements of D have a counter, which we increase each time the element is visible in any viewing volume. Depending on the sampling distance of the input data and of P, most elements of D are used multiple times when computing the viewing volumes of subsequent points in P.

One special case is when the algorithm would terminate because fewer than k elements of D are visible in the maximum viewing volume. Instead of terminating the algorithm, we set the opening angle α to 360 degrees so that the viewing volume equals $S_\tau(P_i)$. To prevent the algorithm from tracking back over the already processed elements, the only elements of $D'(P_i)$ that contribute to the barycenter of $D'(P_i)$ are those with a counter value below the average counter value of all elements in range τ. If more than k points meet this condition, the marching continues with the original values for α and τ. Incorporating this special case, the sketch recognition can follow sharp corners up to an angle of approximately 135 degrees.

Another case is a possible loop in the sketch, which would cause an infinite loop in the algorithm we described earlier. To detect this case, we test P for self-intersection. If any intersection occurs, we use the intersection's angle to evaluate whether a loop has been detected and the algorithm should terminate. This method still allows self-intersections of P.

Discussion. This approach has major advantages over most other research on sketch interpretation. It implicitly smoothes out noise due to the tracking system or inadequate human handling. Because of the limited viewing volume, this approach implicitly discards outliers. In addition, the algorithm reacts instantaneously to a local redrawing or overdrawing. We don't restrict the user to a particular technique but allow the most common drawing styles. The algorithm is fast enough to process complex sketches in a few milliseconds. It allows even extensive oversketching, feature detection, and limited self-intersection in the line.

Figure 5. Users sketch a curve and pause briefly to indicate that they've finished the first sketch. They can then create a second curve that modifies the initial curve. Users can repeatedly modify the resulting curve in the same manner.

Sketch Modification

The user must also be able to change existing lines. Our discussions with professional shoe designers showed low acceptance for approaches that use control points to modify curves (such approaches are standard in CAD systems). So, we decided to use an approach similar to drawing and modifying curves on paper: modifying a curve by a second curve. Thomas Baudel2 and others have presented the general idea of such modification. (For more on Baudel's research and other CAD research related to our approach, see the related sidebar.)

In our implementation, the users sketch the first curve and pause briefly (2 seconds) to indicate that they've finished the first sketch. They can then create a second curve that modifies the initial curve. This modification's results are determined by a heuristic that takes into account the number of intersection points, the intersection angles, and the lengths of the created segments and the resulting lines. Users can repeatedly modify the resulting curve in the same manner (see Figure 5).

Tap, Move, Tilt

The framework of design lines provides the basis for completing the design by adding shoe components such as pieces of leather, seams, and decorative items. Here we look at the three interaction metaphors.

Tapping

Users invoke a tap by touching the drawing surface with the pen's tip and lifting the pen from the surface without moving it on the surface itself. Proper collision detection is crucial to this form of interaction. Although we've presented how to determine entering and leaving the ε-proximity, we still must establish the means to distinguish a tap from other surface-related interactions. Distinguishing among these interactions is necessary because the tracking system's high sampling rate (60 Hz for the optical tracking)—together with its inaccuracy—will lead to multiple tracking positions of the pen while in the ε-proximity, even if the user only quickly taps on the surface. To overcome this problem, we introduce a minimum sampling distance for the pen interaction. The interaction routine won't recognize a new point unless the point's distance from the last valid point exceeds this threshold. In this way, the system distinguishes a tap from other interaction events on the last's surface.

Tapping can execute a multitude of user commands, ranging from a simple selection to defining special points in the design. This metaphor also plays a central role in menu interaction, which we describe in detail later. Taps correspond to mouse clicks in classic mouse interaction. However, taps can be used in 3D on a nonplanar surface.
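One way the minimum sampling distance can separate taps from moves, assuming contact points stream in per tracking frame while the pen is inside the ε-proximity; the threshold value and class names are ours, not from the article.

```python
import numpy as np

MIN_SAMPLE_DIST = 1.5  # mm; assumed minimum sampling distance

class SurfaceGesture:
    """Classifies one contact episode on the last as a tap or a move."""
    def __init__(self):
        self.points = []

    def on_contact(self, p):
        p = np.asarray(p, float)
        # Ignore points closer than the sampling threshold to the last
        # valid point, so tracking jitter can't turn a tap into a move.
        if not self.points or \
                np.linalg.norm(p - self.points[-1]) >= MIN_SAMPLE_DIST:
            self.points.append(p)

    def on_lift(self):
        # A single valid point means the pen never really moved: a tap.
        kind = 'tap' if len(self.points) <= 1 else 'move'
        self.points.clear()
        return kind
```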

Moving

Pen movement on the last's surface corresponds to mouse movement in classic 2D interaction. As we mentioned before, we can detect when the pen enters the last's ε-proximity. If the user creates new points with distances beyond the sampling-distance threshold, the system regards this event as a move. The metaphor ends when the user lifts the pen from the surface.

Related CAD Research

In classic CAD modeling, users interact with 2D representations of 3D objects. VR intends to overcome this paradigm by offering users techniques to work directly with 3D models. This tracked interaction employs devices (for example, a pen, pointer, or gloves) to modify a visual representation of the object. Several papers on 3D user interfaces have proposed two-handed interaction.1–4 Tovi Grossman and his colleagues used two trackers to enable digital tape drawing for automotive styling on large displays.2 However, the actual curve design is in 2D. Ken Hinckley and his colleagues' research relates more closely to our approach in that they use a doll's head as a proxy of a digital model together with a second interaction device (a cutting-plane prop).3 By adjusting these objects' relative position and orientation, users can determine intuitively how the virtual model of the head is being cut. To improve widget manipulation, Jean-Bernard Martens and his colleagues introduced a physical cube that represents a virtual cube with widgets on its sides.4 Using a pointer, the user operates the virtual controls. Haptic feedback facilitates widget manipulation and has resulted in greater user acceptance.

Sketching algorithms convert user-created point sets into continuous curves that can be incorporated in designs. Emilio Brazil and his colleagues showed how to use a minimal spanning tree to produce a polygonal line from a point set.5 To use a minimal spanning tree for different drawing styles, several parameters must be determined. Because this determination can't be automated, this approach isn't appropriate for our application. Yang Liu and his colleagues fit a B-spline into the point set starting from an initial segment.6 Although this approach produced promising results, it requires an approximately constant point distribution. Also, the algorithm's calculation time is in the range of seconds. Tevfik Sezgin and Randall Davis described another method in which different single primitives such as lines, arcs, and ellipses approximate the point cloud.7 This approximation isn't sufficient for our application.

To modify a given design curve, we must provide a natural, intuitive method. Such a method must follow Thomas Baudel's idea of modifying a curve by defining a second curve and combining it with the initial curve.8

Raimund Dachselt and Anett Hübner gave an extensive overview and classification of the various menus established or tested in recent VR research.9 We investigated several key ideas in their paper, especially radial context menus. Variations of the traditional pie menu appear, for example, in the research of Dominique Gerber and Dominique Bechmann, who designed a rotary menu for VR.10

Ilya Rosenberg and his colleagues presented a flexible force-sensitive resistor that we could have used to cover the shoe last, to replace the tracking of the pen.11 (For more on shoe lasts, see the other sidebar.) For our application, this approach's main problem was fitting the material to the last's curved surface. We seek a solution in which users can easily exchange the last.

References

1. D.A. Bowman et al., 3D User Interfaces: Theory and Practice, Addison Wesley Longman, 2004.
2. T. Grossman et al., "Creating Principal 3D Curves with Digital Tape Drawing," Proc. 2002 SIGCHI Conf. Human Factors in Computing Systems (CHI 02), ACM Press, 2002, pp. 121–128.
3. K. Hinckley et al., "Passive Real-World Interface Props for Neurosurgical Visualization," Proc. 1994 SIGCHI Conf. Human Factors in Computing Systems (CHI 94), ACM Press, 1994, pp. 452–458.
4. J.-B. Martens, A. Kok, and R. van Liere, "Widget Manipulation Revisited: A Case Study in Modeling Interactions between Experimental Conditions," Proc. 13th Eurographics Symp. Virtual Environments and 10th Immersive Projection Technology Workshop (IPT-EGVE 07), Eurographics Assoc., 2007, pp. 53–60.
5. E.V. Brazil, L.H. de Figueiredo, and I. Macêdo, "Curve Reconstruction from Noisy Data," tech. poster presented at 2006 Brazilian Symp. Computer Graphics and Image Processing (Sibgrapi 06), 2006; http://w3.impa.br/~ijamj/files/publications/sibgrapi2006/poster/brazil-CurveReconstruction.pdf.
6. Y. Liu, H. Yang, and W. Wang, "Reconstructing B-spline Curves from Point Clouds—a Tangential Flow Approach Using Least Squares Minimization," Proc. Int'l Conf. Shape Modeling and Applications (SMI 05), IEEE CS Press, 2005, pp. 4–12.
7. T.M. Sezgin and R. Davis, "Handling Overtraced Strokes in Hand-Drawn Sketches," Making Pen-Based Interaction Intelligent and Natural, tech. report FS-04-06, AAAI Press, 2004, pp. 141–144.
8. T. Baudel, "A Mark-Based Interaction Paradigm for Free-Hand Drawing," Proc. 7th Ann. ACM Symp. User Interface Software and Technology (UIST 94), ACM Press, 1994, pp. 185–192.
9. R. Dachselt and A. Hübner, "Three-Dimensional Menus: A Survey and Taxonomy," Computers and Graphics, vol. 31, no. 1, 2007, pp. 53–65.
10. D. Gerber and D. Bechmann, "The Spin Menu: A Menu System for Virtual Environments," Proc. 2005 IEEE Conf. Virtual Reality (VR 05), IEEE CS Press, 2005, pp. 271–272.
11. I.D. Rosenberg et al., "Impad: An Inexpensive Multi-touch Pressure Acquisition Device," Proc. 27th Int'l Conf. Extended Abstracts on Human Factors in Computing Systems, ACM Press, 2009, pp. 3217–3222.

It's important to consider how the tracking inaccuracy increases when the pen moves faster. We experienced that the ε-proximity often is too small to deal with such increased inaccuracy. To define appropriate proximity values for all situations, we introduced a dependency between ε and the pen's speed. When the pen is working on the surface, a high speed leads to an increased ε value. This approach works only because virtually no basic interaction in the design finishes with a rapid pen movement. Instead, users tend to slow their movements as they finish a move or a drawing before they remove the pen from the surface.

We provide three different examples of the moving metaphor. We begin by explaining the necessity of moves for repositioning the major design items. A design line and its associated objects (for example, a leather piece) change their shape while they're moved on the surface (see Figure 6a). So, monitoring the deformation when the component moves is important. With the moving metaphor, the user can both specify an exact location and gain a sense of the component's deformation during the movement and an impression of how it looks in a different position.


A slightly different example of the moving metaphor is creating parallel lines (see Figure 6b). The third example is measuring distances on the surface. This interaction requires a move because on a curved surface, the user must provide a path between the two points of interest.

Figure 6. Two move operations. (a) Moving a line across the last. (b) Creating a parallel line. With the moving metaphor, the user can both specify an exact location and gain a sense of the component's deformation during the movement and an impression of how it looks in a different position.
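The coupling between ε and pen speed described in this section can be as simple as a clamped linear function; the constants below are illustrative, not from the article.

```python
def proximity_threshold(speed_mm_s, eps_base=2.0, gain=0.005, eps_max=6.0):
    """Grow the eps-proximity with pen speed so fast strokes aren't cut
    off by tracking inaccuracy. Speed in mm/s, thresholds in mm."""
    return min(eps_base + gain * speed_mm_s, eps_max)
```

Because users slow down before lifting the pen, the threshold has relaxed back toward its base value by the time an interaction ends.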

Tilting

Tilting places the pen on the surface and rotates the pen around the tip. Because we consider all possible rotations except the rotation around the pen's symmetry axis, this mode offers two additional degrees of freedom (DOF). To convert the change in the pen's orientation to values, we evaluate the angle differences between two orientations: when the pen is set on a menu button (we describe these buttons in more detail later) and when the pen is tilted. If input in two dimensions is required, we evaluate both spherical coordinates. Otherwise, we project the vectors representing the orientations to a plane to obtain a single angular value. Depending on the context, we interpret the rotational differences as absolute or relative values.

An early version of the system could use all three DOF; that is, it also took into account the pen turning around its symmetry axis. However, when we asked users to use all three DOF, most of them couldn't input their desired values.


of them couldn’t input their desired values. This is because users can’t easily handle the three rotations as independent inputs. The current implementation of tilting clearly is a more appropriate use of the rotational DOF and has been well accepted by test users, as we show later.

The Menu

Using taps to press virtual buttons on a surface is an intuitive interaction. So, the last has several virtual buttons representing the menu's first level. To access further levels, users can employ the tilting metaphor.

Placement

For menu placement, the surface patches representing the buttons must be large enough and placed as conveniently for the workflow as possible. We use the planar area on the top of the last for the menu. This placement is possible because no design occurs in that region. This position is as close as possible to the design area; however, to access the menu, users might need to rotate the last. If they must perform this action frequently, it will disturb the workflow. So, we provided shortcuts for frequent operations. For example, when users design lines, an icon appears near the currently edited line; users can tap the icon to finish editing and create the next line.

The necessary button size is related to the user's handling skills because positioning the pen while concentrating vision on the virtual objects requires some experience. Furthermore, the tracking system's accuracy affects the minimal button size. When users rapidly tilt the pen, the pen position's error might increase temporarily. Consequently, the button's size can't be reduced arbitrarily. For our prototype system, we restricted the menu area to contain only four large buttons because we deal mainly with inexperienced users.

Figure 7. Using the tilting metaphor to select an item in the menu. (a) Opening the menu. (b) Two possible selections. Using this metaphor limits the menu tree's depth, simplifying interaction.

Structure and Handling

Controlling many functions by pressing only a small number of buttons would require a deep menu tree, which would complicate interaction. To limit the menu tree's depth, we employed the tilting metaphor. When users place the pen's tip on a button and don't lift the pen again immediately, a selector menu opens, indicating the possible tilt directions in 3D (see Figure 7). The menu items are arranged around the button. By rotating the pen, users can select any of them. Lifting the pen confirms the selection and initiates the selected function.

The correspondence between a button and the selector menu that opens when users press the button is context sensitive. For example, when users press the edit button, what the selector menu displays depends on the object type the users will edit.

In the project's early stages, we experimented with the number of items in the selector menu. When we equipped the selector menu with more than five or six elements, many users felt overwhelmed by the selection possibilities or had difficulties selecting the desired item. The latter problem was due to the interaction's limited accuracy, which sometimes resulted in the selection accidentally changing when users lifted the pen from the menu surface. The final design with five elements (plus the return item) was well accepted in our user study, which we discuss later. This selection also lets users access all the necessary modeling functions within a maximum of three menu levels.

Results and Evaluation

We developed the shoe design application as a research project in cooperation with industrial partners. During the project, we presented the system to a number of potential users. Exhibits at different shoe fairs attracted the attention of designers and other representatives of the European shoe industry. In addition, we visited or invited different shoe companies to obtain the feedback of as many designers and manufacturers as possible. All our industrial partners expressed that they saw a clear benefit in our approach. Furthermore, they saw virtual rapid prototyping as a way to significantly reduce manufacturing expenses.

In designing our user study, we considered the following aspects. The availability of shoe design experts is limited because there aren't many of them. So, we used any possible occasion (such as fairs) to involve available experts in the evaluation. We considered quantitative measurements (for example, timings) as less relevant for two reasons. First, the comparable alternative software all requires extensive 3D modeling experience, which the designers don't have. Second, our approach compares to the traditional approach solely in the drawing of design lines. Measurements of this functionality alone, however, wouldn't be meaningful to the overall approach.

We were most interested in how users accepted the interface and in their subjective evaluation of its usability. So, we established a user acceptance study. However, in response to this article's reviewers, we also assessed quantitative data with a different test setup.

Figure 8. Participants' acceptance ratings of four aspects of the system (handling the pen and last, sketching, menu handling, and model completion), rated as good, undecided, unsatisfactory, or not evaluated. The horizontal axis shows the percentage of participants; the number in each section of each bar indicates the number of participants who gave that response.


The User Acceptance Study

We conducted the sessions in a standardized way. After the participants received a brief introduction to the system (approximately 15 minutes), we had them create a simple design. We then asked them to realize certain features of a test design. They had 10 to 15 minutes to complete each task. We offered assistance when necessary. However, not all participants could complete all the tasks in the allotted time. Finally, we interviewed the participants.

We received the feedback of 31 people with a shoe design background. Among these were 22 artistic designers (four of whom were female) and nine technical designers (one female). Their experience ranged from a few years to several decades. However, the interviews didn't show any correlation between experience and acceptance of our system.

To cover the prototype's main aspects, we defined four criteria: handling the interaction devices, sketching, menu handling, and model completion. We assessed these criteria on a scale ranging from

■ good (the approach was acceptable or acceptable with minor changes such as changes to specific functions, menu structure, or visualization details) to
■ undecided (the interface idea was good, but major improvements were required) to
■ unsatisfactory (the interface idea wasn't practical).

We didn't consider a more detailed rating because of the absence of comparable alternative approaches. Figure 8 shows the participants' responses.

We didn’t consider a more detailed rating because of the absence of comparable alternative approaches. Figure 8 shows the participants’ responses. Handling the interaction devices. The basic idea of using the pen and last to mimic a conventional designer workspace proved successful. Most participants immediately accepted the interaction objects as familiar. In a few seconds, they could precisely handle the proxy objects while concentrating their view only on the virtual objects. Only a few participants (often those accustomed to CAD software) showed initial reluctance toward the interface and asked to use the mouse to control and edit the virtual last. Even those participants experienced no problems handling the interaction objects as intended. A few participants tended to hold the pen close (within the e-proximity) to the surface even when intending no interaction. This behavior didn’t disturb the sketching algorithm. However, participants sometimes involuntarily invoked the tapping metaphor for selecting design items. Because tapping in an empty area of the last clears the selection, some participants occasionally lost their selection. We plan to address this problem. Sketching. The feedback regarding our sketching algorithm varied with the participant’s background. Shoe designers who were accustomed to drawing lines on lasts were comfortable with the system because it adapted to their personal drawing style. We observed many drawing styles among the different participants. Slightly oversketched lines, dashed lines, and continuous lines accounted for most of those styles. Participants with an artisticdesign background could easily modify curves with the tools. Participants with a technical-design background responded more reservedly. They used continuous line drawing exclusively, and some IEEE Computer Graphics and Applications

Figure 9. Performance measurements for sketching, model completion, and model variation for six users: (a) the time required (in minutes) for the introduction and for the sketching, model-completion, and model-variation tasks; (b) the number of wrong menu selections and of undo operations during sketching and model completion. We used time measuring and error counting to evaluate the performance variance of a number of users and to assess the intuitiveness of the system.

Interestingly, all the participants who assessed the high-level modeling functions (such as parallel or mirrored line creation)—which clearly aren't part of the traditional work style—could use them easily.

As we mentioned, our approach to collision detection can cause unwanted user input under certain circumstances. During sketching, users can create unwanted strokes on the last if they hold the pen too close to the surface while not intending to draw. Fortunately, the visual feedback usually causes such users to move the pen outside the proximity where collision is detected. The resulting strokes are rather short, and their influence on the design line's course is minimal. To correct the course, removing a false stroke isn't necessary. Instead, an oversketching of the intended course completely eliminates the unwanted influence.


Menu handling. Most participants could understand and operate the menu without extensive training. They immediately understood the principle of tapping. When we introduced them to the tilting metaphor, it proved helpful to let them think of the more familiar use of a joystick. After making this connection, all participants could use the selector menu in one to two minutes. When participants operated the system for a longer time, the speed of handling the selector menu increased rapidly.

This quick familiarization is an important advantage of the selector menu over other forms of menus. Most menus function by navigating a pointer through the menu items. Such an approach requires high attention from the user and continuous visual feedback. Selection in our menu is based only on the tilt direction. Experienced users can tilt the pen as soon as the menu opens and immediately lift the pen to activate the associated function. Users don't have to wait for visual feedback as long as the number of elements in the selector menu is small enough that the tilt directions are clearly separated.

Figure 10. Three lasts. (a) One with shoe components. (b) One with the design lines. (c) An empty one. Moderately experienced users can create a design like this in 30 minutes to an hour.

One participant requested alternative access to the menu buttons to avoid a last rotation in cases in which accessing the menu surface is difficult. So, we developed a feature that's now in the current system (the user study didn't evaluate this feature). Besides the menu that's bound to the last, we offer a menu on a separate flat surface patch. Users can place this patch anywhere in their workspace (for example, on a table). To register this menu, users specify the patch's location with the pen.

Model completion. This step includes creating virtual leather parts, seams, decorative items, shoelaces, and the sole. All participants agreed that the implemented features are sufficient to construct common designs of moderate complexity. We received valuable feedback about functions we could add to facilitate more complex designs (for example, automatic creation of leather stripes or Velcro components).

The Quantitative Assessment

The second study assessed quantitative data on our interface's usability at the Point of Shoes exhibition in Pirmasens, Germany, in April 2010. Because of the absence of comparable software approaches, we focused on the variance of successful use of our system among different users. All six participants had more than five years' experience in artistic design (participants 1–4) or technical design (participants 5 and 6).

The test included sketching, model completion, and model variation. In each of these categories, participants had to complete tasks of increasing levels of difficulty. We limited the time for the introduction to the system to 20 minutes. However, the participants could start the test as soon as they felt comfortable. For each participant, we measured the time required for each task (Figure 9a shows the accumulated times for each category). The participants felt able to use the system after rather short periods (between 6 and 16 minutes). Easy sketching tasks, such as creating multiple parallel lines, took an average of 2 minutes, with the time varying less than half a minute.

We provided the users with a draft design of a shoe, comparable to the shoe in Figure 10. The single tasks were recreating the design lines on an empty last (sketching), creating all virtual components from those lines (model completion), and changing a number of components afterward (model variation). Difficult sketching tasks, such as sketching a given design, took an average of 5.8 minutes, with a variance of 3.8 minutes. The two model-completion tasks took an average of 1.8 and 4.3 minutes (with variances of 0.6 and 2.3 min.). The two model-variation tasks took an average of 1.3 and 1.7 minutes (with variances of 0.3 and 0.7 min.).

We also recorded the number of wrong menu selections (for example, selecting the creation of a parallel line when the task was to mirror the line) and the number of undo operations during sketching and model completion (see Figure 9b). In general, we would expect a longer introductory phase to increase test performance. Although this was obviously true for some participants, this relation can't be clearly deduced from our study because of the small population.

Overall, we received very positive feedback regarding the general interface design. All the participants could handle the interaction objects as intended and, after a little explanation, could easily operate the sketching and menu functionalities. Our interaction approach proved appropriate to introduce artists and designers with little CAD experience to virtual prototyping of shoes. We observed a quick familiarization even with the more unusual interaction metaphors and a rapid increase of handling speed. We consider someone who has worked with our system for about 10 hours to be moderately experienced. Such users can create a design like the one in Figure 10 in 30 minutes to an hour.

User feedback convinced us to address the following issues. First, we want to investigate how a contact sensor at the pen's tip (comparable to those used with digitization tablets) can reduce problems with selecting model components. Another interesting research topic is how to integrate AR technology. The visual overlay of the physical last with design lines and other components would let users focus only on the last and might improve user performance. Furthermore, we see a high potential of using our interface not only for the early design phase but also for technical editing of the shoe model. Such use could lead to integrating our software with an existing CAD system for shoe design. Finally, we plan to extend the interaction concept to other applications such as cockpit or car interior design.

Acknowledgments

We thank the Test and Research Institute in Pirmasens. This research was supported by grant AIF 15199 of the Arbeitsgemeinschaft industrieller Forschungsvereinigungen (the German Federation of Industrial Research Associations).

References

1. S. Rusinkiewicz and M. Levoy, "Efficient Variants of the ICP Algorithm," Proc. 3rd Int'l Conf. 3-D Digital Imaging and Modeling, IEEE Press, 2001, pp. 145–152.
2. T. Baudel, "A Mark-Based Interaction Paradigm for Free-Hand Drawing," Proc. 7th Ann. ACM Symp. User Interface Software and Technology (UIST 94), ACM Press, 1994, pp. 185–192.

Tom Kühnert is a PhD student in the Chemnitz University of Technology's Computer Science Department. His primary research interests include computer graphics and geometric processing, particularly shape representation and analysis. Kühnert has an MS in computer science from the Chemnitz University of Technology. Contact him at tom.kuehnert@informatik.tu-chemnitz.de.

Stephan Rusdorf is a research associate in the Chemnitz University of Technology's graphics group. His primary research interests include visualization, especially for large datasets, and human-computer interaction in VR. Rusdorf has a PhD in computer science from the Chemnitz University of Technology. Contact him at stephan.rusdorf@informatik.tu-chemnitz.de.


Guido Brunnett is a full professor of computer graphics in the Chemnitz University of Technology's Computer Science Department. He also directs the Institute of Mechatronics, a private research institute affiliated with the university. Brunnett has a PhD in computer science from the University of Kaiserslautern. Contact him at guido.brunnett@informatik.tu-chemnitz.de.

7MB Sizes 4 Downloads 3 Views