Graphically Speaking

Editor: André Stork

Democratizing Digital Content Creation Using Mobile Devices with Inbuilt Sensors

Kapil Dev and Manfred Lau, Lancaster University

“Digital content creation” covers the creation of literally every genre of consumable digital material. The growth of this multibillion-dollar content creation market depends directly on developments in computer graphics, user interaction, and other related fields. Traditionally, stationary desktop computing devices have fulfilled the demands of casual and professional content creators, but the amount of graphical content produced on mobile devices is growing rapidly. In fact, over the past few years, the demand for mobile computing systems has grown at a much faster rate than the demand for desktop computing devices. Accordingly, new research directions that differ completely from traditional content creation practices are quickly emerging.

Compared with traditional desktop-based platforms for content creation, a typical mobile computing system's unique input and output modalities permit a more direct and natural interaction experience and thus motivate deeper levels of engagement during content creation tasks. We argue that investing in the development of techniques for creating content on handheld devices with inbuilt sensors, such as camera flashes and accelerometers, has the potential to extend computer graphics technologies to millions of mobile device users. In particular, some of the early barriers associated with creating graphics could be addressed with novel sensing and input modalities.1 These barriers largely stem from limited support in input devices and in the interaction tasks and techniques associated with three basic human processes: perception, cognition, and motor activity. Although handheld computing platforms have their advantages, present-day mobile computing technology comes with its own limitations related to power supply, computational power, display, and interaction.

In this article, we present the state of the art in digital content creation via mobile devices with inbuilt sensors and describe important research challenges and future directions. We aim to encourage both professional developers and novice users to leverage mobile devices' novel sensing and input capabilities to create their own digital content.

Mobile Content Creation and Traditional Desktop Computing
Why is the mobile computing paradigm integral to creating 2D and 3D graphics? Our observations indicate that the following factors contribute to the popularity and acceptance of mobile platforms.

Ubiquity
Sketching or painting a real scene in an outdoor setting using the close-to-natural touch-input interface on a tablet is highly attractive compared with performing the same activity with a mouse on a stationary indoor computing platform (www.telegraph.co.uk/technology/picture-galleries/5559769/Amazing-iPhone-Art.html). For example, a user with a touch-enabled handheld device (such as a tablet or smartphone) can produce content more freely, and this pattern of content creation allows exploration of user creativity in new ways (see Figure 1a). Teachers and students can also use mobile devices and platforms to create graphical content outside the traditional static classroom environment, and architects, manufacturers, and designers can communicate their design ideas to each other more easily.

Natural Experience
A typical high-end mobile device possesses several sensors and input/output modalities, including a multitouch screen, a pen or stylus, a flash, one or two video and still cameras, an accelerometer, a global positioning system, a gyroscope, and a high-resolution display. Current drawing and modeling techniques in computer graphics present challenges to novice users and typically require some training time. However, imaginative utilization of input and sensing devices on a mobile platform can make the user's task easier. For example, painters might find it more natural to draw with a stylus while manipulating the content or environment on a touchscreen (see Figure 1b), whereas 3D modelers might find it easier to manipulate 3D shapes by using inbuilt sensors such as accelerometers (see Figure 1c). Many free and paid apps offer real-time input interfaces for creating digital content (www.autodesk.com/mobile-apps). Moreover, mobile augmented-reality-based techniques increasingly present graphical content creation interfaces that require little cognitive load.
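As a concrete illustration of how an app can discover this sensing hardware before relying on it, the following Kotlin sketch probes an Android device for the sensors discussed in this article. It is our own illustrative example rather than code from any cited system; the helper name and the fallback policy are assumptions.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorManager

// Illustrative sketch: report which of the sensors discussed in this
// article are actually present before enabling sensor-driven features.
fun availableCreationSensors(context: Context): Map<String, Boolean> {
    val sm = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    val wanted = mapOf(
        "accelerometer" to Sensor.TYPE_ACCELEROMETER,
        "gyroscope" to Sensor.TYPE_GYROSCOPE,
        "magnetometer (compass)" to Sensor.TYPE_MAGNETIC_FIELD
    )
    // getDefaultSensor returns null when the hardware is absent, so a
    // painting or modeling app can disable tilt- or heading-based tools.
    return wanted.mapValues { (_, type) -> sm.getDefaultSensor(type) != null }
}
```

A painting or modeling app could call such a check once at startup and gray out tilt- or heading-based tools on devices that lack the corresponding sensor.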

Figure 1. Creating content with mobile devices: (a) automatic watercolor pigment flow control (gravitational downward shift) using directional and positional sensors on an inclined mobile canvas, similar to a real watercolor technique; (b) a unique two-hand-, finger-, and stylus-based interface for more input space; and (c) accelerometer-based automatic transformations (manipulations) of 3D objects by tilting the device.

Unique App Design Model
The process of designing, developing, delivering, using, and revising mobile apps differs substantially from the desktop development model. For mobile app users, it is relatively easy to raise concerns online to get the required support and app updates on demand—for example, the mobile content creation apps developed by Adobe and Autodesk have associated user review options. Not only does this help developers understand user needs, but it is also a way to collect user feedback to improve apps. In addition, compared with desktop software applications, mobile apps are substantially less expensive and are more easily installed via the fast networking capabilities of modern handheld devices.

Current Status of Sensor-Based Content Creation Research
The past few years have seen a sharp increase in graphical content created on mobile devices, as described in various blogs (such as http://edition.cnn.com/2013/11/26/tech/innovation/this-is-how-76-year-old-david-hockney/ and http://actionaad.tumblr.com/post/45692484417/ipad-content-creation-versus-consumption-again-again) and seen in mobile art galleries (www.iphoneart.com). Numerous mobile content creation apps can be used for drawing, sketching, painting, image compositing, 3D modeling, and object reconstruction, fulfilling the requirements of many users. In the film industry, Taz Goldstein (www.handheldhollywood.com)2 highlights the use of mobile apps, accessories, and techniques in professional film productions via handheld devices such as iPads or iPhones. The creators of the Oscar-winning film Searching for Sugar Man (http://money.cnn.com/video/technology/2013/02/19/t-iphone-app-oscar-film.cnnmoney/), for example, used an iPad to shoot sequences in the movie with an app called 8mm Vintage Camera. Today's mobile art makers are even starting to organize conferences to discuss their work, sharing their experiences and best practices with each other (www.macworld.com/article/1155161/mobileartcon.html; www.mobileartacademy.com).

The literature is starting to reflect this shift as well. Maureen Nappi traced the developments in gesture-based art creation—specifically, drawing and sketching—from the early days to more recent smartphone- and tablet-based art creation,3 presenting examples of contemporary artists and their experiences using iPads and iPhones to draw and paint. A novel art movement aimed at producing mobile art masterpieces is taking shape in response to the availability of creative tools on handheld mobile platforms. David Scott documented his own and 70 other mobile artists' experiences in an attempt to provide guidelines for how to be more creative with mobile content creation apps.4 The section in his book called "Painters" gives excellent evidence of how seriously professional artists, filmmakers, and illustrators are taking mobile apps.

To further explore this technology, we discuss some of the most recent and intriguing techniques related to sensor-based mobile content creation, grouping them by the type of content a technique helps create (2D or 3D) and by the mobile sensors employed. The categories have some overlap, mainly in techniques that use multiple sensing and input devices, but a primary sensor or device is easily identified in such solutions. Table 1 gives a brief overview of the reviewed techniques.5–13 Although some of the discussed techniques and solutions might appear elementary, they motivate us to consider the endless future opportunities they open up.

Table 1. A brief summary of mobile content creation techniques.

Content | Author | Technique | Sensor or input device | Interactive | Automatic | Application | Advantages | Tablet (T) or smartphone (S)
2D | DiVerdi5 | Lightweight geometric model for brush strokes | Touch and accelerometer | Yes | Yes | Watercolor painting | Pigment flow control via inbuilt iPad sensor, intuitive interface | T
2D | Blatner6 | Tangible painting application | Touch and accelerometer | Yes | No | Digital painting | Realistic experience with painting materials | T
2D | Kim7 | Stylized real-time video | Flash and camera | No | Yes | Painterly rendering | Uses flash to automatically stylize content | T
2D | Christodoulakis8 | Picture context management | GPS, compass, camera | No | Yes | Picture management and semantic map creation | Uses sensors for spatial context registration | T, S
3D | Predy9 | Sculpting-based interface for 3D modeling for novice users | Touch | Yes | No | 3D modeling | Very appealing to less experienced users | T
3D | Langlotz10 | Content creation using mobile augmented reality | Touch, camera, sensor fusion (GPS, gyroscope, accelerometer) | Yes | No | Basic 3D modeling, 2D sketching, and annotations | Mobile authoring for nonprofessionals | S
3D | Au11 | 3D scene creation based on camera-captured images | Compass, accelerometer, gyroscope, touchscreen (optional) | No | Yes | Panoramic 3D scene creation | Automatic content location calculation based on sensors | S
3D | Cepeda12 | Photorealistic compositing using inbuilt sensors | GPS, gyroscope, magnetometer | Yes | No | Digital compositing | Sensors perform vital functions to support the implemented system | T, S
3D | Pintore13 | Mobile mapping of building interiors | Accelerometer, magnetometer | No | Yes | Automatic 2D floor plan and 3D model | No 3D modeling experience required, simple interface | S


2D Content
Here, we refer to 2D digital content as the result of interactive activities such as sketching, drawing, and picture capturing. Interactive content creation involves direct or indirect interactive input from the user. Sensors might be employed to provide more input space, thus making a technique semiautomatic. Fully automatic manipulation of content is possible by incorporating one or more sensing and input devices from a mobile computing platform. Techniques relying on direct or indirect automatic input (mainly from inbuilt sensors) allow for content manipulation and context capturing.

Touch and accelerometer. Stephen DiVerdi and his colleagues developed an efficient interactive procedural algorithm to create watercolor-like brush strokes on tablet devices.5 The Photoshop Eazel app (www.photoshop.com/products/mobile/eazel) utilizes this efficient stroke model for interactively creating watercolor paintings. Because precise simulation of watercolor effects is highly resource intensive, the app uses a lightweight geometric model to create brush strokes. It allows the creation of high-quality results and uses an exclusive five-finger touch interface to perform various operations. Additionally, to provide a more realistic experience during interactive painting, the authors used the iPad's accelerometer to link screen orientation with automatic gravity-based watercolor pigment flow.

In an effort to bridge the gap between real and digital art creation practices, Anthony Blatner and his colleagues presented an interactive tablet-based digital painting app that gives the feeling of working with real artistic materials such as canvas and paints.6 In this app, a digital artist can tilt the mobile device to introduce realistic lighting effects (see Figure 2) similar to those in real oil color paintings. The inbuilt accelerometer interactively provides the required input for the surface shading algorithm.
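The gravity-driven pigment flow that DiVerdi and colleagues link to the tablet's accelerometer can be approximated on any accelerometer-equipped device. The sketch below is our own simplified illustration, not the published algorithm: it low-pass filters raw accelerometer samples and projects the gravity vector onto the screen plane to obtain a per-frame "downhill" direction that a painting loop could use to advect wet pigment. The class name, filter constant, and dead zone are assumptions.

```kotlin
import android.hardware.SensorEvent
import kotlin.math.sqrt

// Simplified sketch of gravity-based pigment flow direction.
// Not the algorithm of DiVerdi et al.; only the sensor-side idea.
class PigmentFlowEstimator(private val alpha: Float = 0.8f) {
    private var gx = 0f
    private var gy = 0f

    // Call from onSensorChanged for TYPE_ACCELEROMETER events.
    fun onAccelerometer(event: SensorEvent) {
        // A low-pass filter isolates the gravity component from hand jitter.
        gx = alpha * gx + (1 - alpha) * event.values[0]
        gy = alpha * gy + (1 - alpha) * event.values[1]
    }

    // Unit vector (sensor coordinates: x right, y toward the top edge)
    // pointing "downhill" on the tilted screen; pigment is advected this way.
    // Note: in pixel coordinates the y component must be negated (y grows down).
    fun flowDirection(): Pair<Float, Float> {
        val len = sqrt(gx * gx + gy * gy)
        if (len < 0.5f) return 0f to 0f   // device nearly flat: no flow
        return -gx / len to -gy / len
    }
}
```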


Pen and touch. To add to the mobile device's touch surface, various pen-based input techniques and devices provide users with advanced features while they interact with their handheld platforms. Gordon Kurtenbach has described his experience developing a popular drawing and sketching app (SketchBook Mobile) and suggested that artists like the small and mobile format of digital sketching (see Figure 3).14 In addition to providing faster and more accurate input, such apps support a more realistic experience, such as that of using an actual pen, by incorporating pressure sensitivity. Compared with fingers, a pen provides more precise input that is suitable in applications requiring selection and manipulation of small on-screen objects and their vertices, edges, and faces. Also, various design innovations allow for physical buttons for more input and opportunities to vary the tip features (http://adonit.net/jot/touch/). Applications such as a sketching app can now create a variety of brush strokes similar to those available in desktop-based applications.

Flash. High-end mobile devices come with an inbuilt flash, not only to enhance the picture-taking experience but also to allow for novel content creation approaches. Currently, a device's flash can provide input to content creation in an automatic, real-time fashion. For example, it can be used to analyze scene information for novel effects in real time, such as independent stylization of foreground and background scenes. A mobile computational photography framework utilizes an inbuilt flash to produce cartoon-like, stylized (nonphotorealistic) video in real time on a tablet device.7 The system exploits two buffers that store flash and no-flash images of a scene separately. Additionally, the user can determine the way these images are utilized in subsequent content creation (a minimal sketch of the flash/no-flash separation idea appears after the next paragraph).

Directional and positional sensors. Although originally designed to provide rich picture management functionality, one solution helps create personalized maps by utilizing directional and positional input.8 One of its features registers the geometric representations of objects on top of the camera's images by automatically getting information from location and directional sensors during the picture-capturing process (Figure 4). In a similar way, interactive picture creation activities by an artist, such as making a portrait of a natural location, can automatically associate the location information with the result. This process can be referred to as context-aware art or context-aware modeling.
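The following Kotlin sketch illustrates the flash/no-flash idea behind independent foreground and background stylization. It is our own illustration, not the framework of Kim and Lin: pixels that brighten strongly under flash are assumed to be near the camera, and the luminance threshold is an arbitrary assumption.

```kotlin
import android.graphics.Bitmap
import android.graphics.Color
import kotlin.math.abs

// Sketch: derive a rough foreground mask from a flash/no-flash image pair.
// Surfaces close to the camera gain the most light from the flash.
fun foregroundMask(flash: Bitmap, noFlash: Bitmap, threshold: Int = 40): Array<BooleanArray> {
    val w = minOf(flash.width, noFlash.width)
    val h = minOf(flash.height, noFlash.height)
    val mask = Array(h) { BooleanArray(w) }
    for (y in 0 until h) {
        for (x in 0 until w) {
            val f = flash.getPixel(x, y)
            val n = noFlash.getPixel(x, y)
            // Compare the luminance of the two buffers pixel by pixel.
            val lumF = (Color.red(f) + Color.green(f) + Color.blue(f)) / 3
            val lumN = (Color.red(n) + Color.green(n) + Color.blue(n)) / 3
            mask[y][x] = abs(lumF - lumN) > threshold
        }
    }
    return mask
}
```

A stylization pass could then apply one painterly filter to masked (foreground) pixels and a different, more abstracted one to the background.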

3D Content
A variety of 3D object representation schemes are based on polygonal and spline surfaces, constructive solid geometry, and procedural modeling, among others. To model objects based on these representations, both interactive and automatic modeling tools exist on desktop computing systems that provide a highly complex set of features and functions.

Figure 2. Tangible painting app on a tablet device.6 (a) An artist can tilt the tablet device to automatically introduce lighting effects based on a virtual light source, but (b) lighting effects can also be introduced using finger-based gestures. (c) Another example painting depicting gloss and lighting effects.

For mobile computing devices, supporting functionality similar to what desktop-based tools can do is still a long way off. However, mobile devices are getting the research community's attention for basic techniques because of sensors' flexibility for interactive tasks.

Touch and gestures. Recent developments in 3D modeling applications and corresponding research on input through multitouch interfaces are gaining momentum. Studies aimed at finding efficient models of 3D object manipulation using touch-based hand gestures are emerging as well. Leslie Predy and her colleagues presented a case study of their 3D modeling app (123D Sculpt) for the Apple iPad (Figure 5a).9 The app is based on a sculpting metaphor, and usability testing with experienced and novice audiences provided encouraging results compared with a similar desktop-based modeling application.
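A sculpting-style drag gesture of the kind 123D Sculpt popularized can be reduced to a simple displacement brush. The sketch below is our own minimal interpretation, not Autodesk's implementation: vertices within a brush radius of the touched surface point are pushed along their normals with a smooth falloff.

```kotlin
// Minimal mesh types for the sketch; the data model is hypothetical.
data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun plus(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
    operator fun minus(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
    operator fun times(s: Float) = Vec3(x * s, y * s, z * s)
    fun length() = kotlin.math.sqrt(x * x + y * y + z * z)
}

// Push vertices near the touched point along their normals.
// strength > 0 adds material, strength < 0 carves it away.
fun sculptBrush(
    vertices: MutableList<Vec3>,
    normals: List<Vec3>,
    touchPoint: Vec3,     // touch position already projected onto the surface
    radius: Float,
    strength: Float
) {
    for (idx in vertices.indices) {
        val d = (vertices[idx] - touchPoint).length()
        if (d >= radius) continue
        // Smooth quadratic falloff: full effect at the center, zero at the rim.
        val t = 1f - d / radius
        vertices[idx] = vertices[idx] + normals[idx] * (strength * t * t)
    }
}
```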


Camera. For the most basic type of content creation, a mobile device's camera provides the image and video data to be used for further processing or for embedding in content creation projects. However, input from the camera can also support the implementation of interaction tasks and techniques in computer graphics environments. More recently, mobile augmented-reality-based content creation has been gaining popularity because such interfaces reduce the complexity of interactive modeling with 3D objects. Tobias Langlotz and his colleagues designed an in situ 2D and 3D content creation system for mobile augmented reality (Figure 5b).10 With a nonprofessional audience in mind, the system, implemented on a Windows Mobile HTC HD2 device, allows both creation and manipulation of 3D primitives. Two types of working environments are addressed: small and large (a desktop and an outdoor scene, respectively). The basic 3D objects that can be created directly in the virtual environment by touching the screen are cubes, spheres, and cylinders. To annotate the environment, the system also includes 2D objects such as lines, circles, and freehand markings (see www.youtube.com/watch?v=TjUwRIRzCus).
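In situ authoring of this kind ultimately has to turn a 2D tap into a 3D position in the scene. The sketch below illustrates one common way to do that and is not taken from Langlotz's system: a ray through the touch point, built under a simple pinhole-camera assumption with an assumed vertical field of view, is intersected with a virtual ground plane, and a primitive would be anchored at the hit point.

```kotlin
import kotlin.math.tan

data class V3(val x: Float, val y: Float, val z: Float)

// Intersect a viewing ray with the ground plane y = 0 and return the hit point,
// or null if the ray does not head toward the ground. Both inputs are in world
// coordinates.
fun placeOnGround(cameraPos: V3, rayDir: V3): V3? {
    if (rayDir.y >= -1e-6f) return null              // ray not pointing downward
    val t = -cameraPos.y / rayDir.y                  // cameraPos.y + t * rayDir.y = 0
    return V3(cameraPos.x + t * rayDir.x, 0f, cameraPos.z + t * rayDir.z)
}

// Camera-space ray through a touch point under a pinhole-camera assumption.
// The result must still be rotated into world space using the device
// orientation (for example, a sensor-derived rotation matrix).
fun touchRayInCameraSpace(
    touchX: Float, touchY: Float,      // pixels, origin at the top-left corner
    screenW: Float, screenH: Float,
    verticalFovDeg: Float = 60f        // assumed value
): V3 {
    val tanHalf = tan(Math.toRadians(verticalFovDeg / 2.0)).toFloat()
    val x = (2f * touchX / screenW - 1f) * tanHalf * (screenW / screenH)
    val y = (1f - 2f * touchY / screenH) * tanHalf
    return V3(x, y, -1f)               // the camera looks down the -z axis
}
```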


Because the interactive modeling of 3D objects is still tedious for many designers, a handheld device's camera could become an assistive tool for overcoming the more complex aspects of this activity. For object reconstruction, solutions utilizing a mobile device's camera can help create a 3D model of an object using image-based modeling. Autodesk's mobile app 123D Catch (https://itunes.apple.com/us/app/123d-catch/id513913018) uses techniques from image-based modeling to help designers create a 3D model of an object by combining a set of object images captured from different angles. In addition to capturing image and video data with an inbuilt camera, the incorporation of a depth sensor could allow for capturing more accurate 3D content. The recent Google Tango project aims to provide support features based on mobile 3D reconstruction and depth sensing (https://www.google.com/atap/projecttango/#project).

Directional and positional sensor fusion. Andrew Au and Jie Liang recently developed a technique for the automatic creation of panoramic 3D scenes on Windows Phone 7 devices by leveraging the phone's directional sensors.11 Specifically, they used a compass and an accelerometer to provide a three-degree-of-freedom (DOF) orientation and to create 3D scenes in association with the camera-captured images. In addition, the system employs an accelerometer during scene navigation and allows instantaneous sharing of created content on social networking sites.
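As a rough illustration of this kind of sensor fusion, the Kotlin sketch below derives the heading and elevation at which a newly captured image could be anchored in a panoramic scene. It is our own sketch of the general idea, not the published system; the sign convention for elevation depends on the renderer and is an assumption.

```kotlin
import android.hardware.SensorManager

// Heading and elevation for anchoring a captured image on a panoramic sphere.
data class PanoramaAnchor(val headingDeg: Float, val elevationDeg: Float)

fun panoramaAnchor(accel: FloatArray, magnetic: FloatArray): PanoramaAnchor? {
    val rotation = FloatArray(9)
    if (!SensorManager.getRotationMatrix(rotation, null, accel, magnetic)) return null
    val angles = FloatArray(3)
    SensorManager.getOrientation(rotation, angles)
    // angles[0] is azimuth (yaw) and angles[1] is pitch, both in radians.
    // Together they give two of the three DOF needed to place the image;
    // roll (angles[2]) would control its in-plane rotation.
    val heading = Math.toDegrees(angles[0].toDouble()).toFloat()
    val elevation = -Math.toDegrees(angles[1].toDouble()).toFloat()  // assumed sign
    return PanoramaAnchor((heading + 360f) % 360f, elevation)
}
```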

Figure 3. High-quality images created by artists using SketchBook Mobile on iPad devices.14 A pen can provide more precise input than fingers.

In a preliminary work, researchers attempting to create a tool for previsualization on mobile devices demonstrated how prerendered objects could be embedded in real time in a camera's captured images.12 This process, called compositing, is vital in production and requires both artistic and graphics skills to make visually appealing images. The author utilized four mobile sensors in the process—an accelerometer, a compass, a GPS, and a gyroscope—to determine the geometric structure of the virtual environment for lighting effects.

Figure 4. Directional and positional sensors for constructing (a) a 2D scene representation of (b) the picture content.

Figure 5. 3D Shape Creation and Manipulation. (a) The drag gesture can manipulate object parts in a sculpting-based interface.9 (b) Example images depict the mobile augmented-reality-based object creation interface.10 (Courtesy of T. Langlotz.)

The camera orientation, for example, was calculated by using the device's compass and gyroscope. Similarly, the free version of Autodesk's 3D modeling and design app Sketcher 3D (https://play.google.com/store/apps/details?id=com.Doktor3D.Sketcher3D) implements graphical scaling operations on objects by using the device's accelerometer: tilting the device to one side scales the object up, and tilting it to the other side scales it down.

Giovanni Pintore and Enrico Gobbetti recently demonstrated a system for the automatic creation of 2D floor plans and representative 3D models of existing building interiors that requires no specialized content creation experience.13 With a camera-equipped Android device, a user walks between rooms to capture video of the interior walls; the captured video frames are then automatically spatially indexed with data from the device's accelerometer and magnetometer. In later stages, statistical techniques analyze the recorded data, calculating each room's shape and the complete floor plan. The direction and tilt information from the device's sensors helps these techniques analyze the video frames and construct the needed visual representations.
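Tilt-to-scale interactions of the kind just described are straightforward to drive from the accelerometer. The sketch below is a generic illustration of the idea rather than code from any particular app; the dead zone and gain constants are arbitrary assumptions.

```kotlin
import android.hardware.SensorEvent

// Generic tilt-to-scale sketch: roll the device left or right to scale the
// selected 3D object up or down each sensor update.
class TiltScaleController(
    private val deadZone: Float = 1.5f,   // sideways gravity (m/s^2) to ignore
    private val gain: Float = 0.02f       // scale change per unit tilt per update
) {
    var scale = 1.0f
        private set

    // Call from onSensorChanged for TYPE_ACCELEROMETER events.
    fun onAccelerometer(event: SensorEvent) {
        val sideways = event.values[0]    // gravity component along the device's x axis
        if (kotlin.math.abs(sideways) < deadZone) return
        // Tilting one way grows the object; tilting the other way shrinks it.
        val effective = sideways - kotlin.math.sign(sideways) * deadZone
        scale = (scale * (1f + gain * effective)).coerceIn(0.1f, 10f)
    }
}
```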

Challenges and Future Directions
The primary hurdle to realizing professional-grade content creation apps on mobile devices involves dealing efficiently with the limitations of handheld computing technology. Table 2 summarizes some of the important challenges and future directions related to hardware and software for mobile content creation systems with inbuilt sensors.


Hardware
To raise important research questions, we focus on the following key aspects of mobile computing systems from the hardware perspective: computational requirements, sensing and input devices, and storage.

Computational requirements. In simple terms, as realism in computer graphics images increases, the computational requirements rise correspondingly. Thus, meeting the computational requirements for creating photorealistic and nonphotorealistic content at interactive rates will allow artists, teachers, and laypeople to use mobile content creation systems more effectively. To support such graphical computations, recent mobile computing technology has seen advances in the architecture and organization of mobile CPUs and GPUs. Although mobile GPUs are slowly increasing their capabilities,15 the way a particular technique combines raw processing, GPU capabilities, and remote computing facilities, if available, is more important for success. The general principle of remote computing involves sending computationally intensive tasks to resourceful remote servers and receiving the results back after the computations are done. Thus, it is extremely important for a graphics-intensive technique to balance computational requirements by optimally dividing tasks among the mobile CPU, the GPU, and remote computers. The criteria on which such dynamic division of computational resources between local and remote computers can be based include the cost of network usage, user experience, and battery life. Moreover, if a mobile app allows real-time collaborative work on a content creation task, the participating mobile devices can be programmed to share computing resources. However, this might not be possible on all mobile platforms, especially on low-end devices that do not possess a GPU or advanced networking capabilities. Also, if a remote computing facility needs to be leveraged, the networking capabilities of mobile computing platforms must be made more reliable and fast.
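A simple way to act on the criteria just listed (network cost, user experience, and battery life) is a per-task heuristic that chooses where a rendering job should run. The sketch below is purely illustrative; the weights, thresholds, and task model are assumptions, not results from the literature.

```kotlin
// Where a graphics-intensive task should execute. Purely illustrative.
enum class ExecutionTarget { LOCAL_CPU, LOCAL_GPU, REMOTE_SERVER }

data class DeviceState(
    val batteryFraction: Float,    // 0.0 (empty) .. 1.0 (full)
    val hasGpu: Boolean,
    val networkMbps: Float,        // current estimated bandwidth
    val meteredNetwork: Boolean    // user pays per byte
)

// Hypothetical heuristic: offload only when the task is heavy, the network is
// fast and unmetered, or the battery is too low to spend on local computation.
fun chooseTarget(estimatedGflops: Float, payloadMb: Float, state: DeviceState): ExecutionTarget {
    val transferSeconds = payloadMb * 8f / maxOf(state.networkMbps, 0.1f)
    val remoteAttractive = !state.meteredNetwork && transferSeconds < 2f
    return when {
        estimatedGflops > 50f && remoteAttractive -> ExecutionTarget.REMOTE_SERVER
        state.batteryFraction < 0.15f && remoteAttractive -> ExecutionTarget.REMOTE_SERVER
        state.hasGpu -> ExecutionTarget.LOCAL_GPU
        else -> ExecutionTarget.LOCAL_CPU
    }
}
```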


Table 2. Challenges for future mobile content creation systems.

Area | Subareas | Topics of concern
Hardware | Processing | CPU, GPU, remote rendering, computational offloading
Hardware | Storage | Internal, remote (cloud)
Hardware | Sensing and input | New devices and sensors development
Software | Interface | Desktop integration, collaboration, sensor integration novelties, natural gestures
Software | Interaction | Creation, manipulation, human factors
Software | Apps | Stability, algorithms, platform independence, user studies

Sensing and input devices. Developing mobile apps or tools that incorporate the novel sensing technologies available on mobile devices will help anyone with access to such devices create digital content more naturally. Existing work in the computer graphics literature already tries to capture the physical aspects of artistic techniques in simulation. For example, several computer graphics techniques such as washes, glazing, wet-in-wet, lifting off, and sponging allow artists to simulate the unique features of the watercolor medium. However, with direct input through a mobile device's digital input and sensing devices, these techniques will become more natural in the future. Many new issues and challenges need to be resolved, however. For example, for washes, pigment might be applied to a sloping canvas (the mobile device's screen) using a brush in overlapping strokes. Users could also produce random effects by shaking the mobile device as they would shake a piece of paper. In another example, "lifting off" involves removing watercolor pigment from dried color using a wet brush or tissue on selected areas of a canvas. A stylus with pressure sensitivity could simulate this effect, but additional parameters such as the amount of wetness, brush size, and pressure must be taken into consideration.
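The "lifting off" behavior just described maps naturally onto stylus parameters. The following sketch is our own illustration of how pressure, brush size, and wetness could combine to remove dried pigment from a raster canvas; the falloff and the specific formula are assumptions.

```kotlin
// Illustrative "lifting off" brush: a wet stylus dab removes dried pigment.
// pigment[y][x] holds pigment density in [0, 1]. The formula is an assumption.
fun liftOff(
    pigment: Array<FloatArray>,
    centerX: Int, centerY: Int,
    brushRadius: Float,      // from the brush size setting
    pressure: Float,         // stylus pressure in [0, 1]
    wetness: Float           // how wet the brush or tissue is, in [0, 1]
) {
    val r = brushRadius.toInt()
    val strength = (pressure * wetness).coerceIn(0f, 1f)
    for (dy in -r..r) {
        for (dx in -r..r) {
            val x = centerX + dx
            val y = centerY + dy
            if (y !in pigment.indices || x !in pigment[y].indices) continue
            val dist = kotlin.math.sqrt((dx * dx + dy * dy).toFloat())
            if (dist > brushRadius) continue
            // More pigment is lifted at the brush center than at its rim.
            val falloff = 1f - dist / brushRadius
            pigment[y][x] *= 1f - strength * falloff
        }
    }
}
```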


Although pen-based pressure-sensitive devices are useful in content creation, in our opinion, input devices supporting the simulation of more traditional tools, such as a paint brush, might further invite artists to create content on mobile devices. Design characteristics, such as shape, size, tip, availability of control buttons, pressure sensitivity, and user feedback, might be exploited to improve the user experience. A more realistic paint brush would provide a more natural experience while painting, for example. Of course, this would require substantial research and development investment in input devices developed exclusively for mobile platforms.

Two-dimensional input spaces and their corresponding interfaces have limitations when interacting with content creation systems. The development of new sensing devices that let users specify input in a 3D space will add tremendously to the popularity of mobile content creation. For this to happen, the shape and structure of devices might require significant changes to help them sense 3D input.

Storage barriers. A typical 3D object modeling application requires both high computational power and large-capacity storage hardware. Moreover, the created content occupies a large chunk of device memory, and if it is created frequently, memory space could run out quickly. The internal memories of mobile computing systems have increased substantially, but they still fall short of content creation app requirements. Mobile cloud computing offers hope for resource-constrained devices to leverage not only external storage but also processing infrastructure. By taking advantage of existing server, storage, software, and platform infrastructure, many of the performance barriers to realizing content creation on mobile devices can be alleviated. To achieve this goal, improvements in the general area of networking, such as communication bandwidth, availability, and quality of service, are highly desirable. Because nonphotorealistic content allows varying levels of graphical abstraction, and thus flexibility in choosing the available computing hardware, we believe its creation will outpace the creation of photorealistic content (see the "Nonphotorealistic Renderings in Mobile Content Creation" sidebar).

Nonphotorealistic Renderings in Mobile Content Creation
Since the inception of the term nonphotorealistic rendering, a huge debate has raged about its applications and advantages.1 However, it has continuously received attention from researchers in various domains because this approach offers users the means to explore abstractions and thus provides freedom of artistic expression. Mobile devices can utilize such abstractions to overcome the resource constraints associated with limited raw processing, graphical processing, and memory capacity. We argue that the incorporation of nonphotorealistic elements in graphics renderings on mobile devices has the potential to offer users a new genre of graphics experience.

One area that has received a great deal of attention in nonphotorealistic rendering is stroke-based rendering of images and surfaces. The work in this area could be adapted to mobile devices because of the constantly evolving touch-input facilities in these devices. Recent work discusses the key benefits of bringing nonphotorealistic-rendering-based content creation to resource-constrained handheld devices.2

References
1. J. Lansdown and S. Schofield, "Expressive Rendering: A Review of Non-Photorealistic Techniques," IEEE Computer Graphics and Applications, vol. 15, no. 3, 1995, pp. 29–37.
2. K. Dev, "Mobile Expressive Renderings: The State of the Art," IEEE Computer Graphics and Applications, vol. 33, no. 3, 2013, pp. 22–31.

Software

The software traits of mobile content creation are closely related to the design, development, revision, and user experience of apps that efficiently utilize the mobile device's hardware. The following criteria are most important: interfaces, interaction, and the design of apps.

Natural interfaces and interactions. As noted by James Foley and his colleagues,1 the potential set of interaction tasks and techniques is virtually infinite, so looking for the most effective ones from several viewpoints is extremely important for mobile content creation apps. If we consider only touch and pen input devices, for instance, we have a large space of interaction tasks and techniques to choose from (Figure 6). Because of the differences between desktop and mobile devices in the ways that input and output can be used, the implementation of these techniques shows much variation.16 Although automatic content creation might require the smallest amount of user input, interactivity requires support for at least six fundamental tasks: selection, positioning, path specification, orientation, quantifying, and text input. A set of controlling tasks (stretch, rotate, scale, and translate) is required to efficiently control the creation and manipulation of graphical objects. For 3D modeling on a device's multitouch surface, gestures could prove useful by providing real-time, lifelike handling of graphical objects. However, many casually designed gestures can hinder a user's performance.
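As one concrete instance of mapping a controlling task onto a multitouch gesture, the sketch below wires Android's ScaleGestureDetector to the scale of a selected object. The detector and its listener are standard platform APIs, but the object model around them is hypothetical.

```kotlin
import android.content.Context
import android.view.MotionEvent
import android.view.ScaleGestureDetector

// Hypothetical handle to whatever object is currently selected in the scene.
class SelectedObject(var scale: Float = 1f)

// Pinch-to-scale: the "scale" controlling task realized as a touch gesture.
class PinchScaleHandler(context: Context, private val target: SelectedObject) {
    private val detector = ScaleGestureDetector(context,
        object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
            override fun onScale(detector: ScaleGestureDetector): Boolean {
                // scaleFactor is the relative change since the previous event.
                target.scale = (target.scale * detector.scaleFactor).coerceIn(0.1f, 10f)
                return true
            }
        })

    // Forward touch events from the view's onTouchEvent.
    fun onTouchEvent(event: MotionEvent): Boolean = detector.onTouchEvent(event)
}
```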


User interfaces for content creation applications need to be more intuitive and natural in order to succeed. To realize these essential features, user studies might need to be carried out to investigate the kind of comfort users expect. A recent work studied the effectiveness of touch-based techniques (finger drag and track) against mouse input.17 Although the authors reported mouse input as the winner, they also made novel observations about touch-based interfaces providing a much larger input space than a mouse does. Research is needed to map existing interaction tasks and techniques from the desktop input space to the mobile input space. For example, the inbuilt accelerometer is useful for supplying automatic input, so it will be intriguing to see how we can implement and test a subset of the interaction tasks and techniques prevalent in professional 2D and 3D content creation tools on desktops and other stationary devices. We believe the variety of combinations is extensive, but again, user studies might be fruitful in determining the best ones. A surge in research activities aimed at developing novel interaction techniques via multitouch screens and sensors, especially in ACM CHI conferences, has already started to take place.

Increasing the popularity of mobile content creation apps. Mobile content creation apps are currently used for comic strips, basic product design, onsite analysis, learning, design, and illustrations. To make such apps appeal to other user groups and application areas, we need to meet the important challenges related to the software engineering aspects of the app design and development model.18 For example, users need to be able to use such apps without the fear of losing their work in the middle of a task. The biggest challenges are related to porting existing applications to mobile platforms, developing efficient algorithms for a real-time experience, providing features and functionality similar to those found on desktop computers, and creating seamless integration and interoperability with both mobile and desktop systems.


One viable option for getting existing content creation applications on mobile devices is to try to port them to mobile computing platforms. However, the porting task is not as straightforward as it appears because there are major differences between stationary and mobile computing paradigms. Existing applications need to be reengineered in completely novel ways to make them run on mobile platforms. The graphics programming APIs for mobile devices are in embryonic stages in terms of supported features. More recent ones, such as OpenGL ES, provide advanced functionality for programming 2D and 3D graphics, but there is still a wide gap that requires work, such as improving texturing, inter-API compatibility, shading functionality, interaction, and interface design, among others. Additionally, new algorithms developed by researchers in the content creation field should be able to scale on mobile platforms, a factor that we believe should determine their efficiency and effectiveness.5

Apps that allow for collaboration among artists and designers would give a new impetus to creativity, but they will also require an infrastructure for shared data storage and processing. It is likely that mobile cloud computing will serve as a backbone for this infrastructure—Autodesk's 360 Mobile enables easy mobile collaboration through cloud-based storage services.

Desktop-based content creation models must be modified, adapted, and redesigned to realize the potential of pervasive content creation. To do that, we need interfaces that allow a person to interact more naturally with the device. Future research could lead us to a new paradigm of touch- and sensor-based mobile content creation; to increase the input space, more sensing devices might be developed for the user's space instead of the device's space (such as hand- or finger-wearable sensors). Identification of user requirements will be vital in deciding what needs to be included in such a small package. A mobile device's body might itself provide input in 3D space—for example, elastic or flexible mobile phones that could be stretched or compressed. We believe there will be a paradigm shift not only in the technological aspects of mobile devices but also in the applications and social aspects of digital content creation.

Figure 6. Parameters for implementing touch-based content creation techniques on mobile computing platforms. Inputs through fingers involve considerations such as the number of fingers, handedness (left or right), finger characteristics (shape, size, and fingerprint), and degrees of freedom (one, two, or more). A stylus has its own parameters: pressure sensitivity, stylus tip contact points (single or more), feedback (vibration), and tip characteristics (point, flat, or round).

Acknowledgments
We would like to thank the editor for his valuable comments to improve the article and all the authors who provided the images from their articles for use in this paper. Support for this work was provided by Microsoft Research through its PhD Scholarship Program.

References
1. J.D. Foley, V.L. Wallace, and P. Chan, "The Human Factors of Computer Graphics Interaction Techniques," IEEE Computer Graphics and Applications, vol. 4, no. 11, 1984, pp. 13–48.
2. T. Goldstein, Hand Held Hollywood's Filmmaking with the iPad & iPhone, 1st ed., Peachpit Press, 2012.
3. M. Nappi, "Drawing w/Digits_Painting w/Pixels: Selected Artworks of the Gesture over 50 Years," Leonardo, vol. 46, no. 2, 2013, pp. 163–169.
4. D.S. Leibowitz, Mobile Digital Art: Using the iPad and iPhone as Creative Tools, CRC Press, 2013.
5. S. DiVerdi et al., "Painting with Polygons: A Procedural Watercolor Engine," IEEE Trans. Visualization & Computer Graphics, vol. 19, no. 5, 2013, pp. 723–735.
6. A.M. Blatner et al., "TangiPaint: A Tangible Digital Painting System," Proc. Color and Imaging Conf., 2011, pp. 102–107.
7. T.H. Kim and I. Lin, "Real-Time Non-Photorealistic Viewfinder on the Tegra 3 Platform," Stanford Univ., 2012; www.stanford.edu/~kimth/cs478/proj/kimth_irvingl_nprViewfinder.pdf.
8. S. Christodoulakis et al., "Picture Context Capturing for Mobile Databases," IEEE MultiMedia, vol. 17, no. 2, 2010, pp. 34–41.
9. L. Predy et al., "123D Sculpt: Designing a Mobile 3D Modeling Application for Novice Users," Proc. CHI '12 Extended Abstracts on Human Factors in Computing Systems, 2012, pp. 845–848.
10. T. Langlotz et al., "Sketching Up the World: In Situ Authoring for Mobile Augmented Reality," Personal and Ubiquitous Computing, vol. 16, no. 6, 2012, pp. 623–630.
11. A. Au and J. Liang, "Ztitch: A Mobile Phone Application for 3D Scene Creation, Navigation, and Sharing," Proc. 19th Int'l Conf. Multimedia, 2011, pp. 793–794.
12. R.R. Cepeda, "Real-Time Context-Aware Compositing," master's thesis, Univ. of Bristol, 2012.
13. G. Pintore and E. Gobbetti, "Effective Mobile Mapping of Multi-Room Indoor Structures," The Visual Computer, vol. 30, nos. 6–8, 2014, pp. 707–71.
14. G. Kurtenbach, "Pen-Based Computing," ACM Crossroads, vol. 16, no. 4, 2010, pp. 14–20.
15. T. Akenine-Moller and J. Strom, "Graphics Processing Units for Handhelds," Proc. IEEE, vol. 96, no. 5, 2008, pp. 779–789.
16. R. Ballagas et al., "The Smart Phone: A Ubiquitous Input Device," IEEE Pervasive Computing, vol. 5, no. 1, 2006, pp. 70–77.
17. S. Radhakrishnan et al., "Finger-Based Multitouch Interface for Performing 3D CAD Operations," Int'l J. Human-Computer Studies, vol. 71, no. 3, 2012, pp. 261–275.
18. M.E. Joorabchi, A. Mesbah, and P. Kruchten, "Real Challenges in Mobile App Development," Proc. Symp. Empirical Software Engineering and Measurement, 2013, pp. 15–24.

Kapil Dev is a graduate student in computer science at Lancaster University. His research interests include nonphotorealistic rendering, computer graphics, and human-computer interaction. Dev has an M.Tech in information systems from Netaji Subhas Institute of Technology. Contact him at [email protected].

Manfred Lau is an assistant professor in the School of Computing and Communications at Lancaster University. His research interests are in computer graphics, HCI, and digital fabrication. He has a PhD in computer science from Carnegie Mellon University. Contact him at [email protected].

Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.

