Computers in Biology and Medicine 56 (2015) 89–96


Multiple texture mapping of alveolar bone area for implant treatment in prosthetic dentistry

Koojoo Kwon, Dong-Su Kang, Byeong-Seok Shin*
Department of Computer and Information Engineering, Inha University, Incheon, Republic of Korea

Article history: Received 14 May 2014; accepted 3 November 2014

Abstract

Treatment using implants is frequently employed in prosthetic dentistry. In this method, determining the bone density of the upper and lower jaws is important. Generally, a dentist can recognize the condition of the alveolar bone to be manipulated using a cone-beam computed tomography (CBCT) image. However, communicating the data to the patient is a challenge because it is difficult for a nonprofessional person to interpret the image, which contains a distribution of pixels with similar density. We present an intuitive texture mapping method for the alveolar bone area for application in implant treatment. Our method aims to help patients better understand the treatment process by using a textured image that includes several different texture patterns reflecting the density of the alveolar bone area. We segment the area in accordance with the density of the corresponding parts of the alveolar bone and the gingiva. By simplifying the boundary of each segmented region, the distribution of pixels with similar density in the alveolar bone area can be easily recognized. Next, the texture patterns for the segmented regions are mapped onto the alveolar bone area using the graph-cut algorithm, which enables smooth texture mapping at the boundaries of the segmented regions. The result is a texture applied to the alveolar bone area that corresponds to the bone structure. Our method is helpful for facilitating communication and understanding of treatment using dental implants.
© 2014 Elsevier Ltd. All rights reserved.

Keywords: Bone density; Dental implants; Graph-cut algorithm; Prosthetic dentistry; Texture mapping

1. Introduction

The alveolar bone supports the teeth and allows them to perform their full function [1,2]. Tooth loss and gingival disease can cause collapse of the alveolar bone. When a tooth is lost, an artificial tooth can be placed using a dental implant [3,4]. However, when alveolar bone tissue has been lost, a variety of problems can occur, including improper insertion of the implant screw or loosening of the artificial tooth after treatment. Therefore, the condition of the alveolar bone is important, because in implant treatment the implant, rather than a natural tooth root, supports the dental prosthesis in the alveolar bone. When a cone-beam computed tomography (CBCT) image of the jaw has been obtained, dentists usually examine the distribution of density to confirm the condition of the alveolar bone tissue. However, explaining the current state of treatment or surgical plans for future treatments to patients using standard CBCT images is difficult because only those with specialized knowledge can interpret such images. Here, we provide a method for visualizing the density distribution of the alveolar bone using texture patterns. Our method can help nonprofessional people easily understand the structure of the alveolar bone.


* Corresponding author. Tel.: +82 32 860 7452. E-mail address: [email protected] (B.-S. Shin).

http://dx.doi.org/10.1016/j.compbiomed.2014.11.005
0010-4825/© 2014 Elsevier Ltd. All rights reserved.

A patient's alveolar bone density during treatment with dental implants can be intuitively visualized using our method. We can efficiently plan a treatment schedule because our method enhances the patient's understanding. For example, before an implant treatment, a dentist can use our texture mapping method to explain that the condition of the jaw should be improved before treatment because of poor alveolar bone health. The density value of the alveolar bone is quantized into discrete levels using several predefined threshold values. We determine the density range between the minimum and maximum values of a segmented area and then divide the range evenly by the number of levels, which is chosen by the user. We perform segmentation corresponding to the density of each area and then map bone texture patterns onto their corresponding areas. However, because bone tissue has a cancellous pattern, different texture patterns cannot be easily and smoothly connected at the seam between different segmented areas. When the target area for texture mapping is larger than the texture, the repeated texture is tiled across the region. For the boundaries to appear naturally synthesized, the boundary must be deformed. Recently, example-based texturing techniques have mainly been used to generate a large textured image by analyzing the texture pattern at the boundaries [5]. Example-based texturing methods are categorized into pixel-based approaches [6–10] and patch-based approaches [11–13].


Pixel-based methods can connect the boundary region more naturally than patch-based methods: when two patterns are joined, a pixel-based method analyzes the similarity of the texture patterns pixel by pixel, whereas a patch-based method analyzes the similarity per patch, i.e., per set of pixels. Patch-based methods are faster because they analyze the texture pattern per patch rather than per individual pixel. The minimal error boundary [14] and the graph-cut algorithm [15] have been presented as patch-based methods. We selected the graph-cut algorithm for texture mapping of the alveolar bone area because its computation time is short enough to support interactive communication between the dentist and the patient. Our contribution is the application of a texture to the alveolar bone area that corresponds to the bone structure. Previous methods only visualize the density distribution of bone using gray-scale or pseudo-coloring techniques, which produce non-intuitive results. We propose a novel method for intuitively visualizing the density distribution by mapping several texture patterns. Our method is helpful for facilitating communication and improving the patient's understanding of treatment using dental implants. Dentists usually use the density values of the CBCT image to confirm the internal condition of the bone; alternatively, they can use a pseudo-coloring method. Although these methods provide an exact view of the condition, the connectivity between the several levels of the segmented regions is not sufficiently represented. In contrast, our texture mapping method can visualize the internal structure of the bone. In Section 2, we describe the multiple texture mapping method for the alveolar bone area. The results are shown in Section 3. Lastly, we summarize and conclude our work.

2. Method

Fig. 1 shows the process of our method. Our texture mapping procedure consists of two steps. First, we segment the target area into several regions using predefined density values and refine the segmented area by removing tiny spots or holes (we refer to this procedure simply as "refinement"). In the second step, the level texture that corresponds to the density value of each segmented region is mapped. The graph-cut algorithm is used for smooth texture mapping at the boundary of the segmented region because each level texture has a different pattern.

After we complete the texture mapping for the nth level, we sequentially perform segmentation of the remaining area and texture mapping of the area that corresponds to the (n-1)th level. By repeating this process for the number of levels, the bone density of several levels is efficiently visualized. As shown in Fig. 1, this process covers the region of the previous level with the current level texture, because the graph-cut algorithm connects different texture patterns by adjusting the boundary of the image to be overwritten with respect to the background image.

2.1. Segmentation for selecting an alveolar bone level

Before the segmentation and texture mapping steps, we must identify the alveolar bone in the CBCT image. To determine the bone area, we must select a range of density values corresponding to alveolar bone, because each CBCT image can have slightly different density values for alveolar bone [16–18]. We test the density values of the pixels in the CBCT image by scanning horizontally and vertically to determine whether the values are within the threshold range for alveolar bone. This process of identifying the alveolar bone region is shown in Fig. 2. During horizontal scanning, we set the start position where the density value first increases above the predefined density value of alveolar bone, and we set the end position where the density value decreases below that value (left of Fig. 2). We tag the pixels between the start and end positions as "IN" and the remaining pixels as "OUT". The same procedure is performed in the vertical direction. All pixels tagged as "IN" in both directions define the pixel set corresponding to the alveolar bone region, and this pixel set is suitable for texture mapping because it contains no holes. A sketch of this scan is given below. After selecting the alveolar bone area, we assign texture levels for each bone density threshold range (right of Fig. 2). The density range of the alveolar bone is divided into five levels by the user, and a label corresponding to each level is assigned (left of Fig. 2). Fig. 3 shows the result of segmentation for three regions corresponding to the levels shown in Fig. 2. The current level may contain very small regions caused by a non-uniform distribution of the density values (we refer to such a region as an "ignorable pixel set", IPS). Most occurrences of an IPS represent a cross section of the bone matrix rather than cohesion of the adjacent bone tissue. Therefore, representing an IPS by the density of the adjacent region is not appropriate.
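The following Python sketch illustrates the scan-based tagging described above. It is a simplified reconstruction under our own assumptions (a single bone threshold, a 2-D NumPy array for the CBCT slice, and the function name detect_bone_area are ours), not the authors' implementation.

```python
import numpy as np

def detect_bone_area(image, bone_threshold):
    """Tag pixels as part of the alveolar bone region. For every row
    (and every column) the pixels between the first and the last
    position whose density exceeds `bone_threshold` are tagged 'IN';
    the bone area is the set of pixels tagged 'IN' in both directions."""
    above = image >= bone_threshold

    def scan(lines):
        tagged = np.zeros(lines.shape, dtype=bool)
        for i, line in enumerate(lines):
            hits = np.flatnonzero(line)
            if hits.size:
                tagged[i, hits[0]:hits[-1] + 1] = True
        return tagged

    horizontal = scan(above)      # row-wise IN/OUT tagging
    vertical = scan(above.T).T    # column-wise IN/OUT tagging
    return horizontal & vertical  # 'IN' in both directions only
```

Pixels inside an enclosed hole lie between above-threshold pixels in both scan directions, so they are tagged "IN" as well, which is why the resulting region contains no holes.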

Fig. 1. The process of texture mapping. After locating the alveolar bone area, the segmentation and texture mapping steps are repeated n times.


Fig. 2. Detection of the alveolar bone area (left) and the texture levels according to the density value (right). The inside of the alveolar bone is identified by testing the density value in the horizontal and vertical directions of the CBCT image (left). The density values for each of the five levels are then set by the user (right).
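As an illustration of the level assignment shown on the right of Fig. 2, the sketch below splits the min–max density range of the detected bone area evenly into a user-chosen number of levels, following the even split described in the Introduction. The NumPy helper and all names are illustrative assumptions; in practice the thresholds for the five levels are set by the user.

```python
import numpy as np

def quantize_density(density, bone_mask, num_levels=5):
    """Assign a texture level (1..num_levels) to every pixel of the
    detected bone area by splitting the area's min-max density range
    into equally wide intervals; pixels outside the area get level 0."""
    values = density[bone_mask]
    lo, hi = values.min(), values.max()
    # num_levels - 1 interior thresholds split [lo, hi] evenly.
    thresholds = np.linspace(lo, hi, num_levels + 1)[1:-1]
    levels = np.zeros(density.shape, dtype=np.int32)
    levels[bone_mask] = np.digitize(values, thresholds) + 1
    return levels
```

Combined with the detection sketch above, this yields the per-level label image on which the subsequent refinement and texture mapping steps operate.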

Fig. 3. The CBCT image (left) and the segmented and texture-mapped images (right). The result of texture mapping is influenced by the segmented regions. After mapping a texture onto the current level, mapping the texture of another level is not appropriate because of the ignorable pixel sets (IPS).

Fig. 4. The segmented region applied to the graph-cut algorithm. The boundary of the nth level patch on the (n-1)th level patch (middle) and the boundary adjusted using the graph-cut algorithm between the start node (s) and the end node (t) (right).

In addition, the result of texture mapping does not provide good connectivity at the IPS boundary because the texture patterns are mismatched at the joint between the IPS and the neighboring pixels. Therefore, we must refine the segmented area to remove the IPS. Bilateral filtering [19] or anisotropic diffusion filtering [20] could be used for this refinement, but these techniques are not suitable for our method because their computation time is too long. In this paper, we instead reduce each IPS by removing the pixels whose density values are larger than the threshold when they occupy only a small portion of a predefined refinement kernel. We also remove holes by filling the pixels whose density values are lower than the current threshold with the density of the texture level from the previous step. A sketch of this refinement is given below.
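A minimal sketch of the kernel-based refinement, assuming a boolean level mask, a 5 × 5 window, and the 40% occupancy criterion reported in Section 3. The use of scipy.ndimage.uniform_filter to compute local occupancy and the symmetric hole-filling rule are our assumptions rather than the authors' exact procedure (which fills holes with the density of the previous texture level).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def refine_level_mask(level_mask, kernel_size=5, min_fill=0.40):
    """Remove ignorable pixel sets (IPS) and small holes from a binary
    level mask: an above-threshold pixel is kept only if at least
    `min_fill` of its kernel neighbourhood is also above the threshold,
    and isolated holes are filled by the symmetric criterion."""
    # Local fraction of above-threshold pixels in a kernel_size x kernel_size window.
    occupancy = uniform_filter(level_mask.astype(np.float32), size=kernel_size)
    refined = level_mask.copy()
    refined[level_mask & (occupancy < min_fill)] = False          # drop tiny spots (IPS)
    refined[~level_mask & (occupancy >= 1.0 - min_fill)] = True   # fill small holes
    return refined
```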

2.2. Texture mapping using the graph-cut algorithm

We perform texture mapping after segmentation.


Connecting the boundary between the regions of different texture levels as naturally as possible is important. In this paper, we use a max-flow/min-cut graph algorithm [21] to connect the textures of different levels. This algorithm is useful for detecting the boundary of an object or for identifying a given region in a two-dimensional image, and it is often used in segmentation and registration methods because it can determine the similarity along a boundary [22,23]. We use the graph-cut algorithm to modify the boundary of each level. Fig. 4 illustrates the process. The graph-cut algorithm is applied between the regions of different texture levels. The left image of Fig. 4 shows the segmented regions; we use the segmented region of the level patch. The middle image is a magnified view of the texture boundary, and the right image depicts the process of applying the graph-cut algorithm to the texture boundary between two levels. Adjusting the boundary of the nth level texture to be consistent with the (n-1)th level texture is important for connecting the boundary between the levels more naturally. To adjust the boundaries, we perform the following steps (a sketch is given after the list):

(1) We convert the pixels of the two adjacent segmented regions of the level patch into graph nodes (Fig. 4, middle). A dark-gray solid circle indicates a pixel from the upper level texture, and a light-gray solid circle indicates a pixel from the lower level region. (2) Several paths on the graph are found from the start node (s), one pixel inside the upper level texture, to the end node (t), one pixel inside the lower level texture (Fig. 4, right, magnified area). (3) After calculating the differences between the density values of adjacent pixels, the adjusted boundary is chosen by selecting the nodes with the minimum difference values, and we use this adjusted boundary as the region of the upper level texture to be mapped onto the lower level.
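Steps (1)–(3) can be prototyped with an off-the-shelf max-flow/min-cut solver. The sketch below uses networkx and Kwatra-style edge capacities (the summed density mismatch of the two patches at adjacent pixels), so the minimum s–t cut runs where the upper and lower level textures agree best. The overlap-band masks, the 4-connected neighbourhood, and all names are assumptions made for illustration; the paper does not specify these details.

```python
import numpy as np
import networkx as nx

def seam_by_graph_cut(upper, lower, overlap_mask, src_mask, snk_mask):
    """Choose the adjustment boundary between an upper-level texture
    patch and the lower-level texture inside an overlap band. Edge
    capacities are the summed density mismatch of the two patches at
    adjacent pixels, so the minimum cut passes where they agree best."""
    mismatch = np.abs(upper.astype(float) - lower.astype(float))
    G = nx.Graph()
    h, w = upper.shape
    for y in range(h):
        for x in range(w):
            if not overlap_mask[y, x]:
                continue
            if src_mask[y, x]:            # must keep the upper-level texture
                G.add_edge('s', (y, x))   # no capacity attribute -> infinite
            if snk_mask[y, x]:            # must keep the lower-level texture
                G.add_edge((y, x), 't')
            for dy, dx in ((0, 1), (1, 0)):       # 4-connected neighbours
                yy, xx = y + dy, x + dx
                if yy < h and xx < w and overlap_mask[yy, xx]:
                    G.add_edge((y, x), (yy, xx),
                               capacity=float(mismatch[y, x] + mismatch[yy, xx]))
    _, (source_side, _) = nx.minimum_cut(G, 's', 't')
    keep_upper = np.zeros(overlap_mask.shape, dtype=bool)
    for node in source_side:
        if node != 's':
            keep_upper[node] = True       # pixels that keep the upper texture
    return keep_upper
```

In the pipeline described above, such a cut would be computed each time a new level texture is pasted over the previously textured background.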

3. Experimental results and discussion

We used an Intel Core i5-250 (4-core) 3.30 GHz CPU with 2 GB of main memory and an NVIDIA GTX 560 graphics accelerator for the experiments. The datasets are CBCT images of alveolar bone obtained from representative patients, and the image resolution is 700 × 700 pixels.

Fig. 5. Five texture patterns for each density value of the alveolar bone area.

Fig. 6. A segmented area before refinement (left) and after refinement (right).

Fig. 7. A comparison of the refinement area for texture mapping. The IPS region is equal to or less than 24% (top left), 32% (top right), 36% (bottom left) and 40% (bottom right) of the kernel.


We prepared five texture patterns, one for each density-value level of the alveolar bone area, as shown in Fig. 5. Fig. 6 depicts the result of area refinement after segmenting the alveolar bone. As shown in the magnified image, the boundary of the segmented region is coarse and several tiny spots are present, which results in unnatural texture mapping.


In addition, the operation time for the graph-cut algorithm increases because the number of segmented regions also increases. Therefore, we must reduce the IPS, as shown in the right image of Fig. 6. Fig. 7 depicts the results of merging several small separate areas. We use a 5 × 5 filter for refinement because a 3 × 3 kernel is not sufficient to merge the areas, and the computation time with a 7 × 7 kernel is longer than with the 5 × 5 filter.

Fig. 8. The result of level segmentation with different kernel sizes: 3 × 3 (left), 5 × 5 (middle), and 7 × 7 (right).

Fig. 9. The results of the texture mapping process. CBCT scanned image (left column), segmented image (middle column) and final texture-mapped image (right column).


The top left image in Fig. 7 shows the result of the merging operation when the number of pixels greater than the threshold value is 24% of the kernel. The results of repeating this procedure when the above-threshold pixels represent 32%, 36%, and 40% of the kernel are shown in the top right, bottom left, and bottom right images of Fig. 7, respectively. We verified that the merging of separate areas is most effective when the refinement is performed with 40% of the pixel set in the kernel. Fig. 8 shows the results of segmentation with three different-sized filters. Some levels are merged with other levels when the 7 × 7 kernel is used because the kernel is too large.

In addition, we do not use a 3 × 3 kernel for the subsequent experiments because it yields a low-quality refinement. Fig. 9 shows the results of applying a texture map to the alveolar bone area. The left column shows the CBCT scanned image, the middle column shows the segmented image at each level, and the right column shows the final result. The final results are blended from the CBCT scan image (30%) and the level-texture image (70%). We can intuitively verify that the final images display dense bone in the boundary region of the alveolar bone and coarse, less-dense bone in the middle of the alveolar bone area. Fig. 10 shows the results of our method for visualizing the bone density at each level for the same dataset.
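The 30%/70% blend used for the final images in Fig. 9 amounts to a plain alpha blend; a minimal sketch follows (array names and dtype handling are assumptions).

```python
import numpy as np

def blend_result(cbct, textured, alpha=0.3):
    """Blend the CBCT scan (weight alpha = 30%) with the level-texture
    image (weight 1 - alpha = 70%), as used for the final images."""
    out = alpha * cbct.astype(np.float32) + (1.0 - alpha) * textured.astype(np.float32)
    return out.astype(cbct.dtype)
```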

Fig. 10. The result of applying our method on the alveolar bone area. Texture mapping on a CBCT image with 3, 4, and 5 level textures (1st image) and 2, 3, and 5 level textures (2nd image); the result of another CBCT image with 2, 3, and 4 level textures (3rd image) and 1, 3, and 5 level textures (4th image).

Fig. 11. The CBCT images (1st and 3rd columns) and texture mapping results (2nd and 4th columns). The results for alveolar bone with a tooth covered by a crown (top row), with an implant (middle row) and without a tooth (bottom row).


The density value that corresponds to an individual level texture can be changed to visualize the difference in bone density. A patient can distinguish between the dense and coarse bone areas using these results. Fig. 11 shows the results of texture mapping with our method for several alveolar bone images. The top row shows alveolar bone with a tooth covered by a crown; the top right image shows the condition of the alveolar bone when a tooth and an implant are attached to it. The middle row depicts the resulting images showing the condition of alveolar bone into which an implant has been inserted. The bottom row contains images of alveolar bone without teeth. The tooth regions in the images, which have the same density values as alveolar bone, are mapped with textures; other regions containing a dental restoration such as a crown or bridge are excluded from texture mapping. As shown in Fig. 11, we can clearly observe the shape of the bone structure between the boundary and the inside of the alveolar bone areas. Additionally, we can visually verify the patterns in the bone regions where the density value is diminished (bottom row of Fig. 11). Fig. 12 shows a magnified view of the area where an implant is inserted. The rectangular area of the CBCT image (left of Fig. 12) depicts the boundary between the implant and the inner structure of the alveolar bone. The condition of the bone tissue cannot be assessed in the especially dark area where a tooth has been lost. However, the state of the implant boundary region can be shown using our texture mapping method, as in the magnified view in Fig. 12. We can intuitively verify that the bone tissue is not tightly bound between the implant and the alveolar bone, and that the pattern inside the bone around the implant is sparse. Compared with previous methods, such as pseudo-coloring of CT/MRI images, this result is remarkable for visualizing the density distribution inside the bone. The texture patterns are helpful for understanding the internal structure of the alveolar bone, and the graph-cut approach is suitable for stitching several bone texture patterns because its processing time is shorter than that of the alternatives. To communicate the texture mapping results to a patient intuitively and in a timely manner, the computation time should be short. Our method can be divided into two steps, segmentation and texture mapping; the average time for each step is listed in Table 1. More than 90% of the computation time is spent in the texture mapping step because a long computation time is required to naturally connect the many pixels at the boundary of each level. The time required for texture mapping could be reduced by decreasing the number of repetitions of the graph-cut algorithm.


However, the boundary connection between levels then becomes unnatural. The processing times for datasets 1 and 2 are longer than those for datasets 3 and 4 because the shape of each level is more complex and the number of levels in datasets 1 and 2 is greater than in datasets 3 and 4. This means that increasing the number of segmented regions increases the number of patches on which the graph-cut algorithm must be performed. Table 2 lists the performance of our method in several different computing hardware environments. Although the total computation time varies slightly depending on the system, texture mapping for the alveolar bone area can generally be performed within 1 s using current computing power. The results in Table 1 were obtained on the 'Intel Core i5-250' CPU listed in Table 2.

Table 1
Computation time for each step (seconds).

Data        Total computation time    Segmentation    Texture mapping
dataset 1   0.859                     0.056           0.803
dataset 2   0.776                     0.074           0.702
dataset 3   0.629                     0.046           0.583
dataset 4   0.519                     0.041           0.478

Table 2
The performance of our method on several systems (seconds).

Data        Intel Core i7-2600        Intel Core i5-250         Intel Core i5-750
            3.4 GHz (8-core)          3.30 GHz (4-core)         2.67 GHz (4-core)
dataset 1   0.800                     0.859                     0.994
dataset 2   0.740                     0.776                     0.921
dataset 3   0.587                     0.629                     0.832
dataset 4   0.423                     0.519                     0.801

Fig. 12. The texture mapping result on alveolar bone with an implant. The condition of the bone tissue near the implant is clearly visualized.

4. Summary

We propose an intuitive texture mapping procedure for the alveolar bone area to plan implant treatment in prosthetic dentistry. Our method aims to help patients better understand their course of treatment and to increase the effectiveness of prosthetic dentistry.


First, we find the alveolar bone area and segment the target area to be texture-mapped according to the bone density. Then, we smoothly map the texture at the several levels of bone density; a realistic alveolar bone image is generated by applying the graph-cut algorithm at the boundary of each level. Our method can help improve the treatment plan because it facilitates communication and understanding of treatment using dental implants. Before an implant treatment, a dentist can use our texture mapping method to explain that the jaw condition should be improved before treatment because of poor alveolar bone health. In the future, 3-dimensional texture-based methods should be studied to visualize the alveolar bone area from any direction.

Conflict of interest statement

None declared.

Acknowledgments

This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT and Future Planning (No. 2012M3C4A7032781). This work was also supported by an INHA UNIVERSITY Research Grant.

References

[1] R.B. Johns, T. Jemt, M.R. Heath, J.E. Hutton, S. McKenna, D.C. McNamara, et al., A multicenter study of overdentures supported by Brånemark implants, Int. J. Oral Maxillofac. Implants 7 (4) (1992) 513–522.
[2] P.J. Henry, D.E. Tolman, C. Bolender, The applicability of osseointegrated implants in the treatment of partially edentulous patients: three-year results of a prospective multicenter study, Quintessence Int. 24 (2) (1993) 123–129.
[3] K. Verstreken, J.V. Cleynenbreugel, K. Martens, G. Marchal, D.V. Steenberghe, P. Suetens, An image-guided planning system for endosseous oral implants, IEEE Trans. Med. Imaging 17 (5) (1998) 842–852.
[4] R.A. Mischkowski, M.J. Zinser, J. Neugebauer, A.C. Kubler, J.E. Zoller, Comparison of static and dynamic computer-assisted guidance methods in implantology, Int. J. Comput. Dent. 9 (1) (2006) 23–35.
[5] A.A. Efros, T.K. Leung, Texture synthesis by non-parametric sampling, in: IEEE International Conference on Computer Vision, Kerkyra, 1999, pp. 1033–1038.
[6] Z. Bar-Joseph, R. El-Yaniv, D. Lischinski, M. Werman, Texture mixing and texture movie synthesis using statistical learning, IEEE Trans. Vis. Comput. Graph. 7 (2) (2002) 120–135.
[7] M. Ashikhmin, Synthesizing natural textures, in: Proceedings of the 2001 ACM Symposium on Interactive 3D Graphics, New York, USA, 2001, pp. 217–226.
[8] L.Y. Wei, M. Levoy, Order-Independent Texture Synthesis, Technical Report TR-2002-01, Stanford University Computer Science Department.
[9] L.Y. Wei, M. Levoy, Fast texture synthesis using tree-structured vector quantization, in: Proceedings of SIGGRAPH 2000, 2000, pp. 479–488.
[10] X. Tong, J. Zhang, L. Liu, X. Wang, B. Guo, H.Y. Shum, Synthesis of bidirectional texture functions on arbitrary surfaces, in: Proceedings of SIGGRAPH 2002, 2002, pp. 665–672.
[11] Q. Wu, Y. Yu, Feature matching and deformation for texture synthesis, in: Proceedings of SIGGRAPH 2004, 2004, pp. 364–367.
[12] A. Schödl, R. Szeliski, D.H. Salesin, I. Essa, Video textures, in: Proceedings of SIGGRAPH 2000, 2000, pp. 489–498.
[13] S. Lefebvre, H. Hoppe, Appearance-space texture synthesis, ACM Trans. Graph. 25 (3) (2006) 541–548.
[14] A.A. Efros, W.T. Freeman, Image quilting for texture synthesis and transfer, in: Proceedings of SIGGRAPH 2001, 2001, pp. 341–346.
[15] V. Kwatra, I. Essa, A. Bobick, N. Kwatra, Texture optimization for example-based synthesis, ACM Trans. Graph. 24 (3) (2005) 795–802.
[16] N. Casap, E. Tarazi, A. Wexler, U. Sonnenfeld, J. Lustmann, Intraoperative computerized navigation for flapless implant surgery and immediate loading in the edentulous mandible, Int. J. Oral Maxillofac. Implants 20 (1) (2005) 92–100.
[17] Y.S. Choi, H.E. Hwang, S.R. Lee, Clinical application of cone beam computed tomography in dental implant, J. Korean Dent. Assoc. 44 (2006) 172–181.
[18] M. Cassetta, L.V. Stefanelli, S. Di Carlo, G. Pompa, E. Barbato, The accuracy of CBCT in measuring jaws bone density, Eur. Rev. Med. Pharmacol. Sci. 16 (10) (2012) 1425–1429.
[19] C. Tomasi, R. Manduchi, Bilateral filtering for gray and color images, in: IEEE International Conference on Computer Vision, 1998, pp. 839–846.
[20] K. Krissian, S. Aja-Fernandez, Noise-driven anisotropic diffusion filtering of MRI, IEEE Trans. Image Process. 18 (10) (2009) 2265–2274.
[21] F.R. Schmidt, E. Töppe, D. Cremers, Efficient planar graph cuts with applications in computer vision, in: IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 351–356.
[22] D. Mahapatra, Y. Sun, MRF based intensity invariant elastic registration of cardiac perfusion images using saliency information, IEEE Trans. Biomed. Eng. 58 (4) (2011) 991–1000.
[23] D. Mahapatra, Y. Sun, Integrating segmentation information for improved MRF-based elastic image registration, IEEE Trans. Image Process. 21 (1) (2012) 170–183.

Koojoo Kwon is a research professor of Computer Science and Information Engineering at Inha University, Incheon, Korea. His research interests include real-time rendering of 3D volume graphics and medical imaging. He received a B.S. degree in Computer Science from Woosuk University in 1999 and M.S. and Ph.D. degrees in Computer Science from Inha University, Korea, in 2002 and 2007, respectively.

Dong-Su Kang is a Ph.D. student of Computer Science and Information Engineering at Inha University, Incheon, Korea. His research interests include volume rendering and medical imaging. He received his B.S. and M.S. degrees in Computer Science from Inha University, Korea, in 2008 and 2010, respectively. He is currently pursuing a Ph.D. in Computer Science under the supervision of Professor Byeong-Seok Shin.

Byeong-Seok Shin is a professor in the School of Computer Science and Information Engineering at Inha University, Incheon, Korea. He received his B.S., M.S., and Ph.D. in Computer Engineering from Seoul National University, Korea. His current research interests include human-computer interaction, wearable computers, volume rendering, real-time graphics, and medical imaging.
