A prototype hand-held tri-modal instrument for in vivo ultrasound, photoacoustic, and fluorescence imaging
Jeeun Kang, Jin Ho Chang, Brian C. Wilson, Israel Veilleux, Yanhui Bai, Ralph DaCosta, Kang Kim, Seunghan Ha, Jong Gun Lee, Jeong Seok Kim, Sang-Goo Lee, Sun Mi Kim, Hak Jong Lee, Young Bok Ahn, Seunghee Han, Yangmo Yoo, and Tai-Kyong Song
Citation: Review of Scientific Instruments 86, 034901 (2015); doi: 10.1063/1.4915146


REVIEW OF SCIENTIFIC INSTRUMENTS 86, 034901 (2015)

A prototype hand-held tri-modal instrument for in vivo ultrasound, photoacoustic, and fluorescence imaging

Jeeun Kang,1 Jin Ho Chang,1,2,a) Brian C. Wilson,3,4,a) Israel Veilleux,3 Yanhui Bai,3 Ralph DaCosta,3,4 Kang Kim,5 Seunghan Ha,5 Jong Gun Lee,6 Jeong Seok Kim,6 Sang-Goo Lee,7 Sun Mi Kim,8 Hak Jong Lee,8 Young Bok Ahn,9 Seunghee Han,2 Yangmo Yoo,1 and Tai-Kyong Song1,a)

1 Department of Electronic Engineering, Sogang University, Seoul 121-742, South Korea
2 Sogang Institute of Advanced Technology, Sogang University, Seoul 121-742, South Korea
3 Princess Margaret Cancer Centre/University Health Network, Toronto, Ontario M5G 1L7, Canada
4 Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 1L7, Canada
5 Center for Ultrasound Molecular Imaging and Therapeutics, Department of Medicine, University of Pittsburgh School of Medicine and Heart and Vascular Institute, University of Pittsburgh Medical Center (UPMC), Pittsburgh, Pennsylvania 15213, USA
6 GE Ultrasound Korea, Seongnam 462-807, South Korea
7 iBULe Photonics, Incheon 406-840, South Korea
8 Department of Radiology, Seoul National University of Bundang Hospital, Kyonggi-do, South Korea
9 Department of Electronic Engineering, Konkuk University, Seoul 143-701, South Korea

(Received 18 July 2014; accepted 10 February 2015; published online 27 March 2015)

Multi-modality imaging is beneficial for both preclinical and clinical applications as it enables complementary information from each modality to be obtained in a single procedure. In this paper, we report the design, fabrication, and testing of a novel tri-modal in vivo imaging system to exploit molecular/functional information from fluorescence (FL) and photoacoustic (PA) imaging as well as anatomical information from ultrasound (US) imaging. The same ultrasound transducer was used for both US and PA imaging, bringing the pulsed laser light into a compact probe by fiberoptic bundles. The FL subsystem is independent of the acoustic components but the front end that delivers and collects the light is physically integrated into the same probe. The tri-modal imaging system was implemented to provide each modality image in real time as well as co-registration of the images. The performance of the system was evaluated through phantom and in vivo animal experiments. The results demonstrate that combining the modalities does not significantly compromise the performance of each of the separate US, PA, and FL imaging techniques, while enabling multi-modality registration. The potential applications of this novel approach to multi-modality imaging range from preclinical research to clinical diagnosis, especially in detection/localization and surgical guidance of accessible solid tumors. © 2015 AIP Publishing LLC. [http://dx.doi.org/10.1063/1.4915146]

I. INTRODUCTION

The use of in vivo multimodality imaging to acquire multifunctional information is becoming more widespread for preclinical research, clinical diagnosis, interventional guidance, and treatment response monitoring.1 As discussed by Marti-Bonmati et al.,2 separate modes can be acquired at different times on different imaging devices and then fused digitally, or can be acquired synchronously using a single system with multi-modal capabilities. The latter facilitates image co-registration and minimizes changes in subject positioning and in some cases allows simultaneous acquisition of data, which is valuable in imaging dynamic processes or in guiding real-time interventions such as surgery. For preclinical small-animal (typically mouse model) imaging, there are several imaging systems available, including systems with up to 4 different modes (radioisotope, radiographic, fluorescence, luminescence). Other prototypes such as combined

a) Authors to whom correspondence should be addressed. Electronic addresses: [email protected]; [email protected]; and [email protected].

fluorescence tomography and single photon emission computed tomography/computed tomography (SPECT/CT) have also been reported.3 There is a similar trend in clinical imaging systems, with either fully integrated hybrid systems or with the modalities linked by common patient support/transfer.4,5 The objective is to efficiently collect images containing different biological information and retain accurate co-registration between them. In some applications, anatomical information provided by CT or MRI scanning is used to correct for attenuation effects in PET or gamma-ray imaging, thereby improving the accuracy of quantitative radionuclide uptake/biodistribution measurements. In the preclinical domain, there is a similar motivation to combine bioluminescence or fluorescence tomography with X-ray CT imaging in order to correct the optical modes for the light attenuation by intervening organs.6 In addition to hybrid optical and non-optical imaging,7 hybrid optical-optical imaging approaches are also increasingly reported and can significantly improve the sensitivity and/or specificity of disease detection, especially for early cancer:8 e.g., fluorescence plus optical coherence tomography,9 photoacoustic imaging plus optical coherence tomography,10



3D bioluminescence plus diffuse optical tomography,11 or fluorescence tomography plus diffuse optical tomography.12

Here, we report a particular approach to tri-modal imaging, integrating ultrasound, photoacoustic, and fluorescence capabilities into a compact hand-held probe. The objective is to exploit the complementary information from each modality in a single procedure, as summarized in Table I with reference specifically to in vivo oncology applications. Thus, ultrasound imaging, which is the most commonly used imaging modality in the clinic, allows non-invasive real-time structural/functional imaging deep within tissue, and functional blood flow information can be obtained using Doppler techniques. Photoacoustic imaging has emerged in the last few years and is itself a hybrid between optical and acoustic technologies.13 Incident light in the form of short (∼10 ns) laser pulses, usually in the visible/near-infrared range, is elastically scattered by the tissue and is thereby spatially distributed throughout the volume of interest. Local absorption of the light energy leads to a transient temperature increase (∼10⁻³ °C) that induces local thermoelastic expansion, generating acoustic waves that are detected at the tissue surface to construct an ultrasound image of the distribution of optical absorbers. Thus, in photoacoustic imaging the molecular specificity of light absorption is combined with the imaging depth capability of ultrasound, which also defines the spatial resolution of the images. In the absence of an administered (optical) contrast agent, the primary absorber in the wavelength range around 500-650 nm is hemoglobin, and wavelength scanning enables separation of the Hb and HbO2 contributions based on their different absorption spectra, thereby allowing images to be computed of the important functional parameter of oxygen saturation, SO2 = [HbO2]/([Hb] + [HbO2]).

Exogenous contrast agents for photoacoustic imaging, either targeted or untargeted, have been reported based on dyes or nanoparticles with high light absorption.14 The image format and spatial resolution in PA imaging are essentially the same as for ultrasound imaging. However, the contrast is optical in nature and the maximum depth of imaging is determined mainly by the attenuation of the laser

TABLE I. Comparison of ultrasound, photoacoustic, and fluorescence imaging with reference to in vivo cancer imaging.

Image contrast mechanism:
  Ultrasound: acoustic impedance mismatch and acoustic scattering
  Photoacoustic: optical absorption and conversion to an acoustic wave
  Fluorescence: optical absorption and re-emission

Information content:
  Ultrasound: organ/tissue structure; blood flow
  Photoacoustic: (micro)vasculature; intravascular Hb, SO2
  Fluorescence: endogenous fluorophore content from autofluorescence; uptake of exogenous fluorophores

Typical imaging depth and spatial resolution:
  Ultrasound: 30 cm and 300 µm at 5 MHz; 1 cm and 100 µm at 50 MHz
  Photoacoustic: 2-5 cm and 800 µm for 5 MHz acoustic wave detection

Image format:
  Ultrasound: B-mode (2D section perpendicular to the tissue surface) or C-mode (parallel to the tissue surface)

Exogenous contrast agents:
  Ultrasound: microbubbles
  Photoacoustic: molecular dyes or nanoparticles with high optical absorption

Tumor-targetable?
  Ultrasound: not standard
  Photoacoustic: yes

Main uses in clinical oncology currently:
  Ultrasound: tumor detection; interventional guidance
  Photoacoustic: not yet established

90% in-band and 10⁻⁴ out-of-band transmission (Edmund Optics, Barrington, NJ, USA). The camera full-frame read-out rate is up to 5 Hz; with 4 × 4 binning, this increases to 15 Hz and is limited by the speed of the USB read-out ports. For PA and US imaging, the spatial resolution is determined primarily by the center frequency of the transducer. Since this prototype system has been developed mainly for applications requiring an imaging depth of a few cm, a 15 MHz 128-element linear array transducer was designed and fabricated. To improve the acoustic signal sensitivity, PMN-PT single crystals (IBULE Photonics, Inc., Incheon, Korea) were used as the active material.24 The geometrical specifications of the transducer are summarized in Table II. Photoacoustic signals are induced following 7 ns laser pulses, and the delivery optics were integrated with the acoustic transducer (Fig. 2(c)) such that the light is distributed uniformly over the target tissue volume using a custom-made bifurcated optical fiber bundle with 50 µm fibers of 0.55 numerical aperture (Fiberoptic Systems, Inc., Simi Valley, CA, USA). The output apertures were configured as

0.89 mm × 13 mm rectangles. At the input end, the bundle is 9.5 mm in diameter, which is identical to the spot size of the Nd:YAG laser-pumped OPO system (Surelite III-10 and Surelite OPO Plus, Continuum, Inc., Santa Clara, CA, USA), allowing the beam to be coupled directly into the fiber bundle. The bifurcated outlets are placed on each side of the US transducer, with separation d = 5.95 mm, and are tilted inwards at 30° (θ_laser), so that the centers of the two beams overlap at a distance in air z_laser^f, given (Fig. 2(c)) by

    z_laser^f = d / (2 tan θ_laser).    (1)

Hence, z_laser^f of the US/PA subsection was 5.15 mm, which is slightly shorter than the geometrical acoustic focal depth, z_acoustic^f = 7.00 mm. Since the laser beam diffuses as it propagates through the tissue, it was expected that the resulting overlap would match the acoustic focal area, and this was experimentally verified.

TABLE II. Specifications of the linear array transducer for combined US/PA imaging.

  Number of elements: 128 (EA)
  Element pitch: 0.1 mm
  Element height: 1.5 mm
  Element kerf size: 0.02 mm
  Geometric lens focal depth: 7 mm
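Equation (1) and the quoted focal depth can be checked directly. A quick sketch, using the values stated in the text (d = 5.95 mm, θ_laser = 30°):

```python
import math

# Beam-overlap depth of the bifurcated fiber outlets, Eq. (1):
# z_laser^f = d / (2 tan θ_laser)
d_mm = 5.95                   # separation between the two fiber outlets (mm)
theta = math.radians(30.0)    # inward tilt of each outlet

z_f_mm = d_mm / (2.0 * math.tan(theta))
print(f"overlap depth in air: {z_f_mm:.2f} mm")  # ≈ 5.15 mm, as quoted
```

This reproduces the 5.15 mm overlap depth reported above, confirming it sits just short of the 7.00 mm geometric acoustic focus.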

B. Host process system

The host process system consists of a workstation, a graphical user interface (GUI), and a monitor, as summarized in Table III. Its main roles are controlling the system, reconstructing the PA and FL images, and combining either US/PA or FL/PA/US images. Depending on the imaging mode, the mechanical scan system is also controlled to translate the probe. The light sources for FL and PA imaging and their operating parameters can be selected by the user. Compute Unified Device Architecture (CUDA)-based parallel processing software was employed for PA image reconstruction. This includes dynamic beamforming25 and back-end processing26 that consume 9.9 ms of overall processing time for a 4 cm imaging depth. This corresponds to a frame rate of 101 Hz, which is sufficient to complete the reconstruction at the current frame rate (10-20 Hz); the latter is mainly determined by the pulse repetition frequency of the laser (10 Hz). The US images reconstructed by the commercial US scanner (Logiq P6 system, GE Healthcare, Gyeonggi-do, Korea) are received by the host processing system through a frame grabber (VGA2USB LR, Epiphan Systems, Inc., Ottawa, Ontario, Canada). In FL mode, the raw pixel values from the CMOS cameras are read out through a USB port and saved as 8-bit RGB images. For white-light reflectance, the images are white-balanced by calibrating

TABLE III. Specifications of the host process system for PA image reconstruction and tri-modal registration.

  Operating system (OS): Windows 7 Professional, 64 bit
  Central processing unit (CPU): Intel i7 3770K
  Random access memory (RAM): DDR3 16 GB, PC3-19200, CL10
  Graphic processing unit (GPU): Nvidia GeForce GTX690, PCI Express 3.0

…20 fps in spite of increasing the laser PRF. Hence, the data transfer rate from the GE-DAQ system to the host process system, which is the main time consumer (i.e., 55.6 ms) in the PA image reconstruction, needs to be improved. One possible solution is to change the data transfer protocol from PCI Express 2.0 to version 3.0 and to further optimize the GE-DAQ system to reduce internal latency. Also, the US/PA subsection of the probe should provide a large enough field-of-view to support various clinical applications, so the optimal configuration of the US/PA subsection needs to be determined, including the aperture size and operating frequency of the US transducer, the size of the optical fiber bundle, and the optical/acoustic focal depths. The optical design implemented in the FL subsection of the current handpiece was based on off-the-shelf components to reduce costs and development time, at some loss of performance. An optimized design with improved resolution and field-of-view has been devised using custom optical components and will be incorporated into future systems. Nevertheless, even the current design allows high-quality FL imaging in vivo. Annoying features such as the long zoom switch time (currently 5 s) can be improved simply by swapping the stepper motor and mechanical drive components. The FL mode has been implemented in the current first prototype with specific single excitation and detection wavelength bands. A user-selectable excitation wavelength can easily be incorporated by including multiple LED sources and using either optical or electronic switching. Being able to switch the detection wavelength during use will require incorporating spectral filtering into the handpiece before the light reaches the detector array, which may necessitate some reconfiguration of the layout.
Pre-selection of the detection wavelength band prior to use can be achieved by using a different long-pass or band-pass filter element. Further refinements of the co-registration and display algorithms will likely be required to address specific clinical needs. Incorporating electromagnetic or optical tracking onto the handpiece would also enable co-registration with radiological (e.g., CT or MR) volumetric images. For preclinical (small animal) applications, having the tri-modal probe itself or the animal mounted on an accurate translation stage as illustrated in Fig. 1 will facilitate accurate co-registration and longitudinal monitoring of, for example, tumor development and therapeutic response.
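The dynamic beamforming step cited for the PA reconstruction is, at its core, a one-way delay-and-sum operation: each image pixel accumulates the channel samples whose acoustic time-of-flight matches the pixel-to-element distance. The sketch below is an illustrative NumPy version under that assumption, not the system's CUDA implementation; the array geometry and sampling values are arbitrary examples (the 0.1 mm pitch echoes Table II, the rest is invented for the demo).

```python
import numpy as np

def das_photoacoustic(rf, fs, c, pitch):
    """Minimal one-way delay-and-sum beamformer for PA channel data.

    rf    : (n_samples, n_elements) data recorded after one laser pulse
    fs    : sampling rate (Hz); c : speed of sound (m/s); pitch : element pitch (m)
    Returns an image on the (depth-sample, element-position) grid.
    """
    n_samp, n_elem = rf.shape
    x_elem = (np.arange(n_elem) - (n_elem - 1) / 2.0) * pitch  # element x-positions (m)
    z = np.arange(n_samp) * c / fs   # one-way depth grid (PA has no transmit leg)
    img = np.zeros((n_samp, n_elem))
    for ix, x0 in enumerate(x_elem):  # one image column per element position
        # distance from every pixel in this column to every element
        dist = np.sqrt(z[:, None] ** 2 + (x0 - x_elem[None, :]) ** 2)
        idx = np.rint(dist * fs / c).astype(int)   # matching sample per channel
        valid = idx < n_samp
        idx[~valid] = 0
        contrib = rf[idx, np.arange(n_elem)[None, :]]
        img[:, ix] = np.where(valid, contrib, 0.0).sum(axis=1)
    return img

# Synthetic check: a point absorber 5 mm below the array center
fs, c, pitch, n_elem, n_samp = 60e6, 1540.0, 0.1e-3, 16, 512
x_elem = (np.arange(n_elem) - (n_elem - 1) / 2.0) * pitch
rf = np.zeros((n_samp, n_elem))
t_idx = np.rint(np.sqrt(5e-3 ** 2 + x_elem ** 2) * fs / c).astype(int)
rf[t_idx, np.arange(n_elem)] = 1.0   # one impulse per channel at its time-of-flight
img = das_photoacoustic(rf, fs, c, pitch)
iz, ix = np.unravel_index(np.argmax(img), img.shape)
# the beamformed peak should land near the true depth sample, in a central column
```

Per-frame GPU parallelization of exactly this pixel loop is what brings the reported reconstruction time down to the ms range.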



At this time, a second-generation prototype system is under construction that will meet the requirements for first-in-human use, particularly ensuring optical, electrical, and mechanical safety and biocompatibility, as well as having a user-friendly operating interface and display. It is planned to carry out the first clinical tests of this system in the near future, focusing initially on assessing the safety and technical feasibility under realistic clinical conditions (including in the surgical environment), as well as obtaining initial utility data and user feedback. An example is to use the system to guide breast cancer lumpectomy, with the intent of reducing the incidence of second surgeries due to narrow surgical margins that result in incomplete tumor tissue resection. For this application, the US images will assist in providing overall tumor localization, the PA images will add contrast to help guide the tumor excision, while the FL imaging will identify any residual tumor tissue within the lumpectomy cavity. PA and FL imaging of the lumpectomy specimen ex vivo can also be used to provide correlative information on the location of narrow margins. Detection of local lymph node involvement is also potentially possible in either PA and/or FL modes through the use of optical contrast agents. A second planned example is the use of the system to evaluate patients with suspected thyroid cancer, where the PA imaging may distinguish between malignant and benign nodules, or in patients with known thyroid cancer, where the system can be used pre-operatively and intra-operatively to plan and guide surgery, respectively.

ACKNOWLEDGMENTS

This work was supported by the International Collaborative R&D Program (2010-TD-500409-001) funded by the Ministry of Trade, Industry and Energy (MOTIE), Korea.

1. R. Hicks, E. Lau, and D. Binns, Biomed. Imaging Intervention J. 3, e49 (2007).
2. L. Martí-Bonmatí, R. Sopena, P. Bartumeus, and P. Sopena, Contrast Media Mol. Imaging 5, 180 (2010).
3. M. Solomon, R. E. Nothdruft, W. Akers, W. B. Edwards, K. Liang, B. Xu, G. P. Suddlow, H. Deghani, Y.-C. Tai, A. T. Eggebrecht, S. Achilefu, and J. P. Culver, J. Nucl. Med. 54, 639 (2013).
4. D. Papathanassiou, C. Bruna-Muraille, J. C. Liehn, T. D. Nguyen, and H. Curé, Crit. Rev. Oncol. Hematol. 72, 239 (2009).
5. D. A. Torigian, H. Zaidi, T. C. Kwee, B. Saboury, J. K. Udupa, Z. H. Cho, and A. Alavi, Radiology 267, 26 (2013).
6. P. Mohajerani, A. Hipp, M. Willner, M. Marschner, M. Trajkovic-Arsic, X. Ma, N. C. Burton, U. Klemm, K. Radrich, V. Ermolayev, S. Tzoumas, J. T. Siveke, M. F. Bech, and V. Ntziachristos, IEEE Trans. Med. Imaging 33, 1434 (2014).
7. B. H. Li, A. S. Leung, A. Soong, C. E. Munding, H. Lee, A. S. Thind, N. R. Munce, G. A. Wright, C. H. Rowsell, V. X. Yang, B. H. Strauss, F. S. Foster, and B. K. Courtney, Catheter. Cardiovasc. Interventions 81, 494 (2013).
8. I. Georgakoudi, E. E. Sheets, M. G. Müller, V. Backman, C. P. Crum, K. Badizadegan, R. R. Dasari, and M. S. Feld, Am. J. Obstet. Gynecol. 18, 374 (2002).
9. D. Lorenser, B. C. Quirk, M. Auger, W.-J. Madore, R. W. Kirk, N. Godbout, D. D. Sampson, C. Boudoux, and R. A. McLaughlin, Opt. Lett. 38, 266 (2013).
10. L. Xi, C. Duan, H. Xie, and H. Jiang, Appl. Opt. 52, 1928 (2013).
11. C. Darne, Y. Lu, and E. M. Sevick-Muraca, Phys. Med. Biol. 59, R1 (2014).
12. Y. Lin, W. C. Barber, J. S. Iwanczyk, W. Roeck, O. Nalcioglu, and G. Gulsen, Opt. Express 18, 7835 (2010).
13. L. V. Wang and S. Hu, Science 335, 1458 (2012).
14. S. Zackrisson, S. M. van de Ven, and S. S. Gambhir, Cancer Res. 74, 979 (2014).
15. M. Mehrmohammadi, S. J. Yoon, D. Yeager, and S. Y. Emelianov, Curr. Mol. Imaging 2, 89 (2013).
16. L. Xi, G. Zhou, N. Gao, L. Yang, D. A. Gonzalo, S. J. Hughes, and H. Jiang, Ann. Surg. Oncol. 21, 1602 (2014).
17. M. Goetz and T. D. Wang, Gastroenterology 138, 828 (2010).
18. H. Zhang, R. R. Uselman, and D. Yee, Expert Opin. Med. Diagn. 3, 241 (2011).
19. P. A. Valdes, V. L. Jacobs, B. C. Wilson, F. Leblond, D. W. Roberts, and K. D. Paulsen, Opt. Lett. 38, 2786 (2013).
20. R. Bouchard, O. Sahin, and S. Emelianov, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 61, 450 (2014).
21. J. S. Kim, Y. H. Kim, J. H. Kim, K. W. Kang, E. L. Tae, H. Youn, D. Kim, S. K. Kim, J. T. Kwon, M. H. Cho, Y. S. Lee, J. M. Jeong, J. K. Chung, and D. S. Lee, Nanomedicine 7, 219 (2012).
22. J. F. Lovell, C. S. Jin, E. Huynh, H. Jin, C. Kim, J. L. Rubinstein, W. C. Chan, W. Cao, L. V. Wang, and G. Zheng, Nat. Mater. 10, 324 (2011).
23. E. Huynh, C. S. Jin, B. C. Wilson, and G. Zheng, Bioconjugate Chem. 25, 796 (2014).
24. P. Yu, Y. Ji, N. Neumann, S. G. Lee, H. Luo, and M. Es-Souni, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 59, 1983 (2012).
25. Y. Lee, W. Y. Lee, C.-E. Lim, J. H. Chang, T.-K. Song, and Y. Yoo, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 59, 573 (2012).
26. J. H. Chang, L. Sun, J. T. Yen, and K. K. Shung, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 56, 1490 (2009).
27. American National Standards Institute (ANSI), Standard Z136.1-2007, Laser Institute of America, Orlando, FL, 2007.
28. B. Zhuang, V. Shamdasani, S. Sikdar, R. Managuli, and Y. Kim, IEEE Trans. Inf. Technol. Biomed. 13, 571 (2009).
29. K. R. Erikson, IEEE Trans. Sonics Ultrason. 26, 453 (1979).
30. S. Gundy, W. V. der Putten, A. Shearer, A. G. Ryder, and M. Ball, Proc. SPIE 4432, 299 (2001).
31. J. Kang, E.-K. Kim, G. R. Kim, C. Yoon, T. K. Song, and J. H. Chang, J. Biophotonics 8, 71 (2015).
32. G. R. Kim, J. Kang, J. Y. Kwak, J. H. Chang, S. I. Kim, H. J. Kim, and E.-K. Kim, PLoS One 9, e105878 (2014).
33. J. Kang, S.-W. Kang, H. J. Kwon, J. Yoo, S. Lee, J. H. Chang, E.-K. Kim, T. K. Song, W. Y. Chung, and J. Y. Kwak, PLoS One 9, e113358 (2014).
34. R. I. Siphanto, K. K. Thumma, R. G. M. Kolkman, T. G. van Leeuwen, F. F. M. de Mul, J. W. van Neck, L. N. A. van Adrichem, and W. Steenbergen, Opt. Express 13, 89 (2005).
35. K. R. Bhushan, P. Misra, F. Liu, S. Mathur, R. E. Lenkinski, and J. V. Frangioni, J. Am. Chem. Soc. 130, 17648 (2008).

