
Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging

Alberto Izquierdo 1,*, Juan José Villacorta 1, Lara del Val Puente 2 and Luis Suárez 3

1 Signal Theory and Communications Department, University of Valladolid, Valladolid 47011, Spain; [email protected]
2 Mechanical Engineering Area, Industrial Engineering School, University of Valladolid, Valladolid 47011, Spain; [email protected]
3 Civil Engineering Department, Superior Technical College, University of Burgos, Burgos 09001, Spain; [email protected]
* Correspondence: [email protected]; Tel.: +34-983-185-801

Academic Editor: Gonzalo Pajares Martinsanz
Received: 3 May 2016; Accepted: 8 October 2016; Published: 11 October 2016

Abstract: This paper proposes a scalable and multi-platform framework for signal acquisition and processing, which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of the MEMS sensors was performed, and the beam patterns of a module, based on an 8 × 8 planar array, and of several clusters of modules were obtained. A flexible framework, formed by an FPGA, an embedded processor, a desktop computer, and a graphics processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms shared. Finally, a set of acoustic images obtained from sound reflected from a person is presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system.

Keywords: MEMS array; scalable system; multi-platform framework; acoustic imaging

1. Introduction

In recent years, techniques for obtaining acoustic images have developed rapidly. At present, acoustic images are associated with a wide variety of applications [1], such as non-destructive testing of materials, medical imaging, underwater imaging, SONAR, geophysical exploration, etc. These techniques for obtaining acoustic images are based on the RADAR (RAdio Detection and Ranging) principles, which form an image of an object from the radio waves that have been reflected on it [2]. RADAR systems require high-cost hardware, and their application to people and to specific materials is difficult due to their low reflectivity. These are the reasons why acoustic imaging techniques, also called SODAR (Sound Detection and Ranging) techniques, were developed. These SODAR techniques, mainly based on the use of arrays, represent a simple, low-cost alternative for obtaining "acoustic images" of an object.

An array is an arranged set of identical sensors, excited in a specific manner [3]. Microphone arrays are a particular case, used in applications such as speech processing, echo cancellation, localization, and sound source separation [4]. By using beamforming techniques [5], the array beam pattern can be electronically steered to different spatial positions, allowing the discrimination of acoustic sources based on their position.



The authors of this paper have experience in the design [6] and development of acoustic imaging systems, based on acoustic arrays, used in many different fields, such as detection and tracking systems [7,8], Ambient Assisted Living [9], or biometric identification systems [10–12]. The arrays used were ULAs (Uniform Linear Arrays), formed by acoustic sensors distributed uniformly along a line. These arrays are simple, but limited to one dimension (azimuth or elevation) when estimating the spatial position of a sound source. In order to obtain spatial information in two dimensions, it is necessary to work with planar arrays, with sensors distributed on a surface. Working with planar arrays leads to an increase in both system complexity and the space required by the acoustic sensors and the associated hardware, since the extension from a 1D array of N elements to a 2D array of N × N elements increases the number of required channels on the order of N². These factors are accentuated when the number of sensors is large.

In the design of a system for acquiring and processing signals from an acoustic array, it should be noted that cost and complexity are directly related to the number of channels/sensors of the system. A typical system to obtain acoustic images has four basic elements: sensors, signal conditioners, acquisition devices and a signal processor. For the first three elements, system cost increases linearly with the number of channels, as each sensor needs a signal conditioner and an acquisition device. Digital MEMS (Micro-Electro-Mechanical System) microphones include a microphone, a signal conditioner, and an acquisition device incorporated in the chip itself. For this reason, an acquisition and processing system based on MEMS microphone arrays is reduced to two basic elements: the MEMS microphones and a processing system. This processing system is usually based on an FPGA (Field Programmable Gate Array), as it is a digital system with multiple I/O nodes. The integration of the microphone preamplifier and the ADC in a single chip significantly reduces costs compared with solutions based on analog microphones. This is one of the main reasons why MEMS microphones are used in acoustic arrays with a large number of channels. However, cost reduction is not the only advantage of working with MEMS microphones. This technology also reduces the space occupied by the system, which makes it feasible to build arrays with hundreds or even thousands of sensors.

Arrays of MEMS microphones are especially suited for acoustic source localization [13–15]; however, they are also used in other applications such as DOA (direction of arrival) estimation for vehicles in motion [16], speech processing [17,18], turbulence measurements [19], identifying geometric dimensions and internal defects of concrete structures [20], or acoustic imaging [21–23].

The system presented in this paper has two main novelties: (i) it is scalable, using several subarrays or modules to increase the number of sensors, and reconfigurable in the position and orientation of the modules; and (ii) its multi-platform framework is reconfigurable. These characteristics make our system flexible and suitable for many different applications. Many of the systems found in the literature are fixed solutions to particular problems, where the hardware is not scalable or the software is not reconfigurable, as they are in the system proposed in this paper.
The system shown in [22] is reconfigurable in the position of its modules, but its acquisition framework, based on a USB port, limits the maximum number of sensors that can be added to the system. For its part, the system shown by Turqueti [16] is scalable; it allows multiple arrays to be aggregated, like the system detailed in this paper. However, our system has two advantages over Turqueti's system: (i) it can be a standalone system, due to the use of a myRIO platform (embedded hardware with an FPGA and an ARM processor) as an acquisition and data processing system; and (ii) it allows for emitting sounds, making it an active system.

This paper presents in detail the framework of a system that acquires and processes acoustic images. This system is based on a multi-platform framework, where each processing task can be interchanged between the different levels of the framework. Thus, the system can be adapted to different cost and mobility scenarios by means of its reconfigurable framework. The paper also shows the evaluation of the processing times of the algorithms involved in obtaining an acoustic image, including signal processing and wideband beamforming via FFT, in each subsystem of the framework.


The hardware of the system is based on modules of 8 × 8 square arrays of MEMS microphones, allowing the system to be scaled by means of clusters of several modules. This paper also shows the acoustic characterization of the MEMS microphones of the square array, as well as a comparative analysis of the theoretical and measured beam patterns of one of the array modules and of some clusters formed by several array modules.

Section 2 describes the hardware setup of the system and the implemented software algorithms, both defined on the basis of the stated requirements. Section 3 presents the planar array designed for the system: it introduces MEMS microphone technology, characterizes the frequency response of the microphones used, and characterizes the array module acoustically, obtaining its beam pattern and also the beam patterns of some clusters of modules. A reconfigurable acquisition and processing framework is proposed in Section 4, and its performance is analyzed for several scenarios. Section 5 presents a case study using the system for biometric identification tasks. Finally, Section 6 contains the conclusions and future research lines.

2. System Description

In this section, the requirements for the implementation of the acquisition and processing system for a 2D array, based on MEMS sensors, are analyzed. Then, the hardware chosen for the implementation is defined, and the processing algorithms to obtain an acoustic image using beamforming techniques are explained.

2.1. Requirements

The size of an acquisition and processing system for an array depends on the number of sensors and the set of processing algorithms. Thus, it is necessary to use a very low-cost technology per channel in order to build a viable high-dimensional array. Using MEMS microphones allows for a cost reduction in two main elements of the system: sensors and acquisition systems. Digital MEMS microphones need only one digital input to be read, although the received digital data must be processed to obtain the waveform signal. Most acquisition systems are based on FPGAs, which have a large number of digital inputs, between 40 and 1400 depending on the model; so, one FPGA can acquire as many channels as the number of digital inputs it contains. Besides, the FPGA processing capacity allows the system to carry out simple operations with the acquired signals without increasing costs. The processing capacity of the FPGA is not enough to obtain the final acoustic image, so another processor with higher capacity must be joined to the FPGA.

The system must be modular and scalable. Thus, for applications that require a large number of channels, it would only be necessary to add modules of lower dimension instead of designing a new system with higher capacity. In this way, reusing arrays and their processing systems reduces costs. Extra arrays, with their acquisition subsystems, can always be added in order to build higher-dimensional systems. The array modules can increase the dimension of the array, but they can also form different kinds of module configurations in order to obtain specific beam patterns, improving the performance of the array, and thus of the complete system. The use of modular subsystems implies the use of a central unit that joins the data from all the modules and controls them.

Finally, the tools and programming languages to be used should also be defined. As this system has different processing platforms, one solution can be the use of a specific language for each platform, but the use of a common programming language on all platforms is desirable.
2.2. Hardware Setup

2.2.1. MEMS Array

The acoustic image acquisition system shown in this paper is based on a Uniform Planar Array (UPA) of MEMS microphones. This array, which has been entirely developed by the authors, is a square array of 64 (8 × 8) MEMS microphones, spaced uniformly every 2.125 cm, on a rectangular Printed Circuit Board (PCB), as shown in Figure 1.


As can also be observed in Figure 1, the PCB where the MEMS microphones are placed has square gaps between the acoustic sensors, in order to make the array as light and portable as possible.


Figure 1. System microphone array: (a) sensor spacing; (b) system array.


This array was designed to work in an acoustic frequency range between 4 and 16 kHz. The 2.125 cm spacing corresponds to λ/2 for the 8 kHz frequency. This spacing allows a good resolution for low frequencies, while avoiding grating lobes for high frequencies in the angular exploration zone of interest [10].

For the implementation of the array, MP34DT01 microphones from STMicroelectronics were chosen. They are digital MEMS microphones with a PDM (Pulse Density Modulation) interface and a one-bit digital signal output, obtained using a high sampling rate (1 MHz to 3.25 MHz) [24–26]. The main features of these microphones are: low power, omnidirectional response, 63 dB SNR, high sensitivity (−26 dBFS) and an almost flat frequency response (±6 dB in the range of 20 Hz to 20 kHz).
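As a quick check of this design rule (assuming a speed of sound of c ≈ 340 m/s), the λ/2 spacing at 8 kHz is

d = λ/2 = c/(2f) = 340/(2 × 8000) m ≈ 2.125 cm,

which matches the sensor spacing of the PCB.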

2.2.2. Processing System

Taking into account the previous requirements, the hardware used to implement the system was selected. The design of specific hardware was rejected due to the high cost of the design and the required time, so a search for a commercial solution was carried out.

The myRIO platform [27] was selected as the base unit for this system. This platform belongs to the Reconfigurable Input-Output (RIO) family of devices from National Instruments, which is oriented to sensors with nonstandard acquisition procedures, allowing low-level programming of the acquisition routines. Specifically, myRIO is embedded hardware based on a Xilinx Zynq 7010 chip, which incorporates an FPGA and a dual-core ARM® Cortex™-A9 processor. The FPGA has 40 digital input/output lines, 32 of which are used as the connection interface with the 64 MEMS microphones of the array, multiplexing two microphones on each I/O line, while the other eight lines are used for clock generation and synchronization. The ARM processor is equipped with 256 MB of DDR3 RAM, 512 MB of built-in storage space, a USB host port, and a Wi-Fi interface. All this hardware is enclosed in a small box (136 × 89 × 25 mm) that costs about $1000.

The embedded processor included in myRIO is capable of running all the software algorithms to generate acoustic images, so it can be used as a standalone array module, formed by a myRIO connected to a MEMS array board, as shown in Figure 2. The acoustic images can be stored in the internal storage of myRIO or on an external disk connected through the USB port.


Figure 2. Array module with myRIO and MEMS array board.

Although myRIO can work as a standalone system, the lack of a display means that it is usually controlled from a PC connected using a Wi-Fi interface. In a global hardware setup, as shown in Figure 3, the system includes a PC and one or more array modules. The PC performs three main functions:

• As user interface, the PC allows changing the system parameters and visualizing the acoustic images.
• As processing unit, the processors inside the PC can be used to execute the algorithms in order to obtain the acoustic images faster. Two processors are available in the PC: a general-purpose PC processor and a Graphical Processing Unit (GPU) included in the graphics card.
• As control unit, a single PC can manage several myRIO platforms, each one associated with an array module. This feature allows clustering several modules for a proper operation of the system; the modules are synchronized between them using their I/O lines, where one myRIO is the master and the others are slaves.

Figure 3. Global hardware setup.


2.3. Software Algorithms

The algorithms implemented in the system, shown in Figure 4, can be divided into three blocks: MEMS acquisition, signal processing, and image generation.

Figure 4. Software algorithms diagram.

The programming language used is LabVIEW 2015, along with its Real-Time, FPGA, and GPU modules, which allows developing applications on different hardware platforms like those used in the system: FPGA, Embedded Processor (EP), PC, and GPU. In addition, most of the developed algorithms can run on any of the platforms without reprogramming.

In the acquisition block, each MEMS microphone with a PDM interface, which internally incorporates a one-bit sigma-delta converter with a sampling frequency of 2 MHz, performs signal acquisition. Thus, each acquired signal is coded with only one bit per sample. This block is implemented in the FPGA, generating a common clock signal for all the MEMS and reading the 64 sensor signals simultaneously via the digital inputs of the FPGA. These signals are stored in 64-bit binary words, where each bit stores the signal of one MEMS. Thus, the size of the data is minimal and the transfer rate is high.
In the signal processing block, two routines are implemented: (i) deinterlacing, through which the 64 one-bit signals are extracted from each binary word; and (ii) decimation and filtering, where, applying downsampling techniques based on decimation and filtering [25], 64 independent signals are obtained and the sampling frequency is reduced from 2 MHz to 50 kHz. A sketch of these two routines is given below.

Finally, in the image generation block, based on wideband beamforming, a set of N × N steering directions is defined, and the beamformer output is assessed for each of these steering directions. Wideband beamforming [3] computes the FFT of the MEMS signals xi[n]; multiplies, element by element, each FFT Xi[k] by a phase vector that depends on the steering direction and the sensor position; and finally takes the sum of the phase-shifted FFTs, as shown in Figure 5. The images generated are then displayed and stored in the system.
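The following minimal sketch expresses these two routines in Python/NumPy terms (the system itself implements them in LabVIEW on the FPGA and EP; the single-stage FIR decimator and all names here are illustrative assumptions):

```python
import numpy as np
from scipy.signal import firwin, lfilter

def deinterlace(words, n_mics=64):
    """Extract one 1-bit PDM stream per microphone from 64-bit words.

    words: uint64 array, one word per PDM sample instant; bit i of each
    word holds the sample of microphone i. Returns an (n_mics, n_samples)
    array with values in {-1, +1}.
    """
    shifts = np.arange(n_mics, dtype=np.uint64)[:, None]
    bits = (words[None, :] >> shifts) & np.uint64(1)
    return 2.0 * bits.astype(np.float64) - 1.0

def decimate_and_filter(pdm, fs_in=2e6, fs_out=50e3):
    """Low-pass filter and downsample each PDM stream to the audio rate.

    A single-stage FIR decimator is used here for clarity; a production
    design would typically cascade CIC and FIR stages.
    """
    factor = int(fs_in // fs_out)                    # 40: 2 MHz -> 50 kHz
    taps = firwin(255, cutoff=fs_out / 2, fs=fs_in)  # anti-alias low-pass
    return lfilter(taps, 1.0, pdm, axis=1)[:, ::factor]
```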

Figure 5. Wideband beamforming.
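A compact sketch of this frequency-domain beamformer follows (again Python/NumPy rather than the deployed LabVIEW code; the geometry conventions, sensor-plane coordinates and steering parametrization are assumptions):

```python
import numpy as np

C = 343.0  # assumed speed of sound (m/s)

def wideband_beamform(x, mic_xy, fs, az, el):
    """Broadband beam power for one steering direction.

    x:      (n_mics, n_samples) sensor signals
    mic_xy: (n_mics, 2) sensor positions in the array plane (m)
    az, el: steering azimuth and elevation (rad)
    """
    n = x.shape[1]
    X = np.fft.rfft(x, axis=1)                           # per-sensor FFTs X_i[k]
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    u = np.array([np.sin(az) * np.cos(el), np.sin(el)])  # direction cosines
    delays = mic_xy @ u / C                              # per-sensor delays (s)
    phase = np.exp(2j * np.pi * np.outer(delays, f))     # phase vectors
    Y = (X * phase).sum(axis=0)                          # sum of shifted FFTs
    return np.sum(np.abs(Y) ** 2)                        # broadband beam power

# An acoustic image is this value evaluated over an N x N grid of
# (az, el) steering directions.
```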


3. MEMS Array Description and Characterization

The acronym MEMS refers to mechanical systems with a dimension smaller than 1 mm [28], manufactured with tools and technology arising from the integrated circuits (ICs) field. These systems are mainly used for the miniaturization of mechanical sensors. Their small size makes interconnection with other discrete components more difficult. Therefore, when ordered, they are supplied as part of an encapsulated micro-mechanical system composed of a sensor, a signal conditioning circuit and an electric interface [29].

3.1. MEMS Microphones Characterization

An analysis of the frequency response of all the MEMS microphones included in the array was performed. A sinusoidal 4 ms pulse, with a frequency changing between 2 and 18 kHz, was generated using a reference loudspeaker. Previously, the frequency response of the reference loudspeaker had been calibrated using a measurement microphone (Behringer ECM 8000) placed in the same position as the array. Figure 6 shows the arrangement of the components used to perform this analysis. All measurements were performed in an anechoic chamber.

Figure 6. Arrangement of the components in the anechoic chamber.

The frequency response of each MEMS sensor was obtained and normalized according to the loudspeaker's response. Then, the average of the frequency responses was assessed. Figure 7 shows all the responses. It can be observed that the averaged frequency response is essentially flat, with a slight increase at high frequencies. This averaged response is bounded within a range of ±3 dB. Figure 7 also shows that the frequency response of the MEMS sensors varies in a range of ±2 dB around the averaged value.
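In dB terms, this normalization is just a subtraction of the calibrated source response; a minimal sketch with synthetic data (all names and shapes are illustrative, not the actual measurement code):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.linspace(2e3, 18e3, 100)                      # analysis band, 2-18 kHz
H_ref = 0.5 * np.sin(f / 4e3)                        # toy loudspeaker response (dB)
H_raw = H_ref + rng.normal(0.0, 1.0, (64, f.size))   # toy sensor measurements (dB)

H_norm = H_raw - H_ref       # remove the source coloration (dB subtraction)
H_avg = H_norm.mean(axis=0)  # averaged array response
spread = H_norm - H_avg      # per-sensor deviation around the average
```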


Figure 7. Frequency responses and averaged response of the MEMS microphones.

3.2. MEMS Array Characterization

3.2.1. Acoustic Characterization of an Array Module

Figure 8 shows some theoretical beam patterns of the UPA system, pointed towards several steering angles ([azimuth, elevation]): [−15°, −15°], [0°, 0°] and [5°, 10°], for several working frequencies: 4 kHz, 8 kHz and 16 kHz. It can be observed that: (i) as the frequency increases, the array angular resolution also increases, because the mainlobe beamwidth is reduced; and (ii) the sidelobe level is constant, at −13 dB, for all frequencies. Figure 8b,c show that, for high frequencies, there are no grating lobes in the angular exploration zone of interest.


Figure 8. Theoretical beam patterns for different frequencies and pointing to different steering angles [azimuth, elevation]: (a) 4 kHz and [−15°, −15°]; (b) 8 kHz and [0°, 0°]; and (c) 16 kHz and [5°, 10°].

For the acoustic characterization of the MEMS array, a reference loudspeaker placed in different positions was employed to obtain its beam patterns. Beamforming was carried out with a wideband FFT algorithm, focused on the loudspeaker position. Figure 9 shows some of these beam patterns. The measured beam patterns are very similar to the theoretical ones, which assume that the acoustic sensors are omnidirectional and paired in phase. Nevertheless, a more detailed analysis of the measured beam patterns shows that: (i) there are more sidelobes with a level higher than −20 dB; and (ii) there is a very small displacement of the sidelobes, which are closer together. These effects arise because the gain of each microphone is slightly different at each frequency, as shown in Figure 7. This is the same effect as applying windowing techniques to the beamforming weight vector, which modifies the level and the position of the sidelobes. Thus, as the variations of the measured beam pattern with respect to the theoretical one are limited, it is not necessary to apply calibration techniques to the array.
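The windowing analogy can be reproduced numerically. A minimal sketch for a single 8-element row of the array, assuming the ±2 dB gain ripple measured above (the remaining parameters are illustrative):

```python
import numpy as np

# Array factor of an 8-element row at 8 kHz, with and without a small
# per-element gain ripple (+/-2 dB, as measured in Figure 7).
c, f, d, n = 343.0, 8e3, 0.02125, 8
rng = np.random.default_rng(1)
theta = np.linspace(-np.pi / 2, np.pi / 2, 1001)
phase = 2j * np.pi * f * d * np.arange(n)[:, None] * np.sin(theta) / c
ideal = np.abs(np.exp(phase).sum(axis=0))
gains = 10 ** (rng.uniform(-2, 2, n) / 20)           # +/-2 dB gain ripple
rippled = np.abs((gains[:, None] * np.exp(phase)).sum(axis=0))
# Comparing 20*log10(ideal) and 20*log10(rippled) shows slightly shifted
# and raised sidelobes, like a mild windowing of the weight vector.
```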


Figure 9. Measured beam patterns for different frequencies and pointing to different steering angles [azimuth, elevation]: (a) 4 kHz and [−15°, −15°]; (b) 8 kHz and [0°, 0°]; and (c) 16 kHz and [5°, 10°].

3.2.2. Acoustic Characterization of Array Clusters

The proposed system, characterized by its modularity and scalability, can group together multiple modules of 64 sensors to build clusters with a very large number of sensors, where the geometry and spatial properties can be adapted to specific application requirements. As an example, the acoustic characterizations of three cluster geometries are shown:

• A row cluster, to increase the directivity in one direction.
• A square cluster, to increase directivity in two orthogonal directions.
• A star cluster, to implement special radiation beam patterns.

Figure 10 shows the implemented clusters, the theoretical beam patterns, steered towards the broadside for 8 kHz, and the measured beam patterns.



Figure 10. Cluster (col. 1), theoretical (col. 2) and measured (col. 3) beam patterns for 8 kHz, pointed to the broadside. Cluster geometries: row (a); square (b); and star (c).

The row cluster shows that the beamwidth in azimuth has been reduced by a factor of 3, increasing the angular resolution of the image in that direction. In the square cluster, the beamwidth in azimuth and elevation is halved. Finally, in the star cluster, a radially symmetrical pattern is achieved, with a similar beamwidth in multiple directions.

4. Multiplatform Processing Framework

On the basis of the global hardware setup presented in Section 2, a multiplatform framework with four processing levels, each one implemented over a hardware platform, was defined. These processing levels are:

• Level 1 (L1) corresponds to the FPGA, based on its capacity to carry out simple filtering and decimation tasks. The parallelization degree is maximum and is limited by the number of FPGA resources (Look-Up Tables, multipliers, DSP units, RAMs, etc.).
• Level 2 (L2) is based on an Embedded Processor (EP), such as an ARM processor, that picks up the FPGA signals and carries out the first processing stages. It has limited memory as well as limited processing and storage capacity.
• Level 3 (L3) is based on a PC processor, such as an Intel Core i5/i7 with two to four cores. It is in charge of the main processing of the application, with medium cost and consumption. It has a high processing capacity, a great amount of memory (up to 64 GB) and disk-based storage.
• Level 4 (L4) is formed by coprocessors, which can carry out massive FFT and linear algebra operations, such as a Graphical Processing Unit (GPU); they have from 200 to 1200 cores.

Processing time, parallelization degree and required memory must be analyzed for all the algorithms needed to obtain an acoustic image, described in Section 2. The objective of this analysis is to determine the platforms/levels where these algorithms can be implemented, and the optimal distribution of the algorithms among the available platforms.

The time required to transfer data between levels should also be taken into account. This transfer time can, in many situations, be similar to, or even greater than, the algorithm processing time.
Ideally, these transfers should be minimized, in order to process data on the same level and work with a one-way processing flow, i.e., the FPGA sends data to the EP, and the EP sends its data to the PC. When one level is used as a coprocessor, bidirectional flows are established, i.e., between the EP and the PC processor, through a TCP-IP interface, or between the PC processor and the GPU, using a PCIe interface.

4.1. Analysis Settings

In order to analyze the performance of the algorithms at each level, a work scheme based on an active acoustic system was defined. This system sends a multifrequency acoustic signal that reaches the person under test; the reflected signal is then collected by the MEMS array. Finally, the multichannel signal is processed following the block diagram shown in Figure 11.


Figure 11. Processing block diagram.
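As an illustration, a multifrequency excitation pulse of the kind described could be synthesized as below (a sketch: the paper does not detail the emitted waveform, so the tone set and duration are assumptions):

```python
import numpy as np

fs, dur = 50e3, 0.004                  # 4 ms pulse at the 50 kHz system rate
t = np.arange(int(fs * dur)) / fs
tones = [4e3, 8e3, 16e3]               # working frequencies used in Section 3
pulse = sum(np.sin(2 * np.pi * f0 * t) for f0 in tones) / len(tones)
```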

There are hardware constraints that make the implementation of some algorithms at certain processing levels unfeasible; e.g., MEMS acquisition can only be carried out in the FPGA, and image storage cannot be executed either in the FPGA or in the GPU. Table 1 shows the main algorithms used and the processing levels where they could be implemented.

Table 1. Algorithms vs. processing level.

Algorithms                  L1 FPGA   L2 EP   L3 PC   L4 GPU
MEMS acquisition               •
Deinterlacing                  •        •       •
Decimation and filtering       •        •       •
Wideband beamforming                    •       •        •
Image storage                           •       •

The performance measurements have been carried out using the global hardware setup, with one array module controlled by a PC. The selected PC is based on an i5 processor with four cores and 32 GB of RAM, including an NVIDIA GTX 660 card with 960 cores. As algorithm parameters, an acquisition time of 30 ms, 256-point FFTs, and a grid of 40 × 40 steering directions have been assumed.

4.2. Performance Analysis

4.2.1. Signal Processing


The time required to implement each of the algorithms included in the signal processing, on the different levels, is presented in Table 2.

Table 2. Processing and transfer times¹ for the signal processing tasks.

Algorithms                               L1 FPGA   L2 EP   L3 PC   L4 GPU
Processing time: Deinterlacing              0      270.3    22.4      -
Processing time: Decimation/Filtering       0      354.1     9.7      -
Transfer time: EP→PC                        -        -     113.0      -
Total                                       0      624.4   145.1      -

¹ Time is expressed in ms.

Level 1 allows the implementation of all these algorithms using FPGA hardware resources, simultaneously with the capture, without consuming additional processing time. Analyzing the data from Levels 2 and 3, it can be observed that the PC processing time is about 20 times lower than the time required by the EP. The times on Level 3 must be increased by the time needed to transfer data from the EP to the PC for further processing; this transfer time was measured and its value is about 113 ms. Level 4, based on GPUs, was discarded because the algorithms required to perform deinterlacing and decimation/filtering are not available for this platform in LabVIEW.


4.2.2. Image Generation

Table 3 shows the processing times related to wideband beamforming, and the transfer times between the PC and the GPU, for the generation of an acoustic image.

Table 3. Processing and transfer times¹ for wideband beamforming.

Algorithms                               L1 FPGA   L2 EP   L3 PC   L4 GPU
Processing time: Wideband beamforming       -      257.3    18.5    25.8
Transfer time: PC→GPU→PC                    -        -        -      7.6
Total                                       -      257.3    18.5    33.4

¹ Time is expressed in ms.

Level 1 is discarded due to the fact that: (i) the FPGA included in myRIO does not have enough memory to generate and store images; and (ii) the implementation of the beamforming algorithms requires FPGAs with a large number of slices, which makes them more costly. Processing times on Level 2 are the longest and the memory capacity of the EP is limited; therefore, this level is also discarded.

In order to compare Levels 3 and 4, the transfer times between the PC and the GPU should be considered. The results in Table 3 show that it is preferable to perform the beamforming on the PC. These results might seem contradictory, since the processing power of the GPU is higher than the PC's. This is due to the following: (i) for 64 sensors, the GPU capacity for parallel processing is not fully used; and (ii) the libraries included in LabVIEW-GPU are limited, which forces multiple data transfers between the PC and GPU memories. The implementation of the beamforming algorithm in native code for the GPU, and its subsequent invocation from LabVIEW, would significantly reduce the overall processing time.

If the number of sensors of the system increases, by adding multiple modules of 64 sensors, the operations performed increase proportionately, taking advantage of the whole capacity of GPU parallel processing. In Table 4, the processing and transfer times versus the number of sensors are analyzed. If the number of sensors is larger than 128, GPU performance improves on that of the PC.

Table 4. Processing and transfer times¹ vs. number of sensors in the system.

            PC                    GPU
# Sensors   Processing Time:      Processing Time:   Transfer Time:   Total
            Wideband BF²          Wideband BF        PC↔GPU
64           18.5                  25.8               7.6              33.4
128          49.1                  40.7               8.1              48.8
256         100.2                  72.9               8.7              81.6
512         198.6                 138.8              11.2             150.0
1024        394.3                 267.8              12.5             280.3

¹ Time is expressed in ms; ² BF: beamforming.

Image storage can be implemented in Levels 2 and 3, because embedded microprocessors and PCs have the capacity to store data. Embedded systems incorporate a low-capacity internal disk and can use external high-capacity USB drives/disks. PCs include both high-capacity internal and external hard drives. As storage times are very dependent on the type and model of the media used, they were left out of the analysis.

4.3. Framework Proposals

Depending on the level where the signal processing algorithms run, there are three implementation options. In turn, for each of these possibilities, there are up to three options depending on where the wideband beamforming algorithm is implemented. Table 5 summarizes the different options to implement the acoustic imaging algorithms, with their corresponding processing and transfer times, for all feasible processing levels.


Table 5. Processing and transfer times 1 for acoustic imaging algorithms.

Signal processing at:     L1: FPGA                  L2: EP                    L3: PC
Wideband BF 2 at:         EP      PC      GPU       EP      PC      GPU       PC      GPU
Signal processing         0       0       0         624.4   624.4   624.4     32.1    32.1
Wideband BF               257.3   18.5    25.8      257.3   18.5    25.8      18.5    25.8
Transfer EP→PC            -       113.0   113.0     -       113.0   113.0     113.0   113.0
Transfer PC→GPU           -       -       7.6       -       -       7.6       -       7.6
Total                     257.3   131.5   146.4     881.7   755.9   770.8     163.6   178.5

1 Time is expressed in ms; 2 BF: Beamforming.

The frameworks based on implementing the signal processing algorithms on the embedded processor or on the PC (the grey columns, i.e., those with signal processing at L2 or L3) are discarded, as the FPGA allows the capture and the signal processing to run in parallel. Thus, only three framework proposals (white columns) have been considered, associating each one with a particular use (their total latencies are recomputed in the sketch after this list):

(1) Embedded system: The whole processing takes place in the embedded processor/FPGA without a PC, as shown in Figure 12a. This framework is optimal for applications that require portable systems and where the processing speed is not critical.
(2) PC system: The processing is shared between the FPGA and the PC. The embedded processor is used to control and transfer data between the PC and FPGA, as shown in Figure 12b. This framework presents the lowest processing time using 64 sensors. It is optimal for systems that require a short response time and/or a small number of sensors.
(3) PC system with GPU: The algorithms are implemented in the same way as in the previous framework, excluding beamforming, which is implemented on the GPU, as shown in Figure 12c. This framework improves processing time as the number of sensors increases. It is the most versatile framework and it is optimal for systems that require a large number of sensors.
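For readers comparing the three retained options, the following minimal Python sketch (not part of the authors' implementation; the per-stage values are simply copied from Table 5 for 64 sensors) recomputes each total latency from the processing and transfer times:

```python
# Illustrative sketch: total latency of the three retained frameworks,
# using the per-stage times from Table 5 (in ms, 64 sensors).

SIGNAL_PROC_FPGA = 0.0                      # L1 runs capture + signal processing in parallel
BEAMFORM = {"EP": 257.3, "PC": 18.5, "GPU": 25.8}
TRANSFER = {"EP->PC": 113.0, "PC->GPU": 7.6}

def total_time(bf_level: str) -> float:
    """Latency when the FPGA does the signal processing and the
    wideband beamforming runs at the given level."""
    t = SIGNAL_PROC_FPGA + BEAMFORM[bf_level]
    if bf_level in ("PC", "GPU"):
        t += TRANSFER["EP->PC"]             # data must reach the PC first
    if bf_level == "GPU":
        t += TRANSFER["PC->GPU"]
    return t

for level in ("EP", "PC", "GPU"):
    print(f"FPGA + BF on {level}: {total_time(level):.1f} ms")
# FPGA + BF on EP:  257.3 ms  (embedded system)
# FPGA + BF on PC:  131.5 ms  (PC system, lowest for 64 sensors)
# FPGA + BF on GPU: 146.4 ms  (PC system with GPU)
```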

(a)

(b)

(c)

Figure 12. Embedded system (a); PC system (b); and PC system with GPU framework (c).


5. Case Study: Biometric Identification of People

The proposed system has multiple applications: localization and characterization of noise or vibration sources, spatial filtering and elimination of acoustic interference, etc. This case study focuses on biometric identification, as an extension of the previous system developed by the authors with a linear array of analog microphones [10]. This system could be used as an access control for a medium-sized research department, where only a few subjects have authorized access.

The biometric identification system is based on placing the person in front of the array and sending a multifrequency signal that is reflected off the person to be identified. The person and the system are placed inside an anechoic chamber in order to simplify the processing, but the chamber could be avoided if clutter removal techniques were used. The reflected signal is captured by the microphone array, obtaining several acoustic images for different ranges. The acoustic images are pre-processed to extract the information needed for further biometric identification. Figure 13 shows the image of a person under testing.
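As an illustration of how such an acoustic image can be formed from the array snapshots, the sketch below implements a frequency-domain (FFT-based) delay-and-sum beamformer for an 8 × 8 planar array. It is a minimal sketch under stated assumptions, not the authors' code: the element pitch, sampling rate, and steering convention are illustrative values that do not come from the paper.

```python
import numpy as np

# Minimal frequency-domain beamformer sketch for an 8x8 planar MEMS array.
# Assumed values (not from the paper): sampling rate FS, element pitch D,
# and the azimuth/elevation steering convention.
C = 343.0        # speed of sound (m/s)
FS = 48000       # sampling rate (Hz), assumed
N = 8            # 8x8 array
D = 0.0215       # element pitch (m), assumed

# Element positions on the array plane, centred at the origin
ix = np.arange(N) - (N - 1) / 2
px, py = np.meshgrid(ix * D, ix * D)          # (N, N) grids
pos = np.stack([px.ravel(), py.ravel()])      # (2, 64) element coordinates

def acoustic_image(x, freq, az_grid, el_grid):
    """Beamformer power at one frequency bin over an (azimuth, elevation) grid.
    x: (64, n_samples) array of microphone snapshots."""
    X = np.fft.rfft(x, axis=1)                          # FFT per channel
    k = int(round(freq * x.shape[1] / FS))              # bin of interest
    Xk = X[:, k]                                        # (64,) narrowband snapshot
    img = np.empty((len(el_grid), len(az_grid)))
    for i, el in enumerate(el_grid):
        for j, az in enumerate(az_grid):
            # unit direction projected onto the array plane
            u = np.array([np.sin(az) * np.cos(el), np.sin(el)])
            delays = (u @ pos) / C                      # per-element delay (s)
            w = np.exp(2j * np.pi * freq * delays)      # steering vector
            img[i, j] = np.abs(w.conj() @ Xk) ** 2      # beamformed power
    return img
```

Sweeping `freq` over the bins occupied by the multifrequency pulse would yield one image per frequency, in the spirit of the images discussed below.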

Figure 13. Example of a person under testing.

Figure 14 shows an example of some acoustic images obtained for a range of 2 m, with a ±22° angle in the azimuth coordinate and a ±15° angle in elevation. It can be observed that, as the frequency increases, the spatial resolution improves and the main parts of the body can be discerned (the rough beamwidth check below illustrates why). The use of parameter extraction algorithms will improve the classification, as shown in the authors' previous work using 1D microphone arrays [12].
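The improvement with frequency follows from the usual aperture rule of thumb (main-lobe width ≈ λ/L). The quick check below uses assumed dimensions only (8 elements per axis at a ~2.15 cm pitch, so L ≈ 0.15 m; none of these values are taken from the paper):

```python
import numpy as np

# Rough angular-resolution estimate: beamwidth ~ lambda / aperture.
# Assumed aperture: 8 elements per axis, 2.15 cm pitch -> L = 7 * 0.0215 m.
C = 343.0                  # speed of sound (m/s)
L = 7 * 0.0215             # aperture length (m), assumed
for f in (4_000, 8_000, 16_000):            # illustrative frequencies (Hz)
    beamwidth = np.degrees((C / f) / L)     # lambda/L, in degrees
    print(f"{f/1000:g} kHz -> ~{beamwidth:.1f} deg beamwidth")
# Doubling the frequency halves the beamwidth, i.e., doubles the resolution.
```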

Figure 14. Acoustic images of a person with increasing frequencies, from left to right, at a constant range.



The range information can also be obtained from the captured images. Figure 15 shows images for a constant frequency and elevations of 5°, 0° and −10°, with 0.5 m range intervals and a ±22° azimuth. Figure 15b shows that the torso is closer to the MEMS array than the arms. These results show that the designed system allows for the acquisition of 3D acoustic images.
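The per-range images suggest a simple range-gating step before beamforming, since an echo arriving at time t corresponds to a range r = c·t/2. The sketch below shows the idea under assumed parameters (the paper does not detail this step; the sampling rate and windowing are illustrative):

```python
# Sketch of range gating: slice the echo record into fixed time windows,
# one per 0.5 m range interval (r = c*t/2), as in Figure 15.
C, FS = 343.0, 48000                           # sound speed (m/s), assumed rate (Hz)
R_STEP = 0.5                                   # range interval (m)
SAMPLES_PER_BIN = int(2 * R_STEP / C * FS)     # ~139 samples per interval

def range_slices(x):
    """x: (channels, n_samples) echo record -> one snapshot per range bin."""
    n_bins = x.shape[1] // SAMPLES_PER_BIN
    return [x[:, i * SAMPLES_PER_BIN:(i + 1) * SAMPLES_PER_BIN]
            for i in range(n_bins)]
```

Beamforming each slice with the earlier sketch would then yield one acoustic image per range interval.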

(a)

(b)

(c)

Figure 15. Acoustic images of a person at different elevations: (a) 5°; (b) 0°; and (c) −10°, with a constant frequency.

6. Conclusions

A modular, scalable, multiplatform, and low-cost acquisition and processing system was designed to obtain 3D acoustic images. This system is based on a module with a myRIO platform and a planar array that consists of 64 MEMS microphones uniformly distributed on an 8 × 8 grid. The system can work with only one module or with a cluster of several of them, using the same PC unit. The system can be adapted to different cost and mobility scenarios by means of its reconfigurable multi-platform framework, where each processing task can be interchanged between its different levels.

A digital MEMS microphone was selected as the acoustic sensor. This microphone allows the integration of a large number of sensors on a small-sized board at low cost. An analysis of the frequency responses of these 64 MEMS microphones was carried out, obtaining: (i) a flat average response in the acoustic band, with a variation of ±3 dB; and (ii) a dispersion between the responses lower than ±2 dB. The beampatterns of an array module and of different clusters of modules were also characterized, verifying that they fit the theoretical models, so it is not necessary to calibrate the array.

Finally, a multiplatform framework and the necessary algorithms to obtain acoustic images from the data captured by each microphone were jointly defined. The processing time of the algorithms was evaluated on each platform. Based on these results, three different frameworks were defined for specific uses.

Thus, a versatile system for different applications, due to its modularity, scalability and reconfigurability, was designed to obtain acoustic images. Currently, the authors are using the system in the field of biometric identification, working on feature extraction and person identification from acoustic images.
Acknowledgments: This work has been funded by the Spanish research project SAM: TEC 2015-68170-R (MINECO/FEDER, UE).

Author Contributions: Juan José Villacorta and Lara Del Val Puente designed and implemented the software related with levels 0 and 1 of the framework. Alberto Izquierdo and Juan José Villacorta designed and implemented the software related with levels 2 and 3 of the framework. Luis Suárez and Juan José Villacorta designed and made the hardware prototypes. Lara Del Val Puente and Alberto Izquierdo conceived and performed the experiments, and analyzed the data. Alberto Izquierdo, Juan José Villacorta and Lara Del Val Puente wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.


Abbreviations

The following abbreviations are used in this manuscript:

MEMS   Micro-Electro-Mechanical Systems
FPGA   Field Programmable Gate Array
FFT    Fast Fourier Transform
ULA    Uniform Linear Array
ADC    Analog-Digital Converter
DOA    Direction of Arrival
IC     Integrated Circuit
SNR    Signal to Noise Ratio
PDM    Pulse Density Modulation
I2S    Integrated Interchip Sound
PCM    Pulse Code Modulation
UPA    Uniform Planar Array
PCB    Printed Circuit Board
GPU    Graphical Processing Unit
RIO    Reconfigurable I/O
PC     Personal Computer
OS     Operating System

References

1. Gan, W.S. Acoustic Imaging: Techniques and Applications for Engineers; John Wiley & Sons: New York, NY, USA, 2012.
2. Skolnik, M.I. Introduction to RADAR Systems, 3rd ed.; McGraw-Hill Education: New York, NY, USA, 2002.
3. Van Trees, H. Optimum Array Processing: Part IV of Detection, Estimation and Modulation Theory; John Wiley & Sons: New York, NY, USA, 2002.
4. Brandstein, M.; Ward, D. Microphone Arrays; Springer: New York, NY, USA, 2001.
5. Van Veen, B.D.; Buckley, K.M. Beamforming: A Versatile Approach to Spatial Filtering. IEEE ASSP Mag. 1988, 5, 4–24. [CrossRef]
6. Del Val, L.; Jiménez, M.; Izquierdo, A.; Villacorta, J. Optimisation of sensor positions in random linear arrays based on statistical relations between geometry and performance. Appl. Acoust. 2012, 73, 78–82. [CrossRef]
7. Duran, J.D.; Fuente, A.I.; Calvo, J.J.V. Multisensorial modular system of monitoring and tracking with information fusion techniques and neural networks. In Proceedings of the IEEE International Carnahan Conference on Security Technology, Madrid, Spain, 5–7 October 1999; pp. 59–66.
8. Izquierdo-Fuente, A.; Villacorta-Calvo, J.; Raboso-Mateos, M.; Martínez-Arribas, A.; Rodríguez-Merino, D.; del Val-Puente, L. A human classification system for a video-acoustic detection platform. In Proceedings of the International Carnahan Conference on Security Technology, Albuquerque, NM, USA, 12–15 October 2004; pp. 145–152.
9. Villacorta-Calvo, J.; Jiménez-Gómez, M.I.; del Val-Puente, L.; Izquierdo-Fuente, A. A Configurable Sensor Network Applied to Ambient Assisted Living. Sensors 2011, 11, 10724–10737. [CrossRef] [PubMed]
10. Izquierdo-Fuente, A.; del Val-Puente, L.; Jiménez-Gómez, M.I.; Villacorta-Calvo, J. Performance evaluation of a biometric system based on acoustic images. Sensors 2011, 11, 9499–9519. [CrossRef] [PubMed]
11. Izquierdo-Fuente, A.; del Val-Puente, L.; Villacorta-Calvo, J.; Raboso-Mateos, M. Optimization of a biometric system based on acoustic images. Sci. World J. 2014, 2014. [CrossRef] [PubMed]
12. Del Val, L.; Izquierdo, A.; Villacorta, J.; Raboso, M. Acoustic Biometric System Based on Preprocessing Techniques and Support Vector Machines. Sensors 2015, 15, 14241–14260. [CrossRef] [PubMed]
13. Tiete, J.; Domínguez, F.; da Silva, B.; Segers, L.; Steenhaut, K.; Touhafi, A. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization. Sensors 2014, 14, 1918–1949. [CrossRef] [PubMed]
14. Edstrand, A.; Bahr, C.; Williams, M.; Meloy, J.; Reagan, T.; Wetzel, D.; Sheplak, M.; Cattafesta, L. An Aeroacoustic Microelectromechanical Systems (MEMS) Phased Microphone Array. In Proceedings of the 21st AIAA Aerodynamic Decelerator Systems Technology Conference and Seminar, Dublin, Ireland, 23–26 May 2011.
15. Zhang, X.; Song, E.; Huang, J.; Liu, H.; Wang, Y.; Li, B.; Yuan, X. Acoustic Source Localization via Subspace Based Method Using Small Aperture MEMS Arrays. J. Sens. 2014, 2014. [CrossRef]
16. Zhang, X.; Huang, J.; Song, E.; Liu, H.; Li, B.; Yuan, X. Design of Small MEMS Microphone Array Systems for Direction Finding of Outdoors Moving Vehicles. Sensors 2014, 14, 4384–4398. [CrossRef] [PubMed]
17. Zwyssig, E.; Faubel, F.; Renals, S.; Lincoln, M. Recognition of overlapping speech using digital MEMS microphone arrays. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013; pp. 7068–7072.
18. Hafizovic, I.; Nilsen, C.; Kjølerbakken, M.; Jahr, V. Design and implementation of a MEMS microphone array system for real-time speech acquisition. Appl. Acoust. 2012, 73, 132–143. [CrossRef]
19. White, R.; Krause, J.; De Jong, R.; Holup, G.; Gallman, J.; Moeller, M. MEMS Microphone Array on a Chip for Turbulent Boundary Layer Measurements. In Proceedings of the 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Nashville, TN, USA, 9–12 January 2012.
20. Groschup, R.; Grosse, C.U. MEMS Microphone Array Sensor for Air-Coupled Impact Echo. Sensors 2015, 15, 14932–14945. [CrossRef] [PubMed]
21. Turqueti, M.; Saniie, J.; Oruklu, E. Scalable Acoustic Imaging Platform Using MEMS Array. In Proceedings of the 2010 IEEE International Conference on Electro/Information Technology (EIT), Normal, IL, USA, 20–22 May 2010; pp. 1–4.
22. Vanwynsberghe, C.; Marchiano, R.; Ollivier, F.; Challande, P.; Moingeon, H.; Marchal, J. Design and implementation of a multi-octave-band audio camera for real-time diagnosis. Appl. Acoust. 2015, 89, 281–287. [CrossRef]
23. Sorama: Sound Solutions. Available online: www.sorama.eu/Solution/measurements (accessed on 9 June 2016).
24. Scheeper, P.R.; van der Donk, A.G.H.; Olthuis, W.; Bergveld, P. A review of silicon microphones. Sens. Actuators A Phys. 1994, 44, 1–11. [CrossRef]
25. Park, S. Principles of Sigma-Delta Modulation for Analog-to-Digital Converters. Available online: http://www.numerix-dsp.com/appsnotes/APR8-sigma-delta.pdf (accessed on 2 May 2016).
26. Norsworthy, S.; Schreier, R.; Temes, G. Delta-Sigma Data Converters: Theory, Design, and Simulation; Wiley-IEEE Press: New York, NY, USA, 1996.
27. Acoustic System of Array Processing Based on High-Dimensional MEMS Sensors for Biometry and Analysis of Noise and Vibration. Available online: http://sine.ni.com/cs/app/doc/p/id/cs-16913 (accessed on 14 September 2016).
28. Hsieh, C.T.; Ting, J.-M.; Yang, C.; Chung, C.K. The introduction of MEMS packaging technology. In Proceedings of the 4th International Symposium on Electronic Materials and Packaging, Kaohsiung, Taiwan, 4–6 December 2002; pp. 300–306.
29. Beeby, S.; Ensell, G.; Kraft, M.; White, N. MEMS Mechanical Sensors; Artech House Publishers: Norwood, MA, USA, 2004.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
