Sensors Article

LEA Detection and Tracking Method for Color-Independent Visual-MIMO

Jai-Eun Kim, Ji-Won Kim and Ki-Doo Kim *

Department of Electronic Engineering, Kookmin University, Seongbuk-gu, Seoul 136-702, Korea; [email protected] (J.-E.K.); [email protected] (J.-W.K.)
* Correspondence: [email protected]; Tel.: +82-2-910-4707

Academic Editors: Thomas Little, Takaya Yamazato and Yu Zeng
Received: 28 February 2016; Accepted: 27 June 2016; Published: 2 July 2016

Abstract: Communication performance in the color-independent visual-multiple input multiple output (visual-MIMO) technique is deteriorated by light emitting array (LEA) detection and tracking errors in the received image, because the image sensor included in the camera must be used as the receiver in the visual-MIMO system. In this paper, in order to improve detection reliability, we first set up the color-space-based region of interest (ROI) in which an LEA is likely to be placed, and then use the Harris corner detection method. Next, we use Kalman filtering for robust tracking by predicting the most probable location of the LEA when the relative position between the camera and the LEA varies. In the last step of our proposed method, the perspective projection is used to correct the distorted image, which can improve the symbol decision accuracy. Finally, through numerical simulation, we show the possibility of robust detection and tracking of the LEA, which results in a symbol error rate (SER) performance improvement.

Keywords: visual-MIMO; GCM; color independency; LEA detection; LEA tracking

1. Introduction

Light emitting diodes (LEDs) are used in a wide variety of lighting equipment because they are eco-friendly, low-power devices. Recently, many studies of visual-multiple input multiple output (visual-MIMO) techniques have been conducted [1-10]. Here, visual-MIMO indicates visible light communication (VLC) between a light emitting array (LEA) and a camera. In this concept, optical transmissions by an array of light emitting devices are received by an array of photo detector elements (pixels) of a camera. The pixels in the camera can be treated as an array of highly directional receiver elements. This structure allows a reduction of interference and noise from other light sources in the channel. This system offers the freedom to select and combine a subset of receiver elements that receive a strong signal from the transmitter, and thus achieves high signal-to-noise ratios (SNRs) [1]. In [3], the authors proposed an LED array detection method using an M-sequence and an LED array tracking method using inverted signals. Here, an on-off keying (OOK) modulated signal transmitted through an LED array is received by a high-speed camera. In [5], LED array detection was performed using the pattern of Sync-LEDs. Based on the Sync-LEDs, they could find the start point of an OOK modulated data sequence and calibrate the distorted image snapshots. In [6], the communication between a vehicle and an LED traffic light was conducted using the LED traffic light as a transmitter and an on-vehicle high-speed camera as a receiver. Here, the luminance values of the LEDs in the transmitter should be captured in consecutive frames. In [7], a high frame rate CMOS image sensor camera was introduced as a potential V2I-VLC system. Also, inverted LED patterns were used for tracking. In [8], a special CMOS image sensor, i.e., an optical communication image sensor (OCI), was employed, and a "flag image" obtained from the OCI was used for real-time LED detection.
In [9], they presented HiLight, a new form of real-time screen-camera communication without showing any coded images (e.g., barcodes) for off-the-shelf smart devices.

Sensors 2016, 16, 1027; doi:10.3390/s16071027; www.mdpi.com/journal/sensors



Although it mentioned the importance of transmitter tracking, a detailed analysis was not done, leaving it as a future challenge. In [10], we showed the applicability of generalized color modulation (GCM)-based visual-MIMO for V2X. By using the proposed visual-MIMO scheme, while performing seamless communication, we can maintain the original color and brightness in addition to increasing the capacity by using color encoding. As described above, in most of the previous works, a high-speed camera (or a special CMOS image sensor) was used as a receiver and an intensity-based modulation (e.g., OOK) has been used. On the other hand, we have used a general-purpose commercial camera as a receiver. Also, in our paper, the detection and tracking method of an LEA optimized for GCM-based visual-MIMO is described systematically based on image processing. Our LEA detection method is best suited for GCM, and GCM was originally proposed by us [11]. This visual-MIMO-based communication technique has numerous applications in situations in which line-of-sight (LOS) communication is desirable. For example, this approach enables novel advertising applications such as smartphone users pointing cell phone cameras at electronic billboards to receive further information including documents, movie clips, or website URLs. Another example is a museum application in which a kiosk display transfers exhibit information to cell phone cameras to produce maps, images, and customized audio museum tours. Applications are not limited to hand-held cameras and electronic displays: they also include vehicle-to-everything (V2X) communication, robot-to-robot communication, and hand-held displays for fixed surveillance cameras [8,10].

Additionally, LED lighting devices have been developed to a high level: currently, LEDs can emit light in various colors (close to full-color). The lighting color might be changed depending on the person's emotion or environmental factors. However, it will be useful to achieve visible light communication that can maintain the original color and brightness while performing seamless communication. To solve this problem, a color-space-based modulation (CSBM) scheme, called generalized color modulation (GCM), was proposed and analyzed for color-independent VLC systems [11]. The modifier "color-independent" indicates independence of the variations in the light color and light intensity. Some notable features of GCM include color independency, dimming control, and reasonable bit error rate (BER) performance during color variation. By incorporating GCM into visual-MIMO, we can obtain better symbol error rate (SER) performance, a higher data rate over a larger transmission range, and, most importantly, color independency when compared with conventional LED communication. Figure 1 shows a block diagram of the color-independent visual-MIMO method based on the color space [10].

Figure 1. Color-space-based color-independent visual-MIMO transceiving procedure using image processing.

At the transmitter, it performs the color-space-based modulation on the encoded data. Each constellation point in the light color space represents a corresponding color, and the target color is the average of all appropriate constellation points. Here, the target color indicates the wanted color of the LEA lighting. The target color corresponding to any VLC signal can be chosen from the gamut area, and an information data stream can then be sent by choosing the appropriate constellation diagram corresponding to this target color. The proposed visual-MIMO system thus enables color-independent


communication. Then, the serial-to-parallel conversion is performed on the modulated symbols according to the LEA size. These symbols are then mapped onto the LEA in a prescribed order. At the receiver, a set of symbols (colors) is detected by the image sensor, and the symbol (color) decision corresponding to each LED is performed by using image processing. Finally, the output of demapping in the color space is converted into a serial data stream that is sent to an information sink.

Because the camera is used as a receiver, a variety of distortions might occur when the LEA is projected onto the image sensor, thereby adversely affecting the performance of color-independent visual-MIMO. In this paper, we propose an LEA detection and tracking method, shown in the image processing step in Figure 1, so as to improve SER performance. The remainder of this paper is organized as follows: Section 2 provides a detailed explanation of the proposed LEA detection and tracking techniques; Section 3 presents the results and discussion; finally, Section 4 concludes the paper.

2. LEA Detection and Tracking

Although most visual-MIMO related works [2,5,9] emphasized the importance of LEA detection and tracking, they did not address the detailed method and the corresponding effect on the communication performance. In [2-4], they did not consider the effect of misrecognition on the SER performance and also did not analyze the temporal movement of an LEA associated with tracking. On the other hand, in our paper, the detection and tracking method of an LEA optimized for color-independent visual-MIMO is described systematically based on image processing. Our proposed LEA detection (including tracking) method is also most suitable for the GCM to enhance the SER performance. In many practical applications of visual-MIMO (e.g., the vehicle-to-vehicle (V2V) application), both the transmitter and the receiver can move. Therefore, we propose the LEA detection and tracking method shown in Figure 2.

Figure 2. Block diagram of the proposed LEA detection and tracking method.

The entire process consists of four steps. The first step is to specify the region of interest (ROI) for an LEA from the received image. Then, using the Harris corner detection algorithm [12], we extract the features of the LEA, such as the corners of a rectangle. In the third step, a Kalman filter [13] is used to track the desired LEA. Finally, we correct the distorted shape of the LEA using perspective projection [14]. Figure 3 shows the original configuration and a distorted shape example of an LEA in the received image. As an experimental example, we used an LEA configuration of a square shape with a size of 4 x 4. The shape of the LEA on a received image can be distorted in the form of translation, rotation, and warping. We correct the distorted shape by utilizing the perspective projection technique to increase the probability of a correct decision for each LED color (or symbol) in the array.

Figure 3. LEA configuration and shape distortion example of LEA on a received image.

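The perspective correction step can be sketched with a direct linear transform (DLT): estimate the 3 x 3 homography that maps the four detected (distorted) LEA corners to the canonical corners of the LEA grid, then map any detected LED position back to grid coordinates. This is only a minimal sketch; the corner coordinates below are hypothetical, not taken from the paper.

```python
# Perspective correction via a homography estimated with the DLT from
# four corner correspondences. All coordinate values are hypothetical.
import numpy as np

def homography(src, dst):
    """3x3 H with dst ~ H @ src (homogeneous), from four point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)        # null-space vector of A, as 3x3

def apply_h(H, pt):
    """Apply H to a point and dehomogenize."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return (p[0] / p[2], p[1] / p[2])

# Distorted LEA corners as detected on the image sensor (hypothetical):
src = [(120, 80), (210, 95), (200, 190), (105, 170)]
# Canonical corner positions of the 4 x 4 LEA grid:
dst = [(0, 0), (3, 0), (3, 3), (0, 3)]
H = homography(src, dst)
x, y = apply_h(H, (210, 95))           # maps back to the (3, 0) corner
```

With H in hand, each LED center detected in the warped image can be mapped to its nearest grid position before the symbol (color) decision is made.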


2.1. ROI Selection for LEA Detection

An image received by the camera is likely to include a complex background. In addition, the prominent features of the LEA are that it has a rectangular shape and might also appear as a blob shape during LED light emission. However, these are very common features that other objects may also include. Therefore, it is not easy to detect the LEA over the entire image. To solve this problem, reference LEDs or reference LEA patterns can be used [3,5]. However, the use of references can lower the data rate, and the appearance of the LEA might not be good.

To overcome this weakness, by using the principle of the color space, we specify the ROI for a desired LEA in a received image with a complex background. Because the color-independent visual-MIMO system uses GCM, which is based on a color space, the color information associated with the GCM can be an effective means of LEA detection. Figure 4 presents an example of a circle-type constellation diagram in the CIE1931 color space [15,16]. The input data symbol is represented by a constellation point (x, y), and each constellation point in the constellation diagram represents a color in the color space. In Figure 4, the target color, i.e., the color perceivable to human eyes after modulation, is the average of all appropriate constellation points. Here, constellation points in the color space can be arranged in a manner similar to that used in RF circular quadrature amplitude modulation (QAM). Supposing an equiprobable symbol transmission, which is reasonable due to the compensation and interleaving algorithms, the target color can be obtained as the averaged value along a number of symbols, as shown in Equation (1) [10]:

(x_t, y_t) = ( \sum_{i=1}^{N} x_i / N , \sum_{i=1}^{N} y_i / N )    (1)

where (x_t, y_t) denotes the position of a target color and (x_i, y_i) denotes the position of the i-th symbol. Note that the value of (x_t, y_t) is closer to the true target color when N is increasing in the probability sense. Here, N is the number of total LEDs of the LED array.
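Equation (1) is a plain centroid over the constellation points. A minimal sketch, using hypothetical constellation points (not values from the paper):

```python
# Equation (1): the target color (x_t, y_t) is the average of the N
# constellation points (x_i, y_i) in the CIE1931 chromaticity plane.
def target_color(points):
    """Return (x_t, y_t), the centroid of the constellation points."""
    n = len(points)
    x_t = sum(x for x, _ in points) / n
    y_t = sum(y for _, y in points) / n
    return x_t, y_t

# Four hypothetical constellation points placed symmetrically around
# a wanted lighting color of roughly (0.33, 0.33):
pts = [(0.30, 0.33), (0.36, 0.33), (0.33, 0.30), (0.33, 0.36)]
print(target_color(pts))  # approximately (0.33, 0.33)
```

Because the points are arranged symmetrically around the target color, as in the circular QAM-like constellation of Figure 4, the centroid recovers the wanted lighting color.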

Figure 4. Example of a circle-type constellation diagram in the CIE1931 color space.

Figure 5 shows a received image example in the form of a lattice structure and the corresponding color distribution of the sliding search window area in the CIE1931 color space. In the experiment, we used a Windows 7 library image from Microsoft Corporation as a background. To select the ROI, we divided the image into grid units and used a sliding search window. The window consisting of four grids is indicated by the red box in Figure 5. This window is moved by the grid unit, and the color distribution of the sliding search window area is analyzed in the CIE1931 color space to determine whether the desired LEA exists at that location.


In a circle-type constellation diagram, it is important to note that the polygon formed by connecting the coordinate point of each symbol has a constant side ratio and a symmetry property. Using these properties, in this paper, we propose an LEA detection method that analyzes the color distribution of a sliding search window area in a CIE color space. To classify the samples distributed in the two-dimensional color space, the k-means clustering algorithm is used [17]. Then, we select the desired ROI by checking the side ratio and the symmetry property of the polygon generated after the center of each cluster has been connected.

Figure 5. Received image example in the form of a lattice structure and the corresponding color distribution of pixels within the sliding search window area in the CIE1931 color space.

Figure 6 shows examples for different locations of the sliding search window and the corresponding color distribution of a window area in the CIE1931 color space. We can see that the color is more uniformly distributed when the window overlaps more with the desired LEA. To analyze the color distribution, we use the k-means clustering algorithm [17]. In these examples, the value of k is 4, and a quadrangle can be formed by connecting the center of each cluster. The ROI for the desired LEA can be selected if the aspect ratio value of the quadrangle is below a threshold.
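The ROI test above can be sketched as follows: cluster the window's chromaticity samples with k-means (k = 4), connect the cluster centers into a quadrangle, and accept the window when the side-length ratio is close to 1. The sample values, the plain-Python k-means, the fixed initialization, and the 1.2 threshold are all illustrative assumptions rather than the paper's exact settings.

```python
# ROI test: k-means (k = 4) on the window's chromaticity samples, then
# check the side ratio of the quadrangle through the cluster centers.
import math
import random

def kmeans(samples, init, iters=20):
    """Plain k-means on 2-D points with a fixed initialization."""
    centers = list(init)
    k = len(centers)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in samples:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        centers = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def side_ratio(centers):
    """Max/min side length of the quadrangle through four centers."""
    cx = sum(x for x, _ in centers) / 4.0
    cy = sum(y for _, y in centers) / 4.0
    ordered = sorted(centers, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    sides = [math.dist(ordered[i], ordered[(i + 1) % 4]) for i in range(4)]
    return max(sides) / min(sides)

# Hypothetical window: noisy samples around four symbol chromaticities.
sym = [(0.30, 0.33), (0.36, 0.33), (0.33, 0.30), (0.33, 0.36)]
rng = random.Random(1)
samples = [(x + rng.uniform(-0.005, 0.005), y + rng.uniform(-0.005, 0.005))
           for x, y in sym for _ in range(25)]
centers = kmeans(samples, [samples[0], samples[25], samples[50], samples[75]])
ratio = side_ratio(centers)
print(ratio < 1.2)  # near-unity side ratio: accept the window as an ROI
```

A window that only partially overlaps the LEA mixes background chromaticities into the clusters, which distorts the quadrangle and pushes the side ratio above the threshold.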



Figure 6. Examples of different locations of the sliding search window and the corresponding color distribution of pixels within a window area in the CIE1931 color space.

2.2. LEA Detection Using the Harris Corner Method

The LEA configuration, as shown in Figure 3, has a rectangular shape and an intensity difference between the inside and the outside of the LEA. (This characteristic may be used to identify the LEA.) Because a quadrangle has four vertices, in this paper, we extract the vertices for LEA detection using the Harris corner method [12]. The Harris corner method improves upon Moravec's corner detector by considering the differential of the corner score with respect to direction directly [12]. Here, the corner score is often referred to as the autocorrelation. For calculating the corner score, the sum of the squared differences is defined as in Equation (2):

S(u, v) = \sum_{x, y} w(x, y) [ I(x + u, y + v) - I(x, y) ]^2    (2)

where w(x, y) denotes the window at position (x, y), I(x, y) is the intensity at (x, y), and I(x + u, y + v) is the intensity at the moved window (x + u, y + v). Equation (2) can be expressed in matrix form as in Equation (3):

S(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} \left( \sum_{x, y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \right) \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}    (3)

where and are the partial derivatives of I. Then, we can obtain the score for each window using Equation where Ix (4): and Iy are the partial derivatives of I. Then, we can obtain the score for each window using Equation (4): 2 S     k 1   2 2 (4) SCC “ λ11 λ22 ´ k pλ (4) 1 ` λ2 q





where arethe the eigenvalues eigenvalues of of the the matrix in Equation Equation (3), (3), and where λ1 and and λ2 are matrix M in and kk is is aa tunable tunablesensitivity sensitivity parameter. andλ have have large positive values, then the position ( can , ) be can be identified as a px, yq parameter. If If λ1 and large positive values, then the position identified as a corner. 2 corner.
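The computation in Equations (2)-(4) can be sketched in a short pure-Python routine (illustrative only; a practical implementation would typically call an optimized library routine such as OpenCV's cornerHarris). The uniform box window for w(x, y) and the central-difference derivatives are assumptions of this sketch, not the paper's exact settings.

```python
def harris_score(img, k=0.04, win=1):
    """Score each pixel with S_C = det(M) - k*trace(M)^2 (Equation (4)).

    img: 2-D list of intensities. win: half-size of the window w(x, y),
    taken here as a uniform box. Border pixels are left at zero.
    """
    h, w = len(img), len(img[0])
    # Central-difference partial derivatives Ix, Iy of the intensity I.
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    score = [[0.0] * w for _ in range(h)]
    for y in range(win + 1, h - win - 1):
        for x in range(win + 1, w - win - 1):
            # Entries of M (Equation (3)) summed over the window:
            # a = sum Ix^2, b = sum Ix*Iy, c = sum Iy^2.
            a = b = c = 0.0
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    a += gx * gx
                    b += gx * gy
                    c += gy * gy
            # det(M) = l1*l2 and trace(M) = l1 + l2, so Equation (4)
            # needs no explicit eigendecomposition.
            score[y][x] = a * c - b * b - k * (a + c) ** 2
    return score
```

At an LEA vertex both eigenvalues of M are large, so the score is strongly positive; along an edge one eigenvalue dominates and the score goes negative; in flat regions it is near zero.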

Sensors 2016, 16, 1027

7 of 12

2.3. LEA Tracking with Kalman Filtering

The Kalman filter is used to estimate the location of the LEA in the received image when the relative position between the camera and the LEA varies. In addition, the smoothing effect of the Kalman filter improves the tracking result by reducing the uncertainty of the measurement noise [18]. Filtering also helps to recover corners of the LEA that are momentarily missed; for example, the LEA might be misrecognized as another similar square object or be obscured by obstacles.

We use a discrete-time Kalman filter to predict the motion of the LEA in the received image plane. To apply the Kalman filter to LEA tracking, we select the corner points of the LEA as the variables of the Kalman filter; therefore, the state vector x_k and the measurement vector y_k at time step k are defined as in Equation (5). Figure 7 presents a detailed overview of the discrete-time Kalman filter operation [13]. In Figure 8, we can see the four corners of the LEA, which are used as the variables of the Kalman filter.

x_k = [x, y, v_x, v_y], \quad y_k = [x, y]   (5)

Figure 7. Discrete-time Kalman filter loop.

Figure 8. Four corners of LEA used as the variables of the Kalman filter.
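A minimal discrete-time Kalman filter for the state of Equation (5) can be sketched as follows. Because the constant-velocity model decouples the x and y axes, the sketch tracks a single coordinate with state [position, velocity]; running one such filter per coordinate of each corner reproduces the four-corner tracker described above. The class name and the noise variances q and r are hypothetical choices, not the paper's tuned values.

```python
class CV1DKalman:
    """Constant-velocity Kalman filter for one coordinate of one LEA corner.

    State is [position, velocity]; only the position is measured,
    matching Equation (5). q and r are illustrative process and
    measurement noise variances.
    """
    def __init__(self, pos0, dt=1.0, q=1e-3, r=4.0):
        self.x = [pos0, 0.0]             # state estimate [p, v]
        self.P = [[r, 0.0], [0.0, 1.0]]  # estimate covariance
        self.dt, self.q, self.r = dt, q, r

    def predict(self):
        """Time update: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]."""
        dt, q = self.dt, self.q
        p, v = self.x
        self.x = [p + dt * v, v]
        P = self.P
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        self.P = [[p00, p01], [p10, P[1][1] + q]]
        return self.x[0]

    def update(self, z):
        """Measurement update with H = [1, 0]: gain K = P H^T / (H P H^T + R)."""
        s = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        resid = z - self.x[0]
        self.x = [self.x[0] + k0 * resid, self.x[1] + k1 * resid]
        p00, p01 = self.P[0][0], self.P[0][1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01]]
        return self.x[0]
```

When a corner is momentarily missed (occlusion or misdetection), calling predict() without a subsequent update() still yields a plausible corner location, which is exactly the recovery behavior described above.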

2.4. Perspective Projection for Correcting Image Distortion

If LEA detection and tracking are successfully implemented, we must then make a symbol (color) decision for each LED. Because the camera is used as a receiver, the received image can be distorted, which adversely affects the SER performance. The distorted shape of the LEA is not suitable for determining the symbol for each LED inside the square LEA using image processing.


To correct the distorted shape of the LEA, perspective projection is used in the final step of our proposed method, as shown in Figure 2. Geometrical distortions such as scaling, rotation, skewing, and perspective distortion are very common transformation effects. Each distortion is represented by a linear transformation, which is well investigated in linear algebra [19]. These transformations can be performed using simple matrix multiplication, as shown in Equation (6):

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} a_1 & a_2 & b_1 \\ a_3 & a_4 & b_2 \\ c_1 & c_2 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}   (6)

where \begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \end{pmatrix} is a rotation matrix. This submatrix defines the kind of transformations that will be performed: scaling, rotation, and so on. (b_1, b_2) is the translation vector; it simply moves the points in the x-y plane. (c_1, c_2) is the projection vector. Here, x and y are the source points on the received image, and x' and y' are the destination points of the transformed coordinates.

Finally, given the four corner points of the LEA in the image, the perspective projection can be applied to correct the distorted LEA, as shown in Figure 9.

Figure 9. Perspective projection to correct the distorted array.
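Given the four tracked corner points, the eight unknown coefficients of Equation (6) can be fitted by solving a small linear system, as sketched below in pure Python (hypothetical function names; OpenCV's getPerspectiveTransform performs the equivalent computation).

```python
def solve_perspective(src, dst):
    """Fit the coefficients of Equation (6) from four point pairs.

    src: four (x, y) corners of the distorted LEA; dst: the four corners
    of the upright square they should map to. Returns the eight unknowns
    (a1, a2, b1, a3, a4, b2, c1, c2); the ninth matrix entry is fixed to 1.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (a1*x + a2*y + b1) / (c1*x + c2*y + 1), likewise for v,
        # rearranged into a linear equation in the eight unknowns.
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v]); b.append(v)
    n = 8
    for col in range(n):  # Gaussian elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    h = [0.0] * n  # back substitution
    for r in range(n - 1, -1, -1):
        h[r] = (b[r] - sum(A[r][c] * h[c] for c in range(r + 1, n))) / A[r][r]
    return h

def warp_point(h, x, y):
    """Apply the fitted transform of Equation (6) to one source point."""
    a1, a2, b1, a3, a4, b2, c1, c2 = h
    w = c1 * x + c2 * y + 1.0
    return (a1 * x + a2 * y + b1) / w, (a3 * x + a4 * y + b2) / w
```

Warping every pixel of the detected quadrilateral with warp_point yields the rectified square LEA on which the per-LED decision areas can be placed reliably.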

3. Results and Discussion


In an actual experiment, a commercial camera (Logitech's webcam) was used as a receiver, and the resolution of a received image is 640 × 480 (VGA resolution). We used the OpenCV library to implement our proposed algorithm. Table 1 shows the parameters of the transmitter used in the simulation. The size of the LEA is 4 × 4 and the number of symbols (or constellation points) is four. We transmit a total of 16,000 symbols to ensure the reliability of the received performance result and compute the SER. We used the CIE1931 color-space-based constellation shown in Figure 4.

Table 1. Simulation parameters.

Parameter                                          Value
Color space                                        CIE1931
RGB model                                          CIE RGB
Reference white                                    E
LED array size                                     4 × 4 (16)
Number of constellation points                     4
Intensity (Y value)                                0.165
Total number of symbols transmitted                16,000
Three positions of RGB LEDs in the CIE1931 space   R: (0.0735, 0.265), G: (0.274, 0.717), B: (0.167, 0.009)

In our simulation, we assumed that the LEA on the display device moved horizontally with a constant velocity and that color distortion did not occur during transmission. We deliberately added only a Gaussian error value to the corner points. Therefore, we consider only the errors that occur in the process of projecting a target region for the symbol decision. Figure 10 shows the tracking

results for a horizontally moving LEA. The centroid of the LEA was tracked over 100 frames to verify the performance of the Kalman filter. In the figure, "GT" indicates the ground truth, which represents the true location of the moving LEA. From Figure 10a, it can be seen that the LEA is moving from left to right. From Figure 10b, it also can be seen that the LEA moves horizontally, because the vertical movement is very small. The results show that the position of the moving LEA can be tracked better when the Kalman filter is applied.

Figure 10. Tracking results for a horizontally moving LEA. (a) x plot vs. frame number; (b) y plot vs. frame number.

Figure 11. Perspective projected results for the detected LEA. (a) Distorted received image; (b) Perspective projected results.

Figure 11 shows the correction results for the detected LEA after perspective projection. In the figure, the small yellow box inside the LED represents the area used for the symbol (or color) decision. Although LEA detection and correction were successfully performed, the symbol (or color) decision might be difficult because of image distortion, which causes the yellow box to be placed outside the LED region. However, if perspective projection is performed, the decision area is determined correctly and the SER performance can be improved.

Figure 12 presents the improvement in SER performance when we use a Kalman filter with perspective correction.

Figure 12. SER performance comparison with and without Kalman filtering.
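To make the SER computation concrete, the following toy Monte Carlo sketch transmits symbols from a four-point constellation and applies a nearest-point decision under Gaussian observation error. The constellation coordinates and noise levels here are illustrative assumptions, not the paper's CIE1931 constellation or channel model; sigma stands in for all residual distortion left after detection, tracking, and perspective correction.

```python
import random

def simulate_ser(n_symbols=16000, sigma=0.02, seed=7):
    """Estimate SER for a nearest-point decision over four constellation points.

    pts are hypothetical xy-plane coordinates (the paper uses a CIE1931
    color-space-based constellation). Each symbol is perturbed by Gaussian
    error and decided by minimum squared distance.
    """
    pts = [(0.20, 0.25), (0.30, 0.60), (0.45, 0.40), (0.25, 0.45)]
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_symbols):
        tx = rng.randrange(4)                   # transmitted symbol index
        x = pts[tx][0] + rng.gauss(0.0, sigma)  # received coordinates
        y = pts[tx][1] + rng.gauss(0.0, sigma)
        rx = min(range(4),
                 key=lambda i: (x - pts[i][0]) ** 2 + (y - pts[i][1]) ** 2)
        errors += tx != rx
    return errors / n_symbols
```

Sweeping sigma reproduces the qualitative shape of an SER curve such as Figure 12: better corner tracking and perspective correction correspond to a smaller residual sigma and hence a lower SER.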

4. Conclusions

In this paper, we proposed an LEA detection and tracking method for a color-independent visual-MIMO system utilizing image processing techniques that are applicable to various applications. To increase the reliability of LEA detection, we selected the ROI using color-space-based analysis of the received image with a complex background. Next, the LEA with a square shape was detected using the Harris corner method. Furthermore, we used a Kalman filter to better track the moving LEA, which can be disturbed by obstacles. Finally, to facilitate the color (or symbol) decision for each LED, the perspective projection process was performed on the distorted image. Experimental results show that our proposed method provides reliable LEA detection and tracking. Furthermore, SER performance is improved when using perspective projection with a Kalman filter.

Supplementary Materials: The following are available online at http://www.mdpi.com/1424-8220/16/7/1027/s1. Figure S1: The illustration of LEA Detection and Tracking Method for Color-independent Visual-MIMO.
Acknowledgments: This research was supported by the Basic Science Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Education (2015R1D1A1A01061396) and was also supported by the National Research Foundation of Korea Grant funded by the Ministry of Science, ICT, and Future Planning (2015R1A5A7037615).

Author Contributions: Jai-Eun Kim designed and performed the main experiments and wrote the manuscript. Ji-Won Kim performed the experiments and analyzed the experimental results. Ki-Doo Kim, as the corresponding author, initiated the idea, supervised the whole process of this research, and wrote the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.


Abbreviations

The following abbreviations are used in this manuscript:

BER    bit error rate
GCM    generalized color modulation
CSBM   color-space-based modulation
LEA    light emitting array
LED    light emitting diode
LOS    line-of-sight
MIMO   multiple-input multiple-output
ROI    region of interest
SER    symbol error rate
SNR    signal-to-noise ratio
V2V    vehicle-to-vehicle
V2X    vehicle-to-everything
VLC    visible light communication

References

1. Ashokz, A.; Gruteserz, M.; Mandayamz, N.; Dana, K. Characterizing Multiplexing and Diversity in Visual MIMO. In Proceedings of the IEEE 45th Annual Conference on Information Sciences and Systems, Baltimore, MD, USA, 23–25 March 2011.
2. Yuan, W.; Dana, K.; Ashok, A.; Varga, M.; Gruteser, M.; Mandayam, N. Dynamic and Invisible Messaging for Visual MIMO. In Proceedings of the IEEE Workshop on the Applications of Computer Vision, Breckenridge, CO, USA, 9–11 January 2012.
3. Nagura, T.; Yamazato, T.; Katayama, M.; Yendo, T. Tracking an LED Array Transmitter for Visible Light Communications in the Driving Situation. In Proceedings of the IEEE 7th International Symposium on Wireless Communication Systems, New York, NY, USA, 19–22 September 2010.
4. Nagura, T.; Yamazato, T.; Katayama, M.; Yendo, T. Improved Decoding Methods of Visible Light Communication System for ITS using LED Array and High-Speed Camera. In Proceedings of the 2010 IEEE 71st Vehicular Technology Conference, Taipei, Taiwan, 16–19 May 2010.
5. Yoo, J.-H.; Jung, S.-Y. Cognitive Vision Communication Based on LED Array and Image Sensor. In Proceedings of the IEEE 56th International Midwest Symposium on Circuits and Systems, Columbus, OH, USA, 4–7 August 2013.
6. Premachandra, H.C.N.; Yendo, T.; Tehrani, M.P.; Yamazato, T.; Okada, H.; Fujii, T.; Tanimot, M. High-speed-camera Image Processing Based LED Traffic Light Detection for Road-to-Vehicle Visible Light Communication. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, 21–24 June 2010.
7. Yamazato, T.; Takai, I.; Okada, H.; Fujii, T. Image-sensor-based visible light communication for automotive applications. IEEE Commun. Mag. 2014, 52, 88–97. [CrossRef]
8. Takai, I.; Harada, T.; Andoh, M.; Yasutomi, K.; Kagawa, K.; Kawahito, S. Optical Vehicle-to-Vehicle Communication System Using LED Transmitter and Camera Receiver. IEEE Photonics J. 2014, 6. [CrossRef]
9. Li, T.; An, C.; Xiao, X.; Campbell, A.T.; Zhou, X. Real-Time Screen-Camera Communication Behind Any Scene. In Proceedings of the 13th Annual International Conference on Mobile Systems, Florence, Italy, 18–22 May 2015.
10. Kim, J.-E.; Kim, J.-W.; Park, Y.; Kim, K.-D. Color-Space-Based Visual-MIMO for V2X Communication. Sensors 2016, 16, 898–901. [CrossRef] [PubMed]
11. Das, P.; Kim, B.Y.; Park, Y.; Kim, K.D. Color-independent VLC based on a color space without sending target color information. Opt. Commun. 2013, 286, 69–73. [CrossRef]
12. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the 4th Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988.
13. Brown, R.G.; Hwang, P.Y.C. Introduction to Random Signals and Applied Kalman Filtering with Matlab Exercises, 4th ed.; Wiley: New Jersey, NJ, USA, 2012.
14. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004.
15. Das, P.; Park, Y.; Kim, K.-D. Performance Analysis of Color-Independent Visible Light Communication Using a Color-Space-Based Constellation Diagram and Modulation Scheme. Wirel. Pers. Commun. 2014, 74, 665–682. [CrossRef]
16. Berns, R.S. Principles of Color Technology, 3rd ed.; Wiley: New Jersey, NJ, USA, 2000; pp. 44–62.
17. Shi, N.; Liu, X.; Guan, Y. Research on k-means Clustering Algorithm: An Improved k-means Clustering Algorithm. In Proceedings of the 2010 3rd International Symposium on Intelligent Information Technology and Security Informatics, Jinggangshan, China, 2–4 April 2010.
18. Soo, S.T.; Thomas, B. A Reliability Point and Kalman Filter-Based Vehicle Tracking Technique. In Proceedings of the International Conference on Intelligent Systems, Penang, Malaysia, 19–20 May 2012.
19. Affine and Projective Transformations. Available online: http://www.graphicsmill.com/docs/gm5/Transformations.htm (accessed on 23 February 2016).

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
