Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area

Jun-Hyuck Im 1, Sung-Hyuck Im 2 and Gyu-In Jee 1,*

1 Department of Electronic Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 05029, Korea; [email protected]
2 Satellite Navigation Team, Korea Aerospace Research Institute, 169-84 Gwahak-ro, Yuseong-gu, Daejeon 305-806, Korea; [email protected]
* Correspondence: [email protected]; Tel.: +82-2-450-3070

Academic Editor: Felipe Jimenez
Received: 31 March 2016; Accepted: 4 August 2016; Published: 10 August 2016

Abstract: Tall buildings are concentrated in urban areas. The outer walls of buildings are erected vertically from the ground and are almost flat. Therefore, vertical corners, where two vertical planes meet, are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted by using the light detection and ranging (LIDAR) sensor. A vertical corner feature based precise vehicle localization method is proposed in this paper and implemented using 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increment output from the iterative closest point (ICP) algorithm based on the geometric relations between the scan data of the 3D LIDAR. The vertical corner is extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the prebuilt corner map with the extracted corner. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m.

Keywords: precise vehicle localization; vertical corner feature; urban area; corner map; 3D LIDAR

1. Introduction

Precise vehicle localization has become important for the safe driving of autonomous vehicles. Generally, a Global Positioning System (GPS) is most often used for position recognition. However, the position accuracy of the GPS is not reliable in urban areas. To solve this problem, techniques for recognizing position by integrating several sensors (e.g., GPS, Inertial Measurement Unit (IMU), vision, light detection and ranging (LIDAR), etc.) have been continuously researched. Kummerle et al. [1] presented an autonomous driving system in a multi-level parking structure. This approach utilizes multi-level surface maps to localize a self-driving car based on 3D LIDAR. Brenner [2] presented a vehicle localization approach using the typical roadside pole obtained by 2D LIDAR. Baldwin and Newman [3] and Chong et al. [4] presented a vehicle localization method with 2D push-broom LIDAR and 3D priors. In particular, [4] utilized LIDAR accumulation with a 3D rolling window. Hu et al. [5] and Choi and Maurer [6,7] presented hybrid map-based localization methods. The topological-metric hybrid map of [5] includes the lane marking and a 2D occupancy grid map. Also, the grid-feature hybrid map of [6,7] includes the typical roadside pole and a 2D occupancy grid map. Thus, many methods have been researched for vehicle localization. However, these methods did not provide a complete solution for an urban area.

Recently, a localization method using a Velodyne LIDAR based road intensity map has been researched. Levinson et al. [8,9] and Hata and Wolf [10] presented an intensity map based localization method using the Velodyne LIDAR. Wolcott and Eustice [11] presented a visual localization method within a LIDAR intensity map.

The LIDAR based road intensity map matching method has the advantage of very high localization accuracy. The camera based road intensity map matching method has the advantage of being able to be used with an already mounted low cost camera. However, both methods have the disadvantage of not being usable when the vehicle is in congested traffic in an urban area. Also, the data file size of the road intensity map is very large. Most recently, Wolcott and Eustice [12] presented a new scan matching algorithm that leverages Gaussian mixture maps. This solution is able to overcome the limitations of a road intensity map (harsh weather and poor road surface texture) by exploiting the structure.

Because tall buildings in urban areas obstruct the GPS satellite signal reception, the position accuracy of the GPS is very poor. However, buildings in urban areas can be used to generate a landmark map for precise vehicle localization. This map's information can be represented in various forms (e.g., 2D/3D occupancy grid map, 3D point cloud, 2D corner feature, 2D contour (line), 3D vertical surface, etc.). We focused on the outer wall of a building, which is erected vertically from the ground and almost flat. The vertical corners where these walls meet can provide very good landmarks and can also be extracted using the LIDAR. In contrast to the use of road markings, corners can be extracted during traffic congestion periods. A significant amount of research has been carried out on the corner detection method using LIDAR [13–17]. In addition, a large number of papers have been presented on the position estimation of a mobile robot using detected corner points. However, most of these papers focused on the indoor localization of the mobile robot [18–21], and research on vehicle localization using vertical corner extraction based on LIDAR in an urban area has not been presented.

In this paper, we used corner feature information for precise vehicle localization in an urban area. In indoor environments, corridor walls can be clearly scanned by 2D LIDAR [18–21]. This is because there is almost no object covering the corridor wall. Therefore, the corner can be easily extracted by 2D LIDAR. However, in urban environments, corner extraction using only 2D LIDAR is difficult. Because of the many roadside trees in urban areas, LIDAR can scan only a part of the building. Also, if the outer wall of the building is made of glass, the laser cannot be reflected. In this case, by using 3D LIDAR, the false positive rate can be reduced by combining the multiple corner extraction results from each layer.

The corner map contains information describing the geometrical properties of the vertical corners of the corresponding buildings. The vehicle position error can be corrected through corner map matching. This corner map has the advantage of having a small data file size. The Extended Kalman Filter (EKF) is used to estimate the vehicle position. The vehicle motion is updated using the output from the ICP algorithm based on the geometric relations between the scan data of the 3D LIDAR. The vertical corner is extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the prebuilt corner map with the extracted corner. The experiment was carried out in the Gangnam area of Seoul, South Korea.

The remaining part of this paper is organized as follows. In Section 2, the vertical corner feature is defined and the corner extraction method is proposed.
The procedure for the generation of the vertical corner map is described in Section 3. In Section 4, the Kalman filter configuration and the observability analysis are described. The vehicle driving test results are shown in Section 5. Finally, the conclusion is given in Section 6.

2. Corner Extraction

In general, a corner is a position formed by the interconnection of two converging lines or surfaces. The outer wall of a building is almost flat, forming a vertical plane perpendicular to the ground. Therefore, the vertical corner can be a distinctive and useful point landmark for map based vehicle localization in an urban area. In general, each building has at least four vertical corners, and these can be extracted using LIDAR.

2.1. Corner Definition

In this paper, the vertical corner is projected onto the ground plane and treated as a point feature on the 2D horizontal plane. Figure 1 shows the corner definition.

Figure 1. Corner definition.

As shown in Figure 1, the corner consists of a corner point and two corner lines. Figure 2 shows the attributes of the corner.

Figure 2. Attribute of the corner feature.

The corner point is represented by a 2D horizontal position in the east-north-up (ENU) frame. The corner line has a directional property of the corresponding building. Therefore, as shown in Figure 2, the corner line is represented by the directional angles (θ1, θ2) in the ENU frame.

2.2. Corner Extraction

In this paper, the vehicle localization utilizes the vertical corner feature of the building. Therefore, we only use the data of the eight upper layers of the 3D LIDAR [22]. The scanned data is processed for each layer.

2.2.1. Line Extraction

In order to extract the corner, the line must be accurately extracted. Several line extraction methods have previously been presented [23–26]. Among these methods, the Iterative-End-Point-Fit (IEPF) algorithm shows the best performance in terms of accuracy and computational time [27]. Figure 3 shows the line extraction result using the IEPF algorithm.

As shown in Figure 3, the laser scanning points are divided into many line segments.


Figure 3. Line extraction result using the IEPF algorithm.
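To make the splitting step concrete, the following is a minimal Python sketch of the IEPF recursion, not the authors' implementation; the 0.1 m distance threshold is an assumed tuning value.

```python
import numpy as np

def point_line_distance(points, start, end):
    """Perpendicular distance of each 2D point to the line through start and end."""
    d = end - start
    n = np.hypot(d[0], d[1])
    if n == 0.0:
        return np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    # 2D cross product of (point - start) with the segment direction, over its length
    return np.abs(d[0] * (points[:, 1] - start[1]) - d[1] * (points[:, 0] - start[0])) / n

def iepf(points, threshold=0.1):
    """Split an angle-ordered 2D scan into line segments (Iterative-End-Point-Fit).

    points: (N, 2) array ordered by scan angle; returns a list of (i, j) index
    pairs, each delimiting one extracted line segment."""
    segments, stack = [], [(0, len(points) - 1)]
    while stack:
        i, j = stack.pop()
        if j - i < 2:
            segments.append((i, j))
            continue
        dist = point_line_distance(points[i + 1:j], points[i], points[j])
        k = i + 1 + int(np.argmax(dist))
        if dist[k - i - 1] > threshold:
            # the farthest interior point exceeds the tolerance: split there and recurse
            stack.extend([(i, k), (k, j)])
        else:
            segments.append((i, j))
    return segments
```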

2.2.2. Outlier Removal

In Figure 3, most of these line segments are a set of scan points from the leaves of the roadside trees. Therefore, the laser data reflected by the leaves of roadside trees must be removed. Figure 4 shows the laser scanning points reflected by the leaves and the outer wall of the building.

Figure 4. Laser scanning points reflected by the leaves of roadside trees and the outer wall of the building.

As shown in Figure 4, the features of the laser scanning points that are reflected by the two types of objects are clearly distinguished. For roadside trees, the variance of the distance errors between the extracted line and each point is very large. On the other hand, for the outer wall of a building, the variance of the distance errors is very small. Figure 5 shows the pseudo code for outlier removal.

Figure 5. Pseudo code for outlier removal.
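Since the pseudo code itself is shown only as a figure, the following is a hedged Python sketch of the variance test it describes, reusing point_line_distance from the IEPF sketch above; the 0.01 m² variance threshold is an assumed value.

```python
import numpy as np

def remove_tree_outliers(segments, points, var_threshold=0.01):
    """Keep only the segments whose point-to-line distance variance is small.

    segments: list of (i, j) index pairs from iepf(); points: (N, 2) scan.
    Scatter from leaves gives a large variance, while a flat wall gives a
    small one, so low-variance segments are kept as wall candidates."""
    walls = []
    for i, j in segments:
        d = point_line_distance(points[i:j + 1], points[i], points[j])
        if np.var(d) < var_threshold:
            walls.append((i, j))
    return walls
```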

Through the process of the pseudo code, the outliers such as the roadside trees are removed. Figure 6 shows the line extraction result after the outlier removal.

Figure 6. Line extraction result (after outlier removal).

In Figure 6, the green points represent the line segments that are finally extracted.

2.2.3. Corner Candidate Extraction

The corner candidates are extracted according to the corner definition of Section 2.1: a candidate is formed where two extracted lines converge, as sketched below. Figure 7 shows the extracted corner candidate.
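A minimal sketch of forming one candidate from two wall segments; the line-intersection test and the 30° minimum crossing angle are illustrative assumptions, since the paper does not spell out this step.

```python
import numpy as np

def corner_candidate(seg_a, seg_b, min_angle_deg=30.0):
    """Intersect two wall line segments into a corner candidate.

    seg_a, seg_b: ((x1, y1), (x2, y2)) segment endpoints in the ENU frame.
    Returns (corner_point, theta1, theta2) or None for near-parallel lines."""
    p1, p2 = np.asarray(seg_a, float)
    p3, p4 = np.asarray(seg_b, float)
    d1, d2 = p2 - p1, p4 - p3
    theta1 = np.degrees(np.arctan2(d1[1], d1[0]))   # direction of corner line 1
    theta2 = np.degrees(np.arctan2(d2[1], d2[0]))   # direction of corner line 2
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    sin_angle = abs(cross) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    if np.degrees(np.arcsin(min(sin_angle, 1.0))) < min_angle_deg:
        return None                                 # intersection ill-conditioned
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / cross
    return p1 + t * d1, theta1, theta2
```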

Figure 7. Extracted corner candidate (black star and lines).

As shown in Figure 7, the corner point (black star) and the corner lines (black lines) are extracted as the corner candidate.

2.2.4. Corner Determination

By using 3D LIDAR, the false positive rate can be reduced through the redundant corner candidates from each layer. Figure 8 shows the 3D position of the corner candidates extracted from the eight layers and the corner determination result.

Figure 8. (a) 3D position of corner candidates; (b) Corner determination result.

In Figure 8a, the multiple corner candidates from each layer can be found. Generally, the outer wall of the building is erected vertically to the ground. Thus, as shown in Figure 8b, the 2D positions of the corner candidates extracted from one vertical corner are approximately the same. The corner candidates that exist within a certain area in the 2D east-north frame are classified into the same corner. Then, if the number of corner candidates within that area is greater than a predetermined number (redundancy check), the corner is determined. In Figure 8b, the position and angle of the determined corner (red) are the average values of the corner candidates (black) for the corresponding corner. A sketch of this clustering and redundancy check is given below.
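A minimal Python sketch of the determination step; the 0.5 m grouping radius and the minimum count of 2 are assumed values (the paper only specifies "a certain area" and "a predetermined number", and notes that single-layer detections are excluded).

```python
import numpy as np

def determine_corners(candidates, radius=0.5, min_count=2):
    """Cluster per-layer corner candidates and apply the redundancy check.

    candidates: (M, 4) array of (east, north, theta1, theta2), one row per
    per-layer detection. Candidates within `radius` metres of each other in
    the east-north plane are grouped; a group is accepted as a corner only
    if it contains at least `min_count` candidates, and the determined
    corner is the average of the group."""
    cand = np.asarray(candidates, float)
    used = np.zeros(len(cand), dtype=bool)
    corners = []
    for i in range(len(cand)):
        if used[i]:
            continue
        near = np.linalg.norm(cand[:, :2] - cand[i, :2], axis=1) < radius
        group = np.where(near & ~used)[0]
        used[group] = True
        if len(group) >= min_count:
            # naive averaging of positions and angles; angle wrap-around omitted
            corners.append(cand[group].mean(axis=0))
    return corners
```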

2.2.5. Corner Extraction Result

Figure 9 shows the final corner extraction result.

Figure 9. Corner extraction result.

In Figure 9, the extracted line segments are represented in green, and the objects (outliers) such as the roadside trees are represented in yellow. The stars represent the corner candidates, and the red stars represent the final extracted corners. As shown in Figures 8 and 9, all of the black stars were extracted from only one layer, and these were excluded by the redundancy check.

3. Corner Map Generation

A corner map is essential for precise vehicle localization. By using the corner map, the vehicle position can be estimated very accurately.

3.1. Corner Map Definition

Basically, a corner map must have the 2D positions of the corners. However, if the corner map only has the position information of the corners, problems can arise in the data association process. Figure 10 shows an extraction result of the two nearest corners.

In Figure 10, the red circles represent the positions of the corners in the corner map, and the red stars represent the extracted corners; the distance between the two corners is approximately 2.5 m. In this situation, when the position error of the vehicle is more than 1 m from the true position, the possibility of data association failure is greatly increased. Therefore, the direction angle information of the two corner lines is added to the corner map (see Figure 2). The two corners shown in Figure 10 can be clearly identified by using the direction angle information of the corner lines.


Finally, to determine the quality of the extracted corner information, the position error covariance of the corner is added to the corner map. Table 1 shows an example of a corner map.

Figure 10. Extraction result of the two nearest corners.

Table 1. An example of a corner map.

Index | East (m) | North (m) | Direction Angle 1 (Degree) | Direction Angle 2 (Degree) | Covariance Matrix (2 × 2)
1 | 10 | 10 | 132 | −13 | [0.0042, 0.0009; 0.0009, 0.0021]
2 | 20 | 30 | 45 | −13 | [0.0029, −0.001; −0.001, 0.0015]
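The paper notes below the table that the map is saved as a text file; the following is an illustrative Python reader for such a file. The comma-separated column layout is an assumption, as the authors' exact format is not specified.

```python
import numpy as np

def load_corner_map(path):
    """Load a corner map text file laid out like Table 1.

    Assumed format, one corner per line:
    index, east, north, angle1_deg, angle2_deg, c11, c12, c21, c22"""
    corners = []
    with open(path) as f:
        for line in f:
            v = [float(t) for t in line.split(",")]
            corners.append({
                "index": int(v[0]),
                "position": np.array(v[1:3]),          # east, north (m)
                "angles": (v[3], v[4]),                # corner line directions (deg)
                "cov": np.array(v[5:9]).reshape(2, 2), # position error covariance
            })
    return corners
```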

As shown in Table 1, the corner map information is saved as a text file. Based on a driving path of about 2 km, the number of extracted corners is 271 and the text file size is 28 KB. The corner map therefore has the advantage of being able to cover a large area with a very small data file size.

3.2. Corner Map Generation

The experiment was carried out in the Gangnam area of Seoul, South Korea. Figure 11 shows the vehicle trajectory and the street view at the four intersections.

As shown in Figure 11, the experimental environment is a dense urban area surrounded by high buildings, and two laps were driven from the starting point to the finish point. The traveling distance was about 4.5 km and the maximum traveling speed was about 40 km/h. The position and heading of the vehicle were acquired by using the integrated system of the Real-Time Kinematic (RTK) GPS and Inertial Navigation System (INS) (NovAtel RTK/SPAN system). In an environment where there are many high buildings, the position error of the RTK/INS is about 1–2 m. Therefore, the vehicle trajectory must be corrected to generate the corner map. The vehicle trajectory was optimized by using the GraphSLAM method [28,29]. In this paper, this optimized vehicle trajectory was considered as the ground truth. Figure 12 shows the graph optimization result of the vehicle trajectory.


Figure 11. Vehicle trajectory and street view (four intersections).

As shown at the top left of Figure 12, the intensity maps for the two laps do not match. On the right of Figure 12, the red points represent the corrected vehicle trajectory after the graph optimization. As shown at the bottom left of Figure 12, the intensity map matches exactly. Here, the incremental pose information output by the ICP algorithm was used as the edge measurement of the graph. The theory and principles of the GraphSLAM method are well described in [28,29]. Thus, obtaining the optimized vehicle trajectory is possible by using GraphSLAM.

Figure 12. Graph optimization result of the vehicle trajectory using GraphSLAM.

The corner map is generated based on the ground truth. In addition, by applying the corner extraction method described in Section 2, the data of the corners extracted from each vehicle position are collected. Next, clustering is performed to classify the data belonging to the same corner. Also, the mean position, mean direction angle, and position error covariance of the corner for each cluster are calculated. Finally, to increase the reliability of the corner map, the corners with a large position error covariance and a small extraction count are excluded from the corner map. Figure 13 shows the final generated corner map information displayed on the synthetic map of the 2D occupancy grid and road intensity maps.


Figure 13. Generated corner map information (on synthetic map).

As shown in Figure 13, many corners were extracted, and false positives did not occur.

3.3. Data Association with the Corner Map

First, the extracted corners need to be associated with the corner map data. The data association considers the position error covariance of the vehicle. Figure 14 shows the data association process between the extracted corners and the corner map.

Figure 14. Data association with the corner map.



In Figure 14, the red circle indicates the position of the corner map and the red star shows the position of the extracted corner. The red lines represent the directional angles of the corner lines, and the magenta ellipse represents the position uncertainty of the corner caused by the position error covariance of the vehicle. As shown in Figure 14, when the red circle appears in the elliptic region and the difference between the directional angles of the map and the extracted corner is less than a predetermined angle value, the two corners are determined to be the same corner.
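A hedged Python sketch of this gate, reusing the dictionary layout from the load_corner_map sketch above: the elliptic region is modelled as a chi-square gate on the Mahalanobis distance, and the 5.99 gate (95%, 2 DOF) and 10° angle threshold are assumed values standing in for the paper's "elliptic region" and "predetermined angle value".

```python
import numpy as np

def associate(corner, corner_map, P_vehicle, chi2_gate=5.99, angle_gate_deg=10.0):
    """Associate one extracted corner with a corner map entry, or return None.

    corner: length-4 array (east, north, theta1, theta2) of the extracted
    corner; P_vehicle: 2x2 position error covariance of the vehicle."""
    corner = np.asarray(corner, float)
    best, best_d2 = None, chi2_gate
    for m in corner_map:
        S = P_vehicle + m["cov"]                    # combined 2D uncertainty
        e = corner[:2] - m["position"]
        d2 = e @ np.linalg.solve(S, e)              # squared Mahalanobis distance
        dth = abs((corner[2] - m["angles"][0] + 180.0) % 360.0 - 180.0)
        if d2 < best_d2 and dth < angle_gate_deg:   # inside ellipse and angle gate
            best, best_d2 = m, d2
    return best
```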

4. Kalman Filter Configuration and Observability Analysis

In this paper, the general 2D point landmark-based positioning technique is used to estimate the vehicle position [30]. The landmarks are the vertical corners of the buildings in the street.

4.1. Motion Update

The motion update process is predicted with the translation vector (T) and the rotation matrix (R) from the ICP algorithm based on the range data of the 3D LIDAR. The state variables are defined as follows.

$$q = (x, y, \theta)^T, \quad u = (T, R)^T \qquad (1)$$

where $x$ and $y$ are the horizontal (2D) positions of the vehicle in the ENU frame, and $\theta$ is the heading angle of the vehicle. $T$ and $R$ refer to the increments of the 2D position and the heading angle, respectively. The state equation is as follows.

$$q_{k+1} = F_k q_k + G_k u_k, \quad F_k = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad G_k = \begin{bmatrix} \cos\theta_k & -\sin\theta_k & 0 \\ \sin\theta_k & \cos\theta_k & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (2)$$
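As a minimal sketch (not the authors' code), the prediction in Equation (2) amounts to rotating the body-frame ICP translation into the ENU frame and accumulating it:

```python
import numpy as np

def motion_update(q, u):
    """Propagate the state q = (x, y, theta) with one ICP increment (Equation (2)).

    u: (tx, ty, dtheta), the 2D translation (vehicle frame) and heading
    increment output by the ICP; covariance propagation is omitted here."""
    x, y, th = q
    tx, ty, dth = u
    # G_k rotates the body-frame translation into the ENU frame
    return np.array([x + np.cos(th) * tx - np.sin(th) * ty,
                     y + np.sin(th) * tx + np.cos(th) * ty,
                     th + dth])
```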

In Equation (2), the vehicle motion is updated by accumulating the results of the ICP.

4.2. Measurement Update

The range and bearing measurements between the vehicle and the corners are used for the measurement update. The measurement equations are as follows.

$$r_i = \sqrt{(x_i - x)^2 + (y_i - y)^2} \qquad (3)$$

$$\alpha_i = \tan^{-1}\left(\frac{y_i - y}{x_i - x}\right) - \theta \qquad (4)$$

where $r_i$ and $\alpha_i$ are the range and bearing measurements between the vehicle and the $i$-th corner, respectively, and $x_i$ and $y_i$ are the 2D position of the $i$-th corner in the corner map. The measurement matrix $H$ is as follows.

$$H = \begin{bmatrix} \dfrac{-(x_1 - x)}{\sqrt{(x_1 - x)^2 + (y_1 - y)^2}} & \dfrac{-(y_1 - y)}{\sqrt{(x_1 - x)^2 + (y_1 - y)^2}} & 0 \\[2ex] \dfrac{y_1 - y}{(x_1 - x)^2 + (y_1 - y)^2} & \dfrac{-(x_1 - x)}{(x_1 - x)^2 + (y_1 - y)^2} & -1 \\[1ex] \vdots & \vdots & \vdots \\[1ex] \dfrac{-(x_n - x)}{\sqrt{(x_n - x)^2 + (y_n - y)^2}} & \dfrac{-(y_n - y)}{\sqrt{(x_n - x)^2 + (y_n - y)^2}} & 0 \\[2ex] \dfrac{y_n - y}{(x_n - x)^2 + (y_n - y)^2} & \dfrac{-(x_n - x)}{(x_n - x)^2 + (y_n - y)^2} & -1 \end{bmatrix} \qquad (5)$$
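A minimal sketch of one EKF update with Equations (3)-(5) for a single associated corner (not the authors' implementation; the measurement noise R_meas is a free parameter):

```python
import numpy as np

def measurement_update(q, P, z, landmark, R_meas):
    """One EKF range-bearing update against a map corner (Equations (3)-(5)).

    q, P: state (x, y, theta) and its 3x3 covariance; z: measured (range,
    bearing); landmark: (xi, yi) map corner position; R_meas: 2x2 noise."""
    x, y, th = q
    xi, yi = landmark
    dx, dy = xi - x, yi - y
    r2 = dx * dx + dy * dy
    r = np.sqrt(r2)
    h = np.array([r, np.arctan2(dy, dx) - th])       # predicted measurement
    H = np.array([[-dx / r,  -dy / r,   0.0],        # one row pair of Equation (5)
                  [ dy / r2, -dx / r2, -1.0]])
    v = np.asarray(z, float) - h
    v[1] = (v[1] + np.pi) % (2 * np.pi) - np.pi      # wrap the bearing residual
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    return q + K @ v, (np.eye(3) - K @ H) @ P
```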

4.3. Observability Analysis

The observability of the vehicle state estimation using the corner point measurement is required to be verified. This observability of the vehicle state is analyzed in the algebraic framework [31]. First, the state evolution model is as follows.

$$\dot{x} = \|T\| \cdot \cos\theta, \quad \dot{y} = \|T\| \cdot \sin\theta, \quad \dot{\theta} = R \qquad (6)$$

where $\|\cdot\|$ is the Euclidean norm and $R$ is a $1 \times 1$ matrix. Expressing Equation (4) with respect to $\theta$ gives:

$$\theta = \tan^{-1}\left(\frac{y_i - y}{x_i - x}\right) - \alpha_i \qquad (7)$$

By taking the derivative with respect to time of Equation (7), we obtain Equation (8).

$$\dot{\theta} = \frac{1}{1 + \left(\frac{y_i - y}{x_i - x}\right)^2} \left\{ -\dot{y} \cdot \frac{1}{x_i - x} + \dot{x} \cdot \frac{y_i - y}{(x_i - x)^2} \right\} \qquad (8)$$

By substituting Equation (6) into Equation (8), we obtain the following equation.

$$R = \frac{1}{1 + \left(\frac{y_i - y}{x_i - x}\right)^2} \left\{ -\|T\| \cdot \sin\theta \cdot \frac{1}{x_i - x} + \|T\| \cdot \cos\theta \cdot \frac{y_i - y}{(x_i - x)^2} \right\} \qquad (9)$$

By substituting Equation (7) into Equation (9), we obtain Equation (10).

$$R = \frac{\|T\| \cdot \left[ \frac{y_i - y}{x_i - x} \cdot \cos\left\{\tan^{-1}\left(\frac{y_i - y}{x_i - x}\right) - \alpha_i\right\} - \sin\left\{\tan^{-1}\left(\frac{y_i - y}{x_i - x}\right) - \alpha_i\right\} \right]}{(x_i - x)\left\{1 + \left(\frac{y_i - y}{x_i - x}\right)^2\right\}} \qquad (10)$$

In Equations (3) and (10), $r_i$ is the range measurement, $T$ and $R$ are the input values from the ICP, $x_i$ and $y_i$ represent the position information from the corner map, and $x$ and $y$ are unknowns. Since there are two equations and two unknown variables, it is possible to calculate $x$ and $y$. Therefore, $x$ and $y$ are observable. In Equation (7), since $\alpha_i$ is a measurement, $\theta$ is also observable. From this observability analysis, if at least one corner feature can be extracted while the vehicle is moving, the vehicle position can be estimated from the corner map matching.


5. Experimental Results

In this section, we analyze the experimental results by comparing the corner map matching, the RTK/INS, and the ICP based Dead Reckoning (DR). We also analyze the causes of the position error when the corner map matching is used.

Figure 15 shows the vehicle trajectory for each case. The trajectory for the corner map matching is almost the same as the ground truth. In the case of the RTK/INS, the trajectory has a small offset error with respect to the ground truth. The trajectory for the ICP based DR significantly differs from the ground truth. Figure 16 shows the accumulated position error that occurs when using only the ICP based DR without the corner map matching. As is well known, the position error of the ICP based DR is divergent. This accumulated position error can be removed by using the corner map matching. Figure 17 shows the result of the corner map matching.

Figure 15. Vehicle trajectory for each case.

Figure 16. Position error (using only the ICP based DR).


Figure 17. Position error (corner map matching and RTK/INS).

In Figure 17, the blue line represents the position error of the RTK/INS, and the red line represents the position error of the corner map matching. The maximum position error of the RTK/INS is about 1.2 m. Here, the position error of the RTK/INS at the start point is very small because the vehicle trajectory is optimized based on the start point. On the other hand, the maximum position error of the corner map matching is about 0.46 m. The 2D RMS horizontal errors of the RTK/INS and the corner map matching are about 0.85 m and 0.138 m, respectively. In Figure 18, it can be seen that the heading error is very small. The RMS heading error of the corner map matching is about 0.168°. Figure 19 shows the Cumulative Distribution Function (CDF) of the horizontal vehicle position error.


Figure 18. Heading error (corner map matching).


Figure 19. CDF of the horizontal vehicle position error.

The corner map matching has localization accuracies of about 0.25 m and 0.33 m at the 95% and 99% confidence levels, respectively. Figure 20 shows the CDF of the 2D RMS horizontal errors for each corner (271 corners) and the horizontal position errors of all the extracted corners (13,260 corner extraction data).
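For reference, these confidence-level figures are just percentiles of the per-epoch horizontal error; a minimal sketch, where the error log file name is hypothetical:

```python
import numpy as np

errors = np.loadtxt("horizontal_errors.txt")   # hypothetical per-epoch errors (m)
p95, p99 = np.percentile(errors, [95, 99])     # 95%/99% confidence levels
rms_2d = np.sqrt(np.mean(errors ** 2))         # 2D RMS horizontal error
```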


Figure 20. CDF of the 2D RMS horizontal errors for each corner and the horizontal position errors of all extracted corners.

In Figure 20, the horizontal position accuracies of all the extracted corners are about 0.27 m and about 0.45 m at the 95% and 99% confidence levels, respectively. Each corner has 2D RMS horizontal errors of about 0.26 m and 0.32 m at the 95% and 99% confidence levels, respectively. As shown in Figures 19 and 20, the corner map matching error is very small, since the corner measurement is accurate. Also, the CDF of the 2D RMS error for each corner shows that the position error for each corner is affected by the characteristics of the corresponding corner.

As shown in Figures 17 and 19, the corner map matching errors are greater than 0.33 m in several regions. These errors are related to the number of extracted corners, the distance from the vehicle to the corner, and the corner shape. First, let us consider the number of extracted corners, as shown in Figure 21.

Figure 21 shows that there are less than five extracted corners in most areas. Since there are many roadside trees in urban areas, it is impossible to extract a large number of corners. Furthermore, if the building has glass walls, the corner cannot be extracted. Figure 22 shows the relationship between the localization error and the number of extracted corners. On the left of Figure 22, a large localization error occurs in the epochs from 1950 to 2020. On the right of Figure 22, it can be seen that barely any corner was extracted at the same epochs. When no corner is extracted, the position error of the ICP based DR is accumulated. However, as shown in Figure 22, the accumulated position error is reduced as soon as a corner is extracted. Figure 23 shows the vehicle trajectory with the position error covariance at the same epochs.


Figure 21. Number of extracted corners.


Figure 22. Relationship between the localization error and the number of extracted corners.


Figure 23. Vehicle trajectory with the position error covariance (the blue trajectory is the ground truth; the red trajectory is the estimated trajectory; the red ellipses represent the vehicle position error covariance; the green lines connect the extracted corners and the vehicle position).

Figure 24. Relationship between the corner position error covariance and the distance from the vehicle to the corner (the red ellipses on the corner represent the position error covariance of the extracted corners).


At the bottom right of Figure 23, when the corner was extracted, the accumulated position error and the position error covariance were reduced. Here, the ellipse of the covariance is not of the actual size, because it was expanded for clarity. In this way, even by extracting only one corner, the vehicle state can be estimated, as demonstrated in Section 4.3. Of course, if more corners are extracted, the localization performance can be improved.

Secondly, the localization accuracy is related to the distance from the vehicle to the corner. Figure 24 shows the relationship between the corner position error covariance and the distance from the vehicle to the corner.

As shown in Figure 24, the position accuracy of the extracted corner is most closely related to the distance from the vehicle to the extracted corner. The further the corner is from the vehicle, the greater the error covariance of the corner position. This is because, as the distance to the object increases, the ranging accuracy of the LIDAR decreases. Also, the horizontal resolution of the laser beam is 0.16°. If an object is 50 m from the vehicle, the position uncertainty of the reflected point is 0.14 m. Figure 25 shows that the 2D RMS error of the extracted corners increases in proportion to the distance. Thus, the measurement of the corner that is furthest from the vehicle could be the cause of the relatively large error.
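As a quick check of the 0.14 m figure quoted above, the cross-range uncertainty is approximately the arc length subtended by one beam spacing:

$$\Delta s \approx r \cdot \Delta\phi = 50\ \text{m} \times 0.16^\circ \times \frac{\pi}{180^\circ} \approx 0.14\ \text{m}$$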


Figure 25. 2D RMS error of the extracted corners with respect to each distance.

Finally, let us consider the corner shape. Figure 26 shows examples of good and poor corners for the corner shape.

In the good corner example, the two corner lines are perpendicular, and the error covariance is very small. The position accuracy of the extracted corner is related to the Dilution of Precision (DOP) between the two corner lines [32]. When the two corner lines are perpendicular, the DOP of the corner position is minimized. However, as shown at the bottom right of Figure 26, the position of the poor corner has a large uncertainty. Figure 27 shows the estimated vehicle trajectory in a situation affected by a poor corner measurement with a large covariance.
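A simple proxy for this geometric conditioning (an illustrative assumption; the paper cites [32] for the precise DOP definition) is 1/|sin| of the crossing angle, which is minimal at 90° and blows up as the lines approach parallel:

```python
import numpy as np

def corner_dop(theta1_deg, theta2_deg):
    """Illustrative DOP-style conditioning measure for a corner.

    The corner point is the intersection of two lines, and the solution
    degrades as they approach parallel; 1/|sin(crossing angle)| captures
    this and is 1.0 (best) for perpendicular corner lines."""
    s = abs(np.sin(np.radians(theta1_deg - theta2_deg)))
    return np.inf if s < 1e-9 else 1.0 / s
```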


Figure 26. Examples of good and poor corners for the corner shape (the red ellipses on the corner represent the position error covariance of the extracted corners).

Figure 27. Estimated vehicle trajectory in the situation affected by the poor corner measurement (the blue trajectory is the ground truth).

As shown on the right of Figure 27, the corner map matching error caused by the poor corner measurement is relatively large. These errors are about 0.3 m.


So far, the vertical corner feature based precise vehicle localization has been described. The results show that the corner map matching performance is determined by the accuracy of the extracted corner. The corner is a very good landmark which can be extracted accurately. Therefore, the vehicle position can be estimated accurately. The results also show that the corner map matching method can guarantee very good localization accuracy in an urban area. The performance can be further improved by accurately extracting more corners.

6. Conclusions

In this paper, we proposed the vertical corner feature based precise vehicle localization method using 3D LIDAR in an urban area. First, we defined a corner map, which was generated using the proposed corner extraction method described in Section 3. After generating the corner map, we estimated the vehicle position using corner map matching. In the experimental results, the maximum and 2D RMS horizontal errors obtained using the corner map matching were about 0.46 m and about 0.138 m, respectively.

The vertical corner feature based precise vehicle localization method has several advantages. First, the corner map matching performance is very good, with a 2D RMS horizontal error of about 0.138 m. Second, in contrast to the road intensity map based localization methods, the corner map matching method can be used under traffic congestion. Third, the data file size of the corner map is very small compared to that of the road intensity map. The calculation time of the corner map matching is also very short compared to the other map matching methods.

As previously mentioned, the accuracy of the corner measurements and the number of extracted corners are very important for precise vehicle localization. Accordingly, a method for extracting more corners with higher accuracy should be investigated in the future.

Author Contributions: J.H. Im proposed the vertical corner feature based precise vehicle localization method using 3D LIDAR and the corner map definition; S.H. Im provided advice for improving the quality of this research; G.I. Jee supervised the research work; J.H. Im wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Kummerle, R.; Hahnel, D.; Dolgov, D.; Thrun, S.; Burgard, W. Autonomous Driving in a Multi-level Parking Structure. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009.
2. Brenner, C. Vehicle Localization Using Landmarks Obtained by a LIDAR Mobile Mapping System. Proc. Photogramm. Comput. Vis. Image Anal. 2010, 38, 139–144.
3. Baldwin, I.; Newman, P. Road vehicle localization with 2D push-broom LIDAR and 3D priors. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012.
4. Chong, Z.J.; Qin, B.; Bandyopadhyay, T.; Ang, M.H., Jr.; Frazzoli, E.; Rus, D. Synthetic 2D LIDAR for Precise Vehicle Localization in 3D Urban Environment. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013.
5. Hu, Y.; Gong, J.; Jiang, Y.; Liu, L.; Xiong, G.; Chen, H. Hybrid Map-Based Navigation Method for Unmanned Ground Vehicle in Urban Scenario. Remote Sens. 2013, 5, 3662–3680. [CrossRef]
6. Choi, J.; Maurer, M. Hybrid Map-based SLAM with Rao-Blackwellized Particle Filters. In Proceedings of the 2014 17th International Conference on Information Fusion (FUSION), Salamanca, Spain, 7–10 July 2014.
7. Choi, J. Hybrid Map-based SLAM using a Velodyne Laser Scanner. In Proceedings of the 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–10 October 2014.
8. Levinson, J.; Thrun, S. Robust Vehicle Localization in Urban Environments Using Probabilistic Maps. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010.
9. Levinson, J.; Montemerlo, M.; Thrun, S. Map-Based Precision Vehicle Localization in Urban Environments. In Robotics: Science and Systems; MIT Press: Cambridge, MA, USA, 2007.
10. Hata, A.; Wolf, D. Road Marking Detection Using LIDAR Reflective Intensity Data and its Application to Vehicle Localization. In Proceedings of the 2014 IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014.
11. Wolcott, R.W.; Eustice, R.M. Visual Localization within LIDAR Maps for Automated Urban Driving. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), Chicago, IL, USA, 14–18 September 2014.
12. Wolcott, R.W.; Eustice, R.M. Fast LIDAR Localization using Multiresolution Gaussian Mixture Maps. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015.
13. Li, Y.; Olson, E.B. Extracting General Purpose Features from LIDAR Data. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010.
14. Chee, Y.W.; Yang, T.L. Vision and LIDAR Feature Extraction; Technical Report; Cornell University: Ithaca, NY, USA, 2013.
15. Park, Y.S.; Yun, S.M.; Won, C.S.; Cho, K.E.; Um, K.H.; Sim, S.D. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353. [CrossRef] [PubMed]
16. Hadji, S.E.; Hing, T.H.; Khattak, M.A.; Ali, M.S.M.; Kazi, S. 2D Feature Extraction in Sensor Coordinates for Laser Range Finder. In Proceedings of the 15th International Conference on Robotics, Control and Manufacturing Technology (ROCOM ’15), Kuala Lumpur, Malaysia, 23–25 April 2015.
17. Yan, R.; Wu, J.; Wang, W.; Lim, S.; Lee, J.; Han, C. Natural Corners Extraction Algorithm in 2D Unknown Indoor Environment with Laser Sensor. In Proceedings of the 2012 12th International Conference on Control, Automation and Systems, Jeju Island, Korea, 17–21 October 2012.
18. Siadat, A.; Djath, K.; Dufaut, M.; Husson, R. A Laser-Based Mobile Robot Navigation in Structured Environment. In Proceedings of the 1999 European Control Conference (ECC), Karlsruhe, Germany, 31 August–3 September 1999.
19. Vazquez-Martin, R.; Nunez, P.; Bandera, A.; Sandoval, F. Curvature-Based Environment Description for Robot Navigation Using Laser Range Sensors. Sensors 2009, 9, 5894–5918. [CrossRef] [PubMed]
20. Lee, S.Y.; Song, J.B. Mobile Robot Localization Using Range Sensors: Consecutive Scanning and Cooperative Scanning. Int. J. Control Autom. Syst. 2005, 3, 1–14.
21. Zhang, X.; Rad, A.B.; Wong, Y.K. Sensor Fusion of Monocular Cameras and Laser Rangefinders for Line-Based Simultaneous Localization and Mapping (SLAM) Tasks in Autonomous Mobile Robots. Sensors 2012, 12, 429–452. [CrossRef] [PubMed]
22. Velodyne LiDAR. Velodyne HDL-32E, User’s Manual and Programming Guide; LIDAR Manufacturing Company: Morgan Hill, CA, USA, 2012.
23. Arras, K.O.; Siegwart, R. Feature Extraction and Scene Interpretation for Map-Based Navigation and Map Building. In Proceedings of the Symposium on Intelligent Systems and Advanced Manufacturing, Pittsburgh, PA, USA, 14–17 October 1997; Volume 3210, pp. 42–53.
24. Borges, G.A.; Aldon, M.J. Line Extraction in 2D Range Images for Mobile Robotics. J. Intell. Robot. Syst. 2004, 40, 267–297. [CrossRef]
25. Siadat, A.; Kaske, A.; Klausmann, S.; Dufaut, M.; Husson, R. An Optimized Segmentation Method for a 2D Laser-Scanner Applied to Mobile Robot Navigation. In Proceedings of the 3rd IFAC Symposium on Intelligent Components and Instruments for Control Applications, Annecy, France, 9–11 June 1997.
26. Harati, A.; Siegwart, R. A New Approach to Segmentation of 2D Range Scans into Linear Regions. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007.
27. Nguyen, V.; Martinelli, A.; Tomatis, N.; Siegwart, R. A Comparison of Line Extraction Algorithms Using 2D Laser Rangefinder for Indoor Mobile Robotics. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005.
28. Grisetti, G.; Kummerle, R.; Stachniss, C.; Burgard, W. A Tutorial on Graph-Based SLAM. IEEE Intell. Transp. Syst. Mag. 2010, 2, 31–43. [CrossRef]
29. Sunderhauf, N.; Protzel, P. Towards a Robust Back-End for Pose Graph SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012.
30. Hu, H.; Gu, D. Landmark-based Navigation of Industrial Mobile Robots. Int. J. Ind. Robot 2000, 27, 458–467. [CrossRef]
31. Tao, Z.; Bonnifait, P. Road Invariant Extended Kalman Filter for an Enhanced Estimation of GPS Errors using Lane Markings. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015.
32. Kelly, A. Precision dilution in triangulation based mobile robot position estimation. Intell. Auton. Syst. 2003, 8, 1046–1053.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
