sensors Article

3D Tracking via Shoe Sensing

Fangmin Li 1,2,*, Guo Liu 2, Jian Liu 3, Xiaochuang Chen 2 and Xiaolin Ma 2

1 Department of Mathematics and Computer Science, Changsha University, Changsha 410022, China
2 Key Laboratory of Fiber Optical Sensing Technology and Information Processing, Ministry of Education, School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China; [email protected] (G.L.); [email protected] (X.C.); [email protected] (X.M.)
3 Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA; [email protected]
* Correspondence: [email protected]; Tel.: +86-731-8426-1249

Academic Editors: Mianxiong Dong, Zhi Liu, Anfeng Liu and Didier El Baz Received: 6 September 2016; Accepted: 20 October 2016; Published: 28 October 2016

Abstract: Most location-based services are based on a global positioning system (GPS), which only works well in outdoor environments. Compared to outdoor environments, indoor localization has attracted increasing attention in recent years, as people spend most of their time indoors, working at offices, shopping at malls, etc. Existing solutions mainly rely on inertial sensors (i.e., accelerometer and gyroscope) embedded in mobile devices, which are usually not accurate enough to be useful due to the mobile devices' random movements while people are walking. In this paper, we propose the use of shoe sensing (i.e., sensors attached to shoes) to achieve 3D indoor positioning. Specifically, a short-time energy-based approach is used to extract the gait pattern. Moreover, in order to improve the accuracy of vertical distance estimation while the person is climbing upstairs, a state classification is designed to distinguish the walking status, including plane motion (i.e., normal walking and jogging horizontally), walking upstairs, and walking downstairs. Furthermore, we also provide a mechanism to reduce the vertical distance accumulation error. Experimental results show that we can achieve nearly 100% accuracy when extracting gait patterns from walking/jogging with a low-cost shoe sensor, and can also achieve 3D indoor real-time positioning with high accuracy.

Keywords: indoor localization; 3D positioning; inertial sensor; walking state classification

1. Introduction

Nowadays, location-based service (LBS) has drawn considerable attention as it can provide a number of uses in various domains, such as entertainment, work, and personal life. Most LBS services, such as those in the conventional Garmin map and Google map, rely on GPS (Global Positioning System) to obtain accurate location information [1]. Although GPS location service has become very mature and can achieve high positioning accuracy, the performance of a GPS location service becomes poor in indoor environments due to a wide variety of physical signal blocks and potential sources of interference [2]. Thus, it is highly desirable to find an alternative that can provide stable positioning for indoor environments. Recently, infrared-based and WiFi-based indoor positioning solutions have been explored. In general, these existing solutions are mainly focused on positioning for 2D indoor LBS services. However, the complexity of building floor structures and the restriction of 2D indoor positioning cannot always meet the demands of indoor LBS services. For example, firefighters in a burning building cannot effectively distinguish their own indoor positions or which floor they are on, making it very hard for them to perform effective rescues. It is crucial for them to have a 3D indoor positioning service to obtain their real-time locations. Another example is tracking hospital patient services. A hospital patient may be in critical condition at any time and anywhere, and nurses and doctors
need to be able to know all patients' fine-grained indoor locations to provide immediate and effective treatments. Meanwhile, human activities are mainly concentrated in indoor environments, such as offices and malls, making the field of 3D indoor positioning/navigation a huge business opportunity. Therefore, how to achieve 3D indoor positioning with low-cost and non-invasive requirements has become a hot topic in recent years. Indoor positioning should consider not only the positioning accuracy but also the feasibility, resource consumption, and cost [3,4]. Although there are a number of solutions in this area, the indoor positioning problem has not been addressed satisfactorily due to a series of practical issues (e.g., device cost and deployment limits). Under such circumstances, it is important to develop a low-cost, low-power-consumption solution to achieve accurate 3D indoor positioning and tracking. Different from existing solutions, we propose the use of shoe sensors, which are low-cost inertial sensors attached to one of the user's shoes, to accurately localize the user in a 3D indoor environment. In this paper, we mainly focus on addressing the following three problems: (1) accurately extracting features from the shoe sensors according to the characteristics of human walking; (2) establishing a human walking state classification model, which can distinguish the user's walking status including normal walking, going upstairs, and going downstairs; and (3) relying on the walking model, reducing the accumulation of positioning errors while walking. Specifically, the following contributions are made in this work:

• We propose a solution using 3D shoe sensors, i.e., inertial sensors attached to the user's shoes, that can accurately localize the user in 3D indoor environments.
• A short-time energy-based mechanism has been proposed to extract gait information while the user is walking.
• We design a walking state classification model that can distinguish the user's walking status including normal walking, going upstairs, and going downstairs. The classified walking status can be further used to reduce 3D positioning errors.
• Extensive experiments demonstrate that the proposed low-cost shoe sensing-based 3D indoor positioning solution can perform real-time localization with high accuracy.

The remainder of this paper is organized as follows. We describe the related work in Section 2. In Section 3, we present the methodology of the proposed shoe sensing-based 3D tracking. We evaluate the performance of our system in Section 4. Finally, conclusions are given in Section 5.

2. Related Work

Recently, with the development of MEMS inertial sensor devices, inertial sensor-based navigation solutions are becoming more and more popular in indoor localization scenarios [5]. Specifically, inertial sensor-based navigation can be categorized as either stepping-based or strap-down-based navigation systems [6]. Stepping-based navigation systems use step information (e.g., the number of walking steps and step length) to detect pedestrians' positions [6–8]. For example, in [6] the users carry an inertial sensor in their pockets, and the system can calculate the inertial sensor's pitch angle and further estimate the step length. However, this method assumes that different people have the same step length, making it hard to achieve accurate localization in practice. Strap-down-based navigation systems integrate acceleration readings twice to get the walking distance. With the aid of a compass and gyroscope, the system can also capture the walking direction. Existing solutions can be divided into two categories: one is carrying the sensor at the user's waist, and the other is attaching sensors to the shoes. Inertial sensors fixed at the waist can be used to detect the vertical displacement of the user's pelvis and estimate the length of each step [9,10]. However, walking characteristics differ due to people's various heights, weights, ages, etc. In order to improve the accuracy of step length and step frequency estimation for walking, Shin et al. [11] use personal historical training
data to study the personal characteristics of each user. Due to the sensor's internal bias, there is an accumulation drift problem over time [1]. To address this problem, Shih et al. [1] use a mathematical model of simple harmonic motion to calculate each step's length. However, this method still does not work well due to the different height of each individual. There are also several studies using shoe sensors to estimate a pedestrian's position. According to the characteristics of human walking (e.g., the sensor velocity should be zero when the foot is on the ground), zero speed correction (zero-velocity update, ZUPT) [12–19] is proposed to correct the accumulation drift. However, these solutions mainly focus on 2D positioning. In addition, Li et al. [15,19,20] use a Kalman filter or complementary filter to calibrate the inertial sensor data, which has high algorithmic complexity. Moreover, Placer et al. [17] use a camera sensor attached to the shoes to eliminate measurement error and improve the localization accuracy. Although a camera-based solution can effectively improve the accuracy of step estimation, the camera-based sensor leads to complicated data acquisition and consumes more power. Madgwick et al. [21,22] use a gradient descent algorithm to eliminate gyroscope errors. Additionally, Nilsson et al. [14] use an expensive inertial sensor unit (i.e., over $700 per unit). Nilsson et al. [23–26] also use an expensive sensor placed under the heel, and such solutions mainly focus on 2D indoor localization. Ojeda et al. [27] proposed ZUPT-based 3D positioning, but this does not apply any amendment to fix sensor measurement errors. Additionally, some studies (e.g., [27–30]) use the inertial sensors of smartphones to complete indoor 2D positioning. However, these approaches rely on the aid of other methods such as WiFi to achieve better positioning performance. These approaches increase the complexity of the positioning system, and WiFi signals are not always available in indoor environments.

3. Methodology

3.1. Gait Information

The estimated distance error will keep accumulating due to the drift of inertial sensor readings [31]. According to the characteristics of human walking, we can eliminate the accumulated error of distance by using zero reference points from gait information. The zero reference points are the moments when the user's feet step on the floor while walking. The gait information can be derived by applying the short-time energy, which is usually applied to audio signals, to the acceleration. Also, we try to find a feasible position for fixing the sensors by comparing the energy of the signal.

3.1.1. Fixed Position Selection

The walking patterns and the gait information of every step should be clear from the sensor readings [32]. We observe that the walking pattern and the gait information are more stable with the sensors fixed on the foot compared to other places. Figure 1 shows an example of the energy of acceleration (i.e., x = √(accX² + accY² + accZ²) − g) on all three axes while a person is walking seven steps with the inertial sensors fixed in different positions (i.e., thigh, leg, and foot). We can see that the energy varies over the duration of every step. What is more, the energy of the acceleration will decrease to zero when the foot steps on the floor. Comparing Figure 1a,b with Figure 1c, we can see that the walking pattern of every step is much more stable with the sensor fixed on the foot. Meanwhile, the duration of zero-velocity points is longer than at the other two fixed positions. Therefore, we choose the foot as the fixed position of the sensors. While conducting experiments, we fixed the sensors as shown in Figure 2.

Figure 1. Amplitude of raw acceleration with the sensor fixed on different positions: (a) fixed on thigh; (b) fixed on leg; (c) fixed on foot.

Figure 2. Inertial sensor fixed on foot.

3.1.2. Gait Information Extraction

The gait information can be derived by comparing the energy fluctuation on all three axes of the accelerometer. We observe that the energy of acceleration varies along with the motion of the foot. That is to say, while the foot is in the air the energy of acceleration is high; while the foot is stepping on the floor the energy of acceleration decreases to zero. Therefore, we can extract the gait information by using a threshold on the short-time energy of the acceleration.

The acceleration of human walking varies on three axes. In order to take the energy fluctuation on all three axes into consideration, we use the amplitude of acceleration to extract the gait information. The amplitude can be calculated as follows:

x = √(accX² + accY² + accZ²) − g,

where accX, accY, accZ are the acceleration on the X, Y, and Z axis, respectively, and g is gravity. Therefore, the short-time energy signal of human walking can be derived as:

E_n = ∑_{m = n − N + 1}^{n} [x(m)·w(n − m)]²,

where n is the position of the center of the sliding window and N is the length of the window. N is critical for controlling the fluctuation of the energy. Figure 3 shows an example of the short-time energy with different sliding window lengths. We can see that the short-time energy wave is smoother with a longer window length.


Figure 3. Energy waveforms with different window sizes: (a) window size of 21; (b) window size of 31; (c) window size of 41; (d) window size of 51.

According to the foregoing conclusions, the short-time energy of human walking will decrease to zero while the foot steps on the floor. Thus we can extract the gait information by setting an energy threshold E_T = 0.05. The reference points can be determined as:

stationary(t) = { 1,   E(t) < E_T
               { 0,   E(t) ≥ E_T.

Figure 4 shows an example of gait information (i.e., reference points) extraction using our gait extraction method.


Figure 4. Waveform of gait signal: (a) waveform of energy and gait; (b) acceleration and gait.


3.2. Posture Correction Based on Gait Information

A quaternion is a hyper-complex number, which can be expressed as:

Q(q0, q1, q2, q3) = q0 + q1·i + q2·j + q3·k.

Figure 5 shows an example of coordinate system alignment with a quaternion. O is the reference coordinate system and O' is the coordinate system of the inertial sensor. In this paper, the system needs to derive the quaternion that describes the attitude of the inertial sensor relative to the reference coordinate system.

Figure 5. Reference coordinate system and sensor coordinate system.

While the foot is stepping on the floor, the acceleration data is stable. Thus we can estimate the initial posture of the sensors by using a method proposed in previous research [22]. Figure 6 shows the process of our posture initialization method. Supposing the initial quaternion of the reference coordinate system is q0 = [1, 0, 0, 0], we can calculate the gravitational acceleration vector a_R = [0, 0, 1]ᵀ of the reference coordinate system.


Figure 6. Posture initialization process.

Following this direction, since the quaternion rotation matrix can be derived as:

        | q0² + q1² − q2² − q3²    2(q1·q2 − q0·q3)          2(q1·q3 + q0·q2)       |
C_b^R = | 2(q1·q2 + q0·q3)         q0² − q1² + q2² − q3²     2(q2·q3 − q0·q1)       |
        | 2(q1·q3 − q0·q2)         2(q2·q3 + q0·q1)          q0² − q1² − q2² + q3²  |,

the gravity in the sensor coordinate system can be calculated by rotating the gravity in the reference coordinate system:

g_b = C_b^R · a_R = [ g_bx, g_by, g_bz ]ᵀ = [ 2(q1·q3 − q0·q2), 2(q2·q3 + q0·q1), q0² − q1² − q2² + q3² ]ᵀ.

The raw acceleration from sensor readings can be described as:

g_R = [ a_x, a_y, a_z ]ᵀ,

which can be normalized as:

g′_R = g_R / √(a_x² + a_y² + a_z²).


Therefore, we can derive the quaternion as:

Q_n(q_n0, q_n1, q_n2, q_n3) = v_n P,


where P is the solution process of the quaternion and v_n = a_R × g′_R is the vector product (n = 1, 2, 3, . . .). The calculated quaternion expression of Q approaches the acceleration of gravity, so we can calculate the gravitational acceleration vector g_b. The posture can be initialized by repeating the above steps.

The above calculation process is convergent; the estimated coordinate system of the inertial sensor will converge to the real attitude. Figure 7 shows an example of the convergent process of deriving the quaternion while conducting experiments. We find that our method converges in about 4 s (i.e., 400 points).
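The initialization loop can be illustrated with the short Python sketch below (our own code, not the authors'; the Mahony-style small-rotation update, the gain, and the iteration budget are assumptions). It repeatedly predicts gravity in the sensor frame from the current quaternion, forms the vector-product error against the normalized accelerometer reading, and nudges the quaternion until the two directions agree:

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product of two quaternions given as (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q
    p0, p1, p2, p3 = p
    return np.array([
        q0 * p0 - q1 * p1 - q2 * p2 - q3 * p3,
        q0 * p1 + q1 * p0 + q2 * p3 - q3 * p2,
        q0 * p2 - q1 * p3 + q2 * p0 + q3 * p1,
        q0 * p3 + q1 * p2 - q2 * p1 + q3 * p0,
    ])

def gravity_in_sensor_frame(q):
    """Predicted gravity direction g_b in the sensor frame (the vector derived above)."""
    q0, q1, q2, q3 = q
    return np.array([
        2 * (q1 * q3 - q0 * q2),
        2 * (q2 * q3 + q0 * q1),
        q0 ** 2 - q1 ** 2 - q2 ** 2 + q3 ** 2,
    ])

def initialize_posture(acc_static, iterations=400, gain=2.0, dt=0.01):
    """Iteratively align the quaternion with one (averaged) accelerometer sample
    taken while the foot rests on the floor, starting from q0 = [1, 0, 0, 0]."""
    q = np.array([1.0, 0.0, 0.0, 0.0])
    a = np.asarray(acc_static, dtype=float)
    a = a / np.linalg.norm(a)                   # normalized raw acceleration g'_R
    for _ in range(iterations):
        v = gravity_in_sensor_frame(q)          # predicted gravity g_b
        e = np.cross(a, v)                      # vector-product error between the two
        # Apply the error as a small corrective rotation (assumed update rule).
        q = q + 0.5 * quat_mult(q, np.array([0.0, *(gain * e)])) * dt
        q = q / np.linalg.norm(q)               # keep the quaternion normalized
    return q
```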


Figure 7. Convergent process: (a–d) describe the convergent process of the four elements of the quaternion.

Madgwick et al. [22] use a gradient manner to eliminate the direction error. Following this method, the acceleration while the foot is stepping on the floor can be used as a reference value to estimate the error between the sensor coordinate system and the reference coordinate system, since it is more stable [22]. Thus, we can correct the drift error of the gyroscope by using the estimated error.


Figure 8. Posture estimate based on gait information.

Figure 8 shows the workflow of gait information-based posture estimation. The gait information-based posture estimation and the attitude initialization are similar. The vector product of g_b and g′_R is the angular velocity error e. The larger e is, the greater the angular velocity error will be. Combined with the gait information, the gyroscope angular velocity can be expressed as:

gyro = gyro  Kp  e  stationary , where gyro is the angular velocity vector, Kp is the gain error coefficient of system. We use the fourthorder Runge–Kutta method to update the quaternion. The differential of quaternion is defined as: where gyro is the angular velocity vector, Kp is the gain error coefficient the of system. We use the fourthorder Runge–Kutta method to update the quaternion. The differential of the quaternion is defined as: 1

Sensors 2016, 16, 1809

8 of 24

gyro′ = gyro − Kp · e · stationary,

where gyro is the angular velocity vector and Kp is the error gain coefficient of the system. We use the fourth-order Runge–Kutta method to update the quaternion. The differential of the quaternion is defined as:

Q′(t) = ½ w(t) ∗ Q(t).

At time t0 it is:

Q(t0) = Q0.

The corresponding differential equation is:

[ q0′(t) ]        [  0   −wx  −wy  −wz ]   [ q0(t) ]
[ q1′(t) ]  = ½ · [  wx   0    wz  −wy ] · [ q1(t) ]
[ q2′(t) ]        [  wy  −wz   0    wx ]   [ q2(t) ]
[ q3′(t) ]        [  wz   wy  −wx   0  ]   [ q3(t) ].

The quaternion can be derived by using the fourth-order Runge–Kutta method as follows:

Q_{n+1} = Q_n + (h/6)·(k1 + 2k2 + 2k3 + k4)

k1 = f(w_n, Q_n)

k2 = f(w_n + h/2, Q_n + (h/2)·k1)

k3 = f(w_n + h/2, Q_n + (h/2)·k2)

k4 = f(w_n + h, Q_n + h·k3),

where wx, wy, wz are the raw angular velocities of the inertial sensor and h is the actual sampling interval. Updating the quaternion in real time will gradually cause it to lose its unit-norm property, so we must normalize the quaternion as follows:

q_i = q̂_i / √(q̂0² + q̂1² + q̂2² + q̂3²)   (i = 0, 1, 2, 3),

where q̂0, q̂1, q̂2, q̂3 are the updated quaternion values.

3.3. Eliminate Cumulative Error Based on Gait Information

The gait information can not only be used as the basis of gyroscope error elimination, but can also be the reference point for eliminating cumulative error. As shown in Figure 9, following the idea of Yun and Han et al. [20,33], the accumulated error in acceleration can be eliminated based on the zero-velocity reference point and the linear drift characteristics of inertial sensors.


Figure 9. Accumulated error elimination.


In our experiments, we fix the sensor on the foot of the user with its x axis along the user's toe direction, which is also the moving direction of the user. Thus, the moving speed of the user can be expressed as:

v(T) = v(0) + (1/k) ∑_{i=0}^{Tk} acc_x(i),

where k is the sample rate of the sensors. In our experiments, we found that the sampling rate can range from 40 to 100 Hz without affecting the system performance significantly. This range covers the maximum accelerometer sampling rate of most current smartphones. Therefore, we focus on other factors that are apropos to the performance of the system and set the sample rate to 100 Hz during our experiments. As shown in Figure 10, the accumulated velocity error from t_a to t_b can be calculated as follows:

v_e = ∆v2 − ∆v1,

then the gradient of the accumulated velocity error during this period is:

ê = (∆v2 − ∆v1) / (t_b − t_a).


Figure 10. Gait-based cumulative error elimination.

According to previous work [33], if the gradient of the accumulated velocity error during this period is constant, then the velocity at time T can be derived as:

V(T) = V(a·k − 1) + ∑_{i=a·k}^{b·k} [ (1/k)·acc_x(i) − ê·(i − a·k)/k ].

When the foot is landing, the velocity should be zero:

V(a·k − 1) = 0,

then the velocity can be expressed as:

V(T) = { ∑_{i=a_n·k}^{b_n·k} [ (1/k)·acc_x(i) − ê·(i − a_n·k)/k ],   stationary = 1
       { 0,                                                          stationary = 0,

where ê is the accumulated error gradient of each step, a_n is the start time of every step, and b_n is the ending time of every step. The human walking distance can be calculated by integrating the corrected velocity V(T). Assuming the initial distance is zero, the distance on a certain axis can be expressed as:


S(T) = (1/k) ∑_{i=0}^{T·k} V(i).
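The per-step correction of Section 3.3 can be summarized in a few lines. The sketch below (our illustration; variable names and the 100 Hz rate are assumptions) integrates the forward acceleration of one step, removes the linearly accumulated velocity error between the two bounding zero-velocity points, and integrates the corrected velocity into a step distance:

```python
import numpy as np

def corrected_step_distance(acc_x, fs=100):
    """De-drift one step between two zero-velocity reference points.

    acc_x: forward (toe-direction) acceleration samples for one step, from the
    sample after the previous landing to the next landing. The velocity at both
    ends is known to be zero, so the residual at the end is treated as drift.
    """
    dt = 1.0 / fs
    v_raw = np.cumsum(acc_x) * dt                # v(T) = v(0) + (1/k) * sum acc_x, with v(0) = 0
    drift = v_raw[-1] * np.arange(1, len(acc_x) + 1) / len(acc_x)
    v_corr = v_raw - drift                       # remove the linearly accumulated error
    distance = np.sum(v_corr) * dt               # S(T) = (1/k) * sum V(i)
    return v_corr, distance
```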

3.4. Eliminate Vertical Distance Error Based on Gait Information

3.4.1. Build and Design a Model of State

In this section, we focus on how to distinguish between the body walking upstairs, walking downstairs, and moving along a plane (normal walking, jogging). We found an effective way to distinguish between the different walking patterns based on the characteristics of human walking. Figure 11 shows an example of the walking model while the user is walking upstairs and downstairs, respectively.


Figure 11. Walking state classification model: (a) downstairs; (b) upstairs.





EachEach stepstep can can be abstracted as as a normal which isis the thesum sumofofH Handand be abstracted a normalwalking walkingvector vector SS ,, which Z. Z . While walking in the horizontal canbebe expressed While walking in the horizontalplane, plane, θ can expressed as: as: →Z

  arccosZ θ = arccos , , →S S

θ = arccos(Z / S),

θ θ 0 = → ,    , H H where theangle anglechange change of of aa unit unit horizontal purpose of this method is to is to where the horizontaldistance. distance.The Themain main purpose of this method  isθ is eliminate the error caused by individual walking characteristics by normalizing the distance changes eliminate the error caused by individual walking characteristics by normalizing the distance changes on the z axis. on the z axis. Figure 12 shows an example of θ 0 when a user walks in different environments. We can observe  when Figure anwhen example of walks a auser walks in different environments. can observe that the 12 θ 0 isshows different the user along plane, upstairs, or downstairs. It is easy toWe distinguish 0   of that the the state is human different whenbythe user walks along a plane,θ ,upstairs, or downstairs. walking using a threshold of the average which is shown in Table 1. It is easy to  distinguish the state of human walking by using a threshold of the average  , which is shown in Table 1.


Figure 12. θ′ angle contrast.

Table 1. The average and standard deviation of θ′.

Types        Average Value (◦)    Standard Deviation (◦)
plane        3.935                6.435
upstairs     107.904              36.465
downstairs   −89.464              34.907
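A possible per-step classifier built on this model is sketched below (our reading, not the authors' code). Because the averages in Table 1 are signed, θ is implemented here as the signed elevation angle of the step vector (arcsin(Z/S)) so that downstairs steps come out negative; the numeric decision thresholds are assumptions placed between the average values of Table 1:

```python
import math

def classify_step(z, h):
    """Classify one step as plane movement, upstairs, or downstairs.

    z: height change of the step (m); h: horizontal displacement of the step (m).
    theta' = theta / H normalizes out individual step length, as in the text.
    """
    s = math.hypot(h, z)                           # |S| = sqrt(H^2 + Z^2)
    theta = math.degrees(math.asin(z / s))         # signed change angle on the z axis
    theta_prime = theta / h                        # angle change per unit horizontal distance

    if theta_prime > 50.0:                         # assumed cut-off toward the 107.9 average
        return "upstairs"
    if theta_prime < -40.0:                        # assumed cut-off toward the -89.5 average
        return "downstairs"
    return "plane"

# Example: a stair step rising 0.17 m over 0.28 m of horizontal travel -> "upstairs".
print(classify_step(0.17, 0.28))
```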

3.4.2. Eliminate Vertical Distance Error

Although we can eliminate the accumulated error at the speed level, the error in the vertical direction cannot be completely eliminated, which will lead to height calculation errors. Figures 13–15 show the height error from 100 sets of walking and running data, respectively. While the user is walking, the maximum vertical distance error is 0.291 m, the average absolute error is 0.1186 m, and the variance of the error is 0.0582 m.


Figure 13. The vertical distance error while walking: (a) 100 sets of walking data; (b) vertical distance error.

While the user is running, the maximum vertical distance error is 0.66 m, the average absolute error is 0.17 m, and the mean square deviation is 0.22 m.


Figure 14. The vertical distance error while running: (a) 100 sets of running data; (b) vertical distance error.


The error of indoor 3D localization is partly introduced by the accumulated error on the Z axis. Figure 15 shows an example of the displacement on the Z axis while the user walks on a flat floor. We can observe that the displacement on the Z axis accumulates after every step. In order to eliminate the foregoing accumulated error, we propose a strategy based on the average human walking pattern and the gait information.


Figure 15. Vertical error accumulation.

Figure 16 shows the workflow of our error elimination strategy. The start and end point of each step can be derived from the gait information. In addition, we can determine whether the user is walking on a flat floor by using the walking pattern model. Then we can eliminate the accumulated error on the Z axis through our strategy, as shown in Figure 17.


Figure 16. Plane vertical distance error elimination process.

Figure 17. The vertical distance error elimination.

The accumulated error from t_a to t_b is:

∆s_e = ∆s2 − ∆s1.

Then we can calculate the distance error gradient as:

ê_s = (∆s2 − ∆s1) / (t_b − t_a).

In this step, we assume the distance error gradient is constant, thus the moving distance on the Z axis while taking a step can be derived as:

S_z(T) = S_z(a·k − 1) + ∑_{i=a·k}^{b·k} [ V_z(i)/k − ê_s·(i − a·k)/k ],

and the moving distance on the Z axis can be estimated by using:

S_z(T) = { S_z(a_n·k − 1) + ∑_{i=a_n·k}^{b_n·k} [ V_z(i)/k − ê_s·(i − a_n·k)/k ],   stationary = 1
         { S_z(a_n·k − 1),                                                          stationary = 0.


4. Evaluation

4.1. Building a System Platform and Experimental Settings

4.1.1. Building a System Platform

The acquisition and network nodes are the main hardware modules in our system. The collection nodes focus on packing and sending data from a gyroscope and an accelerometer to the network node. Then the data will be delivered to the PC monitoring client via the serial port after parsing by the network node for further processing and displaying. Figure 18 shows the collection node and network node of our system. The size of the nodes is designed as 5 cm × 5 cm for convenience.


Figure 18. Collection node and network node: (a) collection node; (b) network node.

4.1.2. Experimental Environment Settings

Since the MEMS inertial sensor is hardly affected by the external environment, we do not need to consider the effect of other external factors. Considering that the difference between 2D and 3D localization is the height information, the main experimental scene can be divided into two classes: moving in the horizontal plane (normal walking and jogging) and climbing stairs. Figure 19 shows our experimental scene.




Figure 19. Experimental scene.

4.2. Experimental Results and Analysis

4.2.1. Gait Information Extraction Experiments


The accuracy of the gait information extraction is critical for the accuracy of 3D indoor localization. We focus on extracting gait information for four statuses: normal walking, running, going upstairs, and going downstairs. We then verify the accuracy of the gait information extraction based on the short-time energy of these statuses.

In order to accurately extract the gait information and reduce the decision delay, the threshold should be set as small as possible for both methods. Figure 20 shows an example of extracting gait information. We can observe that the gait information extracted while walking on a flat floor is fine with our short-time energy method, yet abnormal with the acceleration magnitude-based method. The problem arises when extracting the gait information of the other three statuses.

Figure 20. Comparative gait information extraction.

For the four different states, we perform experiments under different scenarios. The normal walking and jogging experiments are conducted in the corridor of a school building. For the benchmark experiments, normal walking and jogging activities are conducted along a straight line with a distance of 19.2 m. Normal walking and jogging are each performed in 20 rounds (i.e., back and forth). The tests of walking upstairs and downstairs were conducted on the stairs of the school building. To facilitate testing and statistical analysis of the experimental data, walking upstairs and downstairs were also conducted in 20 rounds.

Figure 21 shows the accuracy of gait information extraction for all four statuses with our short-time energy method and the acceleration amplitude-based method. We can see that our method provides sufficient accuracy to extract those four common moving activities.


Figure 21. Gait information extraction accuracy.

4.2.2. Walking State Classification Model Experiment

According to the walking state judgement model presented in this paper, we can distinguish three kinds of states: horizontal plane movement (normal walking, jogging), going upstairs, and going downstairs. We mainly verify the accuracy of the aforementioned judgement with the collected data containing normal walking, jogging, going upstairs, and going downstairs as four states. We regard both jogging and normal walking as the same moving state. Specifically, we extract the experimental data of 100 steps for each state to analyze the statistics of walking state classification.

It can be seen from Figure 22 that, using the mathematical model designed in this paper, the judgement for the three states can achieve above 95% accuracy, and the mathematical model can judge the walking state effectively. Thus, it provides a reliable precondition for eliminating the error of vertical distance.

Figure 22. Traveling state determination statistical accuracy.

4.2.3. Error Elimination in the Vertical Direction

The method designed in this paper eliminates errors in the vertical direction when moving in the horizontal plane, based on the judgement model of the walking state. We assessed the two states of normal walking and jogging by collecting statistical data for 100 steps. As shown in Figure 23, for the walking activity, the maximum distance error is 0.26 m, the mean absolute distance error is 0.02 m, and the mean square error of the absolute value is around 0.06 m.


Figure 23. Walking normally: vertical distance error elimination: (a) distance error; (b) error percentage.

As shown in Figure 24, the largest vertical distance error of jogging in the plane is 0.61 m, the average absolute error is 0.01 m, and the variance of the absolute value is 0.06 m. It can be seen from the statistical results of the two states that we can efficiently reduce the vertical distance error by leveraging gait information and creating a judgement model of the walking state.


Figure 24. Jogging: vertical distance error elimination: (a) distance error; (b) error percentage.
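The elimination rule evaluated in Figures 23 and 24 can be read as constraining each step's vertical displacement according to the classified state. The sketch below illustrates that reading under our own assumptions; the function name and the exact clamping policy are ours, not the authors' code.

```python
def constrained_vertical_step(state, integrated_dz):
    """Constrain the vertical displacement of one detected step.

    state         -- 'plane', 'upstairs', or 'downstairs' from the classifier
    integrated_dz -- vertical displacement from double-integrating the
                     gravity-compensated vertical acceleration over the step
    """
    if state == "plane":
        # Plane motion: any accumulated vertical drift is treated as pure error.
        return 0.0
    if state == "upstairs":
        return max(integrated_dz, 0.0)  # only upward motion is admissible
    if state == "downstairs":
        return min(integrated_dz, 0.0)  # only downward motion is admissible
    return integrated_dz
```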

4.2.4. Step Estimation Experiment

For inertial sensor-based 3D localization, one of the important metrics is the accuracy of step length estimation. In this paper, we continuously collect data from four common walking states: normal walking, jogging, going upstairs, and going downstairs. Figure 25a shows an example of walking in a straight line. The real and estimated distances of the path are 19.2 m and 17.3 m, respectively, which means that our system can achieve an accuracy of 90.1%. Figure 25b shows an example of walking along a rectangle with a length of 6 m and a width of 7 m. It can be seen from the figure that the estimates substantially coincide with the actual length and width. The following figures give detailed statistics on the walking accuracy in the plane.

Figure 25. Track of walking on a horizontal plane: (a) walking straight; (b) walking along a square.


Figure 26 shows the statistics of the step error while walking normally. In the condition of normal walking, the average length of each step is 1.20 m. We pick 100 steps randomly from our experimental data for statistical analysis. The result shows that the maximum error is 0.34 m, the average error of absolute value is 0.11 m, the mean square error of the step length error is 0.08 m, and the accuracy of step length estimation is 90.83% while walking normally along a horizontal plane.


Figure 26. Normal walking step statistics: (a) distance error; (b) error percentage.
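The per-step statistics quoted throughout this subsection (maximum error, mean absolute error, mean square error, and accuracy) follow a single recipe. The sketch below reproduces it under the assumption that accuracy is defined as one minus the mean absolute error over the nominal step length, which matches the 90.83% figure obtained for a 1.20 m step with a 0.11 m mean error; the function names are ours.

```python
import math

def step_error_stats(estimated_lengths, true_length):
    """Summary statistics for per-step length estimates against a nominal true length."""
    errors = [e - true_length for e in estimated_lengths]
    abs_errors = [abs(x) for x in errors]
    mean_abs = sum(abs_errors) / len(abs_errors)
    return {
        "max_error": max(abs_errors),
        "mean_abs_error": mean_abs,
        # reported in metres in the paper, i.e. a root-mean-square value
        "mean_square_error": math.sqrt(sum(x * x for x in errors) / len(errors)),
        "accuracy": 1.0 - mean_abs / true_length,
    }

# The path-level accuracy quoted for the straight-line test of Figure 25a uses the same idea:
real, estimated = 19.2, 17.3
print(f"{1.0 - abs(real - estimated) / real:.1%}")  # -> 90.1%
```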

Figure 27 shows the statistics of the step error while jogging. In the condition of jogging, the average length of each step is 1.60 m. We pick 100 steps randomly from our experimental data for statistical analysis. The result shows that the maximum error is 0.49 m, the average error of absolute value is 0.13 m, and the mean square error is 0.11 m; the accuracy of step length estimation is 91.87% while jogging along a horizontal plane.


Figure 27. Jogging step statistics: (a) distance error; (b) error percentage.


Figure 28 shows an example of the trajectory of going upstairs or downstairs. The width of each step is 0.3 m and the height of each step is 0.16 m. When going up and down normally, we can regard alternating feet once as one step; the walking distance of each step is 0.60 m and the vertical height is 0.32 m. It can be seen from the trajectory in the figure that the statistics above substantially coincide with the trajectory of going upstairs or downstairs in reality.


Figure 28. Upstairs and downstairs tracks: (a) upstairs; (b) downstairs.
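The nominal per-step displacements quoted above follow directly from the stair geometry. The short fragment below only illustrates that arithmetic and the resulting idealized staircase track; it is not the system's estimator.

```python
TREAD_WIDTH = 0.30   # m, horizontal depth of one stair tread
TREAD_RISE = 0.16    # m, vertical rise of one stair tread
TREADS_PER_STEP = 2  # one alternating-feet cycle covers two treads

HORIZONTAL_PER_STEP = TREADS_PER_STEP * TREAD_WIDTH  # 0.60 m
VERTICAL_PER_STEP = TREADS_PER_STEP * TREAD_RISE     # 0.32 m

def nominal_stair_track(num_steps, going_up=True):
    """Idealized (horizontal, vertical) positions after each step on a uniform staircase."""
    sign = 1.0 if going_up else -1.0
    return [(k * HORIZONTAL_PER_STEP, sign * k * VERTICAL_PER_STEP)
            for k in range(num_steps + 1)]

print(nominal_stair_track(3))  # three steps climb roughly 1.8 m forward and 0.96 m up
```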

Every step of walking upstairs can be regarded as both horizontal and vertical movement. Figure 29 shows an example of the statistical analysis of horizontal steps while going upstairs. It can be seen from the figure that the maximum error is 0.32 m, the average error of absolute value is 0.09 m, and the mean square error is 0.06 m. The accuracy of the horizontal step length when going upstairs is 90.83%.


Figure 29. Horizontal step statistics (upstairs): (a) distance error; (b) error percentage.

Figure 30 shows the statistical analysis of vertical movements per step while going upstairs. The maximum error is 0.14 m, the average error of absolute value is 0.04 m, and the mean square error is 0.03 m. The accuracy of the vertical step length when going upstairs is 87.5%.



Figure 30. Vertical step statistics (upstairs): (a) distance error; (b) error percentage.

Similarly, Figure 31 gives a statistical analysis of horizontal steps while going downstairs. We can find that the maximum error is 0.38 m, the average error of absolute value is 0.12 m, and the mean square error is 0.08 m; the accuracy of each horizontal step length is 80.0%.


Figure 31. Horizontal step statistics (downstairs): (a) distance error; (b) error percentage.

Figure 32 shows the statistical analysis of vertical steps while going downstairs. The maximum error is 0.30 m, the average error of absolute value is 0.07 m, and the mean square error is 0.06 m; the accuracy of each vertical step length is 79.1%.


Figure 32. Vertical step statistics (downstairs): (a) distance error; (b) error percentage.


Here we compare the performance of our tracking system with a method used in the literature [8]. First of all, we note that [8] used high-precision fiber optic gyro sensors (DSP-1750), whose white noise is less than 0.8°/h/√Hz at normal temperature. In this paper we use low-cost inertial sensors (MPU-6050), whose white noise is 18°/h/√Hz at the same temperature; the noise levels of the two sensors thus differ 22.5-fold. In order to ensure the parity of the experimental platforms in the comparison, we perform a normalization process on the measurement errors of both. The contrasting performance of the two systems is shown in Table 2.

Table 2. Step estimation normalized error comparison.

State        Error (DSP-1750 [8])    Error (MPU-6050)
Walking      0.19%                   0.40%
Jogging      6.25%                   0.36%
Upstairs     0.30%                   0.56%
Downstairs   0.90%                   0.88%
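The paper does not spell out the normalization procedure. One plausible reading, stated here purely as an assumption, is that each platform's relative step-length error is divided by its gyro noise density so that both columns of Table 2 sit on a common scale; for the MPU-6050 this amounts to dividing by the 22.5-fold noise ratio, which reproduces the jogging row. The sketch below encodes that assumption.

```python
# Assumed interpretation of the normalization in Table 2 (not documented in the
# paper): scale each system's relative step-length error by its gyro white-noise
# density relative to the DSP-1750 reference.
NOISE_DSP1750 = 0.8   # deg/h/sqrt(Hz), fiber optic gyro used in [8]
NOISE_MPU6050 = 18.0  # deg/h/sqrt(Hz), low-cost MEMS sensor used here

def normalized_error(relative_error, noise_density, reference=NOISE_DSP1750):
    """Hypothetical normalization: shrink the error by the sensor noise ratio."""
    return relative_error * (reference / noise_density)

# Jogging: step-length accuracy is 91.87%, so the relative error is 8.13%;
# dividing by the 22.5-fold noise ratio gives roughly the tabulated 0.36%.
print(f"{normalized_error(1 - 0.9187, NOISE_MPU6050):.2%}")  # ~0.36%
```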

The step estimation accuracy of our system is close to that of [8] most of the time, and our system achieves better results in the case of jogging.

4.2.5. Heading Verification Experiment

Course accuracy is one of the significant indicators of inertial sensor-based indoor 3D positioning. We evaluated this indicator by asking three participants to walk and jog straight along a 19.2-m path (back and forth 10 times). The statistical results are shown in Figure 33. We find that the mean value of the course error for every step is close to 0° in normal conditions, the maximum error of the yaw angle is 15.46°, the mean value of the absolute error of the course angle is 5.65°, and the mean square error is 3.88°.


Figure 33. Walking heading angle error: (a) course error; (b) error percentage.

Figure 34 shows the jogging heading angle error. We find that the maximum error is 39.13°, the average absolute value of the heading angle error is 7.09°, and the mean square error of the heading angle is 6.94°.
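Heading errors are angular quantities, so before computing the statistics above each per-step error should be wrapped into a single revolution. The helper below is a small sketch of that bookkeeping; the wrapping convention and function names are ours, not the paper's.

```python
import math

def heading_error_deg(estimated_deg, true_deg):
    """Signed heading error wrapped into [-180, 180) degrees."""
    return (estimated_deg - true_deg + 180.0) % 360.0 - 180.0

def heading_stats(estimated, true_headings):
    """Mean, maximum-absolute, mean-absolute, and RMS heading error in degrees."""
    errors = [heading_error_deg(e, t) for e, t in zip(estimated, true_headings)]
    n = len(errors)
    return {
        "mean_error": sum(errors) / n,  # close to 0 deg when the estimate is unbiased
        "max_abs_error": max(abs(x) for x in errors),
        "mean_abs_error": sum(abs(x) for x in errors) / n,
        "rms_error": math.sqrt(sum(x * x for x in errors) / n),
    }

# Example: 358 deg against a 0 deg reference wraps to a -2 deg error.
print(heading_stats([2.0, -3.5, 358.0], [0.0, 0.0, 0.0]))
```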


Figure 34. Jogging heading angle error: (a) course error; (b) error percentage.


4.2.6. Overall Effect of Indoor 3D Positioning


This section shows the overall effect of 3D positioning. Figure 35a shows the structure of every floor of the building. As shown in Figure 35b, the trajectory of 3D positioning for 5 min is basically in conformity with the actual trajectory, which means that our system can provide accurate 3D positioning.
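Conceptually, the trajectory of Figure 35b is the running sum of per-step horizontal displacements rotated by the estimated heading, plus the state-constrained vertical displacements. The fragment below shows that bookkeeping in skeleton form; the record layout and names are ours, not the system's data structures.

```python
import math

def accumulate_trajectory(steps):
    """Chain per-step (length_m, heading_deg, dz_m) records into a 3D track."""
    x = y = z = 0.0
    track = [(x, y, z)]
    for length, heading_deg, dz in steps:
        heading = math.radians(heading_deg)
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        z += dz
        track.append((x, y, z))
    return track

# e.g. three normal strides along a corridor followed by two stair steps upward
steps = [(1.2, 0.0, 0.0)] * 3 + [(0.6, 0.0, 0.32)] * 2
print(accumulate_trajectory(steps)[-1])  # roughly (4.8, 0.0, 0.64)
```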


Figure 35. 3D positioning: (a) corridor structure; (b) 3D positioning trajectory in 3D modeling.

5. Conclusions

In this paper, we propose a shoe sensing-based 3D tracking solution aiming to achieve real-time indoor 3D positioning leveraging low-cost inertial sensors that can be easily attached to the user's shoes. The proposed system reduces the cumulative errors caused by the sensors' internal bias using the linear property of the cumulative acceleration error drift. In addition, we also propose a walking state classification model that is able to distinguish different moving statuses, including normal walking/jogging, going upstairs, and going downstairs. A real-time 3D trajectory dynamic map is also built relying on Unity3D and the system's tracking results. Extensive experiments have been conducted using the low-cost sensor module MPU6050, and they demonstrate that the proposed system can accurately track users' 3D indoor positions and moving trajectories.


Large-scale deployment of shoe sensing-based real-time localization requires careful consideration of a set of key networking metrics (e.g., throughput, delay, and energy efficiency). Several studies (e.g., [34,35]) redesigned the networking scheme in terms of these critical metrics for large-scale WSN and IoT networks. We leave the study of large-scale shoe sensing deployment to our future work.

Acknowledgments: This work is supported by the National Science Foundation of China and the Changsha Science and Technology Foundation, under grant nos. 61373042 and 61502361. We are extremely grateful to the editors and anonymous reviewers for comments that helped us improve the quality of the paper.

Author Contributions: Fangmin Li conceived the project and obtained the required funding. Xiaolin Ma and Xiaochuang Chen did the project management, obtained all the materials, and helped with analysis and dissemination of the project results. Jian Liu and Guo Liu provided technical feedback and suggestions for each of the experiments, as well as editing of the manuscript. The literature review, design of experiments, script development, accuracy assessment, and write-up of the original version of the manuscript were done by Fangmin Li and Xiaochuang Chen.

Conflicts of Interest: The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Shih, W.Y.; Chen, L.Y.; Lan, K.C. Estimating walking distance with a smart phone. In Proceedings of the IEEE 2012 Fifth International Symposium on Parallel Architectures, Algorithms and Programming (PAAP), Taipei, Taiwan, 17–20 December 2012; pp. 166–171.
2. Borriello, G.; Chalmers, M.; LaMarca, A.; Nixon, P. Delivering real-world ubiquitous location systems. Commun. ACM 2005, 48, 36–41. [CrossRef]
3. Gu, Y.; Lo, A.; Niemegeers, I. A survey of indoor positioning systems for wireless personal networks. IEEE Commun. Surv. Tutor. 2009, 11, 13–32. [CrossRef]
4. Wang, Y.; Liu, J.; Chen, Y.; Gruteser, M.; Yang, J.; Liu, H. E-eyes: Device-free location-oriented activity identification using fine-grained WiFi signatures. In Proceedings of the 20th Annual International Conference on Mobile Computing and Networking, Maui, HI, USA, 7–11 September 2014; pp. 617–628.
5. Diaz, E.M. Inertial pocket navigation system: Unaided 3D positioning. Sensors 2015, 15, 9156–9178. [CrossRef] [PubMed]
6. Diaz, E.M.; Gonzalez, A.L.M. Step detector and step length estimator for an inertial pocket navigation system. In Proceedings of the IEEE 2014 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Busan, Korea, 27–30 October 2014; pp. 105–110.
7. Jahn, J.; Batzer, U.; Seitz, J.; Patino-Studencka, L.; Boronat, J.G. Comparison and evaluation of acceleration based step length estimators for handheld devices. In Proceedings of the IEEE 2010 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Zurich, Switzerland, 15–17 September 2010; pp. 1–6.
8. Harle, R. A survey of indoor inertial positioning systems for pedestrians. IEEE Commun. Surv. Tutor. 2013, 15, 1281–1293. [CrossRef]
9. Goyal, P.; Ribeiro, V.J.; Saran, H.; Kumar, A. Strap-down pedestrian dead-reckoning system. Measurements 2011, 2, 3.
10. Li, J.; Wang, Q.; Liu, X.; Zhang, M. An autonomous waist-mounted pedestrian dead reckoning system by coupling low-cost MEMS inertial sensors and FPG receiver for 3D urban navigation. J. Eng. Sci. Technol. 2014, 7, 9–14.
11. Shin, S.H.; Park, C.G.; Hong, H.S.; Lee, J.M. MEMS-based personal navigator equipped on the user's body. In Proceedings of the 18th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS 2005), Long Beach, CA, USA, 13–16 September 2005.
12. Pratama, A.; Hidayat, R. Smartphone-based pedestrian dead reckoning as an indoor positioning system. In Proceedings of the 2012 International Conference on System Engineering and Technology (ICSET), Bandung, Indonesia, 11–12 September 2012; pp. 1–6.
13. Foxlin, E. Pedestrian tracking with shoe-mounted inertial sensors. IEEE Comput. Graph. Appl. 2005, 25, 38–46. [CrossRef] [PubMed]
14. Nilsson, J.O.; Skog, I.; Handel, P.; Hari, K.V.S. Foot-mounted INS for everybody—An open-source embedded implementation. In Proceedings of the 2012 IEEE/ION Position Location and Navigation Symposium (PLANS), Myrtle Beach, SC, USA, 23–26 April 2012; pp. 140–145.
15. Li, Y.; Wang, J.J. A robust pedestrian navigation algorithm with low cost IMU. In Proceedings of the IEEE 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sydney, Australia, 13–15 November 2012; pp. 1–7.
16. Feliz Alonso, R.; Zalama Casanova, E.; Gómez García-Bermejo, J. Pedestrian tracking using inertial sensors. J. Phys. Agents 2009, 3, 35–43. [CrossRef]
17. Placer, M.; Kovačič, S. Enhancing indoor inertial pedestrian navigation using a shoe-worn marker. Sensors 2013, 13, 9836–9859. [CrossRef] [PubMed]
18. Jian, L.; Wang, Y.; Chen, Y.; Yang, J.; Chen, X.; Cheng, J. Tracking vital signs during sleep leveraging off-the-shelf WiFi. In Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing, Hangzhou, China, 22–25 June 2015; pp. 267–276.
19. Jiménez, A.R.; Seco, F.; Prieto, J.C.; Guevara, J. Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU. In Proceedings of the IEEE 2010 7th Workshop on Positioning Navigation and Communication (WPNC), Dresden, Germany, 11–12 March 2010; pp. 135–143.
20. Yun, X.; Calusdian, J.; Bachmann, E.R.; McGhee, R.B. Estimation of human foot motion during normal walking using inertial and magnetic sensor measurements. IEEE Trans. Instrum. Meas. 2012, 61, 2059–2072. [CrossRef]
21. Madgwick, S.O.H.; Harrison, A.J.L.; Vaidyanathan, R. Estimation of IMU and MARG orientation using a gradient descent algorithm. In Proceedings of the 2011 IEEE International Conference on Rehabilitation Robotics (ICORR), Zurich, Switzerland, 29 June–1 July 2011; pp. 1–7.
22. Madgwick, S.O.H. An Efficient Orientation Filter for Inertial and Inertial/Magnetic Sensor Arrays. Report, x-io and University of Bristol (UK), 2010. Available online: https://www.samba.org/tridge/UAV/madgwick_internal_report.pdf (accessed on 27 October 2016).
23. OpenShoe. Available online: http://www.openshoe.org/ (accessed on 27 October 2016).
24. Nilsson, J.O.; Zachariah, D.; Skog, I.; Händel, P. Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging. EURASIP J. Adv. Signal Process. 2013, 2013, 1–17. [CrossRef]
25. Rantakokko, J.; Rydell, J.; Stromback, P.; Händel, P.; Callmer, J.; Törnqvist, D.; Gustafsson, F.; Jobs, M.; Grudén, M. Accurate and reliable soldier and first responder indoor positioning: Multisensor systems and cooperative localization. IEEE Wirel. Commun. 2011, 18, 10–18. [CrossRef]
26. House, S.; Connell, S.; Milligan, I.; Austin, D.; Hayes, T.L.; Chiang, P. Indoor localization using pedestrian dead reckoning updated with RFID-based fiducials. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Boston, MA, USA, 30 August–3 September 2011; pp. 7598–7601.
27. Ojeda, L.; Borenstein, J. Personal dead-reckoning system for GPS-denied environments. In Proceedings of the IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR 2007), Rome, Italy, 27–29 September 2007; pp. 1–6.
28. Zampella, F.; De Angelis, A.; Skog, I.; Zachariah, D.; Jimenez, A. A constraint approach for UWB and PDR fusion. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Sydney, Australia, 13–15 November 2012.
29. Khan, M.I.; Syrjarinne, J. Investigating effective methods for integration of building's map with low cost inertial sensors and WiFi-based positioning. In Proceedings of the IEEE 2013 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Montbeliard-Belfort, France, 28–31 October 2013; pp. 1–8.
30. Chen, Z.; Zou, H.; Jiang, H.; Zhu, Q.; Soh, Y.; Xie, L. Fusion of WiFi, smartphone sensors and landmarks using the Kalman filter for indoor localization. Sensors 2015, 15, 715–732. [CrossRef] [PubMed]
31. Woodman, O.J. An introduction to inertial navigation. Tech. Rep. 2007, 14, 15.
32. Jalil, M.; Butt, F.A.; Malik, A. Short-time energy, magnitude, zero crossing rate and autocorrelation measurement for discriminating voiced and unvoiced segments of speech signals. In Proceedings of the IEEE 2013 International Conference on Technological Advances in Electrical, Electronics and Computer Engineering (TAEECE), Konya, Turkey, 9–11 May 2013; pp. 208–212.
33. Han, H.; Yu, J.; Zhu, H.; Chen, Y.; Yang, J.; Zhu, Y.; Xue, G.; Li, M. SenSpeed: Sensing driving conditions to estimate vehicle speed in urban environments. In Proceedings of the IEEE Conference on Computer Communications (INFOCOM 2014), Toronto, ON, Canada, 27 April–2 May 2014; pp. 727–735.
34. Dong, M.; Ota, K.; Liu, A. RMER: Reliable and energy efficient data collection for large-scale wireless sensor networks. IEEE Internet Things J. 2016, 3, 511–519. [CrossRef]
35. Xie, R.; Liu, A.; Gao, J. A residual energy aware schedule scheme for WSNs employing adjustable awake/sleep duty cycle. Wirel. Pers. Commun. 2016, 90, 1859–1887. [CrossRef]

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
