
Design and Implementation of Foot-Mounted Inertial Sensor Based Wearable Electronic Device for Game Play Application

Qifan Zhou 1,2,*, Hai Zhang 1, Zahra Lari 2, Zhenbo Liu 2 and Naser El-Sheimy 2

1 School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China; [email protected]
2 Geomatics Engineering Department, University of Calgary, Calgary, AB T2N 1N4, Canada; [email protected] (Z.L.); [email protected] (Z.L.); [email protected] (N.E.-S.)
* Correspondence: [email protected]; Tel.: +86-10-8233-9189

Academic Editor: Vittorio M. N. Passaro
Received: 18 September 2016; Accepted: 18 October 2016; Published: 21 October 2016

Abstract: Wearable electronic devices have experienced increasing development with the advances in the semiconductor industry and have received more attention during the last decades. This paper presents the development and implementation of a novel inertial-sensor-based, foot-mounted wearable electronic device for a brand new application: game playing. The main objective of the introduced system is to monitor and identify the human foot stepping direction in real time, and to map these motions to player operations in games. The proposed system extends the application field of currently available wearable devices and introduces a convenient and portable medium that may make exercise more compelling in the near future. This paper provides an overview of previously developed system platforms, introduces the main idea behind this novel application, and describes the implemented human foot moving direction identification algorithm. Practical experiment results demonstrate that the proposed system is capable of recognizing five foot motions (jump, step left, step right, step forward, and step backward) and achieves over 97% recognition accuracy for different users. The functionality of the system for real-time application has also been verified through practical experiments.

Keywords: wearable electronic device; foot moving direction; game play

1. Introduction

In recent years, with the rapid development of MEMS (Micro-Electro-Mechanical System) technology, inertial sensor production has made a leap forward in terms of chip-size minimization, low-cost manufacturing, low power consumption, and simplified operation. Due to these advancements, various types of MEMS inertial sensors have been adopted for multiple applications, such as vehicle and personal navigation [1], motion tracking systems [2], and consumer electronic devices (smartphones) [3]. Wearable electronic devices, which emerged during the last few years, also utilize low-cost MEMS inertial sensors and are becoming more attractive in the consumer market. Wearable electronic devices refer to electronic technologies or devices that are incorporated into items of clothing and accessories and can be comfortably worn [4]. Generally, these devices can perform communications and allow the wearers to access their activity and behavior information. The foot-mounted inertial-sensor-based electronic device is one common type and has attracted attention for further study, development, and implementation. The application fields of foot-mounted wearable devices can mainly be categorized into: pedestrian navigation, human daily or sports activity recognition, and the medical field.


The foot-mounted personal navigation system is the most universal usage of this kind of device and has been reported several times before [5–7]. It makes use of the self-contained and autonomous attributes of inertial sensors to derive the navigation solution, which effectively avoids the negative effects of environmental elements. Such a system is valuable for firefighters, tracking military personnel, first responders and offenders, and visually impaired and blind people [8]. The fundamental approach of such a system is to apply the Inertial Navigation System (INS) mechanization equations to calculate the navigation parameters (i.e., position, velocity, and attitude), combined with Zero Velocity Update (ZUPT) technology to mitigate the accumulated error and estimate the sensor error [9,10]. In order to improve the positioning performance, several other algorithms or estimation approaches, such as the particle filter [11], integration with RFID (Radio Frequency Identification) measurements [12], map matching [13], and improvements in hardware structure [14], are also employed in foot-mounted navigation devices. However, due to the requirement of an initial position and the INS accumulated error caused by integral computation [15], the foot-mounted navigation system is rarely capable of acquiring a long-term stable solution and is limited for further industrial utilization.

Additionally, the foot-mounted wearable device is applied in the field of sports exercise or daily activity tracking. With the inertial sensor attached to the shoe, the gait cycle, daily energy expenditure for activities (i.e., running and walking), and sportive activities can be fed back to the users [16,17]. The activity recognition process basically segments data, extracts features, and classifies human motions [18]. Wang designs a walking pattern classifier to determine the phases of a walking cycle: stance, push-off, swing, and heel-strike. Chen employs a Hidden Markov Model (HMM) to pre-process the inertial sensor data and classify common activities: standing, walking, going upstairs/downstairs, jogging, and running; other similar research work can be found in the literature [19,20]. In industry, miCoach [21] and Nike+ [22] are examples of foot-mounted commercial wearable products for monitoring sportive activities. These fitness products provide the user with information on speed, distance, and energy cost, and have achieved tremendous popularity among users. However, they are limited to the provision of the users' general motion information, such as discrimination of movement activity from rest, classification of activities (i.e., running, walking, and sleeping), and the quantization of general movement intensity (i.e., percentage of time spent moving, sleeping, and sitting) [23]. These systems are merely data recorders or monitors, and their performance does not have a direct impact on the user experience because the user cannot identify whether the motion identification result is accurate or not.

In the medical field, since gait disturbances are very common in patients with Parkinson's disease, several research works concern Parkinson's patients who suffer from walking abnormalities, and aim to help them in the aspects of diagnosis, monitoring, and rehabilitation. Filippo [24] proposes a system that is able to provide real-time computation of gait features and feeds them back to the user for the purpose of helping him/her execute the most effective gait pattern.
Joonbum [25] makes use of pressure sensors to monitor patients' gait by observing the ground reaction force (GRF) and the center of GRF to give quantitative information on gait abnormality. He [26] improves this work by integrating an Inertial Measurement Unit (IMU), employing the HMM to identify gait phases, and developing it into a tele-monitoring system. Moreover, Strohrmann [27] utilizes the motion data measured by inertial sensors for runners' kinematic analysis to avoid risks provoked by fatigue or improper technique. Chung [28] compares the motion data of Alzheimer's patients and healthy people collected during walking, and concludes that the Alzheimer's patients exhibited a significantly shorter mean stride length and slower mean gait speed than the healthy controls. The foot-mounted device, which provides continuous physical monitoring in any environment, is beneficial in shortening the patient's hospital stay, improving both recovery and diagnosis reliability, and raising patients' quality of life.

This paper aims to introduce a novel application of the foot-mounted wearable electronic device: game play. The inertial sensor has been successfully applied in game play scenarios before and has received massive attention. The most popular and famous example is the Wii remote controller, which integrates infra-red and three-axis accelerometer information to capture users' hand motions and enables them to play games such as golf, tennis, bowling, etc. [29,30]. The MotionCore controller, proposed by Movea, can play the role of an air mouse and is employed to play Fruit Ninja or shooting games. Shum [31] introduces a fast accelerometer-based motion recognition approach and applies this technology to play a boxing game. Ernst [32] attempts an initial experiment of using a wearable inertial sensor in a martial arts game. These inertial-sensor-based game applications all operate in a hand-operated mode, in which users need to shake and swing their hands to interact with the game in real time. To the best knowledge of the authors, attaching an inertial sensor to a shoe and using foot motion to play games has not been discussed in previous academic work or industrial products; hence, suggesting such a game play manner is a novelty.

The main idea behind the proposed system is to identify human foot stepping directions in real time and map these directions to the virtual player's actions in the game. In our work, a one-IMU configuration is selected to make the device convenient and suitable for the user to wear. The proposed motion identification procedure is implemented in three successive steps: (1) the collected dynamic data are initially preprocessed to compensate for sensor errors (i.e., bias, scale factor error, and non-orthogonality error) and to correct the inertial sensor misalignment introduced during placement; (2) the peak points of the acceleration norm are detected for acceleration data segmentation, and the selected features (e.g., mean, variance, position change, etc.) in the vicinity of each peak point are extracted; and (3) the extracted features are finally fed into a machine learning process to train the classifier. Notably, to improve the robustness of the proposed system, each stepping motion type has its own corresponding classifier.

The advantages of the proposed system can be described as follows: (1) it extends the current foot-mounted electronic wearable devices beyond pedestrian navigation or human activity recognition and monitoring to the game play field; (2) some kinetic games will no longer be limited to a confined space (i.e., a living room) and specific game consoles (i.e., XBOX or Wii) and will, on the contrary, be playable almost anywhere and anytime on various terminals (i.e., smartphones or tablets); and (3) it introduces the possibility of building low-cost, portable, real-time wearable exercising and entertainment platforms, with which people are able to conveniently perform virtual sports or exercise in an interesting manner without environmental constraints.

The paper is structured as follows: Section 2 introduces the main concept of the proposed system. Section 3 describes the system overview with an introduction of both the hardware and software platforms. Section 4 illustrates the specific implementation of the foot motion detection algorithm. Section 5 shows the experimental results and analysis. Section 6 presents conclusions and provides recommendations for future research work.

2. Main Concept

One of the commonly-used game operating modes is that the user controls the character's movements (i.e., forward, backward, left, or right) to avoid obstacles or achieve more points in popular smartphone running games, such as Temple Run and Subway Surfers.
According to this operating mode, the main concept of the foot-mounted system is to utilize the user's steps to control the virtual player in the game, instead of using the conventional manner (i.e., finger sliding and button presses). Specifically, the inertial sensor is attached to the user's foot and the sensor data are collected during the moving phase; then, the stepping direction is derived from the collected data and used to control the character's movement in the game. Figure 1 illustrates the main concept of the proposed system: the human foot's kinetic motions, detected by an inertial sensor, substitute for the traditional game controller. Concretely, the user's stepping forward or jumping corresponds to the press of the up button (or an upward finger slide); stepping backward corresponds to the press of the down button (or a downward finger slide). Similarly, the user's stepping left or right corresponds to the press of the left or right buttons (or left or right finger slides).

This system has high real-time and detection accuracy requirements because any lag or false detection of steps will prevent the user from playing the game normally and contribute to a poor user experience. Hence, the main challenge of this system is to correctly determine the step motion and moving direction with little delay when the step event happens, and to synchronize those motions to the game controls, which is essential to provide favorable feedback to the user. Moreover, due to the diversity of shoe styles, sensor mounting manners, and user habits, the system robustness and algorithm compatibility are other difficult challenges to overcome.

Figure 1. The main concept of the proposed system.

3. System Architecture

The proposed system architecture is shown in Figure 2. In this system, the foot moving dynamic data are captured by the inertial sensor and then wirelessly transmitted to various kinds of terminals (i.e., smartphone, tablet, computer, and smart TV) through Bluetooth 4.0. The software, which is compatible with different platforms, plays the role of receiving data, performing the step motion detection algorithm, and interacting with games. Both hardware and software platforms are included in this system, and they are described as follows.

Figure 2. System architecture.

3.1. Hardware Platform


The system hardware platform mainly combines a CC2540 microprocessor (Texas Instruments, Dallas, TX, USA), an MPU9150 9-axis inertial sensor (InvenSense, Sunnyvale, CA, USA), and other necessary electronic components. The CC2540 [33] processor offers high performance with low power consumption: it contains an 8051 microcontroller and includes a 2.4 GHz Bluetooth Low Energy System on Chip (SoC). It can run both the application and the BLE (Bluetooth Low Energy) protocol stack, so it is compatible with multiple mobile devices (i.e., smartphones and tablets). The MPU9150 [34] is an integrated nine-axis MEMS motion tracking device that combines a three-axis gyroscope, a three-axis accelerometer, and a three-axis magnetometer. Figure 3 shows the system hardware platform. In our system, the three tasks of the hardware platform are to acquire inertial sensor data through the I2C interface at a pre-set sampling frequency (200 Hz), to package the data in a pre-defined user protocol, and to send the data via Bluetooth to the host.
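The paper does not specify the wire format of the pre-defined user protocol. Purely as an illustration of how a 200 Hz nine-axis sample might be framed for Bluetooth transport, consider the following Python sketch; the header byte, sequence counter, and checksum scheme are our assumptions, not the authors' protocol:

```python
import struct

def pack_sample(seq, acc, gyro, mag):
    """Pack one 9-axis sample (three int16 triplets) into a hypothetical
    frame: 0xA5 header, 8-bit sequence counter, 18 payload bytes, and a
    single-byte additive checksum. The paper's actual protocol differs."""
    body = struct.pack("<BB9h", 0xA5, seq & 0xFF, *acc, *gyro, *mag)
    return body + bytes([sum(body) & 0xFF])

# Example: frame a resting sample (accelerometer sensing ~1 g on its z-axis).
frame = pack_sample(0, (0, 0, 16384), (0, 0, 0), (120, -45, 300))
```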


Figure 3. System hardware platform.

3.2. Software Platform

The system software platform is developed in the C++ programming language in Visual Studio. The main functions of the software include the following: receiving and decoding data, logging the user's motion data, calculating the attitude, running the human foot detection algorithm, and interacting the human motion with the game control. For real-time processing, a multi-threaded program is designed to implement the listed tasks simultaneously. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of a single process; these threads share the processor's resources but execute their functions independently. This multi-threaded software guarantees the whole system's real-time operation and, moreover, introduces a clear structure, which is beneficial for further revision or development.
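The authors' software is implemented in C++; purely as an illustration of the producer-consumer threading structure described above, a minimal Python sketch might look as follows (the simulated data source and all names are ours, not the paper's):

```python
import queue
import threading
import time

sample_q: queue.Queue = queue.Queue()

def receiver():
    """Stands in for the receive/decode/log thread: here it just enqueues
    simulated 200 Hz samples instead of reading Bluetooth packets."""
    while True:
        sample_q.put((time.time(), 0.0, 0.0, 9.8))  # (t, ax, ay, az) stub
        time.sleep(1 / 200)

def processor():
    """Stands in for the processing thread: the attitude filter and the
    step detection algorithm of Section 4 would consume samples here."""
    while True:
        sample = sample_q.get()
        _ = sample  # placeholder for detection and game-control output

for worker in (receiver, processor):
    threading.Thread(target=worker, daemon=True).start()
time.sleep(1)  # let the demo threads run briefly
```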

4. Methodology

The inertial sensor is attached to the human foot and the measured rotation and acceleration information is applied for stepping direction classification. The motion recognition process of the proposed system is illustrated in Figure 4.

Figure 4. Overview of foot motion identification process.

As shown in Figure 4, the identification process is executed as follows: first, the collected raw inertial data are pre-processed for error compensation, noise reduction, and misalignment elimination; second, the peak points of the norm of the 3-axis acceleration are detected to segment the data; and, finally, the selected features in the divided data segment are extracted and fed into the classifier to derive the foot motion type. The detailed description of each procedure is provided in the following subsections.

4.1. Preprocessing

MEMS inertial sensors have the advantages of being small, low cost, and affordable; however, they suffer from various error sources, which negatively affect their performance. Therefore, calibration experiments are indispensable to remove the deterministic errors, such as bias, scale factor, and misalignment, before using the MEMS sensor. The inertial sensor error model [35] is described as follows and is employed for the error compensation:

$$\tilde{f}^b = f^b + b_f + S_f \cdot f^b + N_f \cdot f^b + \varepsilon(f), \qquad \tilde{\omega}^b = \omega^b + b_\omega + S_\omega \cdot \omega^b + N_\omega \cdot \omega^b + \varepsilon(\omega) \tag{1}$$

where $\tilde{f}^b$, $\tilde{\omega}^b$ denote the measurements of specific force and rotation, and $f^b$, $\omega^b$ denote the true specific force and angular velocity. $b_f$, $b_\omega$, respectively, denote the accelerometer and gyroscope instrument biases; $S_f$, $S_\omega$ separately denote the matrices of the linear scale factor errors of the accelerometer and gyroscope; and $N_f$, $N_\omega$ denote the matrices representing axes non-orthogonality. $\varepsilon(f)$, $\varepsilon(\omega)$ denote the stochastic errors of the sensors. The parameters $b_f$, $b_\omega$, $S_f$, $S_\omega$, $N_f$, $N_\omega$ can be derived through a calibration experiment before sensor usage [36,37]. With a hand rotating calibration scheme, the experiment can be accomplished in approximately one minute [38].

In the proposed system, the IMU is attached to shoes to detect the user's foot motions and control the game. However, due to the differences between various shoe styles and sensor placements, the IMU orientation (pitch and roll) varies when mounted on different users' shoes, which causes misalignment between different users.

Hence, in order to achieve a satisfactory identification result for different shoe styles or placement manners, data would have to be collected under various attachment conditions and put into the training process to derive the classifier. However, this process is time-consuming and the performance is not guaranteed if the sensor is attached in a new placement that is not included in the training set. To avoid such drawbacks, we propose to project the measured acceleration and rotation data from the sensor frame (shoe frame) to the user frame, where the user frame is defined by the user's right, forward, and up directions as the three axes constructing a right-handed coordinate system. Thus, no matter how the inertial sensor is placed on the shoes (the sensor frame is always different), the measured data can be unified and expressed in the same coordinate frame. During the sensor installation, the forward axis of the IMU (the y-axis in the proposed system) is always aligned with the foot's forward moving direction, so we only need to consider the misalignment of the pitch and roll angles. This proposed data transformation from the sensor frame to the user frame is able to effectively eliminate the misalignment caused by different shoe styles and sensor placements because it aligns all the collected data in the same frame. Figure 5 shows this process.

Figure 5. Inertial sensor measurement alignment.

Figure 5 shows the alignment process with the rotation matrix $C_b^n$, where the inertial data, collected under different misalignment conditions, are expressed in the same frame (Right-Forward-Up). More importantly, the data expressed in this frame directly reflect the actual user moving direction in the horizontal plane, which provides a better data basis for the subsequent signal processing and is beneficial to achieving a more robust result.
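A minimal sketch of this alignment follows, assuming the pitch/roll misalignment is estimated from a short static accelerometer average (in the actual system the attitude filter of Section 4.1 provides these angles online) and that the axes are x-right, y-forward, z-up with the accelerometer sensing +g on the up axis when level:

```python
import numpy as np

def misalignment_angles(acc_static):
    """Estimate pitch (about x) and roll (about y) from a static
    accelerometer mean; assumed convention: x-right, y-forward, z-up."""
    ax, ay, az = acc_static
    pitch = np.arctan2(ay, az)
    roll = np.arctan2(-ax, np.hypot(ay, az))
    return pitch, roll

def to_user_frame(samples, pitch, roll):
    """Rotate N x 3 sensor-frame samples into the user frame via
    C_b^n = R_y(roll) @ R_x(pitch). Heading is left untouched because
    the IMU y-axis is installed along the forward direction."""
    ct, st = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    R_x = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    R_y = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])
    C_bn = R_y @ R_x
    return samples @ C_bn.T
```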


Therefore, a reliable and accurate attitude result is significant and necessary, since it can be used to correctly project the inertial measurements onto the user frame with the rotation matrix, perform the data standardization process (alignment in the same frame), and consequently support a dependable feature extraction section. Given the initial attitude and the gyroscope measurements, the orientation results can be derived by integrating the angular velocity measured by the 3-axis gyroscope. However, due to the error of the MEMS gyroscope, the attitude result drifts quickly with time and cannot provide a long-term solution. On the other hand, the accelerometer can provide attitude angles without suffering from long-term drift, which is complementary to the gyroscope and effective in compensating the attitude drift error. Hence, an attitude filter is used to integrate the gyroscope and accelerometer measurements together and derive a drift-free attitude solution. A Kalman filter is then used to blend the information in a feature-level fusion [39]. The dynamic model, the measurement model of the filter, and the implemented adaptive measurement noise tuning strategy are subsequently described as follows.

4.1.1. Dynamic Model

The attitude angle error model, which describes the angle difference between the true navigation frame and the computed navigation frame, is employed as the dynamic model [40]. This model is expressed in linear form and is easy to implement. The 3-axis gyro biases are also included in the dynamic model; they are estimated in the filter and work in the feedback loop to mitigate the error in the raw measurements. The equation of the dynamic model is written as:

$$\dot{\psi} = \psi \times \omega_{in}^n + C_b^n \varepsilon^b, \qquad \dot{\varepsilon}^b = (-1/\tau_b)\, \varepsilon^b + \omega_b \tag{2}$$

where $\psi$ denotes the attitude error, $\omega_{in}^n$ denotes the n-frame rotation angular rate vector relative to the inertial frame (i-frame) expressed in the n-frame, and $C_b^n$ denotes the Direction Cosine Matrix (DCM) from the b-frame (i.e., the body frame) to the n-frame (i.e., the navigation frame). The symbol "$\times$" denotes the cross product of two vectors. $\varepsilon^b$ denotes the gyro output error; here, we only consider the effect of the gyro bias and model it as a first-order Gauss-Markov process. Finally, $\tau_b$ denotes the correlation time of the gyro biases and $\omega_b$ is the driving noise vector.
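As an illustration of how Equation (2) drives the filter's prediction step, the following sketch propagates the covariance of the 6-state error vector (attitude error and gyro bias) under a first-order discretization; the noise densities and the helper names are ours, not the paper's:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def predict_covariance(P, C_bn, omega_in_n, tau_b, q_att, q_bias, dt):
    """One covariance prediction step for the model of Eq. (2),
    x = [attitude error (3); gyro bias (3)], using F = I + A*dt.
    q_att and q_bias are illustrative noise densities."""
    A = np.zeros((6, 6))
    A[:3, :3] = -skew(omega_in_n)   # psi x w_in rewritten as -[w_in x] psi
    A[:3, 3:] = C_bn                # gyro bias enters through C_b^n
    A[3:, 3:] = -np.eye(3) / tau_b  # first-order Gauss-Markov bias
    F = np.eye(6) + A * dt
    Q = np.diag([q_att] * 3 + [q_bias] * 3) * dt
    return F @ P @ F.T + Q
```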

4.1.2. Measurement Model

The acceleration residuals in the body frame are used to derive the system measurement model. In our model, instead of using the attitude difference separately derived by the accelerometer and the gyroscope, the acceleration difference is applied to avoid the singularity problem when the pitch angle is ±90° [41]. The acceleration residuals in the body frame are defined as the difference between the direct accelerometer measurements and the projection of the local gravity onto the body frame:

$$\delta a = a_m^b - a_{n_c}^b, \qquad a_{n_c}^b = C_{n_c}^b\, a^n \tag{3}$$

where $a_m^b$ denotes the accelerometer measurement and $a_{n_c}^b$ denotes the local gravity acceleration projected onto the body frame using the gyro-derived rotation matrix $C_{n_c}^b$. The subscript $n_c$ denotes the computed frame. According to the DCM chain rule, $C_n^b$ is expressed as:

$$C_n^b = C_{n_c}^b C_n^{n_c}, \qquad C_n^{n_c} = I - [\psi \times] \tag{4}$$


where $[\psi \times]$ denotes the skew-symmetric matrix of the attitude error. Substituting Equation (4) into Equation (3), the relationship between the acceleration residuals in the body frame and the attitude error is written as:

$$\begin{aligned} \delta a &= a_m^b - a_{n_c}^b = C_n^b a^n - C_{n_c}^b a^n = \left( C_{n_c}^b C_n^{n_c} - C_{n_c}^b \right) a^n \\ &= C_{n_c}^b \left( I - [\psi \times] - I \right) a^n = C_{n_c}^b \left( -[\psi \times] \right) a^n = C_{n_c}^b \left( [a^n \times] \right) \psi \end{aligned} \tag{5}$$

Then, the measurement model can be obtained from Equation (5). The measurement $Z$ is the acceleration residual in the body frame, $[\delta a_x\ \delta a_y\ \delta a_z]^T$, and the measurement matrix $H$ is expressed as:

$$H = \begin{bmatrix} -g\, C_{n_c}^b(1,2) & g\, C_{n_c}^b(1,1) & 0 & 0 & 0 & 0 \\ -g\, C_{n_c}^b(2,2) & g\, C_{n_c}^b(2,1) & 0 & 0 & 0 & 0 \\ -g\, C_{n_c}^b(3,2) & g\, C_{n_c}^b(3,1) & 0 & 0 & 0 & 0 \end{bmatrix} \tag{6}$$
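Constructing $H$ from the gyro-derived rotation matrix is mechanical; a short sketch (function name and state ordering are our assumptions) follows directly from Equation (6):

```python
import numpy as np

def measurement_matrix(C_nc_b, g=9.81):
    """Build the 3x6 matrix H of Equation (6) from the gyro-derived
    rotation matrix C^b_nc. Assumed state order: three attitude errors,
    then three gyro biases (which do not appear in H)."""
    H = np.zeros((3, 6))
    H[:, 0] = -g * C_nc_b[:, 1]   # -g * second column of C^b_nc
    H[:, 1] = g * C_nc_b[:, 0]    #  g * first column of C^b_nc
    return H
```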

This attitude filter works effectively under stationary or low-acceleration conditions. In these situations, the specific force measured by the accelerometer equals the local gravity acceleration, so the pitch and roll angles derived through the accelerometer are accurate and help fix the accumulated attitude error caused by the gyroscope error. In a high dynamic situation, however, the accelerometer senses the external dynamic acceleration, which is undesirable in the filter. Hence, if the measurement update keeps the same weight as in the low dynamic situation, a side effect will be introduced and lead to degraded performance. To achieve an optimal attitude estimation result, we propose to adaptively tune the measurement covariance matrix $R$ according to a system dynamic index $\varepsilon$ [42], designed as:

$$\varepsilon = | f - g | \tag{7}$$

where $f$ denotes the norm of the measured acceleration and $g$ denotes the local gravity acceleration. The specific tuning strategy of the covariance matrix $R$ is then described as follows (a minimal sketch of this logic is given after the list):

1. Stationary mode: If the index satisfies $\varepsilon <$ Thres1, the system is considered to be stationary. Correspondingly, the covariance matrix is set as $R = \mathrm{diag}[\sigma_x^2\ \sigma_y^2\ \sigma_z^2]$, where $\sigma_x^2$, $\sigma_y^2$, $\sigma_z^2$ denote the velocity random walk of the three-axis accelerometer. In our approach, Thres1 is set as $3 \cdot (\sigma_x^2 + \sigma_y^2 + \sigma_z^2)$.
2. Low acceleration mode: If the index satisfies Thres1 $< \varepsilon <$ Thres2, the system experiences a low external acceleration, which is treated as measurement noise. The covariance matrix is set as $R = \mathrm{diag}[\sigma_x^2\ \sigma_y^2\ \sigma_z^2] + k\varepsilon^2$, where $k$ is a scale factor. Thres2 is set as $2g$.
3. High dynamic mode: If the index satisfies $\varepsilon >$ Thres2, the norm of the measured acceleration is far from the gravity magnitude and the acceleration residuals are not reliable. In this situation, we only use the angular velocity to calculate the attitude, and the filter performs only the prediction loop without the measurement update.
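A minimal Python sketch of the three-mode strategy above; the numeric noise values and the scale factor are illustrative assumptions, not the authors' tuned parameters:

```python
import numpy as np

G = 9.80665                             # assumed local gravity (m/s^2)
SIGMA2 = np.array([2e-4, 2e-4, 2e-4])   # hypothetical accel noise variances
THRES1 = 3.0 * SIGMA2.sum()             # stationary threshold (paper's rule)
THRES2 = 2.0 * G                        # high-dynamics threshold (paper's rule)
K = 0.5                                 # hypothetical noise-inflation scale

def measurement_covariance(acc):
    """Return (R, do_update) for one accelerometer sample following the
    stationary / low-acceleration / high-dynamics modes of Section 4.1."""
    eps = abs(np.linalg.norm(acc) - G)        # dynamic index, Eq. (7)
    if eps < THRES1:                          # stationary: trust accelerometer
        return np.diag(SIGMA2), True
    if eps < THRES2:                          # low dynamics: inflate noise
        return np.diag(SIGMA2) + K * eps**2 * np.eye(3), True
    return None, False                        # high dynamics: skip the update
```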

4.2. Data Segmentation

Data segmentation is carried out to divide the continuous stream of collected sensor data into multiple subsequences, and to retrieve the important and useful information for activity recognition. Sliding window algorithms are commonly used to segment data in various applications because they are simple, intuitive, online algorithms. However, this approach is not suitable here because an entire human stepping motion signal may not be included in the current detected window and may instead be separated across two adjacent windows, which can cause poor results in some cases. Moreover, this algorithm works with a complexity of O(nL), where L is the average length of a segment, which affects the system's real-time capability.

Hence, the relationship between the gait cycle and the acceleration signal is analyzed to derive a practical approach to segment the data. Generally, a gait cycle can be divided into four phases [43], namely: (1) Push-off, heel off the ground and toe on the ground; (2) Swing, both heel and toe off the ground; (3) Heel strike, heel on the ground and toe off the ground; and (4) Foot stance, heel and toe on the ground at rest. Figure 6 shows these four phases and their correlated acceleration signal.

Figure 6. Stepping acceleration signal and gait phases.

As shown in Figure 6, the blue line is the norm of the three accelerations and the red line denotes the acceleration signal smoothed by a moving average algorithm, where, for each epoch, a window containing the previous N sample points is averaged to produce the acceleration value; the purpose is to derive a smoother form of the signal, reduce noise, and eliminate unexpected peak points.

Figure 6 illustrates that the smoothed acceleration signal during one walking cycle generally features two peak points: one is in the push-off phase, when the foot is leaving the ground, and the other is in the heel-strike phase, when the foot hits the ground. Although more than two peak points may appear in a cycle, due to different user habits and motion strengths, these two points are always available in each gait cycle. Here, the utilization of the peak point for triggering the data segmentation process is proposed. Once one peak point is detected, the features in the vicinity of this point are extracted and consequently the foot motion type is identified.

The reason for using the peak point is that one peak point is always available in the push-off phase when the foot leaves the ground, which does not vary for different users or stepping patterns. This point facilitates the detection of the beginning phase of each step and ensures reliable real-time performance. In addition, the foot motion detection algorithm works with O(peak point number) complexity: the classification process is only performed when a peak point is detected, which decreases the computation burden. Moreover, the specific phase of each walking cycle does not need to be classified, which simplifies the identification process.

Additionally, the length of data for feature extraction also needs to be ascertained. A tradeoff exists here between the discrimination accuracy of motion types and real-time applicability. Involving more data in the segmentation procedure is beneficial to correctly identify the human motion and achieve more reliable results, but it causes a lagged response; less data allows a quick, low-delay judgment on the human motion, but may not include enough information for classification. Hence, the distribution of the three separate axis acceleration signals of different motions is analyzed to figure out the length of the data segment for feature extraction.

Figure 7 draws the collected three-axis acceleration signals in the vicinity of the peak points in the initial stage of a step, where Figure 7a–e, respectively, represents the acceleration signals collected from forward, backward, left, right, and jump motions. The blue, red, and green solid lines separately denote the acceleration signals represented in the user frame. The green dashed line, drawn from top to bottom, denotes the position of the peak points. The peak point line suffers a shift to the right side, which is due to the implementation of the moving average algorithm, but it does not cause any negative effect on the identification process. The acceleration signals are used to investigate the data segment length because they exhibit different behaviors during the process of human stepping in various directions, and they provide an intuitive, direct, and easily understandable manner to recognize the moving directions. For example, Figure 7c illustrates the left motion, where the acceleration (red line) in the user's right direction features an obvious difference compared with the other two axes. Similarly, for the forward and backward motions, the accelerations in the forward or backward directions exhibit more diversity.
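The peak-triggered segmentation described above can be sketched as follows; the smoothing window length, peak-height threshold, and minimum peak spacing are illustrative assumptions, while the 31-sample window (20 samples before the peak, the peak, and 10 after) follows the choice justified later in this section:

```python
import numpy as np

def detect_step_segments(acc_xyz, win=10, min_height=12.0, min_gap=40):
    """Smooth the acceleration norm with a trailing moving average,
    detect peaks, and cut a 31-sample window around each peak.
    acc_xyz: N x 3 user-frame accelerations (m/s^2) sampled at 200 Hz."""
    norm = np.linalg.norm(acc_xyz, axis=1)
    kernel = np.ones(win) / win
    # Trailing average; the first win-1 samples are a startup transient.
    smooth = np.convolve(norm, kernel)[:len(norm)]
    segments, last = [], -min_gap
    for i in range(20, len(smooth) - 10):
        is_peak = smooth[i] >= smooth[i - 1] and smooth[i] > smooth[i + 1]
        if is_peak and smooth[i] > min_height and i - last >= min_gap:
            segments.append(acc_xyz[i - 20:i + 11])   # 31-sample window
            last = i
    return segments
```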
For example, Figure 7c illustrates the7c left motion and the motion and the acceleration (red line) in user’s right direction features an obvious difference acceleration (red line) in user’s right direction features an obvious difference compared with the other compared with theforother two axes. Similarly,motions, for the the forward and backward the two axes. Similarly, the forward and backward accelerations in forwardmotions, or backward accelerations in forward or backward directions exhibit more diversity. directions exhibit more diversity.

Figure 7. Acceleration signal of different motions and data segmentation: (a) Forward; (b) Backward; (c) Left; (d) Right; and (e) Jump.

Additionally, each figure illustrates the acceleration distribution of 500 motion samples performed by different testers, where for each motion 500 data groups are collected. Acceleration data in the vicinity of the first peak point are extracted, and the mean and standard deviation of these segments are calculated. The solid lines and dashed lines represent the mean and standard deviation, respectively. The acceleration distribution shown in Figure 7 provides an intuitive statistical result of the acceleration in the initial phase of a step and is helpful to confirm the data segment length. The data segmentation length selected for feature extraction is 31 samples, presented as the orange rectangle in the figure; it includes 20 samples before the peak point, 10 samples after the peak point, and the peak point itself. The main justifications for choosing this length of data are: first, the features extracted within the selected interval are able to provide enough distinguishing information for the motion identification; and, second, it ensures reliable real-time applicability. The data shown in Figure 7 are sampled at 200 Hz and the first 30 samples of a gait cycle are utilized for classification, which means that the motion type can be decided in approximately 0.15 s after the motion occurs.

4.3. Feature Extraction

Generally, features can be defined as abstractions of raw data. The objective of feature extraction is to find the main characteristics of a data segment which can accurately represent the original data and identify valid, useful, and understandable patterns. Features can be divided into various categories; time domain and frequency domain features are the ones most commonly used for recognition. Feature selection is an extremely important step because a good feature space can lead to a clear and easy classification, while a poor feature space may be time-consuming and computationally expensive, and cannot lead to a good result. In our system, not all of the features commonly used in the activity recognition field are selected; instead, the collected signal is analyzed and the physical characteristics of foot motion are considered to choose features that are not only effective in discriminating the motion types but also have low computation complexity. In this system, the selected features for foot motion classification are described as follows.

4.3.1. Mean and Variance

The mean and variance values of the three-axis accelerometer and gyroscope measurements are derived from the data segment and considered as features, according to the following equations:

$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \sigma^2 = \frac{1}{N}\sum_{i=1}^{N} \left( x_i - \bar{x} \right)^2 \tag{8}$$
where $x_i$ denotes the signal, $N$ denotes the data length, and $\bar{x}$, $\sigma^2$ denote the mean and variance values of the data sequence.

4.3.2. Signal Magnitude Area

The signal magnitude area (SMA) is a statistical measure of the magnitude of a varying quantity, computed from the absolute values of the signal. The SMA is calculated according to Equation (9):

$$f_{SMA} = \int_{t_1}^{t_2} | x |\, dt \tag{9}$$

where $x$ denotes the signal and $(t_1, t_2)$ denotes the integration time period.

4.3.3. Position Change

Position change is an intuitive feature for foot direction identification because different foot moving directions cause different position changes. For example, jumping features a larger change in the vertical direction, while stepping right and left leads to an obvious position change in the horizontal plane. The Inertial Navigation System (INS) mechanization equations are able to provide the trajectory of a moving object in three dimensions from the measured rotations and accelerations [44]. However, due to the double integration in the INS mechanization and the noise of the sensor, accumulated errors will be involved in the trajectory estimation and lead to a position drift, especially when using a MEMS sensor. Hence, it is not feasible to calculate the position during the whole identification process; the position is only derived within the data segment, with an initial velocity (0,0,0), an initial position (0,0,0), and a zero azimuth during the calculation process. The inertial sensor has the characteristic of remaining accurate in the short term, so the position result computed over the 31-sample interval is reliable and trustworthy. The position calculation equations are described as follows:

$$a^n = C_b^n a^b, \qquad v = v_0 + \int a^n\, dt, \qquad p = p_0 + \int v\, dt \tag{10}$$

where $a^b$ denotes the measured acceleration in the body frame, $C_b^n$ is the rotation matrix that projects the acceleration from the body frame to the navigation frame (local-level frame), and $a^n$ denotes the projected acceleration in the navigation frame. $v$, $p$ denote the computed velocity and position, and $v_0$, $p_0$ denote the initial velocity and position.

4.3.4. Ratio

The ratio feature calculates the proportion of a single-axis feature relative to the norm of that feature over the three axes. The aim of introducing the ratio metric is to normalize the features of the three axes so as to best deal with motions performed at different strengths by different users. For example, for the jump motion, the position change in the up direction (the jump height) is larger than that in the horizontal plane and is dominant in the position change; although the jump height differs between users, its proportion of the position change remains significant. Specifically, the position feature (the position change) derived from a heavy jump motion may be (0.2, 0.2, 0.5), and (0.05, 0.05, 0.2) from a slight jump motion; though the jump height amplitude varies significantly depending on user habits, the ratio of the jump height occupies over 50% of the whole position change for both groups. Hence, the ratio feature of position change in different directions is a good metric to distinguish and evaluate motion types performed with different strengths. The ratio feature introduced here is calculated as in Equation (11):

$$\begin{aligned} FeatureNorm &= \sqrt{FeatureX^2 + FeatureY^2 + FeatureZ^2} \\ ratioFeatureX &= FeatureX / FeatureNorm \\ ratioFeatureY &= FeatureY / FeatureNorm \\ ratioFeatureZ &= FeatureZ / FeatureNorm \end{aligned} \tag{11}$$

where $FeatureX$, $FeatureY$, $FeatureZ$ denote the calculated features in the different axes and $ratioFeature$ denotes the corresponding ratio. In our proposed system, the position, mean, variance, and SMA features calculated in the three directions or axes are all considered to derive ratio features.

Figure 8. Decision tree graphical model.

Figure 8 draws the graphical model of the decision tree. The blue circles denote the internal nodes that execute the test on the feature (comparison of the feature with a trained parameter), the green arrows denote the test outcomes, and the rectangles denote the different labels or classes. The red dashed lines from the top node to the leaf nodes represent a decision process or classification rule.

The tree generation is the training stage of this classifier and it works in a recursive procedure. The general tree generation process is that, for each feature of the samples, a metric (the splitting measure) is computed from splitting on that feature. Then, the feature that generates the optimal index (highest or lowest) is selected and a decision node is created to split the data based on that feature. The recursion procedure stops when the samples in a node belong to the same class (majority), or when there are no remaining features on which to split. Depending on different splitting measures, the decision tree can be categorized as: ID3 (Iterative Dichotomiser 3), Quest (Quick, Unbiased, Efficient, Statistical Tree), CART (Classification And Regression Tree), C4.5, etc. [45,46].
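As an illustration of this recursive splitting, the sketch below trains a small binary tree with scikit-learn (an assumed stand-in for the paper's implementation; the toy feature matrix and labels are invented for demonstration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data: rows are feature vectors (e.g., position change, mean, variance),
# labels are 1 for "jump" and 0 for "not jump" in a one-vs-rest setting.
X = np.array([[0.50, 0.1, 0.9], [0.45, 0.2, 0.8],
              [0.05, 0.7, 0.1], [0.10, 0.6, 0.2]])
y = np.array([1, 1, 0, 0])

# criterion="gini" corresponds to the CART splitting measure; "entropy"
# gives the information-gain measure used by ID3/C4.5-style trees.
clf = DecisionTreeClassifier(criterion="gini").fit(X, y)
print(clf.predict([[0.48, 0.15, 0.85]]))  # -> [1]
```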

4.4.2. K-Nearest Neighbors

The k-nearest neighbors algorithm (kNN) [47] is an approach based on the closest training samples in the feature space, where k denotes the number of nearest neighbors considered. In the kNN approach, an object is classified by a majority vote of its neighbors, with the object being assigned to the most common class among its k nearest neighbors. Similarity measures are fundamental components in this algorithm, and different distance measures can be used to find the distance between data points. Figure 9 illustrates the main concept of the kNN algorithm.

Figure 9. K-nearest neighbors (kNN) algorithm concept.

As shown in Figure 9, the test sample (blue circle) is classified into one of the two neighbor classes, square or triangle. If k is selected as 3, the test sample is assigned to the square class because two of its three nearest neighbors are squares. In the same way, if k = 5, the test sample is assigned to the triangle class. Hence, the main idea of kNN is that the category of the predicted object is decided by the labels of the majority of its neighbors. Additionally, the votes of these neighbors can be weighted based on the distance to overcome the problem of non-uniform densities of the neighbor classes.

4.4.3. Support Vector Machine

The support vector machine (SVM) is used to construct a hyperplane or set of hyperplanes in a high- or infinite-dimensional space for classification, regression, or other tasks. Since several available hyperplanes are able to classify the data, the SVM is employed to use the one that represents the largest separation, or margin, between the two classes. The hyperplane chosen in SVM maximizes the distance between the plane and the nearest data point on each side. Figure 10 draws the SVM classifier.

As shown in this figure, the optimal separating hyperplane (solid red line) locates the samples with different labels (blue circles y = 1, red squares y = −1) on the two sides of the plane, and the distances of the closest samples to the hyperplane on each side become maximal. These samples are called support vectors and the distance is the optimal margin. The specific illustration of the SVM classifier can be found in the literature [48–50].
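To make the two remaining classifiers concrete, here is a minimal scikit-learn sketch (again a stand-in, not the authors' C++ code, with invented toy data): distance-weighted voting addresses the non-uniform density issue mentioned above, and the linear-kernel SVM constructs the maximum-margin hyperplane w·x + b = 0.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = [[0.50, 0.1], [0.45, 0.2], [0.05, 0.7], [0.10, 0.6]]  # toy feature vectors
y = [1, 1, 0, 0]                                          # binary labels

# kNN: the 3 nearest neighbors vote, weighted by inverse distance.
knn = KNeighborsClassifier(n_neighbors=3, weights="distance").fit(X, y)

# SVM: a linear kernel yields the maximum-margin separating hyperplane.
svm = SVC(kernel="linear").fit(X, y)

print(knn.predict([[0.4, 0.3]]), svm.predict([[0.4, 0.3]]))  # -> [1] [1]
```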

Figure 10. Support vector machine (SVM) classifier. (The figure shows the separating hyperplane ω·x + b = 0 and the margin hyperplanes ω·x + b = ±1, with positive objects y = 1 and negative objects y = −1.)

5. Experiments and Results

The experiment is designed to include two parts. In the first part, different testers are invited to perform the five foot motions in their own manners. Then, we collect the data, preprocess it to remove errors, divide the data into segments, extract the features, and put them into the training process of the introduced machine learning algorithms to derive the classifiers. Additionally, the classifiers are tested by two cross validation approaches. In the second part, the data processing procedure is transferred to our software platform and programmed in C++, and the program is connected to the game control interface to perform the practical game playing experiment.

5.1. Data Set

In order to obtain a sufficient amount of data for training, ten testers—two females and eight males—are invited to participate in the experiments. All testers are in good health condition without any abnormality in their gait cycles. The IMU sensor was attached on the testers' shoes, and they were guided to perform the five stepping motions in their natural manners. In order to have diverse characteristics of each motion, some actions of the testers were conducted at different strengths (heavy or slight), different frequencies (fast or slow), and different scopes (large or small amplitude), and some actions were performed by the same tester on different days. The data collected during this experiment were stored to form the training dataset. Figure 11 shows the system hardware platform. In this platform, a 3.7 V lithium battery (the blue one) is used to provide the power supply. The IMU module has a small size and is very convenient to mount on the user's shoe. Table 1 provides a summary of the collected training dataset, where the quantitative information of the collected human stepping motions is listed. The second row lists the actual motion numbers collected in the experiment: 895 jump, 954 stepping left, 901 stepping right, 510 moving forward, and 515 moving backward.

Figure 11. Hardware platform and the sensor placement on shoe. (a) Hardware platform of proposed system; (b–e) Sensor placement on different testers' shoes.

Table 1. Stepping motion training set.

                    Jump    Left    Right    Forward    Backward
Action numbers      895     954     901      510        515

5.2. Classification Results

In our proposed system, a corresponding classifier is trained for each motion instead of using a single classifier for the five motions. This training strategy is beneficial to improve the robustness and decrease the complexity of the system, since one classifier only needs to recognize two classes instead of five. Moreover, it offers the possibility of selecting typical features for each motion based on the motion principle or data analysis in future work.

In order to have a better evaluation of the classification performance, two cross-validation approaches for test were chosen: k-fold cross validation and holdout validation. In the k-fold cross-validation approach, the original sample is randomly partitioned into k equal sized subsamples. A single subsample is then retained from these k subsamples as the validation data for testing the model, and the remaining (k − 1) subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results from these folds can then be averaged to produce a single estimation. The advantage of this method over repeated random sub-sampling is that all of the observations are used for both training and validation, and each observation is used for validation exactly once. Here, a commonly used 10-fold test is employed. In holdout validation, a subset of observations is chosen randomly from the initial samples to form a validation or testing set, and the remaining observations are retained as the training data. Twenty-five percent of the initial samples are chosen for test and validation. The two cross validation approaches are performed for the three classifiers and the classification results are listed in Tables 2 and 3.
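This evaluation protocol can be sketched as follows (assumed scikit-learn equivalents of the procedure just described; X is the segmented feature matrix, labels holds the five motion names, and the SVM kernel choice is our assumption, as the paper does not state it):

```python
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

MOTIONS = ["jump", "left", "right", "forward", "backward"]

def evaluate(X, labels):
    """Train one binary (one-vs-rest) classifier per motion and score it with
    10-fold cross validation and a 25% holdout split, as described above."""
    for motion in MOTIONS:
        y = (np.asarray(labels) == motion).astype(int)  # Class 1 vs. Class 0
        clf = SVC(kernel="rbf")                         # kernel choice is assumed
        kfold_acc = cross_val_score(clf, X, y, cv=10).mean()
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25)
        holdout_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
        print(f"{motion}: 10-fold {kfold_acc:.3f}, 25% holdout {holdout_acc:.3f}")
```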
Table 2. Classification result of 10-fold cross validation.

            Decision Tree            kNN                      SVM
            Class 1     Class 0      Class 1     Class 0      Class 1     Class 0
Jump        813/895     75/2880      870/895     38/2880      868/895     36/2880
Left        914/954     36/2821      939/954     34/2821      936/954     19/2821
Right       806/901     84/2874      881/901     77/2874      885/901     62/2874
Forward     470/510     47/3265      498/510     39/3265      493/510     20/3265
Backward    445/515     60/3260      502/515     27/3260      498/515     22/3260
Table 3. Classification result of 25% held out validation.

            Decision Tree            kNN                      SVM
            Class 1     Class 0      Class 1     Class 0      Class 1     Class 0
Jump        212/223     15/718       219/223     8/718        216/223     8/718
Left        226/238     6/703        234/238     15/703       233/238     10/703
Right       198/225     18/716       219/225     16/716       217/225     13/716
Forward     111/127     9/814        125/127     13/814       123/127     7/814
Backward    114/128     7/813        126/128     4/813        126/128     4/813

For each motion, the column tagged with Class 1 shows the correct detection result of the actual motions, and the column tagged with Class 0 denotes the undesired detections of that motion from the other motions. Specifically, for the jump motion detected by the decision tree classifier, 813 motions are successfully identified out of 895 jump motions (where 82 actual jump motions are missed or falsely detected), and 75 out of 2880 other motions in total (the sum of left, right, forward and backward) are falsely considered as jump motions. Additionally, in order to have a quantitative evaluation of the classifier performance, the Accuracy, Precision, and Recall metrics are also introduced. The definitions of these metrics and their calculation equations are described below.

• Accuracy: The accuracy is the most standard metric to summarize the overall classification performance for all classes and it is defined as follows:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (12)$$

• Precision: Often referred to as positive predictive value, it is the ratio of correctly classified positive instances to the total number of instances classified as positive:

$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (13)$$

• Recall: Also called true positive rate, it is the ratio of correctly classified positive instances to the total number of positive instances:

$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (14)$$

where TP (True Positives) indicates the number of true positive or correctly classified results, TN (True Negatives) is the number of negative instances that were classified as negative, FP (False Positives) is the number of negative instances that were classified as positive, and FN (False Negatives) is the number of positive instances that were classified as negative. According to the evaluation metrics, the accuracy, precision, and recall for the test result of each motion are calculated and listed in Tables 4–6.

Table 4. Evaluation of Decision tree classifier.

            10-Fold Cross Validation              25% Hold Out Validation
            Accuracy    Precision   Recall        Accuracy    Precision   Recall
Jump        95.84%      91.55%      90.83%        97.24%      93.41%      95.08%
Left        97.98%      96.21%      95.80%        98.09%      97.41%      94.96%
Right       95.25%      90.56%      89.45%        95.23%      91.67%      88.01%
Forward     97.69%      90.90%      92.15%        97.35%      92.53%      87.45%
Backward    96.55%      88.11%      86.40%        97.77%      94.25%      89.12%
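As a concrete cross-check, Equations (12)–(14) can be applied directly to the Class 1 / Class 0 counts of Tables 2 and 3; a minimal sketch, using the SVM jump column of Table 2:

```python
def metrics(tp, fn, fp, tn):
    """Accuracy, precision and recall as in Equations (12)-(14)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Jump motion, SVM, 10-fold column of Table 2: 868/895 detected, 36/2880 false alarms.
tp, fn = 868, 895 - 868         # correctly detected vs. missed jumps
fp, tn = 36, 2880 - 36          # false jumps vs. correctly rejected other motions
print(metrics(tp, fn, fp, tn))  # -> (0.9833, 0.9602, 0.9698), as in Table 6
```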

Table 5. Evaluation of kNN classifier.

            10-Fold Cross Validation              25% Hold Out Validation
            Accuracy    Precision   Recall        Accuracy    Precision   Recall
Jump        98.33%      95.81%      97.20%        98.72%      96.47%      98.20%
Left        98.70%      96.50%      98.42%        97.98%      93.97%      98.31%
Right       97.43%      91.96%      97.78%        97.66%      93.19%      97.33%
Forward     98.64%      92.73%      97.64%        98.40%      90.57%      98.42%
Backward    98.94%      94.89%      97.47%        99.36%      96.92%      98.43%

Table 6. Evaluation of SVM classifier.

            10-Fold Cross Validation              25% Hold Out Validation
            Accuracy    Precision   Recall        Accuracy    Precision   Recall
Jump        98.33%      96.01%      96.98%        98.40%      96.42%      96.86%
Left        99.09%      98.01%      98.11%        98.40%      95.88%      97.89%
Right       97.93%      93.45%      98.22%        97.76%      94.34%      96.44%
Forward     99.09%      96.10%      96.66%        98.83%      94.61%      96.85%
Backward    98.96%      95.76%      96.69%        99.36%      96.92%      98.43%

Based on the evaluation metrics listed in Tables 4–6, and according to the graphical comparison of accuracy and precision shown in Figures 12 and 13, the SVM classifier has an overall better performance than the other approaches. Moreover, the average time for each classifier to make the decision on the motion type is: decision tree, 0.0056 ms; kNN, 0.53 ms; and SVM, 0.0632 ms. Although the decision tree classifier has the shortest response time for identification, its performance on the motion type is not satisfactory. The response time for SVM is about 0.06 ms, which is an acceptable time frame because this lag level will not cause an observable delay in the user experience. Hence, considering both the performance and the decision time of each classifier, the SVM classifier achieves the best result and is selected in our proposed system to classify the stepping motions. Additionally, we analyze the misclassified events of each motion to give the profile of errors, aiming to avoid the situation that one specific stepping motion always contributes to the wrong recognition, which would potentially be due to unsuitable feature selection or data segmentation. The statistical result is listed in Table 7.
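The per-decision latency comparison can be reproduced with a simple timing harness (an illustrative sketch; `clf` stands for any of the three trained classifiers and `feature_vector` for one extracted feature vector, and the absolute numbers are hardware dependent):

```python
import time

def mean_decision_time_ms(clf, feature_vector, repeats=10000):
    """Average time of a single classification decision, in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        clf.predict([feature_vector])  # one motion-type decision
    return (time.perf_counter() - start) / repeats * 1000.0
```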

Figure 12. Accuracy comparison of three classifiers.

Figure 13. Precision comparison of three classifiers.

Table 7. Foot motion identification error of SVM.

                              Jump   Left   Right   Forward   Backward   Corresponding    Other Motions
                                                                         Motion Error     Error
10-fold cross     Jump        27     8      7       9         12         42.86%           57.14%
validation        Left        4      18     5       7         3          48.65%           51.35%
                  Right       13     16     16      18        15         20.51%           79.49%
                  Forward     6      5      5       17        4          45.95%           54.05%
                  Backward    7      6      5       2         17         43.59%           56.41%
25% hold out      Jump        7      1      4       2         1          46.67%           53.33%
validation        Left        3      5      2       1         4          33.33%           66.67%
                  Right       2      4      8       2         5          38.10%           61.90%
                  Forward     1      1      3       4         2          36.36%           63.64%
                  Backward    2      1      0       1         2          33.33%           66.67%

Table 7 provides the false identifications of each motion in the two cross-validation approaches. For example, in 10-fold cross validation, 27 true jump motions are missed or mistakenly classified, which occupies 42.86% of the misclassified events; however, eight left, seven right, nine forward, and 12 backward motions are wrongly treated as jump motions by the classifier, which in total contributes 57.14% of the misclassified events. In each classifier, the identification error of its corresponding motion type (i.e., the wrong categorization of the jump motion in the jump classifier) occupies approximately 33% to 48%, and the misclassified percentage of the other motions varies from 51% to 66%. Moreover, the error result also shows that the misclassified events are evenly distributed over each motion, and demonstrates that no one specific motion error is predominant during the motion determination process.

5.3. Practical Experiment Result

A running game we programmed in Unity is used to practically test the algorithm. In this game, a man is running in the forest with numerous obstacles, and the traditional play manner is that the user needs to control the object to jump, go left, go right or get down to avoid the obstacles. Here, we use the foot movement direction to control the man, and the result is shown in the following figures.

As shown in Figure 14, the red rectangle shows the virtual player presented in the game, the arrow denotes the player's moving direction, the green rectangle illustrates the step motion identification result, and the orange rectangle shows the person's moving direction.

Figure 15 shows the practical test result in the game Subway Surfers. Figure 15a illustrates that a person walking forward correlates to the jump of the kid in the game operation. In the left side of this figure, the person steps forward and the red arrow presents the stepping direction. The right side shows the game environment, where we can see that the kid selected in the green circle jumps up to avoid the front obstacle. In the same way, Figure 15b shows the person stepping left and it correlates to the kid moving to the left.
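Although the authors' game control interface is written in C++, the mapping from a recognized stepping motion to a game command can be sketched as follows (a hypothetical Python illustration; the key bindings and the `press_key` placeholder are our assumptions, not the paper's actual interface):

```python
# Hypothetical binding of recognized foot motions to game controls.
KEY_BINDINGS = {
    "jump": "space",
    "left": "a",
    "right": "d",
    "forward": "w",
    "backward": "s",
}

def press_key(key):
    """Placeholder for the platform-specific keystroke injection call."""
    print(f"key event: {key}")

def on_motion_recognized(motion):
    """Forward a classified stepping motion to the game as a key press."""
    key = KEY_BINDINGS.get(motion)
    if key is not None:
        press_key(key)

on_motion_recognized("jump")  # -> key event: space
```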

Figure 14. Practical game play test in running game: (a) Jump motion; (b) Forward motion; (c) Backward motion; (d) Right motion; and (e) Left motion.

Figure 15. Practical game play test in Game Subway Surfers: (a) Step forward; and (b) Step left.

6. Conclusions

This paper introduces a novel application of foot-mounted inertial sensor based wearable electronic devices—game play. The main contributions of this paper can be summarized as: (1) This paper presents the first attempt to employ the user's stepping direction for controlling the player operation in game play. (2) This paper proposes and implements a novel computationally-efficient, real-time algorithm for the identification of the foot moving direction. (3) In the proposed system, the acceleration and gyroscope measurements are fused to derive the attitude and use it to correct the misalignment error. This makes the proposed algorithm compatible with various shoe styles and sensor placements. (4) The stepping motion type can be recognized in the beginning phase of one step cycle, which guarantees the system's real-time applicability. (5) It is suggested to design a corresponding classifier for each motion, where each classifier only needs to identify two classes, instead of using one classifier to recognize all five motions. This is beneficial to acquire a more precise and reliable identification result. (6) Three commonly-used classifiers are compared in the aspects of cross validation performance and response time. Based on this comparison, it is concluded that the SVM classifier achieves the best performance. (7) It extends the inertial sensor based game play scenario to the foot motion control mode, which introduces the possibility of playing running games indoors or anywhere and is potentially beneficial to encourage the user to exercise more for good health. Practical experiments with different users illustrate that the proposed system reaches a high-accuracy classification result and an excellent user experience, and it effectively broadens the application of currently available wearable electronic devices.

Acknowledgments: The work of Qifan Zhou was supported by the China Scholarship Council under Grant 201306020073.

Author Contributions: Hai Zhang and Naser El-Sheimy conceived the idea and supervised this research work. Qifan Zhou implemented the proposed system, performed the experiment and wrote this paper. Zahra Lari helped review this paper and provided much valuable revising advice. Zhenbo Liu helped with the experimental data collection, tested the system and proposed several suggestions.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Susi, M. Gait Analysis for Pedestrian Navigation Using MEMS Handheld Devices. Master's Thesis, Department of Geomatics Engineering, University of Calgary, Calgary, AB, Canada, 2012.
2. Junker, H.; Amft, O.; Lukowicz, P.; Tröster, G. Gesture spotting with body-worn inertial sensors to detect user activities. Pattern Recognit. 2010, 41, 2010–2024. [CrossRef]
3. Li, Y.; Georgy, J.; Niu, X.; Li, Q.; El-Sheimy, N. Autonomous calibration of MEMS Gyros in consumer portable devices. IEEE Sens. J. 2015, 15, 4062–4072. [CrossRef]
4. Park, S.; Jayaraman, S. Smart textiles: Wearable electronic systems. MRS Bull. 2003, 28, 585–591. [CrossRef]

5. Nilsson, J.O.; Skog, I.; Händel, P.; Hari, K.V.S. Foot-mounted INS for everybody—An open-source embedded implementation. In Proceedings of the IEEE Position Location and Navigation Symposium (PLANS), Myrtle Beach, SC, USA, 23–26 April 2012; pp. 140–145.
6. Ruppelt, J.; Kronenwett, N.; Scholz, G. High-precision and robust indoor localization based on foot-mounted inertial sensors. In Proceedings of the IEEE/ION Position, Location and Navigation Symposium (PLANS), Savannah, GA, USA, 11–16 April 2016; pp. 67–75.
7. Abdulrahim, K.; Hide, C.; Moore, T.; Hill, C. Rotating a MEMS inertial measurement unit for a foot-mounted pedestrian navigation. J. Comput. Sci. 2014, 10, 2619. [CrossRef]
8. Fischer, C.; Gellersen, H. Location and navigation support for emergency responders: A survey. IEEE Pervasive Comput. 2010, 9, 38–47. [CrossRef]
9. Skog, I.; Händel, P.; Nilsson, J.-O.; Rantakokko, J. Zero-velocity detection—An algorithm evaluation. IEEE Trans. Biomed. Eng. 2010, 57, 2657–2666. [CrossRef] [PubMed]
10. Norrdine, A.; Kasmi, Z. Step detection for ZUPT-aided inertial pedestrian navigation system using foot-mounted. IEEE Sens. J. 2016, 16, 6766–6773. [CrossRef]
11. Gu, Y.; Song, Q.; Li, Y.; Ma, M. Foot-mounted pedestrian navigation based on particle filter with an adaptive weight updating strategy. J. Navig. 2014, 68, 23–38. [CrossRef]
12. Ruiz, A.R.J.; Granja, F.S.; Honorato, J.C.P.; Guevara Rosas, J.I. Accurate pedestrian indoor navigation by tightly coupling foot-mounted IMU and RFID measurements. IEEE Trans. Instrum. Meas. 2012, 61, 178–189. [CrossRef]
13. Ascher, C.; Kessler, C.; Wankerl, M.; Trommer, G.F. Dual IMU indoor navigation with particle filter based map-matching on a smartphone. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN), Zürich, Switzerland, 15–17 September 2010; pp. 15–17.
14. Nilsson, J.O.; Gupta, A.K.; Handel, P. Foot-mounted inertial navigation made easy. In Proceedings of the 5th International Conference on Indoor Positioning and Indoor Navigation, Busan, Korea, 27–30 October 2015; pp. 24–29.
15. Harle, R. A survey of indoor inertial positioning systems for pedestrians. IEEE Commun. Surv. Tutor. 2013, 15, 1281–1293. [CrossRef]
16. Yun, X.; Calusdian, J.; Bachmann, E.R.; McGhee, R.B. Estimation of human foot motion during normal walking using inertial and magnetic sensor measurements. IEEE Trans. Instrum. Meas. 2012, 61, 2059–2072. [CrossRef]
17. Bancroft, J.B.; Garrett, D.; Lachapelle, G. Activity and environment classification using foot mounted navigation sensors. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sydney, Australia, 13–15 November 2012.
18. Avci, A.; Bosch, S.; Marin-Perianu, M.; Marin-Perianu, R.; Havinga, P. Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: A survey. In Proceedings of the 2010 23rd International Conference on Architecture of Computing Systems (ARCS), Hannover, Germany, 22–25 February 2010.
19. Choi, I.; Ricci, C. Foot-mounted gesture detection and its application in virtual environments. In Computational Cybernetics and Simulation, Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Orlando, FL, USA, 12–15 October 1997; IEEE: New York, NY, USA, 1997; Volume 5, pp. 4248–4253.
20. Mannini, A.; Sabatini, A.M. Gait phase detection and discrimination between walking-jogging activities using hidden Markov models applied to foot motion data from a gyroscope. Gait Posture 2012, 36, 657–661. [CrossRef] [PubMed]
21. Porta, J.P.; Acosta, D.J.; Lehker, A.N.; Miller, S.T.; Tomaka, J.; King, G.A. Validating the adidas miCoach for estimating pace, distance, and energy expenditure during outdoor over-ground exercise accelerometer. Int. J. Exerc. Sci. 2012, 2, 23.
22. Tucker, W.J.; Bhammar, D.M.; Sawyer, B.J.; Buman, M.P.; Gaesser, G.A. Validity and reliability of Nike+ Fuelband for estimating physical activity energy expenditure. BMC Sports Sci. Med. Rehabil. 2015, 7, 1. [CrossRef] [PubMed]
23. Fong, D.T.; Chan, Y.; Kong, H.; Hong, T.; Jockey, K.; Sports, C.; Centre, H.S.; Kong, H.; Ho, A.; Ling, M.; et al. The use of wearable inertial motion sensors in human lower limb biomechanics studies: A systematic review. Sensors 2010, 10, 11556–11565. [CrossRef] [PubMed]

24. Casamassima, F.; Ferrari, A.; Milosevic, B.; Ginis, P.; Farella, E.; Rocchi, L. A wearable system for gait training in subjects with Parkinson's disease. Sensors 2014, 14, 6229–6246. [CrossRef] [PubMed]
25. Bae, J.; Kong, K.; Byl, N.; Tomizuka, M. A mobile gait monitoring system for abnormal gait diagnosis and rehabilitation: A pilot study for Parkinson disease patients. J. Biomech. Eng. 2011, 133, 41005.
26. Bae, J.; Tomizuka, M. A tele-monitoring system for gait rehabilitation with an inertial measurement unit and a shoe-type ground reaction force sensor. Mechatronics 2013, 23, 646–651. [CrossRef]
27. Strohrmann, C.; Harms, H.; Tröster, G.; Hensler, S.; Müller, R. Out of the lab and into the woods: Kinematic analysis in running using wearable sensors. In Proceedings of the 13th International Conference on Ubiquitous Computing, Beijing, China, 17–21 September 2011; pp. 119–122.
28. Chung, P.-C.; Hsu, Y.-L.; Wang, C.-Y.; Lin, C.-W.; Wang, J.-S.; Pai, M.-C. Gait analysis for patients with Alzheimer's disease using a triaxial accelerometer. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Seoul, Korea, 20–23 May 2012; pp. 1323–1326.
29. Schou, T.; Gardner, H.J. A Wii remote, a game engine, five sensor bars and a virtual reality theatre. In Proceedings of the 19th Australasian Conference on Computer-Human Interaction: Entertaining User Interfaces, Adelaide, Australia, 28–30 November 2007; pp. 231–234.
30. Schlömer, T.; Poppinga, B.; Henze, N.; Boll, S. Gesture recognition with a Wii controller. In Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, Bonn, Germany, 18–21 February 2008; pp. 11–14.
31. Shum, H.P.H.; Komura, T.; Takagi, S. Fast accelerometer-based motion recognition with a dual buffer framework. Int. J. Virtual Real. 2011, 10, 17–24.
32. Heinz, E.A.; Kunze, K.S.; Gruber, M.; Bannach, D.; Lukowicz, P. Using wearable sensors for real-time recognition tasks in games of martial arts—An initial experiment. In Proceedings of the IEEE Symposium on Computational Intelligence and Games, Reno, NV, USA, 22–24 May 2006; pp. 98–102.
33. CC2540. Bluetooth® Low Energy Software Developer's Guide v1.4. Available online: http://www.TI_BLE_Software_Developer's_Guide.pdf (accessed on 10 February 2010).
34. Board, E.V.; Guide, U. MPU-9150 9-Axis Evaluation Board User Guide; InvenSense: Sunnyvale, CA, USA, 2011; Volume 1.
35. Noureldin, A.; Karamat, T.B.; Georgy, J. Fundamentals of Inertial Navigation, Satellite-Based Positioning and Their Integration; Springer Science & Business Media: Berlin, Germany, 2012.
36. Syed, Z.F.; Aggarwal, P.; Goodall, C.; Niu, X.; El-Sheimy, N. A new multi-position calibration method for MEMS inertial navigation systems. Meas. Sci. Technol. 2007, 18, 1897–1907. [CrossRef]
37. Shin, E.H.; El-Sheimy, N. A new calibration method for strapdown inertial navigation systems. Z. Vermess. 2002, 127, 41–50.
38. Li, Y.; Niu, X.; Zhang, Q.; Zhang, H.; Shi, C. An in situ hand calibration method using a pseudo-observation scheme for low-end inertial measurement units. Meas. Sci. Technol. 2012, 23, 105104. [CrossRef]
39. Gravina, R.; Alinia, P.; Ghasemzadeh, H.; Fortino, G. Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges. Inf. Fusion 2017, 35, 68–80. [CrossRef]
40. Foxlin, E. Inertial head-tracker sensor fusion by a complementary separate-bias Kalman filter. In Proceedings of the IEEE Virtual Reality Annual International Symposium, Santa Clara, CA, USA, 3 March–3 April 1996; pp. 185–195.
41. Li, W.; Wang, J. Effective adaptive Kalman filter for MEMS-IMU/magnetometers integrated attitude and heading reference systems. J. Navig. 2012, 66, 99–113. [CrossRef]
42. Wang, M.; Yang, Y.; Hatch, R.R.; Zhang, Y. Adaptive filter for a miniature MEMS based attitude and heading reference system. In Proceedings of the IEEE Position Location and Navigation Symposium, Monterey, CA, USA, 26–29 April 2004; pp. 193–200.
43. Godha, S.; Lachapelle, G. Foot mounted inertial system for pedestrian navigation. Meas. Sci. Technol. 2008, 19, 075202. [CrossRef]
44. El-Sheimy, N. Inertial Techniques and INS/DGPS Integration; Engo 623-Course Notes; Department of Geomatics Engineering, University of Calgary: Calgary, AB, Canada, 2003; pp. 170–182.
45. Safavian, S.R.; Landgrebe, D. A survey of decision tree classifier methodology. In Proceedings of the International Conference on Machine Learning, Amsterdam, The Netherlands, 7–12 August 1990.

46. Chawla, N.V. C4.5 and imbalanced data sets: Investigating the effect of sampling method, probabilistic estimate, and decision tree structure. In Proceedings of the International Conference on Machine Learning, Washington, DC, USA, 21–24 August 2003; Volume 3.
47. Larose, D.T. k-Nearest Neighbor Algorithm. In Discovering Knowledge in Data: An Introduction to Data Mining; Wiley-Interscience: Hoboken, NJ, USA, 2005; pp. 90–106.
48. Chang, C.-C.; Lin, C.-J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27. [CrossRef]
49. Hsu, C.-W.; Chang, C.-C.; Lin, C.-J. A Practical Guide to Support Vector Classification; Department of Computer Science, National Taiwan University: Taipei, Taiwan, 2003.
50. Suykens, J.A.K.; Vandewalle, J. Recurrent least squares support vector machines. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 2000, 47, 1109–1114. [CrossRef]

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
