EEG/ERP Adaptive Noise Canceller Design with Controlled Search Space (CSS) Approach in Cuckoo and Other Optimization Algorithms

M.K. Ahirwal, Anil Kumar, and G.K. Singh

Abstract—This paper explores the migration of adaptive filtering to swarm intelligence/evolutionary techniques in the field of electroencephalogram/event-related potential (EEG/ERP) noise cancellation and extraction. A new approach, the controlled search space, is proposed to stabilize the randomness of swarm intelligence techniques, especially for EEG signals. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers with traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-squares algorithms are also implemented to compare the results. ERP signals such as the simulated visual evoked potential, real visual evoked potential, and real sensorimotor evoked potential are used because of their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 sec and 1.73E-01, respectively. The traditional algorithms take negligible time but are unable to offer good shape preservation of the ERP, observed as an average computational time and shape measure difference of 1.41E-02 sec and 2.60E+00, respectively.

Index Terms—Adaptive noise canceler, EEG/ERP, COA, PSO, ABC, evolutionary techniques

1 INTRODUCTION

To handle nonstationary and nonlinear signals, adaptive filters are the best and simplest structures, based on the very simple idea of adaptation and filtering. Adaptation of the filter weights or coefficients is the backbone of an adaptive filter; empowered with a particular algorithm, it is able to track changes in the signal, and the updated or adapted weights are then used for filtering. Adaptive filters were originally devised around gradient-based algorithms such as LMS, and after they achieved popularity in real-time signal processing, several variants were developed for specific applications. NLMS and RLS are the most widely used algorithms in adaptive signal processing for designing adaptive filters [1], [3]. Bioelectric potentials (biomedical signals) are the best means to analyze or observe the various phenomena and changes that occur in human or animal bodies. The biomedical signal processing research field covers all the steps involved, from recording the bioelectric potentials to obtaining the final results or conclusions that reveal some new information or finding. EMG, EEG, and ECG are the most significant biosignals. Signal acquisition, filtering or noise reduction,

M.K. Ahirwal and A. Kumar are with the Pandit Dwarka Prasad Mishra Indian Institute of Information Technology, Design & Manufacturing, Jabalpur, Dumna Airport Road, Khamaria, Jabalpur, Madhya Pradesh 482005, India. E-mail: {ahirwalmitul, anilkdee}@gmail.com.
G.K. Singh is with the Indian Institute of Technology Roorkee, Roorkee, Uttarakhand 247667, India. E-mail: [email protected].

Manuscript received 24 May 2013; revised 7 Aug. 2013; accepted 9 Sept. 2013; published online 20 Sept. 2013. Digital Object Identifier no. 10.1109/TCBB.2013.119.

feature extraction (any specific signal processing technique), and classification (in the case of large data sets) are the basic and necessary steps in the analysis of biomedical signals [4], [5]. The brain is the most powerful and complicated organ in the body, and its complete working is still not fully understood. For analyzing the brain, recording EEG signals is the simplest and cheapest means compared with other well-developed techniques such as computed tomography or functional magnetic resonance imaging. EEG signals are nonlinear as well as nonstationary by nature from their origin [4]. The combined neural information transferred from one group of neurons to another produces electric potentials, which are recorded over the scalp by the noninvasive EEG recording technique; otherwise, invasive (surgical) techniques are used to monitor a particular area or to record very low potentials [4], [6]. Nowadays, EEG signal processing covers and serves areas of major significance in clinical practice (analysis and diagnosis of various brain diseases) as well as in technology, such as devices operated directly through brain responses (BCI, robotic arm or limb control) [7]. The ERP is a very important class of EEG signal; it is the response of the brain to an event or task performed, originating at a particular area for a very short time. ERP characteristics vary from task to task and also depend upon the subject (the person whose EEG is recorded) [7], [8]. Detection of these ERPs is the basic idea of BCI. The P300 ERP and steady-state visually evoked potentials are the two most widely used signals in BCI research. P300 potentials have been widely investigated in the design of BCI spellers. Recent studies show hybrid implementation of BCI by

incorporating the steady-state visually evoked potential into the conventional P300 paradigm [9], [10], but this is not an easy process. EEG/ERP is contaminated with other biosignals because of their overlapping frequency spectra. The ERP is contaminated not only by other biosignals, but also by the ongoing EEG generated near the source of the ERP [7], [8], [11]. A detailed discussion is provided in a later section. Various applications and methods have been developed and successfully implemented in the EEG research field; some of them are discussed below. Application-based EEG research also motivates the detection of clean ERP signals through advanced signal processing techniques. The spatiotemporal filtering method for single-trial estimation of ERP subcomponents proposed in [12] is able to estimate temporally correlated ERP subcomponents such as P3a and P3b. Particle filtering has been proposed in [13] to track the variation of the P3a and P3b parameters of the P300 ERP within consecutive time-locked trials of the stimulated EEGs. A point-of-interest-based image retrieval system has been implemented using rapid image triage, which is the latest example of the utilization of ERPs generated in response to different images; the performance of the rapid image triage system is also related to the SNR of the recorded ERP, which leads to the need for a denoising method to improve the ERP [14]. In [15], optimal filtering is designed with independent component analysis, which improves the overall results as well as helps in dimension reduction of the EEG data. Source localization with a spatial notch filter is proposed in [16], which is able to localize the source accurately in a high-noise environment. The discrete wavelet transform has been applied [17], [18] to improve EEG signals by decomposing the noisy EEG signal into several levels to detect the noise components and remove them; such algorithms are generally called wavelet denoising. The problem with conventional static filtering arises from the nonlinear nature of EEG and becomes more complicated because of the overlapping spectra of the noise signals (EMG, ECG, etc.), so static filters fail to give the accurate separation needed for the subsequent steps. The application of AF changes the whole scenario of EEG signal processing, and extraction or detection of the ERP is easily done with the ANC model. Several techniques have been developed for EEG signal analysis using adaptive algorithms [4], [19], [20], [21], [22], [23]. In these algorithms, various error estimation methods have been exploited in adaptive filters to adjust the filter weights according to the EEG signal and noise properties [4], [21]. The most efficient gradient-based algorithms used for adaptive filtering are LMS, RLS, and their different variants. Presently, various optimization algorithms are also employed to optimize the performance of adaptive filters. Optimization techniques are very useful for finding the best solution set for different problems. These techniques can be classified into two major classes: gradient-based techniques and population/swarm-based techniques, also known as ETs. PSO, ABC, ACO, and COA are some of the ETs. Adaptive filters have also been implemented with swarm-based/evolutionary techniques to improve their performance, and the application of these optimized adaptive filters improves the results in various fields of engineering. The application of evolutionary techniques has increased in

the design of adaptive filters and in solving specific problems [24], [25], [26]. This rise in popularity has produced several variants of ABC and PSO tailored to particular applications and requirements. However, only a few references have been reported in the literature in which PSO and ABC have been employed for adaptive noise cancellation, system identification, and channel equalization [27], [28], [29]. An adaptive noise canceler based on the COA algorithm for EEG/ERP filtering has not been reported so far. Therefore, in this paper, such an ANC is proposed, implemented, and compared with PSO, ABC (along with their several variants), and gradient-based algorithms. The concept behind an ET is the generation of random solutions within a search space, evaluation of their fitness, selection of the best, and updating of the others using a specific methodology. These steps are repeated until a fixed number of iterations or an acceptable error value is reached. A new method to control the search space of ETs, called CSS, is also proposed to stabilize the random processing of ETs, especially in the case of EEG processing. This paper is divided into 11 sections. Section 1 gives an overview of adaptive filters, EEG/ERP, and the various filtering, noise, and enhancement techniques applied to them. Section 2 covers the basic principle of adaptive filtering, adaptive noise cancellation, and its application in the field of EEG, and Section 3 contains the theory and implementation details of ETs. The methodology, problem definition, and solution approach are presented in Section 4. Section 5 contains the proposed theory of search space control. Details of the data sets used for analysis and comparison are presented in Section 6. Simulation and final results with their subconclusions are given in Sections 7 and 8, respectively. Justification of the proposed method, the conclusion, and a discussion with some directions for future work as an extension of this study are included in Sections 9, 10, and 11, respectively.

2 ADAPTIVE FILTER/ADAPTIVE NOISE CANCELER

Adaptive filtering, clearly understood as the task of adaptation, refers to the process through which the system parameters change from time index n to time index n + 1. The type and number of system parameters depend on the computational structure of the system and on the algorithm used for the adaptation process. The general representation of a system having a finite number of parameters shows how the output signal y(n) is calculated from the input signal x(n) with the help of the desired signal d(n). The error between the desired and output signals, calculated as e(n) = d(n) - y(n) (the output is subtracted from the desired signal), drives the adaptive filter depicted in Fig. 1. The parameter or coefficient vector W(n) is defined as

W(n) = [w_0(n), w_1(n), ..., w_{L-1}(n)]^T,   (1)

where {w_i(n)}, 0 <= i <= L - 1, are the L system parameters at time n. The general input-output relationship of the adaptive filter is represented as

y(n) = f(W(n); y(n-1), y(n-2), ..., y(n-N); x(n), x(n-1), x(n-2), ..., x(n-M+1)),   (2)

Fig. 1. The basic model of adaptive filter.

where f(.) is any linear or nonlinear function, and M and N are positive integers. Implicit in this definition is the fact that the filter is causal, so that future values of x(n) are not needed to compute y(n); noncausal filters can be handled in practice by suitably buffering or storing the input signal samples [30]. Equation (2) is the simplest description of an adaptive filter structure. The main motive is to find the best relationship between the input and desired response signals for the problem at hand. Typically, a finite-impulse-response (FIR) filter is taken for this relationship, and the system parameters W(n) correspond to the impulse response values of the filter at time n. The output signal y(n) is calculated as

y(n) = Σ_{i=0}^{L-1} w_i(n) x(n - i),   (3)

y(n) = W^T(n) X(n),   (4)

where X(n) = [x(n), x(n-1), ..., x(n-L+1)]^T represents the input signal vector and T denotes the vector transpose. Most often, the adaptive filter operates in real time, and one sample period is available for all the system calculations; y(n) can be computed in finite time through the structure illustrated above with simple arithmetic operations. In the above discussion, a linear structure is considered. Nonlinear systems do not obey the principle of superposition for fixed parameters, and they are needed when the relationship between d(n) and x(n) is not of a linear nature [30]. There are two possible ways to realize such a system: one is the use of the Volterra and bilinear filter classes, also known as nonlinear filters, to compute y(n); the other is to use nonlinear models or algorithms such as neural networks, genetic algorithms, fuzzy logic, and others, known as nonclassical approaches to adaptive filter design [31]. In this paper, the nonclassical approach is used, implemented with the recently developed Cuckoo Optimization Algorithm; the PSO and ABC algorithms are also implemented for comparative analysis.

ANC for EEG. EEG channel measurements have physical constraints that often limit the ability to measure the signals cleanly. These physical constraints are mainly line disturbances, instrumental noise, or other biological signals of the same nature. The nonlinear nature of EEG calls for a nonlinear system for its processing. Extraneous noise in the EEG introduces unacceptable errors in the measurement. However, if a correlated reference version of the extraneous noise can be observed at some other physical location near the signal of interest, an adaptive filter can be used to determine the relationship between the noise reference signal x(n) and the desired measured signal d(n) in its noisy state. Iteratively, subtracting the output component y(n) from d(n) leaves e(n) as the signal of interest.
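To make the ANC structure above concrete, the following is a minimal sketch (not the authors' code) of an FIR adaptive noise canceller driven by the LMS weight update mentioned in Section 1; the signal names mirror the notation of Fig. 1, while the step size, filter length, and toy signals are illustrative assumptions.

```python
import numpy as np

def lms_anc(d, x, L=8, mu=0.05):
    """FIR adaptive noise canceller: d(n) = signal + noise, x(n) = noise reference.
    Returns e(n), the running estimate of the clean signal of interest."""
    w = np.zeros(L)                      # filter weights W(n), eq. (1)
    e = np.zeros(len(d))
    for n in range(L, len(d)):
        X = x[n - L + 1:n + 1][::-1]     # input vector X(n) = [x(n), ..., x(n-L+1)]
        y = np.dot(w, X)                 # filter output, eq. (4)
        e[n] = d[n] - y                  # error = cleaned sample, e(n) = d(n) - y(n)
        w += 2 * mu * e[n] * X           # LMS gradient update (illustrative step size)
    return e

# Toy usage with synthetic data (purely illustrative):
rng = np.random.default_rng(0)
noise = rng.normal(size=1000)
s = np.sin(2 * np.pi * 5 * np.arange(1000) / 250)   # stand-in "ERP"
d = s + 0.8 * noise                                  # corrupted measurement d(n)
x = np.convolve(noise, [0.7, 0.2], mode="same")      # correlated noise reference x(n)
s_hat = lms_anc(d, x)
```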

3 ALGORITHMS

This section is divided into two parts: the first introduces the gradient-based approaches, while the second introduces the ETs. COA, with its two implementation strategies, and a brief introduction to PSO and ABC with their variants are discussed.

3.1 Gradient-Based Algorithms
Gradient-based algorithms are derived from a differentiable cost function and are easily formulated for updating the weights of an adaptive filter. The LMS, NLMS, and RLS algorithm formulations and their implementation as adaptive filters, adaptive noise cancelers, and system identifiers are presented in [1], [2], [3], [21].

3.2 Swarm Intelligence/Evolutionary Techniques
All ETs share a similar working principle. All use a population of randomly generated solutions. The fitness of these solutions is calculated through a cost function, depending on whether the problem is one of minimization or maximization. The best solution is selected from every generation of solutions. The main difference lies in the updating process through which a new generation is created, which differs in each algorithm. PSO is the most famous swarm intelligence method for solving optimization problems; it was developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling [32], [33], [34]. ABC was proposed by Karaboga in 2005 for optimizing the solutions of different problems; the ABC algorithm follows an optimization principle inspired by the foraging behavior of a bee colony [35], [36], [37]. In 2009, Yang and Deb introduced the effective cuckoo search algorithm [38], also known as COA. The cuckoo search algorithm is based on the strategy cuckoo birds use when laying their eggs in a suitable host bird's nest. The COA model is defined with Lévy flights (random walks) or with the ELR method [39]. The theory and implementation of these algorithms and their variants are reviewed in [32], [33], [34], [35], [36], [37], [38], [39], [40], [41]. Pseudocodes of these algorithms in the context of the adaptive noise canceler are provided below: Algorithms 1, 2, and 3 correspond to the main steps of the PSO, ABC, and COA algorithms, respectively. Variants of the above algorithms are listed in Table 1 with a short description and their equations. The basic update equations of PSO, ABC, and COA are given in (5.a, b), (6.a, b), and (7):

(a) v_i = I·v_i + U(0, φ1) ⊗ (P_i - X_i) + U(0, φ2) ⊗ (P_g - X_i),   (b) X_i = X_i + v_i,   (5)

(a) V_i = X_i + φ_i ⊗ (X_i - X_k),   (b) Prob_i = f_i / Σ_{i=1}^{n} f_i,   (6)

X_i^(t+1) = X_i^(t) + α ⊗ Lévy,   (7)

where X_i and V_i represent the population of n solutions with D dimensions, v_i is the velocity of the particles in PSO, I represents the inertia, and U(0, φ_i) is a vector of random numbers uniformly distributed in [0, φ_i]. P_i and P_g are the particle best and global best positions. Prob_i is the probability of the ith food source being selected, calculated from the fitness values f_i. α is the step size, and ⊗ denotes entry-wise multiplication. Detailed descriptions of these equations are given in [32], [35], [39].
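As an illustration of the population update in (5), the following is a minimal sketch, not taken from the paper, of one PSO iteration over candidate filter-weight vectors; the inertia, acceleration, and velocity-limit values are assumptions chosen only for the example.

```python
import numpy as np

def pso_step(X, V, P, Pg, fitness, inertia=0.2, phi1=2.0, phi2=2.0, vmax=0.5):
    """One PSO iteration per eq. (5): X are positions (candidate weight vectors),
    V velocities, P personal bests, Pg the global best (minimization problem)."""
    n, D = X.shape
    r1 = np.random.uniform(0, phi1, size=(n, D))
    r2 = np.random.uniform(0, phi2, size=(n, D))
    V = inertia * V + r1 * (P - X) + r2 * (Pg - X)   # eq. (5.a)
    V = np.clip(V, -vmax, vmax)                      # velocity limit
    X = X + V                                        # eq. (5.b)
    improved = fitness(X) < fitness(P)               # keep better personal bests
    P = np.where(improved[:, None], X, P)
    Pg = P[np.argmin(fitness(P))]                    # new global best
    return X, V, P, Pg
```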

TABLE 1 Variants of Algorithms with Their Description

Fig. 2. Illustration of ERP recording problem and solution through ANC.


4 METHODOLOGY, PROBLEM DEFINITION, AND IMPLEMENTATION

The adaptive noise canceler for EEG filtering is conceptualized as follows. Let s(n) be the EEG signal (ERP), which is corrupted by noise q(n) of any specific type, and let q~(n) be a correlated version of that noise. It is assumed that the corrupted signal d(n) is composed of the desired signal s(n) and the noise q(n), which is additive and uncorrelated with s(n). Basically, the ANC works by estimating the desired output through minimization of the error between the desired and actual outputs. The inputs of a typical ANC are the noisy reference and the desired (corrupted) signal, and its output s~(n) is the estimate of the desired signal. This filtering approach is illustrated in Fig. 2. Adaptive filters with different PSO, ABC, and COA variants are constructed to filter out the ERP overlapped with noise; a first-order FIR filter is implemented. WN and the ongoing EEG signal are added to the ERP as background noise at different noise levels.

First-order FIR. Linearity depends upon the order of the filter: a higher order filter provides more linearity. Since EEG signals are nonlinear, a low-order filter reduces the linearity of the filter. It is also a fact that nonlinearity can be increased by using quadratic or cubic terms in the filter design (nonlinear Volterra filters). Nonlinear Volterra filters have also been employed for EEG noise reduction [29], [42], but they become complex as their order increases. For the least complexity, only the single linear term is used here.

White Gaussian noise. WN is used as background noise because WGN consists of a wide range of frequencies with equally distributed power and also acts as the worst case of noise contamination. If the ANC performs well on white noise reduction, it can be expected to perform well with any type of noise.

Ongoing EEG signal as noise. Electrical potentials from source brain regions spread and are recorded across the EEG sensors through a process called volume conduction. Volume conduction is the property of EEG that affects the potential received at the sensors (EEG electrodes). EEG is generally recorded as multichannel recordings, rather than recording activity at only one brain site [43]. Each electrode captures a superposition, linear or nonlinear depending on the properties of the generated potentials, tissue density, and distance, of signals from different sources inside the brain, each with its own potential. In [44], the low spatial resolution of EEG is treated by considering one electrode at a fixed location (where the ERP is anticipated), while the other is assumed to be moved to different locations and their
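The signal model just described can be summarized in a short sketch; the mixing and the filter used to derive the correlated reference are illustrative assumptions, not values from the paper.

```python
import numpy as np

def make_anc_inputs(s, q, ref_taps=(0.9, 0.1)):
    """Compose the ANC inputs: d(n) = s(n) + q(n) and a correlated
    noise reference x(n) = q~(n), obtained here by lightly filtering q(n)."""
    d = s + q                                     # corrupted measurement d(n)
    x = np.convolve(q, ref_taps, mode="same")     # correlated version q~(n)
    return d, x

# With any weight vector w proposed by an ET, the ERP estimate is
# s~(n) = e(n) = d(n) - y(n), where y(n) = w applied to x(n) as in eq. (4).
```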

instantaneous effect (ongoing or background EEG) is considered as disturbance or noise at a given SNR. Volume conduction in EEG measurement is also related to the electrode spread (spatial positions of the electrodes) [44]. From the signal processing point of view, this problem is known as the self-interference problem. In this paper, three noise levels are simulated as interference at different scalp positions. EEG signals generated near the locations from which the ERP is expected are the main reason for the low SNR of the recorded ERP; this is also termed EEG self-interference [7], [11], [45]. To illustrate this, the problem in ERP recording and its solution with adaptive filtering implemented through an evolutionary technique are shown in Fig. 2. The main problem addressed by this paper is this EEG self-contamination problem. The ongoing EEG modulates the ERP, and because of the similar properties of the EEG and the embedded ERP, it is very difficult to retrieve a pure ERP in a single trial. Many trials are usually averaged to estimate the ERP, which is a time-consuming method and also alters the subject's attention and the original response of the brain. This scenario is depicted in Fig. 2. Implementation starts with the design of the objective function (cost function or fitness function), which represents an estimate of the MSE over the input samples used in that iteration [29]. At the nth iteration, this estimate of the MSE for the ith solution (particle, food source, or egg) is defined as

J_i(n) = (1/N) Σ_{j=1}^{N} [e_i^j(n)]^2,   (8)

where N is the number of samples of the input data and e_i^j(n) is the jth error for the ith solution.
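A direct reading of (8) as code might look like the sketch below; it is an assumed implementation in which each candidate weight vector is scored by the mean-squared error of the ANC over the current block of samples.

```python
import numpy as np

def fitness_mse(weights, d_block, x_block):
    """Eq. (8): J_i = (1/N) * sum_j e_j^2 for one candidate weight vector."""
    L = len(weights)
    errors = []
    for n in range(L, len(d_block)):
        X = x_block[n - L + 1:n + 1][::-1]   # reference vector X(n)
        e = d_block[n] - np.dot(weights, X)  # e(n) = d(n) - y(n)
        errors.append(e)
    return np.mean(np.square(errors))

def population_fitness(pop, d_block, x_block):
    """Score every solution (particle, food source, or egg) in the population."""
    return np.array([fitness_mse(w, d_block, x_block) for w in pop])
```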

5 PROPOSED THEORY OF SEARCH SPACE CONTROL

It is a well-known fact that all ETs generate random solutions within a fixed range defined by the maximum and minimum values of the solution set (the predefined search space). Sometimes, because of the updating process, a next-generation solution goes outside the defined range and is then reset to lie within it. The range is simply a variable whose positive value is the maximum limit and whose negative value is the minimum limit: if the range is defined by R, then ±R is the available search space in which solutions are generated. A new approach to defining the range is proposed, in which the range variable R is controlled by a constant C. A clear visualization of the difference between the conventional and the proposed approach is given in Fig. 3. It is noticed (see Fig. 3a) that the growth rate of the conventional search space is large: as R increases, −R and +R cover more and more space between them in which new solutions are generated (making it difficult to find the optimum). With R ± C, the growth rate of the search space is constant (see Fig. 3b): as R increases, R − C and R + C cover a fixed search area, so new solutions stay bounded by the constant C (a smaller search area can be better for finding the optimum). This proposed approach proves very effective when tested on adaptive filters designed with ETs for EEG/ERP filtering.

Simulation results and advantages are presented in later sections. This modification of the search space in ETs can be considered a new way to stabilize their performance when applied to problems in a nonlinear environment.
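The difference between the conventional and controlled search space can be sketched as follows; this is an interpretation of Fig. 3 rather than the authors' code, and C = 3 is taken from the value used later in the simulations.

```python
import numpy as np

def conventional_bounds(R):
    """Conventional search space: solutions drawn anywhere in [-R, +R]."""
    return -R, +R

def controlled_bounds(R, C=3):
    """Proposed CSS: solutions confined to the band [R - C, R + C] around R."""
    return R - C, R + C

def init_population(n, dim, bounds, rng=np.random.default_rng()):
    lo, hi = bounds
    return rng.uniform(lo, hi, size=(n, dim))

# As R grows, the conventional area keeps widening while the CSS band stays
# 2*C wide, which is what produces the sharp SNR/correlation peaks in Figs. 6-7.
pop_conv = init_population(20, 1, conventional_bounds(10))
pop_css = init_population(20, 1, controlled_bounds(10))
```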

6 DATA SETS

There are three data sets used in this paper: one is artificially generated (simulated) and the remaining two are

Fig. 3. Illustration of the conventional and proposed controlled search space method.

the real data sets. Details of each data set are given below; Fig. 4 shows the pure and noisy forms of the simulated and real data.

6.1 Simulated Visual Evoked Potential
These data are generated through Matlab functions designed for generating simulated EEG/ERP data according to two theories of ERP: the classical theory and the phase-resetting theory. According to the classical theory, it is assumed that an ERP-like waveform is evoked by each event. The classical theory is selected here, and a template ERP is generated that acts as the pure ERP signal [46]. The frequency and location of the peaks are taken to be the same as reported in [47]. A normal pattern-reversal visual evoked potential constructed with the Matlab function is illustrated in Fig. 4a as the simulated VEP.

6.2 Real Visual Evoked Potential
These data are recorded with the subject instructed to ignore the nontarget stimuli and to count the number of appearances of the target ones (see [48] for more details of the experimental setup). Scalp recordings were obtained from the left occipital (O1) electrode (near the location of the primary visual sensory area) with a linked-earlobes reference. The sampling rate was 250 Hz, with each trial containing 512 data points (256 pre- and 256 poststimulation). The pure VEP is estimated as the average of 30 trials in this data set and is illustrated in Fig. 4b as the real VEP.

6.3 Real Sensorimotor Evoked Potential
This database is recorded with different motor/imagery tasks performed by the subjects. The 64-channel EEG was recorded using the BCI2000 system at a 160-Hz sampling frequency, and data of 10-sec length (1,600 samples) were used. Each subject performs 14 experimental runs with four tasks. In this case, data of nine subjects are taken while performing task T1 (corresponding to the left fist, onset of real or imagined motion) in trials 3, 4, 7, 8, 11, and 12. Only channel C4

Fig. 4. Illustration of pure ERP data sets and noisy ERP signal.

Fig. 5. Illustrations of optimal parameter values for the algorithms, (a) step size for LMS, (b) forgetting factor for RLS, (c) replacement fraction for COA LF, and (d) radius coefficient for COA ELR.

is selected to obtain the ERP related to hand movement, also known as the sensorimotor evoked potential. By averaging a total of 54 trials, the pattern obtained is taken as the signature of the left-fist movement, referred to as the template ERP. The time at which the task is performed is known, so samples after that time were zoomed to visualize the pattern. The R-SEP and the noisy SEP with white and EEG noise are illustrated in Figs. 4c, 4d, and 4e, respectively. This data set is taken from the PhysioNet web database [49], [50]. Two types of noise are used to contaminate the pure ERP signal: WN and the ongoing EEG signal described in Section 4. Noise is added at three levels, 10, 15, and 20 dB, in all the data sets to test the performance of the adaptive filters at different noise levels. The unit of amplitude is microvolts.
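The contamination step described above (adding white or EEG noise at a target level in dB) can be sketched as follows; scaling the noise to hit an exact SNR is an assumption about how the levels were set, included only to make the procedure concrete.

```python
import numpy as np

def add_noise_at_snr(pure_erp, noise, snr_db):
    """Scale `noise` so that pure_erp + noise has the requested SNR in dB."""
    p_signal = np.mean(pure_erp ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10.0)))
    return pure_erp + scale * noise

# Example: the three contamination levels used in the paper (10, 15, 20 dB),
# with white Gaussian noise as the worst-case background.
rng = np.random.default_rng(1)
wn = rng.normal(size=512)
template_erp = np.ones(512)                      # placeholder for a real template ERP
noisy_10db = add_noise_at_snr(template_erp, wn, 10)
```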

7 SIMULATION RESULTS

Parameter settings for the algorithms: several presimulations are conducted to find the optimal parameter values for each algorithm, where optimal values are those that offer a correlation of more than 0.80. For the LMS algorithm, the step size (μ) is simulated in the range 0.005-1 with an interval of 0.005 (200 intervals are evaluated). Fig. 5a shows the set of optimal values for the LMS step size, which lies between 0.025 and 0.09; the value 0.05 has been chosen for later evaluations. Since NLMS was derived and intended for a larger and variable step size, the value 0.1 is selected [2]. The RLS algorithm is also tested for different values of the forgetting factor (λ) between 0.2 and 2 with an interval of 0.2 (10 intervals are

evaluated). Fig. 5b shows that the optimal value of the forgetting factor lies between 0.8 and 1.6, and the value 1 is selected for later evaluations. In the case of the ETs, the average of 10 evaluations is analyzed for parameter selection. The COA-LF parameter Pa is the probability, or replacement fraction, by which old nests are replaced by new nests. Fifty intervals (in steps of 0.02) from 0.02 to 1 are evaluated; from Fig. 5c, it is observed that Pa is optimal between 0.2 and 0.4, and 0.25 is selected. The COA-ELR parameters are fixed to one egg laid by each cuckoo with a single cluster, for similarity with the other ETs. The radius coefficient is set to 0.5 on the basis of the optimal values observed in Fig. 5d, which shows a 20-point simulation over the range 0.05-1; the values from 0.35 to 0.75 form the set of optimal radius coefficients. In the basic ABC algorithm, only the limit is set, to 1, to create scout bees. The modified ABC variants, ABC-MR and ABC-SF, are evaluated with MR and SF at 0.7, 0.5, and 0.3. For all types of PSO, the values of φ1, φ2, and the velocity limit are set to 2, 2, and ±0.5, respectively. Different inertia weights are set: the constant inertia weight is set to 0.2 for CWI-PSO, while for LDI-PSO, Imax and Imin are set to 0.5 and 0.01, respectively. The parameters of CFI-PSO are selected as φ = 4.1 and k = 2, and the order of NLI-PSO is chosen equal to 0.8 with the same inertia as LDI-PSO. Finally, D-PSO does not need initial or final inertia values to be defined. For the above PSO parameters, previous work of the authors is taken as a reference, in which a detailed description of the parameter setting and analysis is presented [29].

Simulations are carried out to analyze the behavior of the evolutionary techniques under the normal range declaration (±R) and the proposed range declaration (R ± C). In the first case, the range is incremented from −R to +R, i.e., new possible solutions are generated within the range ±R, with the population size growing at the rate P_n = P_{n-1} + 2 in each simulation, where P_n is the size of the population in the nth simulation; P starts at 2 and reaches 40 in the last simulation. SNR and correlation plots are obtained from the simulation. The running time is also analyzed to make sure that the proposed modification of the range declaration does not affect the time complexity. PSO, ABC, and COA are all simulated in the same manner.

Testing of the algorithms with the ANC is done by fidelity parameter analysis. Several parameters are computed: the SNR in dB, the CORR between the resultant and template ERP, the MSE, and the MD. The SNR in dB is computed as a noise reduction measure by

SNR_dB = 10 log10 [ Σ_{i=0}^{N-1} (EEG_noisyERP − EEG_pureERP)^2 / Σ_{i=0}^{N-1} (EEG_noisyERP − EEG_filteredERP)^2 ].   (9)

If the SNR at the output reaches zero, then perfect reconstruction is achieved [51]. CORR, MSE, and MD are computed by (10), (11), and (12), respectively:

CORR = [ N ΣXY − (ΣX)(ΣY) ] / sqrt{ [N ΣX^2 − (ΣX)^2] [N ΣY^2 − (ΣY)^2] },   (10)

MSE = (1/N) Σ_{i=0}^{N-1} (EEG_pureERP − EEG_filteredERP)^2,   (11)

MD = (1/N) Σ_{i=0}^{N-1} (EEG_pureERP − EEG_filteredERP).   (12)
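For reference, the four fidelity measures in (9)-(12) can be computed as in the short sketch below; it assumes the three ERP versions are aligned numpy arrays of equal length and is not the authors' evaluation code.

```python
import numpy as np

def fidelity(pure, noisy, filtered):
    """SNR (dB), correlation, MSE, and mean difference between pure and filtered ERP."""
    snr_db = 10 * np.log10(np.sum((noisy - pure) ** 2) /
                           np.sum((noisy - filtered) ** 2))     # eq. (9)
    corr = np.corrcoef(filtered, pure)[0, 1]                    # eq. (10)
    mse = np.mean((pure - filtered) ** 2)                       # eq. (11)
    md = np.mean(pure - filtered)                               # eq. (12)
    return snr_db, corr, mse, md
```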

The simulation results are arranged by algorithm to compare the traditional and proposed approaches. Each figure corresponds to one algorithm, in which the left-column plots are the results of the traditional range definition and the right-column plots are the results of the proposed range definition. SNR, CORR, and running time plots are shown row-wise in each figure. Figs. 6 and 7 show the behavior of the COA-LF, basic ABC, and PSO-CWI algorithms on data set 3 (R-VEP) contaminated with 10-dB EEG noise. In the first case, shown in Figs. 6(A.1, 3, 5), 6(B.1, 3, 5), and 7(A.1, 3, 5), the range of the cuckoos, bees, and particles is incremented from −R to +R. An increase in the running time is observed with the increase in population; the running time plot shows that the process becomes time consuming as the colony size increases, with a linear increment in the time plot. Figs. 6(A.2, 4, 6), 6(B.2, 4, 6), and 7(A.2, 4, 6) show the second case, the proposed R ± C search space control approach, in which the range of the cuckoos, bees, and particles is incremented from R − C to R + C and new possible solutions are generated within the range R ± C, where C is a small integer constant (C = 3). The advantage of the proposed controlled search approach is that the peaks in the SNR and correlation plots clearly indicate the corresponding best range (solution set). The reason for taking C = 3 is that the average difference between the range variables selected from the best correlation and the best SNR comes to 3. The population is handled in the same manner as in the previous case. By using this type of increment, or growth, in the range of the particles, the accurate range of solutions for a particular problem can easily be found. The benefit of this type of increment is that it offers the desired SNR and correlation only near the perfect range, and the rest of the plot drops as the range is crossed. The running time also remains the same in both cases, as evident from the plots, and the effect of the population size is balanced for further simulation; a population size of 20 is selected for all ETs. For further analysis, the results of the second approach are used to define the range of solutions. Figs. 8A, 8B, and 8C show how the intersection point of the best correlation and SNR is used for selecting the particle range at different noise levels (Fig. 8A for COA-LF, Fig. 8B for basic ABC, and Fig. 8C for PSO-CWI). Correlation and SNR are both plotted in dB, from which the best ranges are identified corresponding to the 10-, 15-, and 20-dB noise levels, respectively. All the simulations are performed on a system with an Intel(R) Core(TM) 2 Quad CPU at 2.83 GHz and 3.25-GB RAM.

8 FINAL RESULTS (AVERAGE OF THREE TRIALS)

Once all the simulations to find the best range of solution sets and the population size are complete, each algorithm is tested for three-trial ERP detection. The performance over

Fig. 6. Illustration of the behavior of the Cuckoo and ABC algorithms with respect to the conventional (−R to +R) and proposed (R − C to R + C) approaches. Cuckoo range versus the number of cuckoos is simulated to analyze the conventional approach as (A.1) SNR, (A.3) correlation, (A.5) running time, and the proposed approach as (A.2) SNR, (A.4) correlation, (A.6) running time. Similarly, bee range versus number of bees is simulated in terms of (B.1) SNR, (B.3) correlation, (B.5) running time for the conventional approach and (B.2) SNR, (B.4) correlation, (B.6) running time for the proposed approach.

each noise level (10, 15, and 20 dB) and with the two types of noise is observed and compared in terms of SNR and correlation. For deeper analysis, various averaging combinations are observed; in this section, only averaged results in the form of bar charts are included. The SNR plot of algorithms versus noise levels is illustrated in Fig. 9a, from which it is observed that LMS and NLMS are farthest from the desired SNR (zero for perfect reconstruction or full noise reduction). RLS has the potential to reduce noise, but its performance degrades as the noise level increases. On the other hand, the evolutionary techniques (PSO, ABC, and COA) are not much affected by the increase in noise level and obtain SNR values very close to the desired value. Correlation is plotted against algorithms and noise level in Fig. 9b. The same observation as in the SNR plot is obtained, reflecting that the maximum correlation is achieved by the evolutionary techniques. RLS also has better correlation values than LMS and NLMS, but it is not able to cope with increased noise (20 dB). The SNR plot of algorithms versus data sets is illustrated in Fig. 9c. The performance of the ETs on the S-VEP data is not very impressive, but they prove their worth on the R-VEP and R-SEP data sets. The reason for the average performance of the ETs on the simulated VEP is that the working principle of these techniques is random in nature (also considered nonlinear), whereas the S-VEP data have a smoothness that is not really present in real bioelectric signals. The correlation plot between algorithms and data sets illustrated in Fig. 9d proves the same fact: on the S-VEP data, the ETs perform in an average manner, while their effectiveness is proved on the R-VEP and R-SEP data sets through the maximum correlation values achieved.

The SNR plot of algorithms versus noise type is illustrated in Fig. 9e. A noticeable difference is that the ETs and RLS respond more sensitively in the case of EEG noise, which is more problematic and difficult to analyze, whereas LMS and NLMS have SNR values far from the desired value. The correlation plots of algorithms versus noise type are illustrated in Fig. 9f, which also clarify the effectiveness of the ETs for EEG noise removal. There is not a huge difference

Fig. 7. Illustration of the behavior of the PSO algorithm with respect to the conventional (−R to +R) and proposed (R − C to R + C) approaches. Particle range versus the number of particles is simulated to analyze the conventional approach as (A.1) SNR, (A.3) correlation, (A.5) running time, and the proposed approach as (A.2) SNR, (A.4) correlation, (A.6) running time.

Fig. 8. Simulation results to find out the accurate range of particles with the help of Correlation versus SNR plot, (A) for COA, (B) for ABC, and (C) for PSO.

between RLS and the ETs on white noise removal, but a difference can be noticed in the case of EEG noise. The reproducibility and predictability of the gradient-based algorithms and ETs for ANC design is measured in

terms of the variability in the performance of each algorithm when tested with different data sets. To measure this performance variability, the difference of the average shape measures (skewness and kurtosis values) over the real data sets (R-VEP and R-SEP) used in this study is considered. The obtained results show that the ETs are largely data independent, because their performance variability is negligible compared with the gradient-based algorithms, as illustrated in Fig. 10a. A combined plot of SNR and correlation, to observe the overall performance of the algorithms on all data sets with both noises at different noise levels, is given in Fig. 10b; the correlation results are converted to absolute dB values for comparability with the SNR. It is concluded that COA-ELR, PSO-CWI, PSO-LDI, basic ABC, and ABC-SF 0.7 are the best among the ETs, while RLS is the best among the gradient-based algorithms. Results in tabular form corresponding to data set 3 are appended in the last section (see Tables 2, 3, 4, and 5). Table 2 lists the fidelity parameters of the LMS-, NLMS-, and RLS-based AF, and the results of the PSO-, ABC-, and COA-based AF are listed in Tables 3, 4, and 5, respectively. After the analysis of the correlation and SNR obtained through adaptive filtering of all ERPs against the different levels of white and EEG noise, the running time is estimated to compare the time consumption of each algorithm. Information about the code execution time, the time of n evaluations, and the mean time of N trials is formulated as depicted in Table 6 [37]. As in the traditional method of ERP classification after the ERP is detected with good SNR, the shape measure plays an important role [52]. So, to check the quality of the ERP obtained after adaptive filtering, the skewness (s) and kurtosis (k) differences between the pure and filtered ERP are also analyzed through (13) and (14):

s = E(x − μ)^3 / σ^3,   (13)

k = E(x − μ)^4 / σ^4,   (14)
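A small sketch of the shape-measure comparison in (13) and (14) follows; computing the moments directly with numpy is an assumption about implementation detail, equivalent to using a statistics library.

```python
import numpy as np

def skewness(x):
    mu, sigma = np.mean(x), np.std(x)
    return np.mean((x - mu) ** 3) / sigma ** 3      # eq. (13)

def kurtosis(x):
    mu, sigma = np.mean(x), np.std(x)
    return np.mean((x - mu) ** 4) / sigma ** 4      # eq. (14)

def shape_difference(pure_erp, filtered_erp):
    """Difference of skewness and kurtosis between pure and filtered ERP."""
    return (abs(skewness(pure_erp) - skewness(filtered_erp)),
            abs(kurtosis(pure_erp) - kurtosis(filtered_erp)))
```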

Fig. 9. Illustrations of combined plots of SNR and CORR corresponding to algorithms with various averaging combinations, (a, b) for noise levels, (c, d) for data sets, (e, f) for noise type.

Fig. 10. Illustrations of (a) variability in performance, (b) overall performance, and (c) running time and shape measures comparison.


where μ is the mean and σ is the standard deviation of x. The running time and shape measure results are listed in Table 7, and the graphs are illustrated in Fig. 10c, from which it is observed that LMS, NLMS, and RLS take negligible time to run but are unable to preserve the shape of the ERP; this fact is also clarified above with the SNR and correlation plots. PSO, ABC, and COA all have a small shape measure difference at the cost of increased running time compared with the other algorithms. The computational complexity of the ETs, in the form of running time, can also be defined as the execution or searching complexity in finding solutions of a given quality. A primary, or direct, approach to running time observation has been employed in the form of execution time calculation (see Table 6), while the searching complexity can be considered a secondary approach, estimated after reaching the maximum number of generations, MNC. In the present study, MNC is equal to the signal length (number of samples). Hence, NS × MNC searches are carried out during a complete run, where NS represents the number of searches. DE, GA, and BFO are considered as additional approaches for comparison, because these algorithms have also been reported for AF or ANC design [25], [53], [54]. In these additional approaches, the population (pop) members are known as individuals, chromosomes, and bacteria in DE, GA, and BFO, respectively. NS is directly proportional to the number of particles, total bees (employed and onlooker), cuckoo eggs, individuals, chromosomes, or bacteria that make up the population of each ET considered individually; hence, NS × MNC is the estimate of the search complexity. NS is the

deciding factor that differentiates the computational effort and execution time required by a particular algorithm, because NS internally includes various steps that produce the differences in both performance and complexity among the algorithms. For example, PSO needs only 1D array operations (population update of particles and velocities), whereas ABC has the additional process of finding scout bees and regenerating their food sources. COA-LF and COA-ELR have the extra computational processes of the Lévy distribution for the random walk and of the ELR calculation inside which new eggs are laid, respectively. DE and GA work on almost similar principles: mutation, crossover, and selection are applied in DE to each individual, while GA needs two major steps to be performed on each chromosome, crossover and mutation, and during crossover the population doubles, which increases the overhead in mutation. Execution of BFO needs at least 3D array operations for each bacterium (pop × elimination × reproduction × chemotaxis). The computational effort of the overall framework, from the computational time and search effort perspective, is generalized as the total over parameters (finding optimal parameters), test runs (trials), and function evaluations (iterations). To find a good parameter value, the total search effort can be expressed as n1 × n2 × n3, where n1 is the number of parameter values to be tested and n2 is the number of tests; the product n1 × n2 represents the total number of algorithm runs for parameter setting, while n3 is the number of function evaluations performed in one run of the ET (which depends on the signal length or number of samples). Therefore, the asymptotic time complexity of the framework can be taken as O(n3), and the running time is estimated as in Table 6 [37], [55].
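The bookkeeping just described reduces to a couple of products; the sketch below shows it with the population size (20) and an assumed signal length, purely as an illustration of the NS × MNC and n1 × n2 × n3 estimates.

```python
def search_complexity(population_size, signal_length):
    """Per-run search effort: NS searches per generation times MNC generations."""
    NS, MNC = population_size, signal_length
    return NS * MNC

def parameter_search_effort(n_param_values, n_tests, evals_per_run):
    """Total effort of the parameter-setting phase: n1 * n2 runs of n3 evaluations."""
    return n_param_values * n_tests * evals_per_run

per_run = search_complexity(20, 512)            # e.g., 512-sample R-VEP trials
tuning = parameter_search_effort(50, 10, 512)   # e.g., 50 Pa values, 10 repeats
```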

TABLE 2 LMS-, NLMS-, and RLS-Based ANC Fidelity Parameters

TABLE 3 PSO-Based ANC Fidelity Parameters

TABLE 4 ABC-Based ANC Fidelity Parameters

9 JUSTIFICATION OF THE PROPOSED METHODS

Justification of the proposed methodology for simulation-based range identification is provided by calculating the optimal weights through the Wiener filter equation given in (15) [3]. The methodology can be verified simply by measuring the difference between the weights obtained through the Wiener equation and the best range identified for the ETs by the proposed method:

W_o = R_x^{-1} P_dx,   (15)

where W_o is the optimal weight vector, and R_x and P_dx represent the autocorrelation of the signal x and the cross-correlation of x and d, respectively. Table 8 lists the Wiener optimal weights and the best ranges for the ETs. An average difference of 0.8056 is obtained between the identified best range and the optimal weight, which justifies the correctness of the proposed approach.
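A minimal numerical check of (15) can be written as below; estimating R_x and P_dx from sample averages is an assumption about implementation, and the filter length is illustrative.

```python
import numpy as np

def wiener_weights(x, d, L=1):
    """Solve W_o = R_x^{-1} P_dx from sample estimates of the correlations."""
    N = len(x) - L + 1
    X = np.array([x[n:n + L][::-1] for n in range(N)])   # stacked input vectors X(n)
    Rx = X.T @ X / N                                      # autocorrelation matrix R_x
    Pdx = X.T @ d[L - 1:L - 1 + N] / N                    # cross-correlation vector P_dx
    return np.linalg.solve(Rx, Pdx)

# The entries of wiener_weights(x, d) can then be compared against the best
# range identified by the CSS sweep, as in Table 8.
```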

10 CONCLUSION

The overall study, with analysis and comparison of each of the algorithms, provides interesting

facts about adaptive filtering implemented with swarm intelligence techniques. The comparison of time consumption and shape measure shows that the increased processing time yields a smaller shape measure difference, because of the increased complexity of the algorithms. Almost all conventional adaptive filtering applications are based on gradient-based algorithms such as LMS, RLS, and so on. The swarm intelligence or evolutionary techniques, which use the concept of population-based solutions, definitely give better results than the gradient-based techniques. RLS is also a powerful algorithm, but it seems to give weak

TABLE 5 COA-Based ANC Fidelity Parameters

TABLE 7 Running Time and Shape Measures

performance when the noise level increases. One more thing that makes a huge difference between the gradient-based and ET approaches is that, with the ETs, the best solution can be achieved by performing a single simulation in which the range is increased until the desired SNR and correlation are achieved, through which the perfect solution set is found. Hence, adaptive filtering based on COA, ABC, and PSO is superior to the conventional methods. The comparison of each ET with its variants reveals the differences hidden in their processing. The quality measures of the ERP filtered by all the ETs are very good, but their running times differ considerably: the running time of ABC is greater than that of COA and PSO, and PSO runs faster than ABC and COA with somewhat larger differences in the ERP quality measures. LMS, NLMS, and RLS take negligible time but are unable to offer good shape preservation of the ERP. Finally, an algorithm or technique is selected on the basis of the tradeoff between time consumption and result quality.

TABLE 6 Running Time Formulation

TABLE 8 Justification of Range Identified through Proposed Method

11 DISCUSSION AND FUTURE WORK

In advanced and real-time signal processing, adaptive filters cover most application areas, such as channel equalization, system identification, and noise cancellation. In the field of biomedical signal processing, noise cancellation is mostly employed to remove the interference created among various biosignals. In this paper, recently developed ETs, which are based on the natural behavior of animals and designed to solve optimization problems simply, are assimilated with the adaptive filter to optimize its performance. Besides the anticipated results and improved performance, this study also provides a very distinctive finding: an increase in noise level also increases the range of the solution set, depending upon the nature of the noise. This increase in the solution set is also related to the increase in the amplitude of the signal after it is contaminated with noise (as observed in Fig. 4, after noise is added to the pure ERP, the amplitude increases by nearly 50 μV). Hence, it is theoretically concluded that the noise level, the amplitude of the noisy signal, and the solution set (range of cuckoos, bees, or particles) are directly proportional to each other:

(Noise Level) ∝ (Amplitude) ∝ (Range).

So, if the analysis of these three quantities is done with a specific model to figure out the relation among them, this approach could be implemented in real time: if the range can be found instantaneously according to the change in the noise level and signal amplitude, then the prior analysis step for finding the range can be eliminated. This theoretical relation of noise and range is left as an open problem, or future work, to extend this study, while the methodology of ANC design based on ETs is the contribution of this work to the EEG processing field.

ACKNOWLEDGMENTS

The authors would like to thank R.Q. Quiroga and G.B. Moody et al. for allowing public access on the web to the EEG data used in this study.

REFERENCES

[1] B. Widrow, J.R. Glover, J.M. McCool, J. Kaunitz, C.S. Williams, R.H. Hearn, J.R. Zeidler, E. Dong Jr., and R.C. Goodlin, “Adaptive Noise Cancellation: Principles and Applications,” Proc. IEEE, vol. 63, no. 12, pp. 1692-1716, Dec. 1975.
[2] P.S.R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, Springer Science+Business Media, 2008.
[3] A.D. Poularikas and Z.M. Ramadan, Adaptive Filtering Primer with MATLAB, CRC Press, 2006.
[4] S. Sanei and J.A. Chambers, EEG Signal Processing, John Wiley & Sons, 2007.
[5] S. Cerutti, “In the Spotlight: Biomedical Signal Processing,” IEEE Rev. in Biomedical Eng., vol. 1, pp. 8-11, 2008.
[6] T. Ball, M. Kern, I. Mutschler, A. Aertsen, and A. Schulze-Bonhage, “Signal Quality of Simultaneously Recorded Invasive and Non-Invasive EEG,” NeuroImage, vol. 46, no. 3, pp. 708-716, 2009.
[7] S. Machado, F. Araújo, F. Paes, B. Velasques, M. Cunha, H. Budde, F. Basile, R. Anghinah, O. Arias-Carrión, M. Cagy, R. Piedade, T.A. Graaf, A.T. Sack, and P. Ribeiro, “EEG-Based Brain-Computer Interfaces: An Overview of Basic Concepts and Clinical Applications in Neurorehabilitation,” Rev. in the Neurosciences, vol. 21, no. 6, pp. 451-468, 2010.
[8] G. Pfurtscheller and F.H.L. Silva, “Event-Related EEG/MEG Synchronization and Desynchronization: Basic Principles,” Clinical Neurophysiology, vol. 110, no. 11, pp. 1842-1857, 1999.
[9] R.C. Panicker, S. Puthusserypady, and Y. Sun, “An Asynchronous P300 BCI with SSVEP-Based Control State Detection,” IEEE Trans. Biomedical Eng., vol. 58, no. 6, pp. 1781-1788, June 2011.
[10] E. Yin, Z. Zhou, J. Jiang, F. Chen, Y. Liu, and D. Hu, “A Novel Hybrid BCI Speller Based on the Incorporation of SSVEP into the P300 Paradigm,” J. Neural Eng., vol. 10, no. 2, pp. 1-9, 2013.
[11] J.C. Rajapakse, A. Cichocki, and V.D. Sanchez, “Independent Component Analysis and Beyond in Brain Imaging: EEG, MEG, fMRI, and PET,” Proc. IEEE Ninth Int’l Conf. Neural Information Processing, vol. 1, pp. 404-412, 2002.
[12] D. Jarchi, S. Sanei, J.C. Principe, and B. Makkiabadi, “A New Spatiotemporal Filtering Method for Single-Trial Estimation of Correlated ERP Subcomponents,” IEEE Trans. Biomedical Eng., vol. 58, no. 1, pp. 132-143, Jan. 2011.
[13] D. Jarchi, S. Sanei, H.R. Mohseni, and M.M. Lorist, “Coupled Particle Filtering: A New Approach for P300-Based Analysis of Mental Fatigue,” Biomedical Signal Processing and Control, vol. 6, no. 2, pp. 175-185, 2011.
[14] K. Yu, K. Shen, S. Shao, W.C. Ng, K. Kwok, and X. Li, “A Spatio-Temporal Filtering Approach to Denoising of Single-Trial ERP in Rapid Image Triage,” J. Neuroscience Methods, vol. 204, no. 2, pp. 288-295, 2012.
[15] F. Cong, P.H.T. Leppänen, P. Astikainen, J. Hämäläinen, J.K. Hietanen, and T. Ristaniemi, “Dimension Reduction: Additional Benefit of an Optimal Filter for Independent Component Analysis to Extract Event-Related Potentials,” J. Neuroscience Methods, vol. 201, no. 1, pp. 269-280, 2011.
[16] L. Spyrou and S. Sanei, “Source Localization of Event-Related Potentials Incorporating Spatial Notch Filters,” IEEE Trans. Biomedical Eng., vol. 55, no. 9, pp. 2232-2239, Sept. 2008.
[17] K. Asaduzzaman, M.B.I. Reaz, F. Mohd-Yasin, K.S. Sim, and M.S. Hussain, “A Study on Discrete Wavelet-Based Noise Removal from EEG Signals,” Advances in Experimental Medicine and Biology, vol. 680, pp. 593-599, 2010.
[18] Z. Wang, A. Maier, D.A. Leopold, N.K. Logothetis, and H. Liang, “Single-Trial Evoked Potential Estimation Using Wavelets,” Computers in Biology and Medicine, vol. 37, no. 4, pp. 463-473, 2007.
[19] O. Svensson, “Tracking of Changes in Latency and Amplitude of the Evoked Potential by Using Adaptive LMS Filters and Exponential Averages,” IEEE Trans. Biomedical Eng., vol. 40, no. 10, pp. 1074-1079, Oct. 1993.
[20] N. Thakor, “Adaptive Filtering of Evoked Potentials,” IEEE Trans. Biomedical Eng., vol. 34, no. 1, pp. 6-12, Jan. 1987.
[21] S. Aydin, “Comparison of Basic Linear Filters in Extracting Auditory Evoked Potentials,” Turkish J. Electrical Eng., vol. 16, no. 2, pp. 111-123, 2008.
[22] P. He, G. Wilson, and C. Russell, “Removal of Ocular Artifacts from Electroencephalogram by Adaptive Filtering,” Medical and Biological Eng. and Computing, vol. 42, no. 3, pp. 407-412, 2004.
[23] S. Selvan and R. Srinivasan, “Removal of Ocular Artefacts from EEG Using an Efficient Neural Network Based Adaptive Filtering Technique,” IEEE Signal Processing Letters, vol. 6, no. 12, pp. 330-332, Dec. 1999.
[24] N. Karaboga, “A New Design Method Based on Artificial Bee Colony Algorithm for Digital IIR Filters,” J. Franklin Inst., vol. 346, no. 4, pp. 328-348, 2009.
[25] N. Karaboga and M.B. Cetinkaya, “A Novel and Efficient Algorithm for Adaptive Filtering: Artificial Bee Colony Algorithm,” Turkish J. Electrical Eng. and Computer Sciences, vol. 19, no. 1, pp. 175-190, 2011.

[26] D.J. Krusienski and W.K. Jenkins, “Adaptive Filtering Via Particle Swarm Optimization,” Proc. IEEE Conf. Record of the 37th Asilomar Conf. Signals, Systems, and Computers, vol. 1, pp. 571-575, 2003. [27] U. Mahbub, C. Shahnaz, and S.A. Fattah, “An Adaptive Noise Cancellation Scheme Using Particle Swarm Optimization Algorithm,” Proc. IEEE Int’l Conf. Comm. Control and Computing Technologies (ICCCCT), pp. 683-686, 2010. [28] A.T. Al-Awami, A. Zerguine, L. Cheded, A. Zidouri, and W. Saif, “A New Modified Particle Swarm Optimization Algorithm for Adaptive Equalization,” Digital Signal Processing, vol. 21, no. 2, pp. 195-207, 2011. [29] M.K. Ahirwal, A. Kumar, and G.K. Singh, “Analysis and Testing of PSO Variants through Application in EEG/ERP Adaptive filtering Approach,” Biomedical Eng. Letters, vol. 2, no. 3, pp. 186197, 2012. [30] S.C. Douglas, “Introduction to Adaptive Filters,” Digital Signal Processing Handbook, V.K. Madisetti and D.B. Williams, ed., CRC Press, pp. 417-422, 1999. [31] A. Zaknich, “Principles of Adaptive Filters and Self-Learning Systems,” Advanced Textbooks in Control and Signal Processing. Springer, pp. 9-18, 2005. [32] J. Kennedy and R.C. Eberhart, “Particle Swarm Optimization,” Proc. IEEE Int’l Conf. Neural Networks, vol. 4, pp. 1942-1948, 1995. [33] R. Poli, J. Kennedy, and T. Blackwell, “Particle Swarm Optimization: An Overview,” Swarm Intelligence, vol. 1, pp. 33-57, 2007. [34] K.E. Parsopoulos and M.N. Vrahatis, Particle Swarm Optimization and Intelligence: Advances and Applications. Information Science Reference, 2010. [35] S.A.M. Fahad and M.E. El-Hawary, “Overview of Artificial Bee Colony (ABC) Algorithm and Its Applications,” Proc. IEEE Conf. Systems, pp. 1-6, 2012. [36] D. Karaboga, “An Idea Based on Honey Bee Swarm for Numerical Optimization,” Technical Report TR06, Erciyes Univ., pp. 1-10, 2005. [37] B. Akay and D. Karaboga, “A Modified Artificial Bee Colony Algorithm for Real-Parameter Optimization,” Information Sciences, vol. 192, no. 1, pp. 120-142, 2012. [38] X.-S. Yang and S. Deb, “Cuckoo Search via Le´vy Flights,” Proc. IEEE Conf. Nature and Biologically Inspired Computing, pp. 210-214, 2009. [39] R. Rajabioun, “Cuckoo Optimization Algorithm,” Applied Soft Computing, vol. 11, no. 8, pp. 5508-5518, 2011. [40] The Life of Birds, Parenthood, http://www.pbs.org/lifeofbirds/ home/index.html, 2013. [41] Y. Xin-She, Nature-Inspired Metaheuristic Algorithms, second ed., chapter 2, p. 16, Luniver Press, 2010. [42] V. Parsa, P.A. Parker, and R.N. Scott, “Adaptive Stimulus Artifact Reduction in Noncortical Somatosensory Evoked Potential Studies,” IEEE Trans. Biomedical Eng., vol. 45, no. 2, pp. 165-179, Feb. 1998. [43] S. Haufe, R. Tomioka, G. Nolte, K.R. Mu¨ller, and M. Kawanabe, “Modeling Sparse Connectivity between Underlying Brain Sources for EEG/MEG,” IEEE Trans. Biomedical Eng, vol. 57, no. 8, pp. 1954-1963, Aug. 2010. [44] J. Holsheimer and B.W.A. Feenstra, “Volume Conduction and EEG Measurements within the Brain: A Quantitative Approach to the Influence of Electrical Spread on the Linear Relationship of Activity Measured at Different Locations,” Electroencephalography and Clinical Neurophysiology, vol. 43, pp. 52-58, 1977. [45] C.J. James, M.T. Hagan, R.D. Jones, P.J. Bones, and G.J. Carroll, “Multireference Adaptive Noise Canceling Applied to the EEG,” IEEE Trans. Biomedical Eng., vol. 44, no. 8, pp. 775-779, Aug. 1997. [46] Generation of simulated EEG data http://www.cs.bris.ac.uk/ ~rafal/phaser e set/, 2013. [47] J.V. Odom, M. Bach, C. Barber, M. 
Brigell, M.F. Marmor, A.P. Tormene, and G.E. Holder, “Visual Evoked Potentials Standard,” Documenta Ophthalmologica, vol. 108, pp. 115-123, 2004. [48] R.Q. Quiroga, “EEG, ERP and Single Cell Recordings Database,” http://www.vis.caltech.edu/~rodri/data.htm, 2013. [49] G.B. Moody, R.G. Mark, and A.L. Goldberger, “PhysioNet: Physiologic Signals, Time Series and Related Open Source Software for Basic, Clinical, and Applied Research,” Proc. IEEE Conf. Eng. in Medicine and Biology Soc., pp. 8327-8330, 2011. [50] A.L. Goldberger , et al., “PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation,” Circulation, vol. 101, no. 23, pp. e215-e220, June 2000.

[51] V.P. Oikonomou and D.I. Fotiadis, “A Bayesian Approach for Biomedical Signal Denoising,” Proc. IEEE Conf. Information Technology Applications in Biomedicine (ITAB) pp. 1-5, 2006. [52] X. Wu and Z. Ye, “The Study of Classification of Motor Imaginaries Based on Kurtosis of EEG,” Proc. 13th Int’l Conf. Neural Information Processing, pp. 74-81, 2006. [53] S. Gholami-Boroujeny and M. Eshghi, “Non-Linear Active Noise Cancellation Using a Bacterial Foraging Optimisation Algorithm,” IET Signal Processing, vol. 6, pp. 364-373, 2012. [54] R.T. Xiao, “Research on White Noise Suppression by Adaptive filtering of Genetic Algorithm,” Applied Mechanics and Materials, vol. 155, pp. 989-994, 2012. [55] P.N Suganthan, N. Hansen, J.J. Liang, K. Deb, A. Chen, Y.P. Auger, and S. Tiwari, “Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization,” technical report, Nanyang Technological Univ., http://www.ntu.edu.sg/home/EPNSugan, 2005. M.K. Ahirwal received the BE degree in computer science and engineering from Samrat Ashok Technological Institute (affiliated to Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal, India), Vidisha, Madhya Pradesh, India, in 2009, and the MTech degree in computer technology from the National Institute of Technology, Raipur, India, in 2011. Currently, he is working toward the PhD degree in the Computer Science and Engineering Department at the Pandit Dwarka Prasad Mishra Indian Institute of Information Technology, Design & Manufacturing, Jabalpur, India. His research interests include EEG signal processing, adaptive filtering, and optimization techniques. Anil Kumar received the BE degree from the Army Institute of Technology (AIT), Pune University, Maharashtra, India, and the MTech and PhD degrees from IIT Roorkee, India, in 2002, 2006, and 2010, respectively, all in electronic and telecommunication engineering. Currently, he is an assistant professor in the Electronic and Communication Engineering Department, Pandit Dwarka Prasad Mishra Indian Institute of Information Technology, Design & Manufacturing, Jabalpur, India. His research interests include design of digital filters and multirate filter bank, multirate signal processing, biomedical signal processing, image processing, and speech processing. G.K. Singh received the BTech degree from the G.B. Pant University of Agriculture and Technology, Pantnagar, India, in 1981, and the PhD degree from Banaras Hindu University, Varanasi, India, in 1991, both in electrical engineering. He worked in the industry for nearly five and a half years. Currently, he is a professor in the Electrical Engineering Department, IIT Roorkee, India. His academic and research interests include design and analysis of electrical machines and biomedical signal processing. He has coordinated a number of research projects sponsored by the CSIR and UGC, Government of India.
