
Guest Editorial

Methods and applications of evolutionary computation in biomedicine

1. Introduction

Over the past 20 years, there has been a burgeoning interest in problem-solving approaches that are inspired by ubiquitous phenomena in nature, particularly optimization, search, and discovery. These phenomena are evident in consummatory behaviors (such as seeking and obtaining food, water, or sex) that preserve self and species, all of which take place in the context of competition, symbiosis, and co-evolution. Ultimately, these are highly stochastic, self-organizing processes governed by some type of meta-heuristic that is focused on discovery. That meta-heuristic is, loosely defined, evolution, and it is what inspires the family of computational methods known as evolutionary computation. This branch of the computational sciences focuses on iteration, but within a temporal, meta-heuristic framework that allows for the discovery, in continuous time, of new knowledge expressed as patterns, solutions, or, more generally, optimizations. The processes used in this paradigm are not only temporal; they enforce a trajectory that leads to the successful solution of a given problem. This trajectory is ultimately cumulative, building on previous successes over the course of an evolutionary epoch, although there may be fits and starts along the way. These successes and failures are characteristic of exploration, another critically important feature of evolutionary computation.

2. Evolutionary computation methods

The methodological approaches to evolutionary computation are many. For our purposes, they may be divided into genetics-based evolutionary algorithms and non-genetic evolutionary computation methods, which are truly evolutionary but lack a clear genetics metaphor.

2.1. Genetics-based evolutionary computation methods: evolutionary algorithms

There are numerous examples of evolutionary algorithms, but four of the most important are described here: the genetic algorithm, evolution strategies, the learning classifier system, and genetic programming. All of these are inspired by a rich and obvious biological metaphor in terms of knowledge representation and processing. These features are best described in the context of the genetic algorithm, which provides the basis for this type of evolutionary computation.

2.1.1. The genetic algorithm

John Holland coined the term genetic algorithm, or GA, to describe a computational method that uses Darwinian evolution
principles for optimization [1]. It is Darwinian in the sense that the evolutionary nature of optimization—searching for the optimal solution in a space of all possible solutions—is based on "fitness", in which the best solutions survive into the next generation, or iteration, of the algorithm. Knowledge representation in the GA, and in many evolutionary algorithms, is focused on the genome: an "individual" represented by a single "chromosome" that is itself one member of a population of many such individuals. Like a biological chromosome, each individual is composed of an array of "genes", each of which represents an attribute or variable. In both natural and computational contexts, a gene can take on different forms or values. These individuals exist in a set, or population, and each individual has a fitness, which determines its probability of surviving (or not) into the next generation. The overall performance of the GA is determined by the fitness of the population as a whole, usually calculated as the mean of all individual fitnesses.

At each generation, individuals are selected for "mating". These individuals become "parents" of "offspring" that will be created through simple reproduction. There are many different schemes for this process, ranging from allowing all individuals to mate and reproduce to selecting only a single pair of individuals to mate. In any case, individuals are more likely to mate with those of similar fitness, since individuals are usually selected for mating with a probability based on their fitness relative to the population fitness. Once individuals are selected for mating, the genome-like representation allows the use of three genetic operators: reproduction, in which a chromosome is simply copied as an "offspring"; crossover, in which the "genetic material" from the two parents is exchanged at some locus (or loci) in the offspring; and mutation, in which a gene's value is altered according to a pre-defined probability. The purpose of crossover is to explore the solution space by proposing new individuals as testable hypotheses. Mutation is used to keep in check poorly performing, unfit individuals that may be created by crossover. The GA runs, generation by generation, until some termination criterion, such as a predetermined fitness level, is reached.

The GA has been used alone or in conjunction with other methods in a wide variety of applications in biomedicine, including gene–gene interactions [2], bone segmentation in computerized tomography [3], comparative effectiveness studies [4], diagnosis [5–7], and decision support [8]. An excellent review of the application of GAs in bioinformatics is provided by Manning et al. [9]. The GA has provided a foundation for a large number of related evolutionary computation approaches, such as evolution strategies, learning classifier systems, and genetic programming.
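To make the generational loop concrete, the following is a minimal sketch of a GA in Python. The bit-string encoding, the "one-max" fitness function, and all parameter values are illustrative assumptions rather than details drawn from any of the cited applications.

import random

# Minimal generational genetic algorithm: fitness-proportional selection,
# one-point crossover, and per-gene mutation on fixed-length bit strings.
GENES, POP_SIZE, GENERATIONS = 32, 50, 100
CROSSOVER_RATE, MUTATION_RATE = 0.9, 0.01

def fitness(individual):
    # Illustrative "one-max" fitness: count of 1-bits; a real application
    # would score how well the chromosome solves the domain problem.
    return sum(individual)

def select(population):
    # Fitness-proportional (roulette-wheel) selection of one parent.
    weights = [fitness(ind) + 1e-9 for ind in population]
    return random.choices(population, weights=weights, k=1)[0]

def crossover(a, b):
    if random.random() < CROSSOVER_RATE:
        point = random.randrange(1, GENES)   # single crossover locus
        return a[:point] + b[point:]
    return a[:]                              # plain reproduction

def mutate(individual):
    return [1 - g if random.random() < MUTATION_RATE else g
            for g in individual]

population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
best = max(population, key=fitness)
print(fitness(best))

In practice, the domain-specific work lies almost entirely in the chromosome encoding and the fitness function; the loop itself changes little from application to application.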


2.1.2. Evolution strategies

Evolution strategies (ESs) are another method of optimization that uses the genetic operators described above. This approach is very similar to the GA, but with two important differences. First, individuals are selected for reproduction (and possible crossover and mutation) deterministically by rank, rather than probabilistically as is usually done in the GA. Second, mutation is applied in a variable, adaptive fashion from one generation to the next. Although the applications of evolution strategies in biomedicine are not as prevalent as those of the GA, the ES approach has been used in such domains as image analysis [10,11] and proteomics [12].

2.1.3. Learning classifier systems

The learning classifier system (LCS) is a rule-based evolutionary computation method. The knowledge representation of the LCS includes several components embodied in a classifier: a gene vector (similar to a genotype), a class or "action" variable (similar to a phenotype), and a strength value (broadly reflective of a classifier's fitness). As in the GA, there is a population of classifiers within which the various genetic operators function. There are two main types of LCS: the so-called Pittsburgh LCS, in which each classifier is a set of individuals [13], and the more common Michigan LCS [1], in which all the classifiers exist in a single set.

There are three main functional components to the LCS. The first is the evaluation component, which assesses the match between a case presented to the LCS from the environment and the genotypes of the classifiers in the population at that iteration. The matching classifiers form what is called a match set, from which an action is probabilistically selected; this will be the decision or action proposed by the system. Such selection may be based on classifier fitness or, in the case of XCS-based LCSs, on the accuracy of the prediction made by the classifier [14]. During supervised learning, the decision proposed by the LCS is evaluated against the known class or phenotype of the case presented to the system from the environment. The strength of the classifiers contributing to the decision—that is, those in the action set—is increased or decreased according to the correctness of the decision. The second functional component is the learning module, which is responsible for updating classifier strengths according to one of a number of different schemes, such as a simple stimulus–response reward regime or reinforcement learning. Classifier strength plays an important role in the third functional component of the LCS, discovery, which is performed through a GA. During this phase of an iteration in the LCS, parents are selected for reproduction, and the genetic operators are then applied to create offspring, which are then inserted into the population. In a closed population, an equivalent number of classifiers is removed from the population, typically selected in proportion to their fitness or by some other criterion. The LCS has been the subject of considerable research in the evolutionary computation community, and it has been applied in a number of biomedical settings, such as genomics of cancer susceptibility [15], electroencephalographic signal detection [16], and data mining in epidemiologic surveillance [17,18].
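The match/act/update cycle described above can be illustrated with a highly simplified, Michigan-style sketch in Python. The ternary conditions with '#' wildcards, the strength-update rule, and the toy data are illustrative assumptions, and the GA-based discovery step is omitted for brevity.

import random
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: str          # e.g. "1#0"; '#' matches either 0 or 1
    action: int             # proposed class / phenotype
    strength: float = 10.0  # rough analogue of classifier fitness

def matches(classifier, case):
    return all(c == '#' or c == x for c, x in zip(classifier.condition, case))

def one_iteration(population, case, true_class, reward=1.0, penalty=1.0):
    # 1. Evaluation: build the match set for this environmental case.
    match_set = [cl for cl in population if matches(cl, case)]
    if not match_set:
        return None
    # 2. Action selection, probabilistic and strength-weighted.
    chosen = random.choices(match_set, weights=[cl.strength for cl in match_set])[0]
    action_set = [cl for cl in match_set if cl.action == chosen.action]
    # 3. Learning: simple stimulus-response credit assignment.
    #    (A full LCS would follow this with a GA-based discovery step.)
    delta = reward if chosen.action == true_class else -penalty
    for cl in action_set:
        cl.strength = max(0.0, cl.strength + delta)
    return chosen.action

# Toy usage: three-bit cases with a known class.
population = [Classifier("1##", 1), Classifier("0##", 0), Classifier("###", 1)]
print(one_iteration(population, "101", true_class=1))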
2.1.4. Genetic programming

In genetic programming (GP), knowledge is represented as computer programs or program segments. Originally, GP was developed for, and applied to, programs expressed in the programming language LISP. This representation facilitated the application of the genetic operators to evolve the optimal program for a given problem, through the selection and crossover of S-expressions, the syntactic units of LISP. Since its inception in the early 1990s, GP has grown beyond the constraints of LISP environments and now uses other programming languages.
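Because the program itself serves as the genome, crossover in GP swaps subtrees rather than substrings. The sketch below, in which nested Python tuples stand in for LISP S-expressions, is an illustrative assumption about how that operator can be realized, not a description of any particular GP system cited here.

import random

# Subtree crossover on programs represented as nested tuples, e.g.
# ('+', ('*', 'x', 'x'), 3) standing in for the S-expression (+ (* x x) 3).

def all_paths(tree, path=()):
    # Enumerate index paths to every node in the program tree.
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from all_paths(child, path + (i,))

def subtree_at(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def splice(tree, path, donor):
    # Return a copy of tree with the node at `path` replaced by `donor`.
    if not path:
        return donor
    i = path[0]
    return tree[:i] + (splice(tree[i], path[1:], donor),) + tree[i + 1:]

def subtree_crossover(parent_a, parent_b):
    path_a = random.choice(list(all_paths(parent_a)))
    path_b = random.choice(list(all_paths(parent_b)))
    return splice(parent_a, path_a, subtree_at(parent_b, path_b))

p1 = ('+', ('*', 'x', 'x'), 3)            # x*x + 3
p2 = ('-', ('+', 'x', 1), ('*', 2, 'x'))  # (x + 1) - 2*x
print(subtree_crossover(p1, p2))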

GP is employed in a variety of problem domains, including feature detection and identification [19], clinical prediction [20,21], biochemical interaction networks [22], and injury research [23].

2.2. Non-genetic evolutionary computation methods

While genetics-based methods make up a large proportion of the methods used in evolutionary computation, there are other, non-genetic methods that take an evolutionary approach. These include artificial immune systems, ant colony optimization, particle swarm optimization, simulated annealing, neural networks, and a variety of artificial life approaches. These approaches are represented in the papers in this special section, and it is worth considering several of them briefly to gain an appreciation of the field.

2.2.1. Artificial immune systems

The artificial immune system (AIS) uses a knowledge representation and optimization procedure modeled closely on an organism's immune system. The primary function of the AIS is to learn from stimuli encountered in the environment and so evolve a sense of "self" in the service of protection from "non-self" stimuli. As in living systems, these stimuli are equivalent to antigens. The AIS does this through a cycle of antibody creation, recognition of antigens through pattern detection applied in the context of memory, titrated responses to novel antigens, and antibody cloning. This process occurs iteratively, applied to a population of individuals, each with a varying degree of affinity to the specific antigen encountered at a given iteration. Over time, the population of individuals becomes increasingly accurate at identifying potentially harmful antigens that do not match its immunity pattern. As a result, a trained AIS can accurately detect so-called "invaders", or anomalous patterns, based on prior experience. AISs have been applied to diagnosis [24–27] and proteomics [28], among other domains such as computer virus identification and fraud detection.

2.2.2. Ant colony optimization

Ant colony optimization (ACO) is inspired by the food-foraging behavior of ant colonies. Communication between ants is based on actions taken by the ants that preceded them, specifically through a trail of pheromone. It is this pheromone-mediated behavior that is most important for ACO. Like most evolutionary computation methods, it is a population-based approach that relies on feedback from the environment as it seeks an optimized solution to a problem. Artificial ant colonies are composed of software agents that work cooperatively to solve problems by following "pheromone" pathways, or "trails". These trails are typically represented as arrays of integers or other variable types, which in turn represent the edges of a graph; the trails taken through the graph represent possible solutions to a given problem. ACO is an effective method for search and optimization, and it has been used in numerous diverse domains, such as analytic optimization [29,30], image detection and classification [31,32], workflow optimization [33], decision support [8], feature selection in genotyping [34], and arrhythmia detection [35].
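As a concrete illustration of the trail-following idea, here is a minimal ACO-style sketch in Python in which ants search for a short path through a small directed graph; the graph, the edge lengths, and all parameter values are illustrative assumptions.

import random

# Ant colony optimization sketch: ants search for a short path from 'start'
# to 'goal'; pheromone deposited on short tours biases later ants.
graph = {                       # node -> {neighbor: edge length}
    'start': {'a': 2.0, 'b': 4.5},
    'a': {'b': 1.0, 'goal': 6.0},
    'b': {'goal': 3.0},
    'goal': {},
}
pheromone = {(u, v): 1.0 for u, nbrs in graph.items() for v in nbrs}
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 10.0  # pheromone/heuristic weights, evaporation, deposit

def build_path():
    node, path, length = 'start', ['start'], 0.0
    while node != 'goal':
        nbrs = list(graph[node])
        weights = [pheromone[(node, v)] ** ALPHA * (1.0 / graph[node][v]) ** BETA
                   for v in nbrs]
        nxt = random.choices(nbrs, weights=weights)[0]
        length += graph[node][nxt]
        path.append(nxt)
        node = nxt
    return path, length

best = None
for iteration in range(50):
    tours = [build_path() for _ in range(10)]   # one tour per ant
    for edge in pheromone:                      # evaporation
        pheromone[edge] *= (1.0 - RHO)
    for path, length in tours:                  # deposit: shorter tours get more
        for u, v in zip(path, path[1:]):
            pheromone[(u, v)] += Q / length
    best = min(tours + ([best] if best else []), key=lambda t: t[1])
print(best)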
2.2.3. Particle swarm optimization

The basic tenet of particle swarm optimization (PSO) is that the whole is greater than the sum of its parts. A swarm, such as that of certain fish, birds, or flying insects, has a distinctive behavior that is a collective of the individuals within it. This behavior is emergent (or evolutionary), not controlled by a central authority or algorithm, and it is not known to the individuals in the swarm. Again, like other evolutionary methods, PSO uses a population of individuals (or particles), each of which is an agent capable of proposing a solution to a given problem. Each particle has its own behavior, which is to explore the search space for an optimal solution. In so exploring, a particle can exert an influence on other particles in the swarm, such that the swarm as a whole evolves toward an optimal solution. One only has to observe a swarm of starlings to see how the action of a small number of birds can spread like contagion to affect the behavior of the others in an evolutionary way, until the swarm itself takes on a completely new morphology. The applications of PSO in biomedicine are broad, extending to a number of diverse domains. These include signal detection and classification [36–38], diagnostic genomics [39,40], image segmentation in cardiology [41], tumor classification [42], and biomarker feature selection [43].
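A minimal PSO sketch in Python follows; the sphere objective function, the swarm size, and the inertia, cognitive, and social coefficients are illustrative assumptions.

import random

# Particle swarm optimization sketch: each particle tracks its position,
# velocity, and personal best, and is pulled toward the swarm's global best.
DIM, SWARM, ITERS = 2, 30, 200
W, C1, C2 = 0.7, 1.5, 1.5            # inertia, cognitive, and social weights

def objective(x):
    return sum(xi * xi for xi in x)   # sphere function, minimum at the origin

positions = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
velocities = [[0.0] * DIM for _ in range(SWARM)]
personal_best = [p[:] for p in positions]
global_best = min(personal_best, key=objective)

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (W * velocities[i][d]
                                + C1 * r1 * (personal_best[i][d] - positions[i][d])
                                + C2 * r2 * (global_best[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
        if objective(positions[i]) < objective(personal_best[i]):
            personal_best[i] = positions[i][:]
            if objective(personal_best[i]) < objective(global_best):
                global_best = personal_best[i][:]

print(global_best, objective(global_best))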


2.2.4. Simulated annealing

Simulated annealing (SA) is an evolutionary computation method informed by a process from the physical, rather than the biological, world. It is based on the well-known metallurgical principle of annealing, wherein a metal is heated and then allowed to cool in a controlled fashion. During controlled cooling, the metal becomes stronger than it would otherwise, owing to the reduction of thermodynamic free energy in the metal, which in turn results in the re-alignment of the metal's crystal structure. This principle is used in SA through three elements: state (current and neighboring), temperature (which is gradually decreased over time), and an energy function (corresponding to thermodynamic free energy). The movement from one state to the next is determined probabilistically, based on the temperature, which decreases over time so as to drive the energy function toward smaller values. The goal of SA is to evolve a solution that minimizes the energy function. Simulated annealing has been applied in clinical trial management [44], health services optimization [45], proteomics [46], diagnosis [47,48], and treatment planning [49].
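The state/temperature/energy loop can be written in a few lines. In the Python sketch below, the one-dimensional toy energy landscape, the Gaussian neighborhood step, and the geometric cooling schedule are illustrative assumptions.

import math
import random

# Simulated annealing sketch: propose a neighboring state, accept it if it
# lowers the energy, or with probability exp(-delta/temperature) otherwise.
def energy(x):
    return (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)   # toy landscape to minimize

state = random.uniform(-10.0, 10.0)
temperature, cooling, steps = 10.0, 0.995, 5000

for _ in range(steps):
    candidate = state + random.gauss(0.0, 0.5)        # neighboring state
    delta = energy(candidate) - energy(state)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        state = candidate                              # accept the move
    temperature *= cooling                             # gradual cooling

print(state, energy(state))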
3. Applying evolutionary computation to real-world problems in biomedicine

As discussed in the previous section, the methods of evolutionary computation share a common connection to the biological or physical world, specifically phenomena that most of us have experienced or learned about in real life. It would be tempting to dismiss these methods as mere simulations of those phenomena, but that would ignore the many contributions evolutionary computation has made to fields such as data mining, robotics, engineering, and computer science, as well as the more biomedically focused contributions described above. Ample evidence of the successful application of these approaches, and of those in the papers of this special section, is demonstrated every year at the very well attended (and highly competitive) Genetic and Evolutionary Computation Conference (GECCO). Reports of biomedical applications of new evolutionary computation methods appear with increasing frequency each year, but there is much opportunity for more such research.

Dismissing evolutionary computation as simply a clever approach to solving problems would also be a mistake. After all, we see problems being solved in nature all the time: swarms providing a protective mantle against predators for the organisms within them, economical and effective ways of scavenging for food, and the basic evolution that underlies these and other mechanisms for survival. Evolution in the biological world is fundamentally a search for optimal solutions in a vast problem space. So it is in the computational world as well, and it therefore makes much sense to turn to evolution for inspiration in solving the kinds of problems we face in that world.


Clearly, evolutionary computation has become more than an "interesting" approach to solving a wide variety of complex problems, and today it far exceeds the original visions of such seminal thinkers as John Holland, Ingo Rechenberg, and Lawrence Fogel, who, among others, laid the foundations of the field starting in the early 1960s. Even so, it would be proper to ask whether evolutionary computation is "ready" for use in biomedical applications, or for which kinds of problems in biomedicine it would be most appropriate. The easy answer to this latter question is that any problem involving search or optimization, such as image pattern detection or health services allocation, fits well with evolutionary computation.

In addition to a metaphorical link to the biological or physical world, one advantage of using these methods is their ease of implementation. For example, a genetic algorithm can be implemented in less than 200 lines of C code, and a variety of evolutionary computation methods have been implemented in such platforms as R (http://www.r-project.org/), Matlab (http://www.mathworks.com), and data mining suites such as Weka (http://www.cs.waikato.ac.nz/ml/weka/). Another advantage is that they can be embedded within other analytical approaches, such as linear regression. Evolutionary computation can also be used, alone or with other machine learning methods, for feature selection to create optimal sets of covariates for analysis that are free of the constraints imposed by the traditional statistical methods commonly used for this purpose. Finally, a real strength of evolutionary computation methods is that they are model-free: they provide a meta-heuristic framework rather than one that relies on prior assumptions and requires complete or perfect data. As a result, they can be used to solve a wide variety of problems in a wide variety of application domains. A genetic algorithm could be used to identify lesions on a mammogram as well as to identify mRNA targets, mine temporal workflow data, or optimize a clinical decision. The basic algorithm is the same in all of these applications, even though the domains are quite different. Once one has determined the appropriate way to represent the domain-specific knowledge (for example, which variables map to which "genes") and the various parameters (such as crossover rate and population size), the hard work is done by the genetic algorithm itself.

4. The papers in this special section

As the field of evolutionary computation matures, we are seeing growth not only in the number of researchers and their productivity in this domain, but in their creativity, especially as expressed in hybrid approaches that combine evolutionary computation with other methods. The papers in this special section reflect that creativity.

Kiranyaz, for example, integrates PSO and morphological filtering in a classification ensemble to create binary classifier networks for EEG pattern classification [50]. This application domain has attracted the attention of many in the statistical sciences, especially those working in time series analysis, but also of computer scientists who work in machine learning and pattern detection. The PSO-based method used by Kiranyaz performed better on EEG signal classification than a support vector machine using seven different feature selection methods.
Luque Baena et al. seed the GA with pre-existing knowledge of biological pathway data in order to improve classification performance [51]. This approach to improving the learning performance of the GA has been used in numerous domains, such as case-based reasoning [52], but not previously in microarray analysis. The authors do not compare their approach with other classification algorithms, but they demonstrate that incorporating domain knowledge improved classification performance over the GA alone.


This method also performed better at feature selection than several well-known benchmarks.

Dheeba et al. use a wavelet neural network, optimized with a PSO algorithm, to improve computer-assisted detection of lesions on mammograms [53]. Their study builds on previous work by others who hybridized neural networks with PSO for improved training performance [54]. This is important work, in that the neural network paradigm has long been criticized for slow training times in certain data environments, including those in the biomedical domain. Of particular importance is the impressive classification performance Dheeba et al. obtain with their hybridized neural network when applied to a clinical database. In a related approach, Hsieh, Su, and Wang employ PSO to trim rule sets obtained from the application of a fuzzy hyper-rectangular composite neural network to three well-known benchmark datasets [55]. The authors demonstrate superior classification performance of the PSO-neural network hybrid compared with the non-hybridized network, and they show better classification error rates compared with 11 other commonly used classification methods.

Alexandridis and Chondrodima also use a hybrid approach, integrating a non-symmetric fuzzy means training regime with a radial basis function neural network and then using evolutionary simulated annealing to improve the performance of the neural network [56]. This paper is the only one in this special section to use an evolutionary method derived from the physical world. Simulated annealing has been used to optimize neural networks in the past, but this is the first report of using simulated annealing to optimize the partition between fuzzy classifiers, specifically in diagnosis. The authors have applied their method to several well-known benchmark datasets with excellent classification accuracy, suggesting that it should be investigated further on larger, more complex data.

The paper by Acosta-Mesa et al. illustrates a novel use of genetic programming to discover the optimal discretization scheme for classifying precancerous cervical lesions from complex time series data [57]. Discretization in general, and certainly when applied to time series data, is often a thorny task. The authors build on their previous work applying evolutionary programming to time series analysis and discretization on 20 benchmark time series datasets, all of which are non-medical in content. The application of this method to cervical cancer diagnosis illustrates how one can use evolutionary computation methods in biomedicine even though they were developed outside that domain.

De Falco, Sannino, and De Pietro also focus on time series data, specifically heart rate variability in sleep apnea [58]. Differential evolution, a type of genetics-based evolutionary computation related to the learning classifier system described above, is used to extract rules from these data, obtained from wearable, single-lead ECGs. Their approach calls for using an ECG recording for a given patient, annotated by experts for variability. The annotations are then used in a supervised learning environment in which the differential evolution algorithm extracts IF–THEN rules that suggest an episode of obstructive sleep apnea. These rules are then used during subsequent sleep monitoring of that patient to take an action, such as waking the patient, that would interrupt the apneic phase of sleep.
Since the rules are extracted for each patient, this approach to obstructive sleep apnea diagnosis and treatment is highly individualized.

Kim, Ha, and Zhang employ sequential Bayesian sampling with genetics-based evolutionary computation to evolve hypergraph classifiers that identify gene interactions for prognosis and recurrence of cancer [59]. In this approach, the Bayesian network components are directly analogized to the components of a genetic algorithm: genes represent vertices (nodes) on a graph, chromosomes (or individuals) represent edges, where each individual has a fitness that corresponds to the weight of the edge, and the population of individuals represents the hypergraph structure.

The population has a fitness function that reflects the posterior probability. By applying the genetic operators, crossover and mutation, the graph structure can be evolved over time to maximize the posterior. Their evolutionary hypergraph model, applied to two datasets (breast cancer and multiple myeloma), performed comparably to other classification methods, including a learning classifier system.

Finally, Gorunescu and Belciug demonstrate the application of a committee-based approach to developing an intelligent decision support system, in which the decision of the committee is mediated by an evolutionary process [60]. In this work, the "population" consists of eight machine-learning algorithms, and a GA-hybridized multi-layer perceptron neural network is used to evolve the optimal decision accuracy on five well-known benchmark datasets. The classification accuracy of this approach compared favorably with that of other methods, including support vector machines (with and without the information provided by the GA), decision tree induction, and naïve Bayes classification. The use of evolutionary computation in committee-based machine learning systems is not new, but its application to clinical classification and decision support is, and it should stimulate further investigation into its use in more complex data systems, even in real-time environments.

The papers in this section demonstrate the many paths one can take in developing evolutionary computation approaches for use in biomedical domains. The authors have provided a broad landscape of new methods as well as methods that have enjoyed applications in other, non-biomedical arenas. They have also demonstrated the ease with which evolutionary computation methods can be developed, enhanced, and used in these domains. The naturally inspired metaphor that drives evolutionary computation assists with this, of course, but so too do the various literature and software resources available to those who are interested in exploring this branch of computer science. As demonstrated in this special section, the innovation and application of these methods in a variety of biomedical domains suggests that there is a compelling empirical basis for continuing investigation of evolutionary computation in biomedicine. We hope that these papers stimulate further interest in developing new evolutionary computation methods and in using these approaches in a wide variety of biomedical informatics disciplines.

References

[1] Holland JH. Adaptation in natural and artificial systems. Cambridge, MA: MIT Press; 1992.
[2] Mooney M, Wilmot B, Bipolar Genome Study, McWeeney S. The GA and the GWAS: using genetic algorithms to search for multilocus associations. IEEE/ACM Trans Comput Biol Bioinform 2012;9(3):899–910.
[3] Janc K, Tarasiuk J, Bonnet AS, Lipinski P. Genetic algorithms as a useful tool for trabecular and cortical bone segmentation. Comput Methods Programs Biomed 2013;111(1):72–83.
[4] Yan M, Ye F, Zhang Y, Cai X, Fu Y, Yang X. Optimization model research on efficacy in treatment of chronic urticaria by Chinese and Western medicine based on a genetic algorithm. J Tradit Chin Med 2013;33(1):60–4.
[5] Alvarez D, Hornero R, Marcos JV, Del CF. Feature selection from nocturnal oximetry using genetic algorithms to assist in obstructive sleep apnoea diagnosis. Med Eng Phys 2012;34(8):1049–57.
[6] Kocer S, Canal MR. Classifying epilepsy diseases using artificial neural networks and genetic algorithm. J Med Syst 2011;35(4):489–98.
[7] Wang X, Yang J, Jensen R, Liu X. Rough set feature selection and rule induction for prediction of malignancy degree in brain glioma. Comput Methods Programs Biomed 2006;83(2):147–56.
[8] Suganthi M, Madheswaran M. An improved medical decision support system to identify breast cancer using mammography. J Med Syst 2012;36(1):79–91.
[9] Manning T, Sleator RD, Walsh P. Naturally selecting solutions: the use of genetic algorithms in bioinformatics. Bioengineered 2013;4(5):266–78.
[10] Flexman ML, Kim HK, Stoll R, Khalil MA, Fong CJ, Hielscher AH. A wireless handheld probe with spectrally constrained evolution strategies for diffuse optical imaging of tissue. Rev Sci Instrum 2012;83(3):033108.
[11] Li R, Emmerich MT, Eggermont J, Back T, Schutz M, Dijkstra J, et al. Mixed integer evolution strategies for parameter optimization. Evol Comput 2013;21(1):29–64.

[12] Socha RD, Tokuriki N. Modulating protein stability – directed evolution strategies for improved protein function. FEBS J 2013;280(22):5582–95.
[13] Smith SF. Flexible learning of problem solving heuristics through adaptive search. Los Altos, CA: Morgan Kaufmann; 1988. p. 421–5.
[14] Wilson SW. Classifier fitness based on accuracy. Evol Comput 1995;3(2):149–75.
[15] Urbanowicz RJ, Andrew AS, Karagas MR, Moore JH. Role of genetic heterogeneity and epistasis in bladder cancer susceptibility and outcome: a learning classifier system approach. J Am Med Inform Assoc 2013;20(4):603–12.
[16] Skinner BT, Nguyen HT, Liu DK. Classification of EEG signals using a genetic-based machine learning classifier. Conf Proc Annu Int Conf IEEE Eng Med Biol Soc 2007;2007:3120–3.
[17] Holmes JH, Durbin DR, Winston FK. Discovery of predictive models in an injury surveillance database: an application of data mining in clinical research. Proc AMIA Annu Symp 2000:359–63.
[18] Holmes JH, Durbin DR, Winston FK. The learning classifier system: an evolutionary computation approach to knowledge discovery in epidemiologic surveillance. Artif Intell Med 2000;19(1):53–74.
[19] Huml M, Silye R, Zauner G, Hutterer S, Schilcher K. Brain tumor classification using AFM in combination with data mining techniques. BioMed Res Int 2013;2013:176519.
[20] Engoren M, Habib RH, Dooner JJ, Schwann TA. Use of genetic programming, logistic regression, and artificial neural nets to predict readmission after coronary artery bypass surgery. J Clin Monit Comput 2013;27(4):455–64.
[21] Janssen KJ, Siccama I, Vergouwe Y, Koffijberg H, Debray TP, Keijzer M, et al. Development and validation of clinical prediction models: marginal differences between logistic regression, penalized maximum likelihood estimation, and genetic programming. J Clin Epidemiol 2012;65(4):404–12.
[22] Kandpal M, Kalyan CM, Samavedham L. Genetic programming-based approach to elucidate biochemical interaction networks from data. IET Syst Biol 2013;7(1):18–25.
[23] Das A, Abdel-Aty M. A genetic programming approach to explore the crash severity on multi-lane roads. Accid Anal Prev 2010;42(2):548–57.
[24] Delibasis KK, Asvestas PA, Matsopoulos GK, Zoulias E, Tseleni-Balafouta S. Computer-aided diagnosis of thyroid malignancy using an artificial immune system classification algorithm. IEEE Trans Inf Technol Biomed 2009;13(5):680–6.
[25] Er O, Sertkaya C, Temurtas F, Tanrikulu AC. A comparative study on chronic obstructive pulmonary and pneumonia diseases diagnosis using neural networks and artificial immune system. J Med Syst 2009;33(6):485–92.
[26] Sengur A. An expert system based on principal component analysis, artificial immune system and fuzzy k-NN for diagnosis of valvular heart diseases. Comput Biol Med 2008;38(3):329–38.
[27] Zhao W, Davis CE. A modified artificial immune system based pattern recognition approach – an application to clinical diagnostics. Artif Intell Med 2011;52(1):1–9.
[28] Sree PK, Babu IR, Devi NS. Investigating an artificial immune system to strengthen protein structure prediction and protein coding region identification using the cellular automata classifier. Int J Bioinform Res Appl 2009;5(6):647–62.
[29] Garbarino S, Caviglia G, Brignone M, Massollo M, Sambuceti G, Piana M. Estimate of FDG excretion by means of compartmental analysis and ant colony optimization of nuclear medicine data. Comput Math Methods Med 2013;2013:793142.
[30] Liu W, Chen H, Chen L. An ant colony optimization based algorithm for identifying gene regulatory elements. Comput Biol Med 2013;43(7):922–32.
[31] Kavitha G, Ramakrishnan S. An approach to identify optic disc in human retinal images using ant colony optimization method. J Med Syst 2010;34(5):809–13.
[32] Pereira C, Goncalves L, Ferreira M. Optic disc detection in color fundus images using ant colony optimization. Med Biol Eng Comput 2013;51(3):295–303.
[33] Rizk C, Arnaout JP. ACO for the surgical cases assignment problem. J Med Syst 2012;36(3):1891–9.
[34] Huang CF, Kaur J, Maguitman A, Rocha LM. Agent-based model of genotype editing. Evol Comput 2007;15(3):253–89.
[35] Korurek M, Nizam A. A new arrhythmia clustering technique based on ant colony optimization. J Biomed Inform 2008;41(6):874–81.
[36] Delgado Saa JF, Cetin M. Discriminative methods for classification of asynchronous imaginary motor tasks from EEG data. IEEE Trans Neural Syst Rehabil Eng 2013;21(5):716–24.
[37] Escalante HJ, Gomez M, Gonzalez JA, Gomez-Gil P, Altamirano L, Reyes CA, et al. Acute leukemia classification by ensemble particle swarm model selection. Artif Intell Med 2012;55(3):163–75.
[38] Kang X, Safdar N, Myers E, Martin AD, Grisan E, Peters CA, et al. Automatic analysis of pediatric renal ultrasound using shape, anatomical, and image acquisition priors. Med Image Comput Comput Assist Interv (MICCAI) 2013;16(Pt 3):3–66.


[39] Wu SJ, Chuang LY, Lin YD, Ho WH, Chiang FT, Yang CH, et al. Particle swarm optimization algorithm for analyzing SNP–SNP interaction of renin–angiotensin system genes against hypertension. Mol Biol Rep 2013;40(7):4227–33.
[40] Wei B, Peng Q, Zhang Q, Li C. Identification of a combination of SNPs associated with Graves' disease using swarm intelligence. Sci China Life Sci 2011;54(2):139–45.
[41] Cruz-Aceves I, Avina-Cervantes JG, Lopez-Hernandez JM, Gonzalez-Reyna SE. Multiple active contours driven by particle swarm optimization for cardiac medical image segmentation. Comput Math Methods Med 2013;2013:132953.
[42] Abdi MJ, Hosseini SM, Rezghi M. A novel weighted support vector machine based on particle swarm optimization for gene selection and tumor classification. Comput Math Methods Med 2012;2012:320698.
[43] Martinez E, Alvarez MM, Trevino V. Compact cancer biomarkers discovery using a swarm intelligence feature selection algorithm. Comput Biol Chem 2010;34(4):244–50.
[44] Chen N, Lee JJ. Optimal continuous-monitoring design of single-arm phase II trial based on the simulated annealing method. Contemp Clin Trials 2013;35(1):170–8.
[45] Ceschia S, Schaerf A. Modeling and solving the dynamic patient admission scheduling problem under uncertainty. Artif Intell Med 2012;56(3):199–205.
[46] Zheng W, Schafer NP, Davtyan A, Papoian GA, Wolynes PG. Predictive energy landscapes for protein–protein association. Proc Natl Acad Sci USA 2012;109(47):19244–9.
[47] Favrot C, Steffan J, Seewald W, Picco F. A prospective study on the clinical features of chronic canine atopic dermatitis and its diagnosis. Vet Dermatol 2010;21(1):23–31.
[48] Sartakhti JS, Zangooei MH, Mozafari K. Hepatitis disease diagnosis using a novel hybrid method based on support vector machine and simulated annealing (SVM–SA). Comput Methods Programs Biomed 2012;108(2):570–9.
[49] Cunha JA, Pouliot J, Weinberg V, Wang-Chesebro A, Roach III M, Hsu IC. Urethra low-dose tunnels: validation of and class solution for generating urethra-sparing dose plans using inverse planning simulated annealing for prostate high-dose-rate brachytherapy. Brachytherapy 2012;11(5):348–53.
[50] Kiranyaz S. Automated patient-specific classification of long-term electroencephalography. J Biomed Inform 2014;49:16–31.
[51] Luque Baena RM, Urda D, Claros MG, Franco L, Jerez JM. Robust gene signatures from microarray data using genetic algorithms enriched with biological pathway keywords. J Biomed Inform 2014;49:32–44.
[52] Passone S, Chung PWH, Nassehi V. Incorporating domain-specific knowledge into a genetic algorithm to implement case-based reasoning adaptation. Knowledge-Based Syst 2006;19(3):192–201.
[53] Dheeba J, Singh AN, Selvi TS. Computer-aided detection of breast cancer on mammograms: a swarm intelligence optimized wavelet neural network approach. J Biomed Inform 2014;49:45–52.
[54] Zhang J-R, Zhang J, Lok T-M, Lyu MR. A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training. Appl Math Comput 2007;185:1026–37.
[55] Hsieh YZ, Su MC, Wang PC. A PSO-based rule extractor for medical diagnosis. J Biomed Inform 2014;49:53–60.
[56] Alexandridis A, Chondrodima E. A medical diagnostic tool based on radial basis function classifiers and evolutionary simulated annealing. J Biomed Inform 2014;49:61–72.
[57] Acosta-Mesa HG, Rechy-Ramirez F, Mezura-Montes E, Cruz-Ramirez N, Hernandez-Jimenez R. Application of time series discretization using evolutionary programming for classification of precancerous cervical lesions. J Biomed Inform 2014;49:73–83.
[58] De Falco I, Sannino G, De Pietro G. Monitoring obstructive sleep apnea by means of a real-time mobile system based on the automatic extraction of sets of rules through differential evolution. J Biomed Inform 2014;49:84–100.
[59] Kim SJ, Ha JW, Zhang BT. Bayesian evolutionary hypergraph learning for predicting cancer clinical outcomes. J Biomed Inform 2014;49:101–11.
[60] Gorunescu F, Belciug S. Evolutionary strategy to develop learning-based decision systems. Application to breast cancer and liver fibrosis stadialization. J Biomed Inform 2014;49:112–18.

John H. Holmes
Associate Professor of Medical Informatics in Epidemiology,
Department of Biostatistics and Epidemiology,
University of Pennsylvania, Perelman School of Medicine, United States
E-mail address: [email protected]

Available online 27 May 2014
