Article

A Framework for the Multi-Level Fusion of Electronic Nose and Electronic Tongue for Tea Quality Assessment

Ruicong Zhi 1,2,*, Lei Zhao 3 and Dezheng Zhang 1,2

1 School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China; [email protected]
2 Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China
3 China National Institute of Standardization, Beijing 100191, China; [email protected]
* Correspondence: [email protected] or [email protected]; Tel.: +86-10-6233-4547

Academic Editor: Manel del Valle
Received: 2 March 2017; Accepted: 20 April 2017; Published: 3 May 2017

Sensors 2017, 17, 1007; doi:10.3390/s17051007

Abstract: The electronic nose (E-nose) and electronic tongue (E-tongue) can mimic the human senses of smell and taste, and they are widely applied in tea quality evaluation by utilizing the fingerprints of response signals that represent the overall information of tea samples. An intrinsic part of human perception is the fusion of the senses, as more information is provided compared to that from a single sensory organ. In this study, a framework for a multi-level fusion strategy of the electronic nose and electronic tongue was proposed to enhance tea quality prediction accuracy by simultaneously modeling feature fusion and decision fusion. The procedure included feature-level fusion (fusing the time-domain based feature and the frequency-domain based feature) and decision-level fusion (D-S evidence to combine the classification results from multiple classifiers). The experiments were conducted on tea samples of four grades collected from various tea providers. The large quantity made the quality assessment task very difficult, and the experimental results showed a much better classification ability for the multi-level fusion system. The proposed algorithm could better represent the overall characteristics of tea samples for both odor and taste.

Keywords: multi-level fusion; feature fusion; decision fusion; electronic nose; electronic tongue; tea quality assessment

1. Introduction

Tea quality assessment is crucial for both producers and consumers, and it is a very challenging task due to the presence of innumerable compounds and their diverse contributions to tea quality. The evaluation of tea quality is usually carried out through human sensory analysis, which can provide direct and integrated measurements of various attributes. The gradation of tea is yielded on the basis of experienced tea tasters' scores, and tea tasters assign the perceived intensities of appearance, aroma, and taste captured by their sense organs [1]. However, human sensory evaluation is subjective, as even the same person may provide different evaluation information in different experiments, and sensory panels are inconsistent due to individual variability. Moreover, there are various influencing factors (e.g., physical or mental states) which make human sensory evaluation inaccurate [2]. With the increased expectation of high quality and a large quantity of product, there are many requirements for objective measurements in a fast, accurate, and cost-effective manner [3]. Conventional flavor analysis techniques, such as gas chromatography (GC), high-performance liquid chromatography (HPLC), plasma atomic emission spectrometry, and capillary electrophoresis, are costly and not suitable for online quality control [4].


Electronic intelligent systems (e.g., the E-nose and E-tongue) have received considerable attention during the last two decades, as their applications in the food industry have multiplied [3,5]. The E-nose and E-tongue can mimic the sensory perception of human smell and taste, and they are widely applied in tea quality evaluation by utilizing fingerprints of response signals representing the overall information of tea samples. The E-tongue can detect the taste of samples, whereas the E-nose can identify the odor. Both of them are based on the principle of identifying odor/taste by extracting an overall signature from the comprehensive mixture of compounds. Usually, an electronic intelligent system consists of a sensor array for chemical detection, along with an advanced pattern recognition system so that the sensor signal data can be processed automatically and reliably. E-noses and E-tongues have been widely used in many food quality assessment applications, such as fruits (grapes, pears, apples, mangos, pineapples, and peaches), beverages (juice, coffee, tea, wine, and milk), meat, fish, and rice [3,5–7]. There are a great number of studies focusing on tea quality assessment by electronic intelligent instruments. Both the E-nose and E-tongue have been successfully applied to provide fast and reliable results for tea grade identification, storage time, and fermentation processes, where the odor of tea or the taste of tea is considered as the sole attribute [8–19].

The intrinsic part of human perception is the fusion of the senses, as multiple sensory organs provide more information, which helps to make a better decision than a single sensory pipeline. The electronic intelligent system works in a similar way to human perception, with diverse sensors generating different signature phenomena to fully exploit the characteristics of the test samples. Data fusion aims to combine information acquired from multiple sources through different strategies to achieve a better description and to enhance the probability of an accurate classification. Usually, fusion can be categorized into data fusion (low-level fusion), feature fusion (intermediate-level fusion), and decision fusion (high-level fusion). The data fusion strategy employs raw-data-level fusion by combining the raw sensor response signals into a single signal. Feature fusion concatenates the features extracted from the signal values of the sensors through a variety of feature extraction and selection methods. Decision fusion combines the outputs of multiple classifiers to achieve a final prediction, with one classifier trained for each signal source. Decision-level fusion combines decisions from multiple sensory pipelines, and this is similar to the fusion mechanism in the human brain [5]. A number of studies have reported that a system combining an E-nose and E-tongue improves on the performance of an individual system for quality assessment [20–29]. Most of these studies focused on data fusion and feature fusion, combining E-nose and E-tongue raw sensor signals or features extracted from the raw signal data, and only a few reported on the decision fusion of multiple electronic intelligent instruments [30–33].

In this study, a framework for a multi-level fusion strategy of the E-nose and E-tongue was proposed to improve tea quality prediction accuracy by simultaneously modeling feature fusion and decision fusion; the procedure is illustrated in Figure 1. Two different features, time-domain based and frequency-domain based representations, were extracted from the E-nose and E-tongue sensor responses to better represent the individual sensor signal characteristics. The merged features were analyzed by a nonlinear dimensionality reduction algorithm for feature selection. Finally, the classification results obtained by the K-nearest neighbor classifier from the E-nose and E-tongue were fused by D-S evidence, an effective decision fusion algorithm.

Figure 1. Illustration of the strategy flow of multi-level fusion tea quality assessment.

2. Materials and Methods

2.1. Sample Collection

All of the tea samples belong to a category of green tea called "Longjing tea", with four grade levels (grade 1/T, grade 2/Y, grade 3/E, and grade 4/S) selected from seven tea producers (denoted by MC, MG, ML, MS, MX, MY, and QD) in the Xihu producing area, Hangzhou, China. The quality of the tea samples was evaluated by nationally certified tea experts according to the national standard "GB/T 23776-2009 Methodology of sensory evaluation of tea". Thirty-five tea samples of each grade and each company were used, so there were 980 samples in total. The tea samples were individually vacuum packed with aluminum foil materials and stored in the refrigerator at −4 °C [34].

2.2. Electronic Nose Measurement

An E-nose (Fox 4000, Alpha M.O.S. Co., Toulouse, France) was utilized to acquire the odor fingerprints of the tea samples. The sensor array consists of 18 metallic oxide sensors. The tea sample preparation procedure is as follows: In the preprocessing stage, each tea sample was weighed (1 g) and mixed with 5 mL ultrapure water at room temperature in a 20 mL headspace bottle. The headspace bottle was placed in an automatic sampling device after sealing. Cross testing was conducted for the different grades of tea samples through a cyclic cross sequence, i.e., all four grades of tea samples were tested in the order of "T, Y, E, S; T, Y, E, S; ...". Therefore, tea samples of various qualities could be tested alternately, thus avoiding the adaptability of the sensors to a certain type of tea sample. In the experiments, the headspace bottle was first sent to the preheating area for 900 s, with an oscillator rotation rate of 500 rpm and a temperature of 60 °C. Then, 2 mL of gas was injected into the sensor array room at a speed of 2 mL/s. Before the tea sample detection, pure nitrogen was pumped into the sensor array chamber at a speed of 2 mL/s for 300 s, to ensure that the previous gas molecules were thoroughly cleaned out [35,36]. The tea sample reaction time was 120 s, and the responses of the sensors were recorded every 0.5 s. Therefore, there were 241 values for each sensor of the electronic nose.


2.3. Electronic Tongue Measurement

An E-tongue (α-ASTREE, Alpha M.O.S. Co., Toulouse, France) with an array of seven electrodes was utilized in this study. The tea samples were prepared by infusing every 1 g of tea with 150 mL of boiling ultrapure water in a covered beaker. Then, the beaker was put in a boiling water bath (100 °C) and shaken every 10 min. After 45 min, the tea infusion was filtered and cooled down for measurement after 2 h [2].

The electronic tongue is not as stable as the electronic nose, and some preprocessing is needed to enhance the stability and reliability of the tea sample measurement. Such preprocessing includes self-testing, activation, training, calibration, and diagnostics. The procedure is as follows: The connection of the hardware is automatically tested by the equipment software. The sensors are cleaned with deionized water for 10–20 s, and activated in another cup of deionized water for 30 min to enrich H+ on the coating film of the sensor. Then, the sensors are trained using HCl (0.01 mol/L, training solution) and ultrapure water (cleaning solution): first, the sensors are cleaned in ultrapure water for 10 s, and then moved to the HCl solution for 300 s; the same procedure is repeated three times under software control. Calibration is subsequently conducted to check whether the coating film of the sensor is balanced after training: the sensors are cleaned in ultrapure water (10 s) and reacted in 0.01 mol/L HCl (120 s), and the operation is repeated three times. Moreover, the instrument was diagnosed every 20 days in our test to guarantee the sensitivity and effectiveness of the sensors. The procedure is controlled and implemented by the software. Each of the electrodes produced a signal curve, and each signal consisted of 241 points, as the signal value was acquired every 0.5 s over the period of 120 s. Therefore, the data formed a matrix of 7 × 241.

2.4. Multi-Level Fusion System

2.4.1. Feature Extraction and Fusion

The response signals collected from the E-nose and E-tongue are sequence signals changing over time. It is not feasible to deal with the original signal directly, as it is time consuming and there is usually noise mixed in with the sensor signal sequences. Therefore, feature extraction is of great importance, as it can efficiently analyze the sensor sequences and extract features representing the characteristics of the signals. Most studies have employed time-domain based features, which can represent the inner characteristics of the intelligent sensors' response signals. In the signal processing field, more effective representations are obtained by deriving frequency-domain based features through filter processing, as they can differentiate signals which are not well discriminated in the time domain. Therefore, both time-domain based representations and frequency-domain based representations are extracted from the E-nose and E-tongue response signals, and feature fusion is conducted to comprehensively exploit the superiority of both representations.

In this study, the time-domain based representations include the maximum value (MV) and the average value (AV) of the original sensor responses of the E-nose and E-tongue, respectively. The maximum value of the i-th sensor is defined as:

$$MV_i = \max\{x_{i,1}, x_{i,2}, \ldots, x_{i,241}\}, \qquad \begin{cases} i = 1, 2, \ldots, 18 & \text{for the E-nose} \\ i = 1, 2, \ldots, 7 & \text{for the E-tongue} \end{cases} \qquad (1)$$


The average value of the i-th sensor is defined as:

$$AV_i = \frac{x_{i,1} + x_{i,2} + \cdots + x_{i,241}}{241}, \qquad \begin{cases} i = 1, 2, \ldots, 18 & \text{for the E-nose} \\ i = 1, 2, \ldots, 7 & \text{for the E-tongue} \end{cases} \qquad (2)$$

where i is the sensor label and $x_{i,1}, x_{i,2}, \ldots, x_{i,241}$ are the absolute signal values of the i-th sensor for the E-nose and E-tongue, respectively.

The frequency-domain based representation is extracted by wavelet packet analysis, and the maximum energy (ME) and average energy (AE) of the wavelet packet coefficients are explored to denote the mainstream traits and the overall level of the sensor signals. The maximum energy (ME) of the i-th sensor is defined as:

$$ME_i = \max\{E^i_{30}, E^i_{31}, \ldots, E^i_{37}\}, \qquad \begin{cases} i = 1, 2, \ldots, 18 & \text{for the E-nose} \\ i = 1, 2, \ldots, 7 & \text{for the E-tongue} \end{cases} \qquad (3)$$

The average energy (AE) of the i-th sensor is defined as:

$$AE_i = \frac{E^i_{30} + E^i_{31} + \cdots + E^i_{37}}{8}, \qquad \begin{cases} i = 1, 2, \ldots, 18 & \text{for the E-nose} \\ i = 1, 2, \ldots, 7 & \text{for the E-tongue} \end{cases} \qquad (4)$$

where $E^i_{30}, E^i_{31}, \ldots, E^i_{37}$ are the energies of the i-th sensor at each frequency band, and $E_{3j} = \sum_{k=1}^{m} C^2_{3jk}$ ($j = 0, 1, \ldots, 7$), where m is the number of coefficients and $C_{3jk}$ is the k-th coefficient of the j-th frequency band corresponding to a three-scale orthogonal wavelet decomposition.

The feature-level fusion is conducted by connecting the time-domain based features and the frequency-domain based features in series, i.e., the four different features of each sensor are individually arrayed as $(MV_i, AV_i, ME_i, AE_i)$, and the fused features of the sensors are concatenated in order for the E-nose and E-tongue, respectively [36]. The fused feature is standardized for further analysis.
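To make the feature stage concrete, the following minimal Python sketch (not the authors' Matlab implementation) computes the four per-sensor features of Equations (1)–(4) and fuses them sensor by sensor; the db3 mother wavelet and the PyWavelets library are assumptions, since the paper specifies neither.

```python
import numpy as np
import pywt  # PyWavelets

def extract_features(signal, wavelet="db3"):
    """Per-sensor features of Eqs. (1)-(4): time-domain MV/AV and
    wavelet-packet ME/AE. The mother wavelet is an assumption."""
    x = np.abs(np.asarray(signal, dtype=float))   # absolute signal values
    mv = x.max()                                  # maximum value, Eq. (1)
    av = x.mean()                                 # average value, Eq. (2)

    # Three-scale wavelet packet decomposition -> 8 bands E_30..E_37
    wp = pywt.WaveletPacket(x, wavelet=wavelet, maxlevel=3)
    energies = [float(np.sum(node.data ** 2))
                for node in wp.get_level(3, order="natural")]
    me = max(energies)                            # maximum energy, Eq. (3)
    ae = sum(energies) / 8.0                      # average energy, Eq. (4)
    return mv, av, me, ae

def fuse(sensor_matrix):
    """Concatenate (MV, AV, ME, AE) sensor by sensor: 18 E-nose sensors
    give a 72-D vector, 7 E-tongue electrodes give a 28-D vector."""
    return np.array([f for row in sensor_matrix for f in extract_features(row)])

nose = np.random.rand(18, 241)  # simulated E-nose responses (0.5 s sampling)
print(fuse(nose).shape)         # (72,)
```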

2.4.2. Nonlinear Subspace Embedding

Linear separation in the original data space is rare in the practical application of quality classification, due to the correlation and redundancy among multiple electronic sensors. Therefore, the fused features concatenating the time-domain representation and the frequency-domain representation extracted from the original sensor signals are embedded into a high dimensional subspace in which the samples are linearly separable. Kernel-based algorithms are widely used nonlinear embedding methods for dimensionality reduction, and the effectiveness of kernel-based dimensionality reduction methods was evaluated in [19]. KLDA (Kernel Linear Discriminant Analysis) is utilized in this study to reduce the fused feature dimension [37].

The fused features for sample k (k = 1, 2, ..., N) are denoted by $x_k = [MV_i, AV_i, ME_i, AE_i]$ for the E-nose (i = 1, 2, ..., 18) and E-tongue (i = 1, 2, ..., 7). In the following, we use a single symbol $X = [x_1; x_2; \ldots; x_N]$ to represent the E-nose and E-tongue data matrix for simplicity. The data matrix X is implicitly mapped to a high dimensional feature space F with a nonlinear transformation $\Phi$, i.e., $F = \{\Phi(X) : X \in \mathbb{R}^N\}$, where N is the sample number. The objective of the KLDA algorithm is to maximize the ratio of the between-class scatter to the within-class scatter in F, where the within-class scatter is defined as

$$S^\Phi_w = \sum_{i=1}^{c} \sum_{j=1}^{n_i} \left(\Phi(x_{ij}) - m_i\right)\left(\Phi(x_{ij}) - m_i\right)^T,$$

where c is the number of classes (four grades), $n_i$ is the number of samples in class i, and $m_i$ is the mean of the i-th class samples. The between-class scatter is defined as

$$S^\Phi_b = \sum_{i=1}^{c} n_i (m_i - m)(m_i - m)^T,$$

where m is the mean of all the tea samples.
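scikit-learn ships no KLDA, so the sketch below approximates the kernel discriminant embedding with a kernel PCA lift followed by ordinary LDA; the RBF kernel, its gamma, and the component counts are assumptions rather than values from the paper, whose own KLDA implementation [37] was written in Matlab.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical standardized fused features (e.g., 560 samples x 72 dims)
X = np.random.rand(560, 72)
y = np.random.randint(0, 4, size=560)  # four grades T/Y/E/S encoded 0..3

# Kernel lift followed by Fisher's criterion in the lifted space
embedder = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=50, kernel="rbf", gamma=0.01),
    LinearDiscriminantAnalysis(n_components=3),
)
X_low = embedder.fit_transform(X, y)
print(X_low.shape)  # (560, 3)
```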


2.4.3. Classification

Classification is the grade identification process on the basis of the fused and selected features. The K-nearest neighbors (KNN) classifier is a non-parametric method which is among the simplest of the machine learning algorithms employed for classification. The classifier outputs the class label for a new, unclassified sample by a majority vote of its K nearest neighbors in the training set, and no explicit training step is required.
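A minimal sketch of this step with scikit-learn's KNeighborsClassifier; the feature dimensionality and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.random((560, 5))                 # hypothetical KLDA-reduced features
y_train = rng.choice(["T", "Y", "E", "S"], 560)

knn = KNeighborsClassifier(n_neighbors=3)      # majority vote among K=3 neighbors
knn.fit(X_train, y_train)                      # "training" only stores the samples
print(knn.predict(rng.random((1, 5))))         # grade label for a new sample
```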

2.4.4. Decision Fusion Based on D-S Evidence

D-S evidence theory has been widely applied to artificial intelligence and multi-sensor fusion [38]. It is a more general and flexible method than traditional probability approaches, as it supports the representation of both imprecision and uncertainty [39]. The basic principle of D-S evidence is stated in detail as follows:

Let $\Theta = \{\theta_1, \theta_2, \ldots, \theta_c\}$ be the set of $\theta_c$ ($\theta_c \in \Theta$) corresponding to c identifiable classes; then $\Theta$ is the space of hypotheses, called a frame of discernment. A key issue of the D-S evidence theory is the definition of the basic probability assignment. The basic probability assignment function m is defined on $2^\Theta$ as $m : 2^\Theta \rightarrow [0, 1]$ for every element A of $2^\Theta$. The probability assignment value m(A) satisfies the following properties:

$$m(\phi) = 0, \qquad \sum_{A \in 2^\Theta} m(A) = 1 \qquad (5)$$

For any $A \in 2^\Theta$, the quantity m(A) represents a measure of belief that is exactly committed to A; $\phi$ is the empty set. An element A of $2^\Theta$ with m(A) greater than zero is called a focal element of m.

The other key issue of the D-S evidence theory is how to aggregate multiple pieces of evidence from different sources, defined on the same frame of discernment, by means of the basic probability assignment function. Two bodies of evidence, $m_1$ and $m_2$, with focal elements $A_1, A_2, \ldots, A_i$ and $B_1, B_2, \ldots, B_j$, respectively, can be merged to obtain a new basic probability assignment function m by a combination rule. The D-S evidence combination rule is defined as:

$$m(A) = \frac{\sum_{A_i \cap B_j = A} m_1(A_i)\, m_2(B_j)}{1 - Q}, \qquad A \neq \phi \qquad (6)$$

where $Q = \sum_{A_i \cap B_j = \phi} m_1(A_i)\, m_2(B_j)$ and Q < 1. One of the main difficulties in D-S evidence is how to initialize the basic probability assignment function as well as possible. There is no general answer to this key problem [40,41]. Generally, the probability values are determined artificially depending on the specific applications.
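A small Python sketch of the combination rule for the special case used later in this paper, where every focal element is a singleton grade; the numeric masses are hand-picked for illustration.

```python
CLASSES = ["T", "Y", "E", "S"]  # frame of discernment: four singleton grades

def ds_combine(m1, m2):
    """Dempster's combination rule, Eq. (6), for two bodies of evidence whose
    focal elements are the singleton classes; m1, m2 map class -> mass."""
    # Conflict Q: total mass on disjoint pairs (A_i != B_j for singletons)
    q = sum(m1[a] * m2[b] for a in CLASSES for b in CLASSES if a != b)
    if q >= 1.0:
        raise ValueError("total conflict: the evidence cannot be combined")
    # A_i ∩ B_j is non-empty only when A_i == B_j, so m(A) = m1(A)m2(A)/(1-Q)
    return {c: m1[c] * m2[c] / (1.0 - q) for c in CLASSES}

# Worked example with hand-picked basic probability assignments
m_nose   = {"T": 0.6, "Y": 0.3, "E": 0.1, "S": 0.0}
m_tongue = {"T": 0.5, "Y": 0.2, "E": 0.2, "S": 0.1}
print(ds_combine(m_nose, m_tongue))  # mass concentrates on 'T' (~0.79)
```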


2.4.5. Decision Fusion for the E-Nose and E-Tongue

The grade identification is conducted by D-S evidence decision fusion for the E-nose and E-tongue, with a KNN classifier utilized for each instrument. Let the four grades of the tea samples be denoted by T, Y, E, and S. Therefore, the identifiable set is defined as $\Theta = \{T, Y, E, S\}$. In our experiment, the basic probability assignment function is defined by the output of the classifier. The results of the KNN classifier are utilized to calculate the membership degree of each class from the K nearest neighbors, that is:

$$m(i) = \frac{x\_neigh_i}{K}, \qquad i = T, Y, E, S \qquad (7)$$

where $x\_neigh_i$ is the number of neighbors belonging to class i among the K nearest neighbors. The parameter K is determined according to the recognition accuracies of the experiments. Assume that $m^i_T, m^i_Y, m^i_E, m^i_S$ and $n^i_T, n^i_Y, n^i_E, n^i_S$ are the basic probability assignment values for the four grades of the i-th tea sample obtained by the E-nose and E-tongue, respectively. The final probability values of the tea sample, $M^i_T, M^i_Y, M^i_E, M^i_S$, are identified by the D-S evidence combination rule expressed in Equation (6). The grade of a test tea sample is classified according to the maximum membership rule over the final probability values, that is:

$$class(i) = \operatorname{suffix}\Big(\max_j \big\{M^i_j\big\}\Big), \qquad j = T, Y, E, S \qquad (8)$$

The suffix of the maximum value of $M^i_T, M^i_Y, M^i_E, M^i_S$ is the grade to which the test sample is assigned.
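Putting Equations (6)–(8) together, the following sketch derives the basic probability assignments from two KNN classifiers and fuses them with the ds_combine function from the earlier sketch; the tiny 1-D "embedded features" are fabricated solely to make the example run.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

GRADES = ["T", "Y", "E", "S"]
K = 3  # the value that gave the top recognition rates in Section 3.2

def knn_bpa(knn, y_train, x):
    """Basic probability assignment of Eq. (7): m(i) = x_neigh_i / K."""
    idx = knn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
    return {g: sum(y_train[i] == g for i in idx) / K for g in GRADES}

# Tiny hand-made 1-D "embedded features": four grade clusters at 0, 1, 2, 3
X_tr = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2],
                 [2.0], [2.1], [2.2], [3.0], [3.1], [3.2]])
y_tr = np.array(["T"] * 3 + ["Y"] * 3 + ["E"] * 3 + ["S"] * 3)

knn_nose = KNeighborsClassifier(n_neighbors=K).fit(X_tr, y_tr)
knn_tongue = KNeighborsClassifier(n_neighbors=K).fit(X_tr, y_tr)

m_nose = knn_bpa(knn_nose, y_tr, np.array([0.15]))      # clearly grade T
m_tongue = knn_bpa(knn_tongue, y_tr, np.array([0.55]))  # between T and Y
fused = ds_combine(m_nose, m_tongue)                    # Eq. (6), sketch above
print(max(fused, key=fused.get))                        # Eq. (8): prints 'T'
```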

The algorithms are implemented in Matlab R2015b (Mathworks, Natick, MA, USA).

3. Results and Discussion

3.1. Feature Representation from Sensors

Both the time-domain based features (MV, AV) and the frequency-domain based features (ME, AE) were calculated from the sequence signals of the E-nose and E-tongue to enhance the discriminative ability of the features. The feature fusion was conducted individually for the E-nose and E-tongue. The fused features of all 18 sensors of the E-nose were concatenated in order, yielding a feature vector of 72 dimensions. Similarly, a vector of 28 dimensions was obtained by fusing the features of all seven sensors of the electronic tongue.

The discriminant ability of the features was evaluated by cluster scatters obtained with linear discriminant analysis, and the cluster trends of the tea samples were visualized by 2-D scatter plots. Figure 2a shows the cluster scatter of the seven tea sample providers using a single feature (the maximum value, which is the most commonly used) for the electronic nose, and Figure 2b shows the cluster scatter using the fused features. Scatters for the average value, maximum energy, and average energy of the E-nose are provided in the supplementary material.

The results of the electronic nose demonstrated that the tea samples of the four grades were not correctly discriminated by a single feature. The boundaries of the classes were not explicit, and samples of different tea grades overlapped. Moreover, the samples in the same class were diverse (i.e., spread over a large area). This may decrease the probability of correctly assigning a certain test sample to one of the four grades. In Figure 2b, the scatter plots show that the fused features significantly improved the performance. For most of the tea samples (i.e., producers MC, MG, ML, MS, MX, and MY), the tea samples of the same class were much more compact, and the tea samples of different classes were clearly distinguished. Although a number of tea samples from tea producer QD with grade E and grade S overlapped, the samples within the same class became much closer than in the scatters of a single feature. The results meet the objective of the dimensionality reduction methods, which is to maximize the between-class scatter and minimize the within-class scatter.
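The per-producer score plots of Figures 2 and 3 can be reproduced in outline as follows; this sketch uses synthetic data in place of the real fused features, and the DF1/DF2 axis labels mimic those in the figures.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for one producer: 35 samples per grade, 72-D fused features
rng = np.random.default_rng(1)
X_fused = rng.random((140, 72)) + 0.3 * np.repeat(np.arange(4), 35)[:, None]
y = np.repeat([1, 2, 3, 4], 35)  # grade labels as plotted digits

lda = LinearDiscriminantAnalysis(n_components=2)
scores = lda.fit_transform(X_fused, y)       # 2-D discriminant scores
ratio = 100 * lda.explained_variance_ratio_  # percentages shown on the DF axes
plt.scatter(scores[:, 0], scores[:, 1], c=y)
plt.xlabel(f"DF1 - {ratio[0]:.3f}%")
plt.ylabel(f"DF2 - {ratio[1]:.3f}%")
plt.show()
```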


[Figure 2 appears here: 2-D linear discriminant score plots for each producer (MC, MG, ML, MS, MX, MY, QD), with axes DF1 and DF2 annotated with their explained-variance percentages; the plotted digits 1–4 denote the four grades.]

Figure 2. Score plot of the E-nose (a) maximum value (MV) (b) fused feature.


Moreover, Figure 3a illustrates the cluster scatter of the seven tea sample providers using the average values, which are commonly used for the electronic tongue, and Figure 3b illustrates the cluster scatter using the fused features.

According to the results of the electronic tongue, the tea samples of the four grades significantly overlapped for single features. The fused feature improved the performance, as shown in Figure 3b: most of the four grades of tea samples were separated correctly, and few of them were close to the class boundaries. The scatter plots of Figure 3a show that the tea samples from producers MG and MS overlapped for more than two classes, while the fused feature separated the samples clearly and the class boundaries were explicit for each class. The tea samples from producers ML and MX formed a single mass for the AV feature alone, whereas Figure 3b illustrates that the classes were separated more clearly. Moreover, the samples were closely located within the same class in all cases.

The results of both the E-nose and E-tongue demonstrate the superiority of fusing the time-domain based and frequency-domain based features in representing comprehensive signal characteristics. The fused feature can represent the overall characteristics of the sensor signals, and it is more suitable for tea grade identification. However, the results are not clear when putting all of the tea samples from the diverse companies together, even in the case of feature fusion. The tea samples collected for this study are much more numerous than in the state of the art, and the tea providers are various, too. This is the real problem faced in tea gradation in China. To better identify the green tea grades with a large number of tea samples from diverse sources, advanced technologies should be introduced, and that is the purpose of this study.

[Figure 3 appears here: 2-D linear discriminant score plots for each producer (MC, MG, ML, MS, MX, MY, QD), with axes DF1 and DF2 annotated with their explained-variance percentages; the plotted digits 1–4 denote the four grades.]

Figure 3. Score plot of the E-tongue (a) average value (AV) (b) fused feature.

3.2. Dimensionality Reduction of the Fused Feature

In this section, the experimental results of the classification performed by feature fusion, the KLDA-based dimensionality reduction, and the KNN classifier are presented. The tea samples were divided into two subsets: the training set was utilized to develop the model, and the testing set was utilized to verify the performance of the model. The training set consisted of twenty samples of each grade from each tea provider (20 × 4 × 7 = 560 in total), chosen by random selection, and the testing set was constructed from the remaining tea samples; a sketch of this split follows below. The recognition accuracies versus the dimensionality for different K values (the parameter of the KNN classifier) are shown in Figure 4 for the E-nose and E-tongue, respectively. The top recognition accuracy of the E-nose was 71.3%, and it was 82.7% for the E-tongue. The top recognition rate of the E-nose was obtained with Dim = 5 and K = 3, while that of the E-tongue was obtained with Dim = 4 and K = 3. Furthermore, the corresponding confusion matrices of the E-nose and E-tongue are shown in Table 1. Each column of the matrix represents the instances in a predicted class, while each row represents the instances in an actual class. The tea samples were confused among classes differently. For the E-nose, a large number of grade E tea samples were misclassified as grade Y, and many tea samples of grade S were misclassified as grade Y and grade E; for the E-tongue, many tea samples of grade T were misclassified as grade Y, grade E samples were misclassified as grade Y, and a great number of tea samples of grade S were misclassified as grade E.

[Figure 4 appears here: line plots of recognition accuracy (%) versus dimensionality (1–7) for K = 1, 3, 5, 7, 9, for (a) the E-nose and (b) the E-tongue.]

Figure 4. Recognition accuracies of the KLDA-KNN model for (a) E-nose (b) E-tongue.

Table 1. Confusion matrix (%) for KLDA-KNN classification of the E-nose, E-tongue, and decision fusion. Rows are actual grades; columns are predicted grades.

                 E-Nose Feature              E-Tongue Feature            Decision Fusion
         T      Y      E      S       T      Y      E      S       T      Y      E      S
  T    92.0    8.0    0      0      78.7   16.0    5.3    0      93.3    2.7    4.0    0
  Y    19.0   78.6    1.2    1.2     8.3   85.7    4.8    1.2     1.2   94.0    4.8    0
  E     2.5   26.6   58.2   12.7     1.3    8.9   82.3    7.6     1.3    3.8   86.1    8.9
  S     4.9   18.3   20.7   56.1     2.4    4.9    8.5   84.1     0      0      8.5   91.5
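As a quick consistency check on Table 1: the testing set is balanced (15 samples per grade and producer, i.e., 105 per grade), so the average accuracy of the decision fusion is simply the mean of the diagonal of its block, which reproduces the 91.3% reported in Section 3.3 up to rounding of the table entries.

```python
import numpy as np

# Decision-fusion block of Table 1 (rows actual T/Y/E/S, columns predicted), %
fusion_cm = np.array([
    [93.3,  2.7,  4.0,  0.0],
    [ 1.2, 94.0,  4.8,  0.0],
    [ 1.3,  3.8, 86.1,  8.9],
    [ 0.0,  0.0,  8.5, 91.5],
])
# Equal class sizes -> average accuracy is the mean of the diagonal
print(f"{fusion_cm.diagonal().mean():.1f}%")  # 91.2%, vs. the reported 91.3%
```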

3.3. Decision Level Fusion for Tea Quality Identification

The feature-level fusion, together with the kernel dimensionality reduction method, could correctly identify most of the tea samples. However, it was not ideal for all the tea samples collected from the various companies. The D-S evidence-based decision fusion was conducted to improve the classification. As discussed in Section 2.4.5, the basic probability assignment function was defined by the membership degree from the K-nearest neighbors. In this study, the value of K related to the highest recognition accuracy was selected, where the parameter K was set to three for the KNN classifier and the subsequent analysis.


The D-S evidence combination was conducted on the classification results obtained by feature fusion + KLDA + KNN for both the E-nose and E-tongue. The maximum membership rule defined in Equation (8) was utilized to identify the grade of the tea samples. Based on the multi-level fusion method, the average recognition accuracy achieved for the testing set was 91.3%, compared to 71.3% for the E-nose and 82.7% for the E-tongue. The corresponding confusion matrix is shown in Table 1. The diagonal of the confusion matrix denotes the correct recognition rates of the four grades, and the other elements denote the misclassification rates among the four grades. Some tea samples of grade T were misclassified as grade Y (8.0%) for the E-nose, and it was even worse for the E-tongue (16.0%), whereas the misclassification rate was reduced to 2.7% for the decision fusion scheme. For the E-nose, 19.0% of the grade Y samples were misclassified as grade T, and for the E-tongue, the misclassification rate was 8.3%; this was improved by the fusion method, producing a value of 1.2%. Moreover, the recognition accuracy of the grade Y samples increased to 94% for the fusion output, and no sample was misclassified as grade S. The tea samples of grade E were easily misclassified into the other three categories, and the recognition accuracy of decision fusion for grade E was enhanced compared to the E-nose and E-tongue. For the E-nose, the proportions of grade S samples misclassified as grade Y and grade E were 18.3% and 20.7%, respectively; for the E-tongue, they were 4.9% and 8.5%, respectively. For the fusion method, the grade S tea samples were well identified, with a recognition accuracy of 91.5%, and no grade S sample was misclassified as grade T or grade Y. The multi-level fusion strategy combining the E-nose and E-tongue data thus significantly improved the recognition accuracy: the grade T, Y, and S samples could be distinguished effectively, and the recognition accuracy for these three grades was above 90% for the large quantity of tea samples.

4. Conclusions

A multi-level fusion strategy which combines an electronic nose and an electronic tongue was proposed and applied to tea quality identification. The procedure included feature-level fusion (fusing the time-domain based feature and the frequency-domain based feature) and decision-level fusion (D-S evidence to combine the classification results from KNN). The experiments were carried out on tea samples of four grades collected from various tea providers, which made the task very difficult, and the results showed a much better classification ability for the multi-level fusion system. The proposed algorithm could better represent the overall characteristics of the tea samples for both odor and taste, and that is the reason why the system outperformed the single-feature method or single classifier. Furthermore, the general idea of this paper can be improved by selecting better feature representation methods, and more attributes, such as appearance, may be considered for this multi-level fusion system. This paper provides a framework for the combination of features and decisions, and it is important to consider the different sources to better identify the tea quality.

Supplementary Materials: The supplementary materials are available online at http://www.mdpi.com/1424-8220/17/5/1007/s1.

Acknowledgments: This work is supported by the National Natural Science Foundation of China (No. 61673052, No. 31201358), the Fundamental Research Fund for the Central Universities of China (06116070), the National Research and Development Major Project (SQ2017YFNC010027), the National High Technology Research and Development Program 863 (No. 2011AA1008047), and a grant from the Chinese Scholarship Council (CSC).

Author Contributions: R. Zhi and L. Zhao conceived and designed the experiments, and performed the experiments; R. Zhi and D. Zhang analyzed the data; and R. Zhi wrote the paper.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Bhattacharyya, N.; Bandyopadhyay, R.; Bhuyan, M.; Tudu, B.; Ghosh, D.; Jana, A. Electronic Nose for Black Tea Classification and Correlation of Measurements with "Tea Taster" Marks. IEEE Trans. Instrum. Meas. 2008, 57, 1313–1321.
2. Dutta, R.; Hines, E.L.; Gardner, J.W.; Kashwan, K.R.; Bhuyan, M. Tea Quality Prediction Using a Tin Oxide-Based Electronic Nose: An Artificial Intelligence Approach. Sens. Actuators B Chem. 2003, 94, 228–237.
3. Kiani, S.; Minaei, S.; Ghasemi-Varnamkhasti, M. Fusion of Artificial Senses as a Robust Approach to Food Quality Assessment. J. Food Eng. 2016, 171, 230–239.
4. Ghasemi-Varnamkhasti, M.; Mohtasebi, S.S.; Siadat, M.; Razavi, S.H.; Ahmadi, H.; Dicko, A. Discriminatory Power Assessment of the Sensor Array of an Electronic Nose System for the Detection of Non-Alcoholic Beer Aging. Czech J. Food Sci. 2012, 30, 236–240.
5. Banerjee, R.; Tudu, B.; Bandyopadhyay, R.; Bhattacharyya, N. A Review on Combined Odor and Taste Sensor Systems. J. Food Eng. 2016, 190, 10–21.
6. Baldwin, E.A.; Bai, J.; Plotto, A.; Dea, S. Electronic Noses and Tongues: Applications for the Food and Pharmaceutical Industries. Sensors 2011, 11, 4744–4766.
7. Persaun, K. Electronic Noses and Tongues in the Food Industry. In Electronic Noses and Tongues in Food Science, 1st ed.; Mendez, M.L.R., Preedy, V.R., Eds.; Elsevier Academic Press: London, UK, 2016; Chapter 1; pp. 1–12.
8. Chen, Q.; Zhao, J.; Chen, Z.; Lin, H.; Zhao, D.A. Discrimination of Green Tea Quality Using the Electronic Nose Technique and the Human Panel Test, Comparison of Linear and Nonlinear Classification Tools. Sens. Actuators B Chem. 2011, 159, 294–300.
9. Kaur, R.; Kumar, R.; Gulati, A.; Ghanshyam, C.; Kapur, P.; Bhondekar, A.P. Enhancing Electronic Nose Performance: A Novel Feature Selection Approach Using Dynamic Social Impact Theory and Moving Window Time Slicing for Classification of Kangra Orthodox Black Tea (Camellia sinensis (L.) O. Kuntze). Sens. Actuators B Chem. 2012, 166, 309–319.
10. Yu, H.; Wang, Y.; Wang, J. Identification of Tea Storage Times by Linear Discrimination Analysis and Back-Propagation Neural Network Techniques Based on the Eigenvalues of Principal Components Analysis of E-nose Sensor Signals. Sensors 2009, 9, 8073–8082.
11. Yu, H.; Wang, J.; Zhang, H.; Yu, Y.; Yao, C. Identification of Green Tea Grade Using Different Feature of Response Signal from E-Nose Sensors. Sens. Actuators B Chem. 2008, 128, 455–461.
12. Bhattacharyya, N.; Seth, S.; Tudu, B.; Tamuly, P.; Jana, A.; Ghosh, D.; Bandyopadhyay, R.; Bhuyan, M. Monitoring of Black Tea Fermentation Process Using Electronic Nose. J. Food Eng. 2007, 80, 1146–1156.
13. Ivarsson, P.; Holmin, S.; Höjer, N.; Krantz-Rülcker, C.; Winquist, F. Discrimination of Tea by Means of a Voltammetric Electronic Tongue and Different Applied Waveforms. Sens. Actuators B Chem. 2001, 76, 449–454.
14. Palit, M.; Tudu, B.; Dutta, P.K.; Dutta, A.; Jana, A.; Roy, J.K.; Bhattacharyya, N.; Bandyopadhyay, R.; Chatterjee, A. Classification of Black Tea Taste and Correlation with Tea Taster's Mark Using Voltammetric Electronic Tongue. IEEE Trans. Instrum. Meas. 2010, 59, 2230–2239.
15. Xiao, H.; Wang, J. Discrimination of Xihulongjing Tea Grade Using an Electronic Tongue. Afr. J. Biotechnol. 2009, 8, 6985–6992.
16. Chen, Q.; Zhao, J.; Vittayapadung, S. Identification of the Green Tea Grade Level Using Electronic Tongue and Pattern Recognition. Food Res. Int. 2008, 41, 500–504.
17. Wu, J.; Liu, J.; Fu, M.; Li, G. Classification of Chinese Green Tea by a Voltammetric Electronic Tongue. Chin. J. Sens. Actuators 2006, 19, 963–965.
18. Lvova, L.; Legin, A.; Vlasov, Y.; Cha, G.S.; Nam, H. Multicomponent Analysis of Korean Green Tea by Means of Disposable All-Solid-State Potentiometric Electronic Tongue Microsystem. Sens. Actuators B Chem. 2003, 95, 391–399.
19. Zhi, R.; Zhao, L.; Shi, B.; Jin, Y. New Dimensionality Reduction Model (Manifold Learning) Coupled with Electronic Tongue for Green Tea Grade Identification. Eur. Food Res. Technol. 2014, 239, 157–167.
20. Haddi, Z.; Mabrouk, S.; Bougrini, M.; Tahri, K.; Sghaier, K.; Barhoumi, H.; Bari, N.E.; Maaref, A.; Jaffrezic-Renault, N.; Bouchikhi, B. E-Nose and E-Tongue Combination for Improved Recognition of Fruit Juice Samples. Food Chem. 2014, 150, 246–253.
21. Rudnitskaya, A.; Kirsanov, D.; Legin, A.; Beullens, K.; Lammertyn, J.; Nicolaï, B.M.; Irudayaraj, J. Analysis of Apples Varieties—Comparison of Electronic Tongue with Different Analytical Techniques. Sens. Actuators B Chem. 2006, 116, 23–28.
22. Natale, C.D.; Paolesse, R.; Macagnano, A.; Mantini, A.; D'Amico, A.; Legin, A.; Lvova, L.; Rudnitskaya, A.; Vlasov, Y. Electronic Nose and Electronic Tongue Integration for Improved Classification of Clinical and Food Samples. Sens. Actuators B Chem. 2000, 64, 15–21.
23. Sole, M.; Covington, J.A.; Gardner, J.W. Combined Electronic Nose and Tongue for a Flavour Sensing System. Sens. Actuators B Chem. 2011, 156, 832–839.
24. Hong, X.; Wang, J. Detection of Adulteration in Cherry Tomato Juices Based on Electronic Nose and Tongue: Comparison of Different Data Fusion Approaches. J. Food Eng. 2014, 126, 89–97.
25. Ouyang, Q.; Zhao, J.; Chen, Q. Instrumental Intelligent Test of Food Sensory Quality as Mimic of Human Panel Test Combining Multiple Cross-Perception Sensors and Data Fusion. Anal. Chim. Acta 2014, 841, 68–76.
26. Qiu, S.; Wang, J.; Tang, C.; Du, D. Comparison of ELM, RF, and SVM on E-nose and E-tongue to Trace the Quality Status of Mandarin (Citrus unshiu Marc.). J. Food Eng. 2015, 166, 193–203.
27. Buratti, S.; Ballabio, D.; Giovanelli, G.; Dominguez, C.M.Z.; Moles, A.; Benedetti, S.; Sinelli, N. Monitoring of Alcoholic Fermentation Using Near Infrared and Mid Infrared Spectroscopies Combined with Electronic Nose and Electronic Tongue. Anal. Chim. Acta 2011, 697, 67–74.
28. Zakaria, A.; Shakaff, A.Y.; Masnan, M.J.; Ahmad, M.N.; Adom, A.H.; Jaafar, M.N.; Ghani, S.A.; Abdullah, A.H.; Aziz, A.H.; Kamarudin, L.M.; et al. A Biomimetic Sensor for the Classification of Honeys of Different Floral Origin and the Detection of Adulteration. Sensors 2011, 11, 7799–7822.
29. Rodriguez-Mendez, M.L.; Apetrei, C.; Gay, M.; Medina-Plaza, C.; De Saja, J.A.; Vidal, S.; Aagaard, O.; Ugliano, M.; Wirth, J.; Cheynier, V. Evaluation of Oxygen Exposure Levels and Polyphenolic Content of Red Wines Using an Electronic Panel Formed by an Electronic Nose and an Electronic Tongue. Food Chem. 2014, 155, 91–97.
30. Zakaria, A.; Shakaff, A.Y.M.; Adom, A.H.; Ahmad, M.N.; Masnan, M.J.; Aziz, A.H.A.; Fikri, N.A.; Abdullah, A.H.; Kamarudin, L.M. Improved Classification of Orthosiphon stamineus by Data Fusion of Electronic Nose and Tongue Sensors. Sensors 2010, 10, 8782–8796.
31. Fikri, N.A.; Adom, A.H.; Md. Shakaff, A.Y.; Ahmad, M.N.; Abdullah, A.H.; Zakaria, A.; Markom, M.A. Development of Human Sensory Mimicking System. Sens. Lett. 2011, 9, 423–427.
32. Banerjee, R.; Chattopadhyay, P.; Tudu, B.; Bhattacharyya, N.; Bandyopadhyay, R. Artificial Flavor Perception of Black Tea Using Fusion of Electronic Nose and Tongue Response: A Bayesian Statistical Approach. J. Food Eng. 2014, 142, 87–93.
33. Banerjee, R.; Tudu, B.; Shaw, L.; Jana, A.; Bhattacharyya, N.; Bandyopadhyay, R. Instrumental Testing of Tea by Combining the Responses of Electronic Nose and Tongue. J. Food Eng. 2012, 110, 356–363.
34. Xu, Y.; Chen, W.; Yin, B.; Huang, L. Green Tea Storage and Preservation. Agric. Mach. Technol. Ext. 2004, 9, 29–32.
35. Shi, B.; Zhao, L.; Zhi, R.; Xi, X. Optimization of Electronic Nose Sensor Array by Genetic Algorithms in Xihu-Longjing Tea Quality Analysis. Math. Comput. Model. 2013, 58, 752–758.
36. Dai, Y.; Zhi, R.; Zhao, L.; Gao, H.; Shi, B.; Wang, H. Longjing Tea Quality Classification by Fusion of Features Collected from E-nose. Chemom. Intell. Lab. Syst. 2015, 144, 63–70.
37. Ma, B.; Qu, H.; Wong, H. Kernel Clustering-Based Discriminant Analysis. Pattern Recognit. 2007, 40, 324–327.
38. Lin, T. Improving D-S Evidence Theory for Data Fusion System. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.115.652&rep=rep1&type=pdf (accessed on 10 April 2017).
39. Guan, J.W.; Bell, D.A. Evidence Theory and Its Applications; Elsevier Science Inc.: New York, NY, USA, 1992.
40. Hegarat-Mascle, S.L.; Bloch, I.; Vidal-Madjar, D. Application of Dempster–Shafer Evidence Theory to Unsupervised Classification in Multisource Remote Sensing. IEEE Trans. Geosci. Remote Sens. 1997, 35, 1018–1031.
41. Walley, P.; Moral, S. Upper Probabilities Based Only on the Likelihood Function. J. R. Stat. Soc. 1999, 61, 831–847.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
