Statistical Performance Effect of Feature Selection Techniques on Eye State Prediction Using EEG
DOI: https://doi.org/10.6000/1929-6029.2016.05.03.9

Keywords: Classification, Statistical performance, Feature Selection, Machine Learning, EEG

Abstract
Several recent studies have demonstrated that the electrical waves recorded by electroencephalogram (EEG) can be used to predict eye state (open or closed), and all of these studies used 14 electrodes for data recording. Reducing the number of electrodes without affecting the statistical performance of an EEG device is not an easy task. Hence, this paper focuses on reducing the number of EEG electrodes by means of feature selection techniques without degrading the statistical performance measures of the earlier EEG devices. In this study, we compared different attribute evaluators and classifiers. The experimental results show that the ReliefF attribute evaluator was the best at identifying the two least important features (P7, P8), yielding 96.3% accuracy. The overall results show that two data-recording electrodes could be removed from the EEG devices while still performing well on eye state prediction. An accuracy of 96.3% was achieved with the KStar (K*) classifier, which was also the best of the 21 classifiers tested in this study.
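The feature-ranking step described in the abstract can be sketched with a minimal Relief implementation (the two-class ancestor of the ReliefF evaluator used in the paper). This is an illustrative reconstruction under stated assumptions, not the authors' code: the function name `relief_scores` and the synthetic data below are invented for the example.

```python
import numpy as np

def relief_scores(X, y, n_samples=None, rng=None):
    """Minimal Relief (Kira & Rendell, 1992) for a two-class target.

    Returns one relevance weight per feature; higher means the feature
    separates the classes better, lower marks it as a removal candidate.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)

    # Scale each feature to [0, 1] so distances are comparable across channels.
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    Xs = (X - lo) / span

    n, d = Xs.shape
    m = n if n_samples is None else min(n_samples, n)
    w = np.zeros(d)
    for i in rng.choice(n, size=m, replace=False):
        dist = np.abs(Xs - Xs[i]).sum(axis=1)  # Manhattan distance to all rows
        dist[i] = np.inf                       # never pick the instance itself
        same = y == y[i]
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest same-class row
        miss = np.argmin(np.where(~same, dist, np.inf))  # nearest other-class row
        # Good features differ little from the hit and a lot from the miss.
        w += (np.abs(Xs[i] - Xs[miss]) - np.abs(Xs[i] - Xs[hit])) / m
    return w

# Toy check: feature 0 tracks the label, feature 1 is pure noise.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 300)
data = np.column_stack([labels + 0.1 * rng.standard_normal(300),
                        rng.standard_normal(300)])
scores = relief_scores(data, labels, rng=1)
```

On the actual 14-channel EEG data, the lowest-scoring channels (P7 and P8 in the paper's experiments) would be the candidates for removal before training the classifier.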
Copyright (c) 2016 Jean de Dieu Uwisengeyimana, Nusaibah Khalid Al_Salihy, Turgay Ibrikci
This work is licensed under a Creative Commons Attribution 4.0 International License.
Policy for Journals/Articles with Open Access
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are permitted and encouraged to post links to their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
Policy for Journals/Manuscripts with Paid Access
Authors who publish with this journal agree to the following terms:
- The publisher retains copyright.
- Authors are permitted and encouraged to post links to their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.