Improvement of Eye Tracking Based on Deep Learning Model for General Purpose Applications
DOI: https://doi.org/10.29194/NJES.25010012
Keywords: Eye Tracking, Artificial Intelligence, Machine Learning, Deep Learning, CNN
Abstract
Interest in eye-tracking technology has grown dramatically over the last two decades, driven by applications such as determining where a person is looking and how the pupils and irises react to various stimuli. When processed by advanced data-analysis systems, the resulting data can reveal an extraordinary amount of information about the user, including age, gender, biometric identity, and interests. This paper addresses eye-motion tracking as a general-purpose tool for applications in any field that requires it. Advances in artificial intelligence (AI), machine learning (ML), and deep learning (DL), combined with eye-tracking techniques, open up broad opportunities for developing new algorithms and applications. In this paper, a number of models based on convolutional neural networks (CNNs) were designed, and the most powerful and accurate model was selected. The dataset used for training (for 16 screen points) consists of 2800 training images and 800 test images (an average of 175 training images and 50 test images for each of the 16 spots on the screen), and it can be collected by the user of any application based on this model. The best model achieved a maximum accuracy of 91.25% with a minimum loss of 0.23%, and consists of 11 layers (4 convolution, 4 max-pooling, and 3 dense). Python 3.7 was used to implement the algorithms, the Keras framework for the deep-learning algorithms, Visual Studio Code as the integrated development environment (IDE), and Anaconda Navigator for installing the required libraries. The model was trained with data that can be gathered using ordinary laptop or PC cameras, without the need for special or expensive equipment, and it can be trained on either single eye, depending on application requirements.
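The abstract describes an 11-layer CNN (4 convolution, 4 max-pooling, and 3 dense layers) that classifies an eye image into one of 16 screen points, built with Keras. The following is a minimal sketch of such an architecture, not the authors' actual code: the input size, filter counts, kernel sizes, and optimizer settings are all assumptions, since the paper's abstract does not specify them.

```python
# Hypothetical sketch of the 11-layer gaze-classification CNN described
# in the abstract: 4 Conv2D + 4 MaxPooling2D + 3 Dense layers, with a
# 16-way softmax output (one class per screen point). All hyperparameters
# here are illustrative assumptions, not values from the paper.
from tensorflow import keras
from tensorflow.keras import layers


def build_gaze_cnn(input_shape=(64, 64, 1), num_classes=16):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),  # flatten before the dense head (not counted
                           # among the paper's 11 weighted/pooling layers)
        layers.Dense(256, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A dataset of the kind described (175 training and 50 test eye images per screen spot, captured with an ordinary webcam) would then be fed to `model.fit` as labeled image/one-hot-class pairs.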
License
Copyright (c) 2022 Al-Nahrain Journal for Engineering Sciences
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
The authors retain the copyright of their manuscript by submitting the work to this journal, and all open access articles are distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC 4.0), which permits use for any non-commercial purpose, distribution, and reproduction in any medium, provided that the original work is properly cited.