-
Archive:
Winter 1397 (2018/19) issue
-
Topic:
Artificial Intelligence
-
Author(s):
Mitra Najafi Esfatani, Gholamhossein Ekbatanifard
-
Keywords:
Eye tracker, convolutional neural networks, eye behaviors, game environment design.
-
Title :
A method for analyzing and designing game environments through human gaze prediction using convolutional neural networks
-
Abstract :
Smart design of the game environment can affect both the game experience and the efficiency of play, and the visual design of a game is an important factor in making it engaging. In this paper, we study how to attract the audience through targeted design by analyzing users' visual behaviors. We extract and analyze visual behavior in the form of gaze data using two different techniques: the first uses an eye-tracking device, and the second uses neural networks, an important technique in artificial intelligence that has been widely applied to image processing. Finally, the results of the two techniques are compared, and the output of the predictor model is evaluated against the eye-tracker data. This empirical comparison showed not only that neural networks are usable for predicting gaze points, but also that the similarity between the two methods reaches a level acceptable to game designers. We conclude that artificial intelligence and neural network models can be used to analyze users' eye behavior while they play, so that necessary changes to a game can be made before it is released.
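The comparison the abstract describes, scoring a model's predicted gaze map against eye-tracker measurements, is commonly done with saliency-evaluation metrics such as the Pearson correlation coefficient (CC). The sketch below is a minimal, hypothetical illustration of that idea (the fixation coordinates and Gaussian smoothing parameter are assumptions, not data from the paper): raw fixation points are smoothed into a density map, and a predicted map is scored against it.

```python
import numpy as np

def correlation_coefficient(pred_map, fixation_map):
    """Pearson correlation (CC) between a predicted saliency map and an
    empirical fixation density map -- a standard saliency-evaluation metric.
    Returns a value in [-1, 1]; higher means closer agreement."""
    p = (pred_map - pred_map.mean()) / (pred_map.std() + 1e-8)
    f = (fixation_map - fixation_map.mean()) / (fixation_map.std() + 1e-8)
    return float((p * f).mean())

def fixations_to_density(points, shape, sigma=10.0):
    """Turn eye-tracker fixation points into a smooth density map by summing
    an isotropic Gaussian centred on each fixation point (x, y)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros(shape, dtype=np.float64)
    for (x, y) in points:
        density += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return density / (density.max() + 1e-8)

# Hypothetical example: fixations clustered near one corner of a game screen.
fixations = [(12, 10), (15, 12), (14, 9)]
ground_truth = fixations_to_density(fixations, (64, 64))
good_pred = fixations_to_density([(13, 11)], (64, 64))  # near the real cluster
bad_pred = fixations_to_density([(55, 55)], (64, 64))   # far from the cluster
assert correlation_coefficient(good_pred, ground_truth) > \
       correlation_coefficient(bad_pred, ground_truth)
```

A prediction concentrated near the measured fixations scores high, while one far away scores near zero, which is the kind of similarity percentage the study reports between the CNN model and the eye tracker.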
-
References:
1. Nelson, M.J. and M. Mateas (2007) Towards automated game design. In Congress of the Italian Association for Artificial Intelligence. Springer.
2. Cook, D. (2007) The chemistry of game design. World Wide Web electronic publication.
3. Theodosiou, S. and I. Karasavvidis (2015) Serious games design: A mapping of the problems novice game designers experience in designing games. Journal of e-Learning and Knowledge Society. 11(3).
4. Björk, S. and J. Holopainen (2006) Games and design patterns. The Game Design Reader: p. 410-437.
5. Aleven, V., et al. (2010) Toward a framework for the analysis and design of educational games. In 2010 Third IEEE International Conference on Digital Game and Intelligent Toy Enhanced Learning. IEEE.
6. Renshaw, T., R. Stevens, and P.D. Denton (2009) Towards understanding engagement in games: an eye-tracking study. On the Horizon.
7. Schrom-Feiertag, H., V. Settgast, and S. Seer (2017) Evaluation of indoor guidance systems using eye tracking in an immersive virtual environment. Spatial Cognition & Computation. 17(1-2): p. 163-183.
8. Mostafavi, S., et al. (1396 [2017]) A method for improving the level design process in computer games using an eye-tracking device. 3rd National Conference on Computer Games: Opportunities and Challenges, Isfahan, University of Isfahan. (in Persian)
9. Cornia, M., et al. (2018) Predicting human eye fixations via an LSTM-based saliency attentive model. IEEE Transactions on Image Processing. 27(10): p. 5142-5154.
10. Vidal, M., et al. (2012) Wearable eye tracking for mental health monitoring. Computer Communications. 35(11): p. 1306-1311.
11. Ettinger, U., et al. (2003) Reliability of smooth pursuit, fixation, and saccadic eye movements. Psychophysiology. 40(4): p. 620-628.
12. Arolt, V., et al. (1998) Distinguishing schizophrenic patients from healthy controls by quantitative measurement of eye movement parameters. Biological Psychiatry. 44(6): p. 448-458.
13. Pons, J., et al. (2017) Timbre analysis of music audio signals with convolutional neural networks. In 2017 25th European Signal Processing Conference (EUSIPCO). IEEE.
14. Takahashi, R., et al. (2018) A system for three-dimensional gaze fixation analysis using eye tracking glasses. 5(4): p. 449-457.
15. Salehin, M.M. and M. Paul (2017) A novel framework for video summarization based on smooth pursuit information from eye tracker data. In 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). IEEE.
16. Li, B., B. Mettler, and J. Andersh (2015) Classification of human gaze in spatial guidance and control. In 2015 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE.
17. Zhang, X. and S.-M. Yuan (2018) An eye tracking analysis for video advertising: Relationship between advertisement elements and effectiveness. IEEE Access. 6: p. 10699-10707.
18. Tatler, B.W. (2007) The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of Vision. 7(14): p. 4.
19. Roy, A.K., et al. (2017) A novel technique to develop cognitive models for ambiguous image identification using eye tracker. (1): p. 1-1.
20. Yin, N. and M.G. Hluchyj (1993) Analysis of the leaky bucket algorithm for on-off data sources. Journal of High Speed Networks. 2(1): p. 81-98.
21. Majaranta, P. and A. Bulling (2014) Eye tracking and eye-based human-computer interaction. In Advances in Physiological Computing. Springer. p. 39-65.
22. Mallick, R., et al. (2016) The use of eye metrics to index cognitive workload in video games. In 2016 IEEE Second Workshop on Eye Tracking and Visualization (ETVIS). IEEE.
23. Rudoy, D., et al. (2013) Learning video saliency from human gaze using candidate selection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
24. Krejtz, K., et al. (2014) Entropy-based statistical analysis of eye movement transitions. In Proceedings of the Symposium on Eye Tracking Research and Applications. ACM.
25. Tien, T., et al. (2015) Differences in gaze behaviour of expert and junior surgeons performing open inguinal hernia repair. 29(2): p. 405-413.
26. Sarter, N.B., R.J. Mumaw, and C.D. Wickens (2007) Pilots' monitoring strategies and performance on automated flight decks: An empirical study combining behavioral and eye-tracking data. Human Factors. 49(3): p. 347-357.
27. Frutos-Pascual, M. and B. Garcia-Zapirain (2015) Assessing visual attention using eye tracking sensors in intelligent cognitive therapies based on serious games. Sensors. 15(5): p. 11092-11117.
28. Almeida, S., Ó. Mealha, and A. Veloso (2016) Video game scenery analysis with eye tracking. Entertainment Computing. 14: p. 1-13.
29. Polonio, L., S. Di Guida, and G. Coricelli (2015) Strategic sophistication and attention in games: An eye-tracking study. Games and Economic Behavior. 94: p. 80-96.
30. Devetag, G., S. Di Guida, and L. Polonio (2016) An eye-tracking study of feature-based choice in one-shot games. Experimental Economics. 19(1): p. 177-201.
31. Wang, L., et al. (2015) Deep networks for saliency detection via local estimation and global search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
32. Kümmerer, M., T.S. Wallis, and M. Bethge (2016) DeepGaze II: Reading fixations from deep features trained on object recognition. arXiv preprint.
33. Xu, J., et al. (2014) Predicting human gaze beyond pixels. Journal of Vision. 14(1): p. 28.
34. Kümmerer, M., T.S. Wallis, and M. Bethge (2018) Saliency benchmarking made easy: Separating models, maps and metrics. In Proceedings of the European Conference on Computer Vision (ECCV).
35. Wloka, C., I. Kotseruba, and J.K. Tsotsos (2018) Active fixation control to predict saccade sequences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
36. Selvaraju, R.R., et al. (2019) Grad-CAM: Visual explanations from deep networks via gradient-based localization. arXiv preprint (cs.CV; cs.AI; cs.LG).
37. Harel, J., C. Koch, and P. Perona (2007) Graph-based visual saliency. In Advances in Neural Information Processing Systems.
38. Itti, L., C. Koch, and E. Niebur (1998) A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence. 20(11): p. 1254-1259.
39. Itti, L. and C. Koch (2000) A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research. 40(10-12): p. 1489-1506.
40. Itti, L. and P.F. Baldi (2006) Bayesian surprise attracts human attention. In Advances in Neural Information Processing Systems.
41. Selvaraju, R.R., et al. (2017) Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision.
42. Simonyan, K., A. Vedaldi, and A. Zisserman (2013) Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
43. Suryawanshi, D. (2018) Image recognition: Detection of nearly duplicate images. California State University Channel Islands.
44. Li, Q., Z.J. Huang, and K. Christianson (2016) Visual attention toward tourism photographs with text: An eye-tracking study. Tourism Management. 54: p. 243-258.
- Pages: 17-24
-