Due to the ever-increasing number of blind and visually impaired people in the world, a great amount of research has been dedicated to the design of assistive technologies to support them. These assistive technologies apply a variety of techniques, including lasers, ultrasonic sensors, and image processing. Autonomous navigation is a significant challenge for the visually impaired: it makes daily life uncomfortable and poses serious safety issues. In this paper we review the progress made so far in vision-based systems and propose an approach for developing navigation aids using techniques drawn from other autonomous systems, such as self-driving vehicles. The proposed system uses a front camera to capture images and then produces corresponding guiding audio signals that allow the user to move freely in their environment. An additional rear camera is included to give the user more information about the scene, while taking care not to overload the user with information. The proposed method is tested in both indoor and outdoor scenes and is effective in notifying the user of obstacles. The goal of this paper is to propose a model for, and to develop subsystems of, an intelligent, high-performance, affordable, and easy-to-use image-based navigation aid for the visually impaired.
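To make the described pipeline concrete, the following is a minimal illustrative sketch (not the authors' implementation) of one stage of such a system: mapping obstacle detections from a front-camera frame to simple directional audio cues. The function names, frame width, and three-zone partition are assumptions introduced here for illustration; a real system would feed it bounding boxes from an object detector and pass the cues to a speech synthesizer.

```python
# Hypothetical sketch: convert obstacle detections in a camera frame
# into directional voice cues ("left", "ahead", "right").

def obstacle_to_cue(box, frame_width):
    """Map a detection's horizontal extent (x_min, x_max) to a cue,
    by splitting the frame into three equal vertical zones."""
    center = (box[0] + box[1]) / 2.0
    if center < frame_width / 3:
        return "obstacle left"
    elif center > 2 * frame_width / 3:
        return "obstacle right"
    return "obstacle ahead"

def frame_to_cues(detections, frame_width=640):
    """Produce at most three cues per frame, so the user is not
    overloaded with information (as the paper emphasizes)."""
    return [obstacle_to_cue(d, frame_width) for d in detections[:3]]
```

In practice the cue strings would be rendered as spatialized or spoken audio, and the detection list would come from a model such as a YOLO-style detector running on the front-camera feed; both of those components are outside this sketch.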
(Published Online August 2)