PEDESTRIAN ACTIVITY PREDICTION BASED ON SEMANTIC SEGMENTATION AND HYBRID OF MACHINES
Author affiliations
DOI:
https://doi.org/10.15625/1813-9663/34/2/12655
Keywords:
Deep learning, Pedestrian recognition, Semantic segmentation, Feature extraction, Object detection, Autonomous vehicle
Abstract
The article presents an advanced driver assistance system (ADAS) based on a situation-recognition solution that issues alert levels in the context of actual traffic. The solution is a pipeline in which a single image is segmented to detect pedestrians' positions and to extract features of pedestrian posture for predicting their actions. The main purpose of this process is to improve accuracy and to provide warning levels that support autonomous vehicle navigation in avoiding collisions. The process of predicting the situation and issuing warning levels consists of two phases: (1) segmenting the image to locate pedestrians and other objects in the traffic environment; (2) judging the situation according to the position and posture of the pedestrians in traffic. The accuracy of the action prediction is 99.59% and the processing speed is 5 frames per second.
Published
03-10-2018
How to Cite
[1]
D.-P. Tran, V.-D. Hoang, T.-C. PHAM, and C.-M. LUONG, “PEDESTRIAN ACTIVITY PREDICTION BASED ON SEMANTIC SEGMENTATION AND HYBRID OF MACHINES”, JCC, vol. 34, no. 2, pp. 113–125, Oct. 2018.
Issue
Section
Computer Science
License
1. We hereby assign copyright of our article (the Work) in all forms of media, whether now known or hereafter developed, to the Journal of Computer Science and Cybernetics. We understand that the Journal of Computer Science and Cybernetics will act on my/our behalf to publish, reproduce, distribute and transmit the Work.
2. This assignment of copyright to the Journal of Computer Science and Cybernetics is done so on the understanding that permission from the Journal of Computer Science and Cybernetics is not required for me/us to reproduce, republish or distribute copies of the Work in whole or in part. We will ensure that all such copies carry a notice of copyright ownership and reference to the original journal publication.
3. We warrant that the Work is our own work and has not been published before in its current or a substantially similar form, is not under consideration for another publication, does not contain any unlawful statements, and does not infringe any existing copyright.
4. We also warrant that we have obtained the necessary permission from the copyright holder(s) to reproduce in the article any materials, including tables, diagrams or photographs, not owned by me/us.