2010 Asia-Pacific Conference on Information Theory (APCIT 2010 E-BOOK)

Xi'an, China, October 1-2, 2010

ISBN: 978-1-935068-47-1 Scientific Research Publishing, USA

E-Book, 506 pp., Pub. Date: November 2010

Category: Computer Science & Communications

Price: $80

Title: Interview Correlations Based Fast Reference Frames Selection Algorithm for Multiview Depth Video Coding
Source: 2010 Asia-Pacific Conference on Information Theory (APCIT 2010 E-BOOK) (pp 18-26)
Author(s): Yuehou Si, Faculty of Information Science and Engineering, Ningbo University, Ningbo, China
Gangyi Jiang, National Key Lab of Software New Technology, Nanjing University, Nanjing, China
Zongju Peng, Faculty of Information Science and Engineering, Ningbo University, Ningbo, China
Mei Yu, National Key Lab of Software New Technology, Nanjing University, Nanjing, China
Feng Shao, Faculty of Information Science and Engineering, Ningbo University, Ningbo, China
Abstract: Multiview depth video coding uses the multiple-reference-frame technique to select the best coding mode for each macroblock. This technique achieves the highest possible coding efficiency, but it results in an extremely long encoding time, which hinders practical use. In this paper, a fast reference frame selection algorithm based on inter-view correlation is proposed for multiview depth video coding to reduce the computational complexity. The basic idea is to exploit the inter-view correlation of frames in the prediction stages where an inter-view reference frame is needed. All views are divided into two categories, key views and assistant views. A key view is a view in which there is no disparity estimation, or in which only the anchor frames use disparity estimation; an assistant view uses both motion estimation and disparity estimation. First, the proposed algorithm analyzes the temporal correlation of key views and the inter-view correlation of assistant views. Then, a previously encoded assistant view is used to predict whether the current assistant view needs to search the inter-view reference frame. Experimental results show that the proposed algorithm achieves a speedup of 1.56 to 2.71 times over the full search algorithm while maintaining high virtual rendering performance.
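
The decision logic described in the abstract can be sketched in a few lines. The following is only an illustrative sketch, not the authors' implementation: the view-classification rule (even-indexed views as key views), the used_interview_reference helper, and the macroblock-level statistics dictionary are all assumptions introduced for the example.

```python
# Illustrative sketch of inter-view-correlation-based reference frame
# selection for multiview depth video coding. Assumptions: key views use
# temporal references only (disparity estimation on anchor frames alone),
# while an assistant view consults the co-located macroblock of a
# previously encoded assistant view to decide whether searching the
# inter-view reference frame is worthwhile.

KEY, ASSISTANT = "key", "assistant"

def classify_views(num_views):
    """Hypothetical split: even-indexed views as key views, odd as assistant."""
    return {v: (KEY if v % 2 == 0 else ASSISTANT) for v in range(num_views)}

def used_interview_reference(prev_assistant_stats, mb_index):
    """Did the co-located macroblock of the previously encoded assistant
    view actually select the inter-view (disparity-compensated) reference?"""
    return prev_assistant_stats.get(mb_index, False)

def build_reference_list(view_type, is_anchor, mb_index, prev_assistant_stats):
    """Return which reference types to search for the current macroblock."""
    refs = ["temporal"]                       # motion estimation is always performed
    if view_type == KEY:
        if is_anchor:                         # key views: disparity only on anchor frames
            refs.append("interview")
        return refs
    # Assistant view: search the inter-view reference only when the
    # co-located macroblock of the previous assistant view used it.
    if is_anchor or used_interview_reference(prev_assistant_stats, mb_index):
        refs.append("interview")
    return refs

if __name__ == "__main__":
    # Toy usage: pretend only macroblock 3 of the previously encoded
    # assistant view chose the inter-view reference.
    stats = {3: True}
    for mb in range(5):
        print(mb, build_reference_list(ASSISTANT, False, mb, stats))
```

Macroblocks whose co-located neighbors never used the inter-view reference skip disparity estimation entirely, which is where the reported speedup over the full search algorithm comes from.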