Depth-guided patch-based disocclusion filling for view synthesis via Markov random field modelling

Abstract

In this paper, we propose a novel patch-based disocclusion filling method for view synthesis from video-plus-depth data. The proposed method treats disocclusion filling as a global optimization problem, where global (spatial) consistency among the patches is enforced via a Markov random field (MRF) model. The main idea of our method is to exploit disocclusion properties to limit and guide the search for candidate patches (labels) and to efficiently minimize the resulting MRF energy. In particular, we propose to constrain the label selection to local background regions in order to ensure that the disocclusions are filled with background information. Background is determined by a locally estimated hard threshold on the depth values. The efficient minimization approach extends our previous method for general inpainting: the MRF nodes are visited from the background to the foreground disocclusion border, and unnecessary labels are discarded. In this way, the number of labels is further reduced and the propagation of background information is additionally enforced. Finally, efficient inference is performed to obtain the final inpainting result. The proposed disocclusion filling method is one step of the complete view synthesis framework that we also introduce in this paper. Experimental results show improvement of the proposed approach over related state-of-the-art methods for both small and large disocclusions.
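To make the background-constrained label selection concrete, the following is a minimal sketch of the local hard-thresholding step described above. It is an illustrative reconstruction, not the authors' exact procedure: the window size, the midpoint threshold, and the disparity-like depth convention (larger value = closer to the camera) are all assumptions for the example.

```python
import numpy as np

def local_background_mask(depth, disocclusion_mask, window=15):
    """Mark known pixels near a disocclusion as background using a
    locally estimated hard threshold on depth (illustrative sketch).

    depth            -- 2D array of depth/disparity values
    disocclusion_mask -- boolean 2D array, True at disoccluded pixels
    window           -- margin (in pixels) around the disocclusion
                        bounding box used to gather local depths
                        (assumed parameter, not from the paper)
    """
    ys, xs = np.where(disocclusion_mask)
    # Bounding box of the disocclusion, expanded by the window margin.
    y0, x0 = max(0, ys.min() - window), max(0, xs.min() - window)
    y1 = min(depth.shape[0], ys.max() + 1 + window)
    x1 = min(depth.shape[1], xs.max() + 1 + window)

    local = depth[y0:y1, x0:x1]
    known = ~disocclusion_mask[y0:y1, x0:x1]
    vals = local[known]

    # Hard threshold: midpoint between the nearest (foreground) and
    # farthest (background) known local depths. Assumes a
    # disparity-like convention where larger values are closer.
    thr = 0.5 * (vals.min() + vals.max())

    background = np.zeros_like(disocclusion_mask)
    background[y0:y1, x0:x1] = known & (local < thr)
    return background
```

Candidate patches (labels) for the MRF would then be sampled only from locations where this mask is True, so that disocclusions are filled with background rather than foreground texture.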

Publication
2014 8th International Conference on Signal Processing and Communication Systems (ICSPCS)