Dense image labeling using Deep Convolutional Neural Networks


We consider the problem of dense image labeling. Much recent state-of-the-art work applies Deep Convolutional Neural Networks (DCNNs) to dense image labeling tasks (e.g. multi-class semantic segmentation), given their capacity to learn rich features. In this paper, we propose a dense image labeling approach that couples DCNNs with a support vector classifier. The classifier operates on DCNN outputs, leveraging features corresponding to a variety of labels drawn from several datasets with distinct prediction objectives. The principal motivation for using a support vector classifier is to explore the strength of leveraging representations that are not directly related to the target task (e.g. predicted scene geometry may help in assigning object labels). This is the first approach in which DCNNs with predictions tied to different objectives are combined to produce better segmentation results. We evaluate our model on the Stanford background (semantic, geometric) and PASCAL VOC 2012 datasets. Our approach achieves state-of-the-art results on the Stanford background dataset, and also demonstrates that exploiting labels from different sources improves upon baseline PASCAL VOC 2012 results. © 2016 IEEE.
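The combination described in the abstract can be illustrated with a minimal sketch: per-pixel score maps from networks trained on different objectives are stacked into one feature vector per pixel, and a linear support vector classifier assigns the final label. All names, shapes, and class counts below are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical per-pixel score maps from two DCNNs with distinct
# objectives: a 5-class semantic network and a 3-class geometric
# network, each producing scores over an H x W image. Real DCNN
# outputs would replace these random placeholders.
H, W = 16, 16
semantic_scores = rng.normal(size=(H, W, 5))
geometric_scores = rng.normal(size=(H, W, 3))

# Stack the heterogeneous predictions into one feature vector per
# pixel: H*W samples, 5 + 3 features each.
features = np.concatenate(
    [semantic_scores, geometric_scores], axis=-1
).reshape(-1, 8)

# Synthetic ground-truth semantic labels, for illustration only.
labels = rng.integers(0, 5, size=H * W)

# A linear support vector classifier re-labels each pixel from the
# combined representation, so geometric cues can inform the
# semantic decision.
clf = LinearSVC(C=1.0, max_iter=5000).fit(features, labels)
refined = clf.predict(features).reshape(H, W)
```

At test time, the same stacking would be applied to the DCNN outputs for a new image before calling `clf.predict`; the sketch trains and predicts on one image only to keep the example self-contained.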

Proceedings - 2016 13th Conference on Computer and Robot Vision, CRV 2016

Cited by 2