Fusion of pixel-based and object-based features for classification of urban hyperspectral remote sensing data

Abstract

Hyperspectral imagery contains a wealth of spectral and spatial information that can improve target detection and recognition performance. Typically, spectral information is inferred on a per-pixel basis, while spatial information related to texture, context and geometry is deduced on a per-object basis. Existing feature extraction methods cannot fully exploit both the spectral and the spatial information, and data fusion by simply stacking different feature sources does not take into account the differences between those sources. In this paper, we propose a feature fusion method that couples dimension reduction with fusion of the pixel- and object-based features of hyperspectral imagery. The proposed method accounts for the properties of the different feature sources and takes full advantage of both the pixel- and object-based features through a fusion graph. Experimental results on the classification of an urban hyperspectral remote sensing image are very encouraging.
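The abstract does not spell out the fusion algorithm, so the following is only a minimal sketch of the general idea of graph-based fusion of pixel- and object-based features, not the paper's method. The function names (knn_affinity, fused_embedding) and parameters (k, sigma, alpha, n_dims) are illustrative assumptions: each feature source gets its own affinity graph, the graphs are combined into a single fusion graph, and a low-dimensional joint representation is obtained from its Laplacian (Laplacian-eigenmaps style).

```python
# Hypothetical sketch: graph-based fusion of pixel- and object-based features.
# All names and parameters are assumptions for illustration, not the paper's algorithm.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def knn_affinity(X, k=10, sigma=1.0):
    """Gaussian-weighted k-NN affinity graph for one feature source."""
    D = cdist(X, X)                             # pairwise Euclidean distances
    W = np.zeros_like(D)
    idx = np.argsort(D, axis=1)[:, 1:k + 1]     # k nearest neighbours, skipping self
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = np.exp(-D[i, nbrs] ** 2 / (2 * sigma ** 2))
    return np.maximum(W, W.T)                   # symmetrise the graph

def fused_embedding(X_pixel, X_object, alpha=0.5, n_dims=10):
    """Combine the two affinity graphs into a fusion graph and embed it
    via the graph Laplacian (dimension reduction and fusion in one step)."""
    W = alpha * knn_affinity(X_pixel) + (1 - alpha) * knn_affinity(X_object)
    L = np.diag(W.sum(axis=1)) - W              # unnormalised graph Laplacian
    _, vecs = eigh(L)                           # eigenvectors, ascending eigenvalues
    return vecs[:, 1:n_dims + 1]                # drop the trivial constant eigenvector

# Toy usage: 200 samples with 100 spectral bands and 8 object-based features.
rng = np.random.default_rng(0)
Z = fused_embedding(rng.normal(size=(200, 100)), rng.normal(size=(200, 8)))
print(Z.shape)  # (200, 10) fused low-dimensional features for classification
```

In this reading, the weight alpha controls the relative influence of the pixel-based (spectral) and object-based (spatial) graphs, which is one simple way to respect the differences between the two feature sources rather than stacking them directly.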

Publication
SOUTH-EASTERN EUROPEAN JOURNAL OF EARTH OBSERVATION AND GEOMATICS