Learning Spatial Relationships between Samples of Image Shapes
Many applications, including image-based classification and retrieval of scientific and patent documents, involve images in which brightness or color is not representative of content. In such cases, it is intuitive to perform analysis on image shapes rather than on texture variations (i.e., pixel values). Here, we propose a method that combines sparsely sampling points from image shapes with learning the spatial relationships between the extracted samples that characterize them. A dynamic graph CNN, which produces a different graph at each layer, is trained and used as the learning engine for node and edge features in classification and retrieval tasks. Our experiments on multiple datasets cover a range of point-sampling sparsities, training-set sizes, rigid-body transformations, and scalings, and show that the accuracy of our approach is less likely to degrade under small training sets or transformations of the data.
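The pipeline described above — sample points sparsely from a shape, then relate them through a k-nearest-neighbour graph whose edge features feed a dynamic graph CNN — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper names (`boundary_points`, `sample_points`, `knn_graph`, `edge_features`) are hypothetical, and the edge features follow the DGCNN/EdgeConv convention of concatenating a point with its offsets to neighbours.

```python
import numpy as np

def boundary_points(mask):
    """(row, col) coordinates of shape-boundary pixels in a binary mask.

    A foreground pixel lies on the boundary if any 4-neighbour is background.
    (Hypothetical helper for illustration.)
    """
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(mask & ~interior)

def sample_points(points, n, rng):
    """Sparsely sample n points (with replacement if fewer are available)."""
    idx = rng.choice(len(points), size=n, replace=len(points) < n)
    return points[idx].astype(float)

def knn_graph(x, k):
    """Indices of the k nearest neighbours of each point (excluding itself).

    In a dynamic graph CNN this would be recomputed at every layer, in the
    layer's feature space, so each layer sees a different graph.
    """
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def edge_features(x, nbrs):
    """EdgeConv-style features [x_i, x_j - x_i] for every edge (i, j)."""
    xi = np.repeat(x[:, None, :], nbrs.shape[1], axis=1)
    xj = x[nbrs]
    return np.concatenate([xi, xj - xi], axis=-1)

rng = np.random.default_rng(0)
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True                       # a toy square "shape"
pts = sample_points(boundary_points(mask), 64, rng)
feats = edge_features(pts, knn_graph(pts, k=8))
print(feats.shape)                            # (64 points, 8 neighbours, 4 features)
```

Only the 2-D coordinates of the sampled boundary points enter the graph, so pixel intensities play no role — which is the point of analysing shapes rather than texture.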