Function-Based Classification From 3D Data And Audio
Function-Based Classification from 3D Data and Audio.
In Proc. of the IEEE/RSJ International Conf. on Intelligent Robots and Systems, 2006
Abstract
We propose a novel scheme for fusing two modalities to support function-based classification. The first modality targets functional classification from sounds registered at impact, while the second aims at classifying objects in 3D images. Using audio, one can answer functional questions such as what material the analyzed objects are built of, whether they are full or hollow, whether they are heavy, and whether they are rigidly linked to their supports. Audio-based signatures are used to label parts of the object under analysis. The parts of any object can be partitioned into generic multi-level hierarchical descriptions of functional components. In the visual modality, functionality is derived from a large set of geometric attributes and relationships between object parts. These geometric properties serve as labeling signatures for the primitive and functional parts of the analyzed classes. The fusion of the two modalities relies on a shared cooperation between the audio and visual signatures of the functional and primitive parts. The scheme does not require a priori knowledge about any class. We tested the proposed scheme on a database of about one thousand different 3D objects; the results show high classification accuracy.
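To make the fusion idea concrete, the sketch below pairs an audio-based signature of an impact sound with a geometric signature of a 3D part and scores class prototypes with a weighted similarity. It is only an illustration of the general scheme described in the abstract: the feature choices (coarse spectral bands, bounding-box ratios), the function names (audio_signature, visual_signature, fuse_and_classify), the prototype table, and the weighting are assumptions for this sketch, not the features or fusion rule used in the paper.

# Minimal sketch of audio/visual signature fusion for part labeling.
# All features, names, and prototypes here are hypothetical placeholders.
import numpy as np

def audio_signature(impact_sound):
    """Toy audio signature: energies of 8 coarse frequency bands of an
    impact sound, unit-normalized (a stand-in for the paper's audio cues
    about material, hollowness, weight, and attachment)."""
    spectrum = np.abs(np.fft.rfft(impact_sound))
    bands = np.array_split(spectrum, 8)
    sig = np.array([b.mean() for b in bands])
    return sig / (np.linalg.norm(sig) + 1e-9)

def visual_signature(part_points):
    """Toy geometric signature of a 3D part: sorted bounding-box extents
    plus an elongation ratio (a stand-in for the paper's geometric
    attributes of primitive and functional parts)."""
    extents = part_points.max(axis=0) - part_points.min(axis=0)
    extents = np.sort(extents)[::-1]
    sig = np.concatenate([extents / (extents[0] + 1e-9),
                          [extents[1] / (extents[2] + 1e-9)]])
    return sig / (np.linalg.norm(sig) + 1e-9)

def fuse_and_classify(a_sig, v_sig, prototypes, w_audio=0.5):
    """Score each class prototype by a weighted sum of audio and visual
    similarity (inner products) and return the best-matching label."""
    scores = {}
    for label, (proto_a, proto_v) in prototypes.items():
        scores[label] = (w_audio * float(a_sig @ proto_a) +
                         (1.0 - w_audio) * float(v_sig @ proto_v))
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sound = rng.normal(size=4096)                          # placeholder impact sound
    points = rng.uniform(size=(500, 3)) * [1.0, 0.3, 0.3]  # placeholder elongated part
    a_sig, v_sig = audio_signature(sound), visual_signature(points)
    prototypes = {                                         # hypothetical class signatures
        "hollow_metal_leg": (a_sig * 0.9, v_sig * 0.9),
        "solid_wood_top":   (rng.normal(size=8), rng.normal(size=4)),
    }
    label, scores = fuse_and_classify(a_sig, v_sig, prototypes)
    print(label, scores)

In the paper the signatures label primitive and functional parts within a multi-level hierarchical description rather than whole objects; the sketch collapses that hierarchy into a single part for brevity.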
Keywords
Function
Co-authors
Bibtex Entry
@inproceedings{AmsellemSR06i,
title = {Function-Based Classification from 3D Data and Audio.},
author = {Aliza Amsellem and Octavian Soldea and Ehud Rivlin},
year = {2006},
month = {October},
booktitle = {Proc. of the IEEE/RSJ International Conf. on Intelligent Robots and Systems},
keywords = {Function},
abstract = {We propose a novel scheme for fusing two modalities to support function-based classification. The first modality targets functional classification from sounds registered at impact, while the second aims at classifying objects in 3D images. Using audio, one can answer functional questions such as what material the analyzed objects are built of, whether they are full or hollow, whether they are heavy, and whether they are rigidly linked to their supports. Audio-based signatures are used to label parts of the object under analysis. The parts of any object can be partitioned into generic multi-level hierarchical descriptions of functional components. In the visual modality, functionality is derived from a large set of geometric attributes and relationships between object parts. These geometric properties serve as labeling signatures for the primitive and functional parts of the analyzed classes. The fusion of the two modalities relies on a shared cooperation between the audio and visual signatures of the functional and primitive parts. The scheme does not require a priori knowledge about any class. We tested the proposed scheme on a database of about one thousand different 3D objects; the results show high classification accuracy.}
}