A Neural Network Model for a Viewpoint Independent Extraction of Reach-To-Grasp Action Features / Prevete, Roberto; M., Santoro; E., Catanzariti; G., Tessitore. - STAMPA. - 4729:(2007), pp. 124-133. [10.1007/978-3-540-75555-5_12]

A Neural Network Model for a Viewpoint Independent Extraction of Reach-To-Grasp Action Features

PREVETE, ROBERTO;
2007

Abstract

The aim of this paper is to introduce a novel, biologically inspired approach to extracting visual features relevant for controlling and understanding reach-to-grasp actions. One of the most relevant of these features has been found to be the grip size, defined as the distance between the tip of the index finger and the tip of the thumb; for this reason, this paper focuses on that feature. The human visual system is naturally able to recognize many hand configurations (e.g. gestures or different types of grasp) without being substantially affected by the observer's viewpoint, and the proposed computational model preserves this ability. This ability is likely to play a crucial role in action understanding in primates, and thus in human beings. More specifically, a family of neurons has been discovered in the macaque's ventral premotor area F5 that is highly active in correlation with a series of grasp-like movements. These findings triggered a fierce debate about imitation and learning and inspired several computational models, the most detailed of which is the MNS model by Oztop and Arbib. As a variant of the MNS model, in a previous paper we proposed the MEP model, which relies on an expected-perception mechanism. However, both models assume the existence of a mechanism for extracting visual features in a viewpoint-independent way, and neither of them faces the problem of how this mechanism can be achieved in a biologically plausible way. In this paper we propose a neural network model for the extraction of visual features in a viewpoint-independent manner, based on the work by Poggio and Riesenhuber.
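The abstract defines grip size as the distance between the index fingertip and the thumb tip, and states that viewpoint tolerance is obtained by building on Riesenhuber and Poggio's hierarchical model. The sketch below is only a minimal illustration under those two statements, not the authors' implementation: it computes grip size as a Euclidean distance and pairs a toy template-matching stage with a max-pooling stage of the kind used in that class of models. All function names, array shapes, and numeric values are hypothetical.

```python
import numpy as np


def grip_size(index_tip: np.ndarray, thumb_tip: np.ndarray) -> float:
    """Grip size = Euclidean distance between index fingertip and thumb tip (3-D points)."""
    return float(np.linalg.norm(index_tip - thumb_tip))


def s_layer(patches: np.ndarray, templates: np.ndarray) -> np.ndarray:
    """Tuned 'simple'-like units: Gaussian of the distance between each local
    image patch and each stored template.
    patches: (n_patches, patch_dim); templates: (n_templates, patch_dim)."""
    d = np.linalg.norm(patches[:, None, :] - templates[None, :, :], axis=-1)
    return np.exp(-d ** 2)  # shape (n_patches, n_templates)


def c_layer(s_responses: np.ndarray) -> np.ndarray:
    """'Complex'-like units: max over positions, which is what gives this kind
    of hierarchy its tolerance to translation and, at higher stages, to
    changes of viewpoint."""
    return s_responses.max(axis=0)  # shape (n_templates,)


if __name__ == "__main__":
    # Hypothetical fingertip positions in metres.
    print("grip size:", grip_size(np.array([0.02, 0.11, 0.30]),
                                  np.array([0.05, 0.07, 0.28])))
    rng = np.random.default_rng(0)
    patches = rng.normal(size=(50, 16))    # hypothetical local image patches
    templates = rng.normal(size=(8, 16))   # hypothetical learned templates
    print("view-tolerant features:", c_layer(s_layer(patches, templates)))
```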
2007
ISBN: 9783540755548
Files in this product:
finalVersion.pdf (not available; request a copy)
Description: Main article
Type: Pre-print document
License: Private/restricted access
Size: 679.71 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11588/120642
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science (ISI): 1