Texture classification using Rao's distance on the space of covariance matrices

28/10/2015
Publication GSI2015
OAI : oai:www.see.asso.fr:11784:14302

Abstract

This paper introduces new prior distributions on the zero-mean multivariate Gaussian model, with the aim of applying them to the classification of populations of covariance matrices. These prior distributions are entirely based on the Riemannian geometry of the multivariate Gaussian model. More precisely, the proposed Riemannian Gaussian distribution has two parameters: the centre of mass Ȳ and the dispersion parameter σ. Its density with respect to the Riemannian volume is proportional to exp(−d²(Y, Ȳ)/2σ²), where d²(Y, Ȳ) is the square of Rao's Riemannian distance. We derive its maximum likelihood estimators and report an experiment on the VisTex database for the classification of texture images.
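For concreteness, Rao's distance between two SPD (covariance) matrices can be computed from the generalized eigenvalues of the pair. The following is a minimal sketch, not code from the paper; the function name and the SciPy-based implementation are ours:

```python
import numpy as np
from scipy.linalg import eigvalsh

def rao_distance(y1, y2):
    """Rao's Riemannian distance between SPD matrices Y1, Y2:
    d(Y1, Y2) = ( sum_i log^2 lambda_i )^{1/2},
    where the lambda_i are the generalized eigenvalues of the pair
    (Y2, Y1), i.e. the eigenvalues of Y1^{-1} Y2 (all positive for
    SPD inputs)."""
    lam = eigvalsh(y2, y1)  # generalized symmetric-definite eigenproblem
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```

The distance is symmetric because the eigenvalues of Y2⁻¹Y1 are the reciprocals of those of Y1⁻¹Y2, which only flips the sign of the logarithms.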

Authors

Salem Said, Lionel Bombrun, Yannick Berthoumieu

Licence

Creative Commons Attribution-ShareAlike 4.0 International

<resource  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xmlns="http://datacite.org/schema/kernel-4"
                xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4/metadata.xsd">
        <identifier identifierType="DOI">10.23723/11784/14302</identifier><creators><creator><creatorName>Lionel Bombrun</creatorName></creator><creator><creatorName>Yannick Berthoumieu</creatorName></creator><creator><creatorName>Salem Said</creatorName></creator></creators><titles>
            <title>Texture classification using Rao's distance on the space of covariance matrices</title></titles>
        <publisher>SEE</publisher>
        <publicationYear>2015</publicationYear>
        <resourceType resourceTypeGeneral="Text">Text</resourceType><subjects><subject>Information geometry</subject><subject>Texture classification</subject><subject>Riemannian centre of mass</subject><subject>Mixture estimation</subject><subject>EM algorithm</subject></subjects><dates>
	    <date dateType="Created">Sun 8 Nov 2015</date>
	    <date dateType="Updated">Wed 31 Aug 2016</date>
            <date dateType="Submitted">Mon 10 Dec 2018</date>
	</dates>
        <alternateIdentifiers>
	    <alternateIdentifier alternateIdentifierType="bitstream">8f297947c708453b058cca17b91a620d4ce8a3ef</alternateIdentifier>
	</alternateIdentifiers>
        <formats>
	    <format>application/pdf</format>
	</formats>
	<version>24693</version>
        <descriptions>
            <description descriptionType="Abstract">
The current paper introduces new prior distributions on the zero-mean multivariate Gaussian model, with the aim of applying them to the classification of covariance matrices populations. These new prior distributions are entirely based on the Riemannian geometry of the multivariate Gaussian model. More precisely, the proposed Riemannian Gaussian distribution has two parameters, the centre of mass ˉY and the dispersion parameter σ. Its density with respect to Riemannian volume is proportional to exp(−d2(Y;ˉY)), where d2(Y;ˉY) is the square of Rao’s Riemannian distance. We derive its maximum likelihood estimators and propose an experiment on the VisTex database for the classification of texture images.

</description>
        </descriptions>
    </resource>

Geometric Science of Information 2015
Non-supervised classification in the space of SPD matrices
Salem Said, Lionel Bombrun, Yannick Berthoumieu
Laboratoire IMS, CNRS UMR 5218, Université de Bordeaux
29 October 2015

Context of our work

Our project: statistical learning in the space of SPD matrices.
Our team: 3 members of the IMS laboratory + 2 post-docs (Hatem Hajri, Paolo Zanini).
Target applications: remote sensing, radar signal processing, neuroscience (BCI).
Our partners: IMB (Marc Arnaudon + PhD student), Gipsa-lab, École des Mines.
Our recent work: "Riemannian Gaussian distributions on the space of SPD matrices" (in review, IEEE IT), http://arxiv.org/abs/1507.01760

Some of our problems, given a population of SPD matrices (of any size or structure):
- non-supervised learning of its class structure
- semi-parametric learning of its density

Please look up our paper on arXiv :-)

Geometric tools

Statistical manifold: Θ = SPD, Toeplitz, Block-Toeplitz, etc., matrices.
Hessian (Fisher) metric: ds²(θ) = Hess Φ(dθ, dθ), where Φ is the model entropy. With this metric, Θ becomes a Riemannian homogeneous space of negative curvature!

Example: 2 × 2 correlation matrices ("baby Toeplitz"), the Poincaré disc model:

Θ = { [[1, θ], [θ*, 1]] : |θ| < 1 },   Φ(θ) = −log(1 − |θ|²)
⇒ ds²(θ) = |dθ|² / (1 − |θ|²)²

Why do we use this?
- Suitable mathematical properties
- Relation to entropy or "information"
- Often leads to excellent performance
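In the 2 × 2 correlation example, the Fisher metric along a straight ray from the centre of the disc integrates to the inverse hyperbolic tangent. A quick numerical sanity check (a sketch of ours, not from the slides; the function name is an assumption):

```python
import numpy as np

def radial_length(r, n=200_000):
    """Length of the straight segment from 0 to r under the Fisher
    metric ds = |dtheta| / (1 - |theta|^2) of the 2x2 correlation
    model (Poincare disc), by midpoint-rule numerical integration."""
    h = r / n
    t = (np.arange(n) + 0.5) * h  # midpoints of n subintervals of [0, r]
    return float(np.sum(1.0 / (1.0 - t**2)) * h)
```

So d(0, θ) = artanh(|θ|) under this metric, half the value of the more common convention ds = 2|dz|/(1 − |z|²) for the Poincaré disc.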
First place in the IEEE BCI challenge.

Contribution I: Introduction of Riemannian Gaussian distributions

A statistical model of a class/cluster [Pennec 2006]:

p(θ | θ̄, σ) = Z⁻¹(σ) exp( −d²(θ, θ̄) / 2σ² )

where d(θ, θ̄) is the Riemannian distance; the expression of the normalising factor Z(σ) was unknown in the literature.

Computing Z(σ):

Z(σ) = ∫_Θ exp( −d²(θ, θ̄) / 2σ² ) dv(θ)

with

d²(θ, θ̄) = tr [ log(θ⁻¹ θ̄) ]²,   dv(θ) = det(θ)^(−(m+1)/2) ∏_{i≤j} dθ_ij
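In this model, the maximum likelihood estimate of the centre parameter θ̄ is the Riemannian centre of mass (Karcher mean) of the sample, which has no closed form but can be found by a standard fixed-point iteration in the affine-invariant geometry above. A minimal sketch of ours, not code from the paper (function names and step-size choices are assumptions):

```python
import numpy as np

def _sqrtm_spd(a):
    """Matrix square root of an SPD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(w)) @ v.T

def _logm_spd(a):
    """Matrix logarithm of an SPD matrix."""
    w, v = np.linalg.eigh(a)
    return (v * np.log(w)) @ v.T

def _expm_sym(a):
    """Matrix exponential of a symmetric matrix."""
    w, v = np.linalg.eigh(a)
    return (v * np.exp(w)) @ v.T

def karcher_mean(mats, n_iter=50, step=1.0):
    """Riemannian centre of mass of SPD matrices: at each step, average
    the Riemannian logarithms of the samples at the current estimate,
    then move along the corresponding geodesic."""
    m = np.mean(mats, axis=0)  # initialise at the Euclidean mean
    for _ in range(n_iter):
        s = _sqrtm_spd(m)
        s_inv = np.linalg.inv(s)
        # mean of the samples mapped to the tangent space at m
        tangent = np.mean([_logm_spd(s_inv @ y @ s_inv) for y in mats],
                          axis=0)
        if np.linalg.norm(tangent) < 1e-12:
            break
        m = s @ _expm_sym(step * tangent) @ s
    return m
```

For commuting (e.g. diagonal) samples the iteration reduces to the geometric mean exp(mean(log Y_i)) and converges in one step; in general the negative curvature of the space guarantees the centre of mass is unique.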