Supervised morphology for structure tensor-valued images based on symmetric divergence kernels

28/08/2013
OAI : oai:www.see.asso.fr:2552:4875
DOI : 10.23723/2552/4875

Metrics

639
172
5.54 MB
 application/pdf
bitcache://4cefb390476cfa0a3dae6c7388b259b629d36351

License

Creative Commons None (All rights reserved)


Supervised morphology for structure tensor-valued images based on symmetric divergence kernels
Santiago Velasco-Forero (Math Department, National University of Singapore; ITWM, Fraunhofer Institute, Kaiserslautern, Germany)
Jesús Angulo (CMM, MINES ParisTech)
August 29, 2013

Plan
1. Introduction
2. Orders on P(d)
3. From Bregman matrix divergences to PDK on P(d)
4. Applications

Introduction
- Binary image: I : E → {0, 1}, x → x
- Grey-scale image: I : E → Z ⊂ R+, x → x ∈ [0, ..., 255]
- Color image: I : E → R^d, x → x = [x_1, ..., x_d]
- Structure tensor-valued image: I : E → P(d), x → A_x, where P(d) is the set of all d × d positive-definite symmetric matrices.

Structure tensor-valued images
I : E → P(d), x → A_x. Other examples are SAR images, Magnetic Resonance Imaging, HARDI, etc.

Morpho-world
I : E → L, where L is a lattice.
Assumptions:
- Connectivity approach.
- Order approach (total ordering, marginal ordering, etc.).
Operators:
- Granulometry
- Levelings
- Gradients and watershed
- Additive decompositions
- Adaptive filterings
- Hierarchical segmentations
- Optimal segmentations

Loewner partial order
There is a natural ordering on S(d), the so-called Loewner partial ordering, defined via the cone of positive semidefinite matrices S+(d) by

    A ≤_Loewner B ⟺ B − A ∈ S+(d), for all A, B ∈ S(d).    (1)

The application of this order to induce mathematical morphology transformations was introduced in [Burgeth et al., 2005].

Lexicographic spectral order
Recently, [Angulo, 2012] introduced an ordering based on the spectral (singular value) decomposition, as follows:

    A ≤_LS B ⟺ ∃ j, 1 ≤ j ≤ d, such that λ_i(A) = λ_i(B) for all i < j and λ_j(A) < λ_j(B),

where λ_i(A), i = 1, ..., d, are the ordered eigenvalues of A.

Figure: original image; results of the lexicographic spectral ordering, shown from the upper-left to the lower-right corner.

Supervised ordering [Velasco-Forero et al., 2011]
h-supervised order: a total ordering can be produced on a set R = {r_1, ..., r_n}, given subsets B ⊂ R (background references) and F ⊂ R (foreground references), as follows:
1. Define a valid positive definite kernel (PDK) K on the set of values.
2. Use the resulting evaluation function h as an ordering index, for instance to perform mathematical morphology transformations.
3. Apply a random tie-break rule to obtain a total order.

Bregman vector divergences

    div_φ(x, y) = φ(x) − φ(y) − (x − y)^T ∇φ(y)    (2)

- Bregman divergences are a general class of distortion functions, which include the squared Euclidean distance, the KL divergence and the Itakura-Saito distance as special cases.
- They generalise the squared Euclidean distance: if φ(x) = x^T x, then div_φ(x, y) = ||x||² − ||y||² − ⟨2y, x − y⟩ = ||x − y||² (a small numerical check of this and of the KL case is sketched right after this list).
- They generalise many properties of squared loss and relative entropy; see [Nielsen, 2009] for more details.
- There is a bijection between exponential families and Bregman divergences [Banerjee et al., 2005].
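As an illustrative aside (not part of the original slides), here is a minimal NumPy sketch of definition (2): the generator φ(x) = x^T x recovers the squared Euclidean distance, and φ(x) = Σ_i x_i log x_i recovers the generalised Kullback-Leibler divergence. All function names are hypothetical.

    import numpy as np

    def bregman_div(phi, grad_phi, x, y):
        # div_phi(x, y) = phi(x) - phi(y) - <x - y, grad phi(y)>   (eq. 2)
        return phi(x) - phi(y) - np.dot(x - y, grad_phi(y))

    # phi(x) = x^T x  ->  squared Euclidean distance
    sq_norm = lambda x: np.dot(x, x)
    grad_sq_norm = lambda x: 2.0 * x

    # phi(x) = sum_i x_i log x_i  ->  generalised Kullback-Leibler divergence
    neg_entropy = lambda x: np.sum(x * np.log(x))
    grad_neg_entropy = lambda x: np.log(x) + 1.0

    x = np.array([0.2, 0.5, 0.3])
    y = np.array([0.3, 0.4, 0.3])

    print(bregman_div(sq_norm, grad_sq_norm, x, y), np.sum((x - y) ** 2))
    print(bregman_div(neg_entropy, grad_neg_entropy, x, y),
          np.sum(x * np.log(x / y) - x + y))

Each printed pair agrees, which is exactly the content of the remarks above: the Bregman construction reproduces the familiar divergences once the generator φ is fixed.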
Bregman matrix divergences
We can naturally extend this definition to real symmetric d × d matrices in P(d). Given a strictly convex, differentiable function φ : P(d) → R, the Bregman matrix divergence is defined to be

    div_φ(A, B) = φ(A) − φ(B) − tr((∇φ(B))^T (A − B)).    (3)

The most obvious computational benefit of these divergences is that they are defined directly over positive definite matrices.

Examples:
- If φ_Frob(A) = ||A||²_F, (3) leads to the well-known squared Frobenius distance.
- If φ_vonNeumann(A) = tr(A log A − A), then ∇φ_vonNeumann(B) = (log B)^T and

    div_vonNeumann(A, B) = tr(A log A − A log B − A + B).    (4)

- If φ_Burg(A) = −log det(A), then ∇φ_Burg(B) = −B^{−T}, and we obtain

    div_Burg(A, B) = tr(AB^{−1}) − log det(AB^{−1}) − d,    (5)

  also known as Stein's loss or the LogDet divergence.

Symmetric divergence and associated kernels
Bregman divergences are non-negative and definite, but almost always asymmetric. This drawback prompted researchers to consider symmetrised divergences [Cherian et al., 2012; Sra, 2012], among which the most popular is the Jensen-Shannon symmetrisation

    S_φ(A, B) = div_φ(A, (A + B)/2) + div_φ((A + B)/2, B).    (6)

Applying (6) in the case of (5), we obtain the symmetric Stein divergence [Sra, 2012], defined as

    S_Stein(A, B) = log det((A + B)/2) − (1/2) log det(AB).    (7)

The PDK is obtained from the symmetric Stein divergence using an important result from [Sra, 2012].

Theorem [Sra, 2012]. Define d_Stein(A, B) = √(S_Stein(A, B)). Then d_Stein is a metric on P(d).

Theorem [Sra, 2012]. Let A_1, A_2, ..., A_n be real symmetric matrices in P(d). The function

    K(A_i, A_j) = exp(−β S_Stein(A_i, A_j))    (8)

is a PDK if and only if β satisfies

    β ∈ { j/2 : j ∈ N, 1 ≤ j ≤ d − 1 } ∪ { j ∈ R : j > (d − 1)/2 }.

Summary
Input: structure tensor-valued image.
1. Stein matrix symmetric divergence.
2. PDK on P(d).
3. h-supervised ordering.
4. Learned lattice structure L.
Welcome to valid mathematical morphology transformations (total-order versions); a code sketch of these steps follows below.
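To make the summary concrete, here is a small sketch (an illustrative assumption, not taken from the original slides) of the pipeline on a one-dimensional signal of SPD matrices: the symmetric Stein divergence (7), the kernel (8), and a simple kernel-based supervised index h built from foreground and background reference tensors. The exact construction of h in [Velasco-Forero et al., 2011] may differ from the sum-of-kernels form used here, and all function names (stein_divergence, stein_kernel, h_index, flat_dilation) are hypothetical.

    import numpy as np
    from numpy.linalg import slogdet

    def stein_divergence(A, B):
        # S_Stein(A, B) = log det((A + B)/2) - (1/2) log det(AB)   (eq. 7)
        return slogdet((A + B) / 2.0)[1] - 0.5 * (slogdet(A)[1] + slogdet(B)[1])

    def stein_kernel(A, B, beta=0.5):
        # K(A, B) = exp(-beta * S_Stein(A, B))   (eq. 8); beta must satisfy the
        # admissibility condition of the theorem (beta = 1/2 is admissible here).
        return np.exp(-beta * stein_divergence(A, B))

    def h_index(A, foreground, background, beta=0.5):
        # Supervised index: large when A resembles the foreground references,
        # small when it resembles the background references (illustrative choice).
        return (sum(stein_kernel(A, F, beta) for F in foreground)
                - sum(stein_kernel(A, B, beta) for B in background))

    def flat_dilation(signal, foreground, background, radius=1, beta=0.5):
        # Flat dilation of a 1-D tensor-valued signal: supremum over a sliding
        # window with respect to the h-induced (pre-)order; in practice a random
        # tie-break rule would turn it into a total order.
        h = [h_index(A, foreground, background, beta) for A in signal]
        out = []
        for i in range(len(signal)):
            window = range(max(0, i - radius), min(len(signal), i + radius + 1))
            out.append(signal[max(window, key=lambda j: h[j])])
        return out

    # Tiny example with random 2x2 SPD matrices.
    rng = np.random.default_rng(0)
    def random_spd(d=2):
        M = rng.normal(size=(d, d))
        return M @ M.T + d * np.eye(d)

    signal = [random_spd() for _ in range(7)]
    foreground, background = [signal[0]], [signal[-1]]
    dilated = flat_dilation(signal, foreground, background)

The corresponding erosion would take the infimum (minimum with respect to h) over the same window; on images the sliding window is replaced by a 2-D structuring element, and the same h-index drives gradients, watersheds and levelings as in the applications below.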
Supervised ordering
Figure: original image; results of the supervised ordering, shown from the upper-left to the lower-right corner.

Lexicographic spectral order
Figure: original image; results of the lexicographic spectral ordering, shown from the upper-left to the lower-right corner.

Applications

Application to robust segmentation
Figure: panels (a)-(k) show the original and noisy images, their markers, and the watershed results WS(·) obtained with the baseline ordering and with the proposed Stein-based ordering.

Image simplification (leveling)
Figure: original vs. leveling in the proposed ordering (grey-scale information is not used).

Future work
- Bregman Divergences and Triangle Inequality, S. Acharyya, A. Banerjee, and D. Boley, SIAM International Conference on Data Mining (SDM), 2013.
- Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices, S. Jayasumana, R. Hartley, M. Salzmann, H. Li, and M. Harandi, Computer Vision and Pattern Recognition (CVPR), 2013.
- Means of Hermitian Positive-Definite Matrices Based on the Log-Determinant α-Divergence Function, Z. Chebbi and M. Moakher, Linear Algebra and its Applications, 436, 257-276, 2012.

Art with PSD
Do you have any questions?
Figure: Centre Pompidou-Metz, Sol LeWitt collection.