GSI2013 - Geometric Science of Information

A propos

The objective of this SEE Conference hosted by MINES ParisTech is to bring together pure and applied mathematicians and engineers with a common interest in geometric tools and their applications to information analysis.

It emphasises active participation by young researchers in deliberating on emerging areas of collaborative research on “Information Geometry Manifolds and Their Advanced Applications”.

Current and ongoing applications of information-geometry manifolds in applied mathematics include advanced signal/image/video processing, complex data modeling and analysis, information ranking and retrieval, coding, cognitive systems, optimal control, statistics on manifolds, machine learning, speech/sound recognition, natural language processing, etc., all of which are also of substantial relevance to industry.

The conference is therefore organized around priority themes and topics of mutual interest, with a mandate to:

  • Provide an overview of the most recent state of the art
  • Exchange mathematical information, knowledge and expertise in the area
  • Identify research areas and applications for future collaboration
  • Identify academic and industry labs' expertise for further collaboration

This conference will be an interdisciplinary event, federating skills from geometry, probability and information theory to address, among others, the topics listed below. The conference proceedings are published in Springer's Lecture Notes in Computer Science (LNCS) series.

- Computational Information Geometry
- Hessian/Symplectic Information Geometry
- Optimization on Matrix Manifolds
- Probability on Manifolds
- Optimal Transport Geometry
- Divergence Geometry & Ancillarity
- Machine/Manifold/Topology Learning
- Tensor-Valued Mathematical Morphology
- Differential Geometry in Signal Processing
- Geometry of Audio Processing
- Geometry for Inverse Problems
- Shape Spaces: Geometry and Statistics
- Geometry of Shape Variability
- Relational Metric
- Discrete Metric Spaces

Committees

Organizing committee

Program chairs

Scientific committee

Sponsors and organizers

Documents


OPENING SESSION

License: none (all rights reserved)

Geometric Science of Information GSI'13
Frédéric Barbaresco, GSI'13 General Chair, President of the SEE SI2D Club (Signal, Image, Information & Decision)
Société de l'électricité, de l'électronique et des technologies de l'information et de la communication (SEE)

SEE at a glance
• Meeting place for science, industry and society
• An officially recognised non-profit organisation
• About 2,000 members and 5,000 individuals involved
• Large participation from industry (~50%)
• 6 technical commissions and 12 regional groups
• Organizes conferences and seminars; initiates and attracts international conferences in France
• Institutional French member of IFAC and IFIP
• Awards (Glavieux/Brillouin Prize, Général Ferrié Prize, Néel Prize, Jerphagnon Prize, Blanc-Lapierre Prize, Thévenin Prize), grades and medals (Blondel, Ampère)
• Publishes 2 periodicals (REE, 3E.I) and 3 monographs each year
• On the web: http://www.see.asso.fr and the LinkedIn SEE group
• Past SEE presidents include Louis de Broglie and Paul Langevin

1883-2013, from SIE & SFE to SEE: 130 years of science
• 1881: Exposition Internationale d'Electricité
• 1883: SIE, Société Internationale des Electriciens
• 1886: SFE, Société Française des Electriciens
• 2013: SEE, 17 rue de l'Amiral Hamelin, 75783 Paris Cedex 16, http://www.see.asso.fr/

GSI'13 sponsors: SMF/SEE

GSI'13 in figures
• More than 150 international attendees
• 100 scientific presentations over 3 days
• 3 keynote speakers:
  • Yann Ollivier (Paris-Sud Univ.): “Information-geometric optimization: the interest of information theory for discrete and continuous optimization”
  • Hirohiko Shima (Yamaguchi Univ.): “Geometry of Hessian Structures”, dedicated to Prof. J.-L. Koszul
  • Giovanni Pistone (Collegio Carlo Alberto): “Nonparametric Information Geometry”
• 1 guest speaker: Shun-ichi Amari (RIKEN Brain Science Institute): “Information Geometry and Its Applications: Survey”
• 4 social events: welcome cocktail at the Ecole des Mines, visit and concert at IRCAM, dinner at the Eiffel Tower, visit of the Mineralogy Museum at the Ecole des Mines

GSI'13: dedicated to the work of Jean-Louis Koszul
• Hessian geometry and J.-L. Koszul's works: Hirohiko Shima's book “Geometry of Hessian Structures” (World Scientific Publishing, 2007) is dedicated to Jean-Louis Koszul; Hirohiko Shima's keynote talk at GSI'13; plenary session on Hessian information geometry chaired by Prof. M. Boyom
• J.-L. Koszul, “Sur la forme hermitienne canonique des espaces homogènes complexes”, Canad. J. Math. 7, pp. 562-576, 1955
• J.-L. Koszul, “Domaines bornés homogènes et orbites de groupes de transformations affines”, Bull. Soc. Math. France 89, pp. 515-533, 1961
• J.-L. Koszul, “Ouverts convexes homogènes des espaces affines”, Math. Z. 79, pp. 254-259, 1962
• J.-L. Koszul, “Variétés localement plates et convexité”, Osaka J. Math. 2, pp. 285-290, 1965
• J.-L. Koszul, “Déformations des variétés localement plates”, Ann. Inst. Fourier 18, pp. 103-114, 1968

GSI'13 proceedings
• Published by Springer in Lecture Notes in Computer Science, LNCS vol. 8085 (879 pages), ISBN 978-3-642-40019-3
• http://www.springer.com/computer/image+processing/book/978-3-642-40019-3

GSI'13 topics
GSI'13 federates skills from geometry, probability and information theory:
• shape spaces (geometric statistics on manifolds and Lie groups, deformations in shape space, …)
• probability/optimization & algorithms on manifolds (structured matrix manifolds, structured data/information, …)
• relational and discrete metric spaces (graph metrics, distance geometry, relational analysis, …)
• computational and Hessian information geometry
• algebraic/infinite-dimensional/Banach information manifolds
• divergence geometry
• tensor-valued morphology
• optimal transport theory
• manifold & topology learning
… and applications (audio processing, inverse problems and signal processing).

GSI'13 program
Keynote and guest talks and the plenary sessions take place in the L108 Poincaré amphitheatre; parallel sessions run in amphitheatres V106A, V106B and V107. The conference runs Wednesday 28th to Friday 30th of August, each day following the same template:
• 08h30-09h00: welcome / registration
• 09h00-10h00: keynote talk (keynote 1, Yann Ollivier, at the Wednesday opening session; keynote 2, Hirohiko Shima; keynote 3, Giovanni Pistone)
• 10h00-10h30: coffee break / poster session
• 10h30-12h35: plenary session (Hessian Information Geometry I, chair Michel Boyom, on the opening day; Probability on Manifolds, chair Marc Arnaudon; Deformations in Shape Space, chair Alain Trouvé)
• 12h35-13h30: lunch break at the Ecole des Mines + poster session (chair Frédéric Barbaresco); SCILAB “GSI” toolbox initiative (amphi V107)
• 13h30-15h35: parallel sessions — Relational Metric (chair Jean-François Marcotorchino); Algebraic/Infinite-dimensional/Banach Information Manifolds (chair Giovanni Pistone); Computational Information Geometry (chair Frank Nielsen); Hessian Information Geometry II (chair Frédéric Barbaresco); Tensor-Valued Mathematical Morphology (chair Jesus Angulo); Geometry of Inverse Problems (chair Ali Mohammad-Djafari); Geometric Statistics on Manifolds and Lie Groups (chair Xavier Pennec); Machine/Manifold/Topology Learning (chairs Michael Aupetit & Frédéric Chazal); Differential Geometry in Signal Processing (chair Michel Berthier)
• 15h35-16h05: coffee break
• 16h05-18h10: parallel sessions — Discrete Metric Spaces (chairs Michel Deza & Michel Petitjean); Optimal Transport Theory (chairs Gabriel Peyré & Bertrand Maury); Geometry of Audio Processing (chairs Arshia Cont & Arnaud Dessein); Optimization on Matrix Manifolds (chair Silvère Bonnabel); Divergence Geometry & Ancillarity (chair Michel Broniatowski); Information Geometry Manifolds (chair Hichem Snoussi); Entropic Geometry (chair Roger Balian); Algorithms on Manifolds (chair Olivier Schwander); Computational Aspects of Information Geometry in Statistics (chair Frank Critchley)
• 18h15-19h15: Wednesday, cocktail at the Ecole des Mines; Thursday, guest speaker Shun-ichi Amari, “Information Geometry and Its Applications: Survey”
• Evenings and visits: Wednesday, visit and concert at IRCAM; Thursday, Mineralogy Museum visit (afternoon) and gala dinner at the 58 Tour Eiffel restaurant (first floor of the Eiffel Tower), 20h30-22h30; Friday, closing session and end of GSI'13

SCILAB GSI TOOLBOX
• To contribute to the development of Geometric Science of Information, a SCILAB “GSI” toolbox project has been initiated, inviting contributors to write external modules that extend Scilab capabilities in specific fields of GSI (information geometry, geometry of structured matrices, statistics/optimization on manifolds, …). These modules provide new features and documentation to Scilab users. A new website called the “ATOMS Portal” has been released that hosts all external modules developed by external developers. These modules can be made available to Scilab users directly from the Scilab console via a new feature named ATOMS (AuTomatic mOdules Management for Scilab), if the module author wishes it.
• http://wiki.scilab.org/ATOMS
• In parallel, external module sources can now be managed through the new Scilab Forge: http://forge.scilab.org/index.php/projects/

GSI'13 cocktail
• At the Ecole des Mines, on the terrace of the Hôtel de Vendôme (exit of the lunch-break area)
• Wednesday 28th of August, 18h15-19h15 (the school closes at 19h30!)

IRCAM visit & concert
• At IRCAM, 1 Place Igor-Stravinsky, 75004 Paris (métro/RER: Châtelet-Les-Halles)
• Wednesday 28th of August
• 19h30-20h00: 1st group (20 pers.) of the IRCAM labs visit
• 20h00-20h30: 2nd group (20 pers.) of the IRCAM labs visit
• 20h30-21h30: presentation & demo/concert of an automatic improvisation system, based on information geometry, with a saxophonist (70 persons max)
• IRCAM lab tour: we are glad to invite GSI participants to a lab tour of IRCAM. IRCAM is a unique research center in the heart of Paris bringing artists and scientists together to foster research and creativity. Besides being a joint research venture between the French Ministry of Culture, the CNRS, INRIA and Parisian universities, it is also home to artists dealing with technological innovation in their works. IRCAM has accompanied our efforts in Geometric Science of Information for some time now.
• IRCAM demo/concert: the visit will be followed by a live demonstration of automatic improvisation, featuring jazz saxophone and computer! The lab tour will be held in two separate sessions, each for a limited number of people. Participants are kindly asked to REGISTER at the registration desk for the visit and the demonstration.

GSI'13 gala dinner
• At the Eiffel Tower, Champ de Mars, 5 Avenue Anatole France; first floor of the Eiffel Tower: 58 Tour Eiffel restaurant
• Thursday 29th of August, 20h30-22h30
• A métro ticket and a first-floor lift ticket will be given at the GSI'13 registration desk

Visit of the Mineralogy Museum
• At the Ecole des Mines, Thursday 29th of August afternoon, by groups
• Opening hours: 10h00-12h00 / 13h30-17h00
• More information at the GSI'13 registration desk

Ecole des Mines de Paris
• Special thanks to the “Mathématiques et Systèmes” department of Mines ParisTech: Pierre Rouchon, Silvère Bonnabel, Jesus Angulo; http://www.mines-paristech.eu/
• Since 1783: 230 years of science

GSI topics & Ecole des Mines students (Corps des Mines)
• 1869, François Massieu: introduction of the characteristic function in thermodynamics (the Gibbs-Duhem potentials). In Massieu's words: “I show, in this memoir, that all the properties of a body can be deduced from a single function, which I call the characteristic function of this body.”
• Henri Poincaré: introduction of the characteristic function in probability; Poincaré presented Massieu's work in 1908 and the characteristic function in 1912
• Paul Lévy: general use of the characteristic function in probability
• Roger Balian: a metric for quantum states given by the Hessian metric ds² = −d²S derived from the von Neumann entropy S (R. Balian, “Dissipation in many-body systems: a geometric approach based on information theory”, 1986)

Enjoy “Geometry”
• 1713-2013: 300th birthday of Denis Diderot; Diderot & d'Alembert's Encyclopédie (geometry chapter)
• Henri Poincaré / © Gallica
• Enjoy all “geometries”!

Keynote speaker 1 (Wednesday, 09h00-10h00, L108 Poincaré amphitheatre)
Yann Ollivier, Paris-Sud University, France: “Information-geometric optimization: the interest of information theory for discrete and continuous optimization”
Biography: Yann's research generally focuses on the introduction of probabilistic models on structured objects, and more particularly addresses the interplay between probability and differential geometry. He is currently a research scientist at the CNRS, in the Computer Science department at Paris-Sud Orsay University, previously in the Mathematics department at the École Normale Supérieure in Lyon (2004-2010). He received his PhD in mathematics in 2003, under the supervision of M. Gromov and P. Pansu, and has been accredited to supervise research since 2009. http://www.yann-ollivier.org/rech/index
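Ollivier's keynote topic, information-geometric optimization, rests on the natural gradient: preconditioning the ordinary gradient by the inverse Fisher metric. A minimal sketch for the univariate Gaussian family N(mu, sigma²), where the Fisher matrix is known in closed form (the data, learning rate and step count here are illustrative, not taken from the talk):

```python
import numpy as np

def gaussian_fisher(mu, sigma):
    """Fisher information matrix of N(mu, sigma^2) in (mu, sigma) coordinates."""
    return np.array([[1.0 / sigma**2, 0.0],
                     [0.0, 2.0 / sigma**2]])

def loglik_grad(theta, x):
    """Euclidean gradient of the average log-likelihood of the sample x."""
    mu, sigma = theta
    dmu = np.mean(x - mu) / sigma**2
    dsigma = np.mean((x - mu) ** 2 - sigma**2) / sigma**3
    return np.array([dmu, dsigma])

def natural_gradient_step(theta, x, lr=0.5):
    """One natural-gradient ascent step: F^{-1} times the Euclidean gradient."""
    F = gaussian_fisher(*theta)
    return theta + lr * np.linalg.solve(F, loglik_grad(theta, x))

rng = np.random.default_rng(0)
x = rng.normal(3.0, 2.0, size=1000)
theta = np.array([0.0, 1.0])          # deliberately poor starting point
for _ in range(200):
    theta = natural_gradient_step(theta, x)
# theta now sits near the maximum-likelihood estimate (x.mean(), x.std())
```

Because the natural gradient is invariant under reparameterization of (mu, sigma), the same update behaves identically in any coordinate system, which is the property Ollivier's IGO framework exploits for discrete and continuous black-box optimization.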

POSTER SESSION (Chairman: Frédéric Barbaresco)


Fast Polynomial Spline Approximation for Large Scattered Data Sets via L1 Minimization
Laurent Gajny, Éric Nyiri, Olivier Gibaru (Laurent.GAJNY@ensam.eu, Eric.NYIRI@ensam.eu, Olivier.GIBARU@ensam.eu)

Context of the study
We propose a fast method of approximation by polynomial splines with no Gibbs phenomenon near abrupt changes in the shape of the data [Gib1899]. Aims: using theoretical results, develop a spline approximation method
• with shape-preserving properties (the L1 norm),
• with prescribed error (the ε-control),
• fast enough for real-time use (the sliding-window process).
We apply these results to the treatment of sensor data from a low-cost computer-vision system, the Leap Motion (see [Her13]). This system captures finger motion very accurately, but when the user's moves are too fast, holes may appear in the data. (Fig. 1: motion capture by the Leap Motion and industrial robot steering.)

A brief state of the art
Comparison between the L1 and L2 regression lines: let (xi, yi), i = 1, 2, …, n, be n points in the plane; the Lp regression-line problem is to find a line y = a*x + b* minimizing the Lp norm of the residuals yi − (a xi + b). The L1 norm is robust against outliers (Fig. 2: stability of the L1 regression line against outliers).

Cubic spline interpolation: let qi and ui, i = 1, 2, …, n, be n points in R^d and the associated parameters. The cubic spline interpolation problem is to find a curve γ such that
• γ(ui) = qi for all i = 1, 2, …, n,
• γ is a polynomial function of degree at most 3 on each interval [ui, ui+1].
The solution set is infinite. The C2 cubic spline interpolation problem leads to a unique solution, which satisfies a minimization property: it is a least-squares, i.e. L2, method. (Fig. 3: a cubic spline is defined by its knots and associated first-derivative values. Fig. 4: Gibbs phenomenon with cubic spline interpolation.)

The L1 cubic spline interpolation introduced in [Lav00] consists in minimizing the L1 norm of the second derivative over the set of C1 cubic splines interpolating the points qi.
+ No Gibbs phenomenon (Fig. 5).
− Non-linear problem with non-unique solutions.
A sliding-window process proposed in [NGA11] has linear complexity in the number of data points: we solve a sequence of local L1 problems and keep only the derivative value at the middle point of each window (Fig. 6). This method yields an algebraic solution and manages the non-uniqueness of solutions. (Fig. 7: global method (left) and local method (right).)

The Lp smoothing splines are obtained by minimizing a weighted sum of a data-fidelity term and the Lp norm of the second derivative. In this method the smoothing parameter is not easy to choose, and we have no control over the distance to the initial data points. (Fig. 8: overshoot with an L2 smoothing spline.)

The proposed method
We look for spline solutions that do not deviate from the initial data by more than a given tolerance (Fig. 9). The three-point algorithm performs an algebraic resolution on three points; the two cases of Proposition 1 are whether condition (3) is satisfied or not (Fig. 10). When (3) is satisfied, the solution set may be infinite. To compute a relevant solution, we extend the window with additional neighbours and solve the ε-controlled L1 regression-line problem (Fig. 11). We consider the case n = 5 and give more weight to the three middle points, choosing w2 = w3 = w4 > w1 = w5 = 1.

Behavior on a Heaviside data set: Fig. 12 shows the approximation of a Heaviside data set (left) and a zoom on the top part of the jump (right).

Behavior on noisy data sets: after applying the algorithm to an initial noisy data set, the spline solution is not smooth; however, the method can be iterated. At step k+1, the data points are the approximation points computed at step k. This process produces a smoothing effect while preserving the jumps (Fig. 13: approximations over noisy data sets, iterations 1, 3 and 6). We are currently studying the proposed approach for the problem of approximation of functions.

References
[Her13] F. Hernoux, R. Béarée, L. Gajny, E. Nyiri, J. Bancalin, O. Gibaru, Leap Motion pour la capture de mouvement 3D par spline L1. Application à la robotique. GTMG 2013, Marseille, 27-28 March 2013.
[Gib1899] J. Willard Gibbs, letter to the editor, Nature 59 (April 27, 1899), 606.
[Lav00] J. E. Lavery, Univariate cubic Lp splines and shape-preserving multiscale interpolation by univariate cubic L1 splines. CAGD 17 (2000), 319-336.
[Lav00bis] J. E. Lavery, Shape-preserving, multiscale fitting of univariate data by cubic L1 smoothing splines. Comput. Aided Geom. Design 17 (2000), 715-727.
[NGA11] E. Nyiri, O. Gibaru, P. Auquiert, Fast L1-C^k polynomial spline interpolation algorithm with shape-preserving properties. CAGD 28(1) (2011), 65-74.
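The robustness of the L1 regression line against outliers (Fig. 2 of the poster) is easy to reproduce. The poster solves its local problems algebraically; the sketch below instead uses a generic iteratively-reweighted-least-squares solver, so it only illustrates the L1-vs-L2 behaviour, not the authors' algorithm:

```python
import numpy as np

def l1_line(x, y, iters=100, eps=1e-8):
    """Least-absolute-deviations line y = a*x + b via iteratively
    reweighted least squares (a generic L1 solver, not the poster's
    sliding-window method)."""
    A = np.column_stack([x, np.ones_like(x)])
    coef = np.linalg.lstsq(A, y, rcond=None)[0]      # start from the L2 fit
    for _ in range(iters):
        r = y - A @ coef
        w = 1.0 / np.maximum(np.abs(r), eps)         # reweight by 1/|residual|
        coef = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return coef

x = np.arange(10.0)
y = 2.0 * x + 1.0
y[7] += 50.0                                         # a single gross outlier
a2, b2 = np.linalg.lstsq(np.column_stack([x, np.ones_like(x)]), y, rcond=None)[0]
a1, b1 = l1_line(x, y)
# the L1 fit stays close to the true line, the L2 fit is pulled by the outlier
```

The same behaviour carries over to the L1 splines of the poster: the absolute-value fidelity term refuses to trade a large error at one point against small errors everywhere else, which is what suppresses the Gibbs overshoot.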


Target Detection in Non-Stationary Clutter Background and Riemannian Geometry
Haiyan Fan, 2013.05.21

Contents: 1. Background; 2. Methodology & technology road; 3. Experiment program & results; 4. Conclusions

Background
• The emergence of Riemannian-geometry approaches has opened a new era of statistical signal processing, and non-stationary signal detection is gradually gaining importance. Many kinds of signals we meet today are non-stationary: non-Gaussian sea clutter is essentially non-stationary, and ultrasound Doppler signals obtained from physiological flows are also non-stationary.
• A Riemannian manifold gives a more natural description of the signal structure: measured signals often belong to manifolds that are not vector spaces, so processing them in flat Euclidean space is imprecise; moreover, Riemannian manifolds satisfy the invariance requirements needed to build statistical tools on transformation groups and homogeneous manifolds without paradoxes. Barbaresco et al. have done much work applying Riemannian metrics to radar target detection.
• Existing methods: the Riemannian-geometry (RG) approach proposed by Barbaresco proceeds in five steps: (1) autoregressive-coefficient parameterization; (2) Riemannian metric and Riemannian distance; (3) Riemannian geodesic; (4) Riemannian median; (5) target detection. In step 1, a regularized Burg algorithm parameterizes the signal, which is then mapped into a complex Riemannian manifold identified by the autoregressive coefficients; the Riemannian distance and Riemannian median are derived for this manifold. The detection principle is: if a cell lies at a large Riemannian distance from its Riemannian median, a target is supposed to be present at that location.

Methodology & technology road
• RG+SLAR: based on Barbaresco's work, we extend the Riemannian-geometry method to target detection for non-stationary signals, combining the RG technical road of Barbaresco with a smooth-prior long AR model (SLAR) that better accommodates the non-stationarity of the signal.
• The SLAR model performs pseudo-stationary spectral analysis of a non-stationary signal in a short analysis window, fitting a large model order to a relatively short window. A smoothness constraint overcomes the ill-posedness and spurious peaks brought by the high-order model, and avoids the underestimation of the model order that an order-selection criterion would produce.
• Technology roadmap: the observation flow is parameterized by reflection coefficients of the SLAR model and mapped into a complex Riemannian manifold. For each cell under test, the Riemannian median of the surrounding reference cells (excluding guard cells on either side) is computed; the Riemannian distance from the cell under test to this median is compared with a threshold, set by empirical value, to declare detection.

Experiment program & results
A typical instance of target detection in a non-stationary clutter background is radar detection in non-Gaussian sea clutter, so the experiments use this problem to demonstrate the performance of the proposed method.
• Numeric experiments: simulated examples validate the performance of the proposed RG+SLAR method, compared with DFT-based Doppler filtering and with the RG approach using the regularized Burg algorithm (RG+ReBurg).
• Real target detection: the RG+SLAR method is applied to real target detection within sea clutter, using McMaster IPIX radar data.

Table 1. Simulated radar parameters
• Carrier frequency: 10 GHz; bandwidth: 10 MHz; pulse repetition frequency: 10 kHz
• Unambiguous range interval: 15 km; unambiguous velocity interval: 150 m/s

Table 2. Target parameters (SNR = signal-to-noise ratio; Rel_RCS = relative RCS, i.e. RCS / max(RCS) in dB, where max(RCS) is the maximum RCS of the 4 targets)
• Range 2 km, SNR −3 dB, Rel_RCS −26.7 dB, velocity 60 m/s
• Range 3.8 km, SNR 5 dB, Rel_RCS −7.55 dB, velocity 30 m/s
• Range 4.4 km, SNR 10 dB, Rel_RCS 0 dB, velocity −30 m/s
• Range 4.4 km, SNR 7 dB, Rel_RCS −3 dB, velocity −60 m/s

Numeric experiment results (RG+SLAR): Figure 1 shows the range-velocity map of the clutter-cancelled data obtained from SLAR-modeled spectral estimation; the velocity axis is linearly mapped from frequency via the speed of light and the carrier frequency. Figure 2 shows the range-velocity map of the Riemannian median of the clutter-cancelled data, based on the reflection-coefficient parameterization using the SLAR model. Figure 3 shows the ranges with targets detected by the RG+SLAR method.

Table 3. Detected target parameters (RG+SLAR)
• Range 2 km, Rel_RCS −30.8 dB, velocity 61.81 m/s
• Range 3.8 km, Rel_RCS −12.5 dB, velocity 31.39 m/s
• Range 4.4 km, Rel_RCS 0 dB, velocity −29.89 m/s
• Range 4.4 km, Rel_RCS −3.38 dB, velocity −62.05 m/s

Doppler filtering & RG+ReBurg: Figure 5 shows the range-velocity maps of the clutter-cancelled data obtained (a) by spectral estimation of each range bin using the regularized Burg algorithm and (b) by Doppler filtering in slow time. Figure 6 shows the corresponding ambient estimations: (a) the Riemannian median of the clutter-cancelled data parameterized by the RG+ReBurg method and (b) the Doppler-filtering estimate. Figure 7 shows the range peaks detected (a) by the RG+ReBurg method and (b) by the Doppler-filtering method.

Real target detection: the measured data are the file 19931118_023604_stare C0000.cdf collected by the McMaster IPIX radar.

Table 4. IPIX radar parameters
• Environment: wind condition 0-60 km/h, wind gust 90 km/h, wave condition 0.8-3.8 m, wave peak 5.5 m
• Geometry: antenna azimuth 170.2606°, antenna elevation 359.5605°, beam width 0.9°, antenna gain 45.7 dB
• Radar: unambiguous velocity 7.9872 m/s, range resolution 15 m, carrier frequency 9.39 GHz, PRF 1 kHz

The average target-to-clutter ratio varies in the range 0-6 dB, and only one weak, static target with small fluctuation is present, in range bin 8 (primary target bin), with neighbouring range bins 7-10 where the target may also be visible (secondary target bins). Figure 8 shows (a) the range-velocity contour of the pre-processed data and (b) the ambient estimation of the pre-processed data based on the reflection-coefficient parameterization using SLAR. Figure 9 shows the range bins with target: the primary target appears in range bin 8, and the secondary target region spreads over range bins 7-9. Figure 10 shows the velocity detection of the primary range bin 8.

Conclusions
A. Numeric and real target-detection experiments show that the proposed RG+SLAR method can attenuate the contamination brought by non-stationary clutter disturbance.
B. The detection statistic based on Riemannian geometry achieves higher target-detection accuracy than DFT-based Doppler filtering does.
C. The innovative idea of combining the SLAR model and Riemannian geometry achieves precise measurement of target location and velocity for non-stationary signals.
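The RG building blocks, the Riemannian distance and the Riemannian median of reflection coefficients, can be sketched for a single complex reflection coefficient living in the Poincaré unit disk (the method itself works on vectors of coefficients from the SLAR model; the reference-cell values and the detection threshold below are made up for illustration):

```python
import numpy as np

def mobius(p, z):
    """Möbius transform of the unit disk sending p to the origin."""
    return (z - p) / (1.0 - np.conj(p) * z)

def pdist(p, q):
    """Poincaré-disk (hyperbolic) distance between two reflection coefficients."""
    return 2.0 * np.arctanh(abs(mobius(p, q)))

def logmap(p, q):
    """Log map at p, expressed in the chart that moves p to the origin."""
    u = mobius(p, q)
    r = abs(u)
    return 0j if r < 1e-15 else 2.0 * np.arctanh(r) * (u / r)

def expmap(p, v):
    """Inverse of logmap: shoot a geodesic of length |v| from p."""
    r = abs(v)
    if r < 1e-15:
        return p
    return mobius(-p, np.tanh(r / 2.0) * (v / r))

def riemannian_median(points, iters=100, damping=0.5):
    """Weiszfeld-type fixed-point iteration for the Fréchet median on the disk."""
    x = points[0]
    for _ in range(iters):
        num, den = 0j, 0.0
        for q in points:
            d = pdist(x, q)
            if d < 1e-12:
                continue                 # skip the anchor point itself
            num += logmap(x, q) / d      # unit tangent vector toward q
            den += 1.0 / d
        if den == 0.0:
            break
        x = expmap(x, damping * num / den)
    return x

# Hypothetical reference cells (clutter) vs. a cell under test.
refs = [0.30 + 0.10j, 0.28 + 0.12j, 0.32 + 0.09j, 0.29 + 0.11j]
cut = -0.40 + 0.45j
med = riemannian_median(refs)
is_target = pdist(med, cut) > 1.0        # threshold set by empirical value
```

The median, unlike the Riemannian mean, is robust to outliers among the reference cells, which is why the RG detection scheme uses it as the clutter estimate.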


Visual Point Set Processing with Lattice Structures : Application to Parsimonious Representations of Digital Histopathology Images Nicolas Lom´enie Universit´e Paris Descartes, LIPADE, SIP Group nicolas.lomenie@parisdescartes.fr Digital tissue images are too big to be processed with traditional image processing pipelines. We resort to the nuclear architecture within the tissue to explore such big images with geometrical and topological representations based on Delaunay triangulations of seed points. Then, we relate this representation to the parsimonious paradigm. Finally, we develop specific mathematical morphology operators to analyze any point set and contribute to the exploration of these huge medical images. Preliminary results proved good performance for both focusing on areas of interest and discrimination between slightly but significantly varying nuclear geometric configurations. Keywords : Digital Histopathology ; Point Set Processing ; Mathematical Morphology Sparsity and Digital Histopathology The rationale : 1. Shape as a geometric visual point set vs. an assembly of radiometric pixels ; 2. Image Analysis/Pattern Recognition Issues over Geometric and hence Sparse repre- sentations ; 3. Versatile nature of digital high-content histopathological images : staining procedure, biopsy techniques → structural analysis. The statement : Promoting new representations for the exploration of Whole Slide Images (WSIs) by using the recently acknowledged sparsity paradigm based on geometric representations. In [Chen et al. 2001], Chen et al. relates Huo’s findings about general image analysis : ”In one experiment, Huo analyzed a digitized image and found that the humanly interpretable information was really carried by the edgelet component of the de- composition. This surprising finding shows that, in a certain sense, images are not made of wavelets, but instead, the perceptually important components of the image are carried by edgelets. 
This contradicts the frequent claim that wavelets are the optimal basis for image representation, which may stimulate discussion.” We propose a sparse representation of a WSI based on a codebook of representative cells that are translated over the seed points detected by a low level processing operator as illustrated below. We use a semantic sparse representation relying on the most robustly detected significant tissue elements : the nuclei. WSInuclear(x, y) = (i,j)∈S δi,j(x, y) ∗ Cell Atom where S is a geometric point set corresponding to the nucleus seeds and Cell Atom is an atomic cell element image in the specific case of a 1-cell dictionary. S can be considered as a sparse representation of a WSI according to the given definition of a s-sparse vector x ∈ ℜd as given in [Needell & Ward 2012] : ||x||0 = |supp(x)| ≤ d << s (a) (b) (c) (d) (e) (f) (g) (h) (i) Sparse representation of a WSI illustrated with a tubule/gland structure ; (a) based on the (b) 1- atomic cell dictionary and the sparse represen- tation in (c) as a point set binary matrix S1 ; (d) Reconstruction of the tubule by convolution with a point set S1 obtained with a specific seed extractor ; (e) Superimposed with the gland ; (f) Reconstruction of the tubule by convolution with a point set S2 obtained with another specific seed extractor ; (g) superimposed with the gland struc- ture ; (h)(i) Sparse representations over a 1024 × 1024 sub-image of a more complex view out of a WSI (about 50 000 × 70 000 pixels size). In the field of computational pathology, graph-based representations and geometric science of information are gaining momentum [Doyle et al. 2008]. R´ef´erences [Chen et al. 2001] Chen SS, Donoho DL, Saunders MA. (2001) Atomic Decomposition Basis Pursuit, SIAM Review, 3(1), 129-159. [Doyle et al. 2008] Doyle, S., Agner, S., Madabhushi, A., Feldman, M. and Tomaszewski, J. (2008). 
Automated Grading of Breast Cancer Histopathology Using Spectral Clustering with Textural and Architectural Image Features, 5th IEEE Interna- tional Symposium on Biomedical Imaging, 29 :496-499. [Needell & Ward 2012] Needell D, Ward, R. (2012) Stable image reconstruction using total variation minimization http://arxiv.org/abs/1202 Point Set Processing Point set processing in the manner of image processing is gaining momentum in the computer graphics community [Rusu & Cousins 2011] with the example of the Point Cloud Library (PCL : http://www.pointclouds.org) inspired by the GNU Image Manipulation Program (GIMP : http://www.gimp.org). At the same time, in the field of applied mathematics, a new trend consists in adapting mature image analysis algorithms working on regular grids to parsimonious representa- tions like graphs of interest points or superpixels [Ta et al. 2009]. Applying mathematical morphology to graphs was first suggested in [Heijmans et al. 1992] but never really came up with tractable applications. Nevertheless, the idea is emerging again with recent works by the mathematical morphology pioneers [Cousty et al. 2009] and was also related to the concept of α-objects in [Lom´enie & Stamon 2008] based on seminal ideas in [Lom´enie et al. 2000] and then applied to the modeling of spatial relations and histopathology in [Lom´enie & Racoceanu 2012]. Lattice Structures for Point Set Processing We refer the reader to [Lom´enie & Stamon 2011] for a detailed presentation of the ma- thematical morphology framework operating on point sets. But formally it is enough to define a lattice structure operating on unorganized point sets, or more precisely, on a tessellation of the space that embeds any point set S in a neighborhood system. For any point set S ⊂ ℜ2, it exists a Delaunay triangulation Del(S) defining the aforementioned topology of the workspace. This mesh acts as the regular grid for a radiometric image. 
Then we define the complete lattice algebraic structure L = (M(Del), ≤), where M(Del) is the set of meshes defined on Del, that is, the set of mappings from a triangle T in Del(S) to a value φ_T in ℝ: M ∈ M(Del) = {(T, φ_T)}_{T∈Del}. The partial ordering relation ≤ is defined as follows:

∀M₁, M₂ ∈ M(Del), M₁ ≤ M₂ ⟺ ∀T ∈ Del, φ¹_T ≤ φ²_T

where φ_T is a positive measure of the k-simplex T in Del(S) related to the size, shape, area or visibility of the triangle [Loménie & Racoceanu 2012]. The infimum and supremum operators are defined as follows:

∀M₁, M₂ ∈ M(Del), inf(M₁, M₂) = {(T, min(φ¹_T, φ²_T))}_{T∈Del} and sup(M₁, M₂) = {(T, max(φ¹_T, φ²_T))}_{T∈Del}

Then, given the basic definitions of an erosion operator e and an involution c, we inherit the infinite spectrum of theoretically well-founded operators from mathematical morphology:

∀M ∈ M(Del), e(M) = {(T, e_T)}_{T∈Del} and M^c = {(T, 1 − φ_T)}_{T∈Del}

Left: the pyramid of structural operators we can obtain, ranging from the fundamental low-level erosion operator to the semantic high-level Ductal Carcinoma In Situ (DCIS) characterization and the representation of spatial relationships such as 'between'.

Structural Analysis for Digital Histopathology

Focusing operators: (top) focusing on a tumorous area at magnification ×1 of the WSI; (bottom) focusing on a small part of the WSI at ×20.

Pattern recognition operators: (above) characterizing a DCIS structure with a structural bio-code '110' based on our operators, with a precise (Method 1) and a coarse (Method 2) seed nuclei extractor at magnification ×40. (Below) New results on a small database.

Type                  Nb samples   Correct biocodes
                                   Method 1   Method 2
DCIS(S) = '110'       10           9          8
DCISpost(S) = '110'   10           9          9
Tubule(S) = '101'     10           10         10

Digital histopathology and geometric information science: great challenges to tackle in the coming decade [GE Healthcare 2012].

References
[Cousty et al. 2009] Cousty, J., Najman, L., and Serra, J. (2009).
Some morphological operators in graph spaces, Lecture Notes in Computer Science, Mathematical Morphology and Its Application to Signal and Image Processing, Springer, 5720:149-160.
[GE Healthcare 2012] Pathology Innovation Centre of Excellence (PICOE). Digital Histopathology: A New Frontier in Canadian Healthcare. White Paper, January 2012. GE Healthcare. http://www.gehealthcare.com/canada/it/downloads/digitalpathology/GE_PICOE_Digital_Pathology_A_New_Frontier_in_Canadian_Healthcare.pdf. Accessed December 2012.
[Heijmans et al. 1992] Heijmans, H., Nacken, P., Toet, A., & Vincent, L. (1992). Graph Morphology. Journal of Visual Communication and Image Representation, 3(1):24-38.
[Loménie et al. 2000] Loménie, N., Gallo, L., Cambou, N. & Stamon, G. (2000). Morphological Operations on Delaunay Triangulations. International Conference on Pattern Recognition, 556-559.
[Loménie & Stamon 2008] Loménie, N. and Stamon, G. (2008). Morphological Mesh Filtering and alpha-objects, Pattern Recognition Letters, 29(10):1571-1579.
[Loménie & Stamon 2011] Loménie, N. and Stamon, G. (2011). Point Set Analysis, Advances in Imaging and Electron Physics, Peter W. Hawkes (ed.), San Diego: Academic Press, vol. 167, pp. 255-294.
[Loménie & Racoceanu 2012] Loménie, N. and Racoceanu, D. (2012). Point set morphological filtering and semantic spatial configuration modeling: application to microscopic image and bio-structure analysis, Pattern Recognition, 45(8):2894-2911.
[Rusu & Cousins 2011] Rusu, R.B. and Cousins, S. (2011). 3D is here: Point Cloud Library (PCL), IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China.
[Ta et al. 2009] Ta, V.T., Lezoray, O., Elmoataz, A. and Schupp, S. (2009). Graph-based Tools for Microscopic Cellular Image Segmentation, Pattern Recognition, Special Issue on Digital Image Processing and Pattern Recognition Techniques for the Detection of Cancer, 42(6):1113-1125.
Acknowledgement: This work is part of the SPIRIT project, program JCJC 2011 (ref. ANR-11-JS02-008-01), and of the MICO project, program TecSan 2010 (ref. ANR-10-TECS-015). A free demonstrator can be downloaded at http://www.math-info.univ-paris5.fr/~lomn/Data/MorphoMesh.zip as an ImageJ plugin.
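The sparse nuclear representation described in this section, a seed point set S convolved with a single cell atom, can be sketched as follows (a 5×5 toy grid and a plus-shaped 3×3 "cell" stand in for the gigapixel WSI and the learned dictionary; all names are illustrative):

```python
# Sparse WSI reconstruction: sum_{(i,j) in S} delta_ij * CellAtom,
# implemented as stamping a centered copy of the atom at every seed.
def reconstruct(shape, seeds, atom):
    h, w = shape
    ah, aw = len(atom), len(atom[0])
    img = [[0.0] * w for _ in range(h)]
    for (i, j) in seeds:
        for di in range(ah):
            for dj in range(aw):
                r, c = i + di - ah // 2, j + dj - aw // 2
                if 0 <= r < h and 0 <= c < w:   # clip at the image border
                    img[r][c] += atom[di][dj]
    return img

cell_atom = [[0, 1, 0],
             [1, 1, 1],
             [0, 1, 0]]           # toy nucleus template (1-cell dictionary)
seeds = {(1, 1), (3, 3)}          # sparse point set S: |supp(S)| = 2 << 25
wsi = reconstruct((5, 5), seeds, cell_atom)
```

Storing only the seed coordinates and the atom is the whole point: the reconstruction above is recoverable from 2 seed positions instead of 25 pixel values.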

ORAL SESSION 1 Geometric Statistics on manifolds and Lie groups (Xavier Pennec)

Creative Commons: None (All rights reserved)

A Subspace Learning of Dynamics on a Shape Manifold: A Generative Modeling Approach
Sheng Yi* and H. Krim, VISSTA, ECE Dept., NCSU, Raleigh NC 27695 (*GE Research Center, NY). Thanks to AFOSR.

Outline
•  Motivation
•  Statement of the problem
•  Highlight key issues and brief review
•  Proposed model and solution
•  Experiments

Problem Statement
X(t) → Z(t): looking for a subspace that preserves the geometrical properties of the data in the original space.

Related Work
•  Point-wise subspace learning: PCA, MDS, LLE, ISOMAP, Hessian LLE, Laplacian Mapping, Diffusion Map, LTSA [T. Wittman, "Manifold Learning Techniques: So Which is the Best?", UCLA]
•  Curve-wise subspace learning: Whitney embedding [D. Aouada and H. Krim, IEEE Trans. IP, 2010]
•  Shape manifolds:
   –  Kendall's shape space: based on landmarks
   –  Klassen et al.'s shape space: functional representation; concise description of the tangent space and fast implementation
   –  Michor & Mumford's shape space: focus on parameterization; complex description of the tangent space and heavy computation
   –  Trouvé and Younes: diffeomorphism approach

Contribution Summary
•  The proposed subspace learning is invertible: original sequence → subspace sequence → reconstructed sequence
Contribution Summary (continued)
•  The parallel transport of representative frames defined by a metric on the shape manifold preserves curvatures in the subspace
•  Ability to apply ambient space calculus instead of relying essentially on manifold calculus

Shape Representation
•  From curve to shape [Klassen et al.]: a simple closed planar curve α(s) = (x(s), y(s)) ∈ ℝ² is represented by its direction function θ(s), with ∂α/∂s = (cos θ(s), sin θ(s)), modulo Sim(n).
Closure: ∫₀^{2π} cos θ(s) ds = 0 and ∫₀^{2π} sin θ(s) ds = 0
Rotation: (1/2π) ∫₀^{2π} θ(s) ds = π

Dynamic Modeling on a Manifold
•  The core idea: a process X_t on M is driven by a process Z(t) on ℝ^{dim(M)} through tangent vectors V_i(X_t) ∈ T_{X_t}M:

dX_t = Σ_{i=1}^{dim(M)} V_i(X_t) dZ_i(t) ∈ T_{X_t}M

Parallel Transport
•  Tangent vectors along the curve are parallel transported from X₀ to X₁ on M [Yi et al., IEEE IP, 2012]

The Core Idea
•  Adaptively select the frame to represent dX_t in a lower-dimensional space: PCA on vectors parallel transported to a common tangent space yields the driving process dZ_i(t) in ℝ^{dim(M)}

Formulation of Curves on the Shape Manifold
•  A shape is a point on the shape manifold, and vectors V span its tangent space [Yi et al., IEEE IP, 2012]; a Euclidean driving process in the original space becomes a driving process in the subspace spanned by the selected vectors.

Core Idea
•  Restrict the selection of V to parallel frames on the manifold.
•  Advantages of a parallel moving frame: angles between tangent vectors are preserved, so with the same sampling scheme curvatures are preserved as well; given the initial frame and initial location on the manifold, the original curve can be reconstructed.
•  Find an L²-optimal V: the Euclidean distance is used here because the comparison takes place within a tangent space of the manifold.
•  Given a parallel transport on the shape manifold, with some mild assumptions the solution can be obtained as a PCA.

Parallel Transport Flow Chart
By the definition of parallel transport, with a discrete approximation of the derivative, and using the fact that the tangent space of the shape manifold is normal to b₁, b₂, b₃, one obtains a linear system.

Experiments
•  Data:
   –  Moshe Blank et al., ICCV 2005, http://www.wisdom.weizmann.ac.il/~vision/SpaceTimeActions.html
   –  Kimia Shape Database: Sharvit, D. et al., Content-Based Access of Image and Video Libraries, 1998
•  Actions: walk, run, jump, gallop sideways, bend, one-hand wave, two-hands wave, jump in place, jumping jack, skip

[Figures: reconstruction experiment comparing PCA in Euclidean space with the proposed method; further reconstructions and embeddings; experiments on curvature preservation; generative reconstruction]

Conclusions
•  A low-dimensional embedding of a parallelly transported shape flow is proposed
•  A learning-based inference framework is achieved
•  A generative model for various shape-based activities is obtained
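The tangent-space PCA step at the heart of this approach can be sketched on the unit sphere as a stand-in for the shape manifold (a minimal illustration: `log_map`, `tangent_pca`, and the normalized-Euclidean-mean shortcut are generic choices, not the authors' implementation):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def log_map(p, q):
    """Riemannian log on S^2: tangent vector at p pointing to q, length d(p, q)."""
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    theta = math.acos(d)
    if theta < 1e-12:
        return [0.0, 0.0, 0.0]
    u = [q[i] - d * p[i] for i in range(3)]      # component orthogonal to p
    un = math.sqrt(sum(c * c for c in u))
    return [theta * c / un for c in u]

def tangent_pca(points):
    # normalized Euclidean mean: a common approximation of the intrinsic
    # mean for concentrated data
    mu = normalize([sum(p[i] for p in points) / len(points) for i in range(3)])
    # orthonormal basis (e1, e2) of the tangent plane at mu
    a = [1.0, 0.0, 0.0] if abs(mu[0]) < 0.9 else [0.0, 1.0, 0.0]
    e1 = normalize([a[1]*mu[2]-a[2]*mu[1], a[2]*mu[0]-a[0]*mu[2], a[0]*mu[1]-a[1]*mu[0]])
    e2 = [mu[1]*e1[2]-mu[2]*e1[1], mu[2]*e1[0]-mu[0]*e1[2], mu[0]*e1[1]-mu[1]*e1[0]]
    coords = [(sum(v[i]*e1[i] for i in range(3)), sum(v[i]*e2[i] for i in range(3)))
              for v in (log_map(mu, p) for p in points)]
    # 2x2 covariance and its leading eigenvalue (closed form)
    cxx = sum(x*x for x, _ in coords) / len(coords)
    cyy = sum(y*y for _, y in coords) / len(coords)
    cxy = sum(x*y for x, y in coords) / len(coords)
    tr, det = cxx + cyy, cxx*cyy - cxy*cxy
    lam1 = tr/2 + math.sqrt(max(0.0, tr*tr/4 - det))
    return coords, lam1

# example: five points along a great circle through the pole
samples = [normalize([math.sin(t), 0.0, math.cos(t)]) for t in (-0.3, -0.15, 0.0, 0.15, 0.3)]
coords2d, top_var = tangent_pca(samples)
```

For data on a geodesic through the mean, the log-mapped coordinates fall on a line in the tangent plane, so the leading eigenvalue recovers the one-dimensional variance.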


Xavier Pennec, Asclepios team, INRIA Sophia-Antipolis – Méditerranée, France
Bi-invariant Means on Lie Groups with Cartan-Schouten Connections
GSI, August 30, 2013

Computational Anatomy
Design mathematical methods and algorithms to model and analyze the anatomy:
 Statistics of organ shapes across subjects in species, populations, diseases: mean shape, shape variability (covariance)
 Model organ development across time (heart-beat, growth, ageing): predictive (vs. descriptive) models of evolution, correlation with clinical variables

Statistical Analysis of Geometric Features
Noisy geometric measures: tensors, covariance matrices; curves, tracts; surfaces; transformations (rigid, affine, locally affine, diffeomorphisms).
Goal: deal with noise consistently on these non-Euclidean manifolds, within a consistent statistical (and computing) framework.

Statistical Analysis of the Scoliotic Spine
Data: 307 scoliotic patients from Montreal's St-Justine Hospital; 3D geometry from multi-planar X-rays; articulated model: 17 relative poses of successive vertebrae.
Statistics: the main translation variability is axial (growth?); the main rotational variability is around the anterior-posterior axis; the 4 first variation modes are related to King's classes. [J. Boisvert et al., ISBI'06, AMDO'06 and IEEE TMI 27(4), 2008]

Morphometry through Deformations
Measure of deformation [D'Arcy Thompson 1917; Grenander & Miller]:
 Observation = "random" deformation of a reference template
 Deterministic template = anatomical invariants [atlas ~ mean]
 Random deformations = geometrical variability [covariance matrix]
[Diagram: atlas deformed onto patients 1 to 5]

Hierarchical Deformation Model
Subject level: varying deformation atoms M0, …, M6 for each subject; population level: the spatial structure of the anatomy common to all subjects (weights w0, …, w6), modeled as Aff(3)-valued trees. [Seiler, Pennec, Reyes, Medical Image Analysis 16(7):1371-1384, 2012]

Outline
 Riemannian frameworks on Lie groups
 Lie groups as affine connection spaces
 A glimpse of applications in infinite dimensions
 Conclusion and challenges

Riemannian geometry is a powerful structure for building consistent statistical computing algorithms:
 Shape spaces & directional statistics [Kendall StatSci 89; Small 96; Dryden & Mardia 98]
 Numerical integration, dynamical systems & optimization [Helmke & Moore 1994; Hairer et al. 2002]; matrix Lie groups [Owren BIT 2000; Mahony JGO 2002]; optimization on matrix manifolds [Absil, Mahony, Sepulchre 2008]
 Information geometry (statistical manifolds) [Amari 1990 & 2000; Kass & Vos 1997; Oller & Corcuera Ann. Stat. 1995; Bhattacharya & Patrangenaru Ann. Stat. 2003 & 2005]
 Statistics for image analysis: rigid body transformations [Pennec PhD96]; general Riemannian manifolds [Pennec JMIV98, NSIP99, JMIV06]; PGA for M-Reps [Fletcher IPMI03, TMI04]; planar curves [Klassen & Srivastava PAMI 2003]
 Geometric computing: subdivision schemes [Rahman, …, Donoho, Schröder SIAM MMS 2005]

The Geometric Framework: Riemannian Manifolds
Riemannian metric: a dot product on each tangent space gives the speed and length of a curve; geodesics are length-minimizing curves, and the Riemannian distance is the length of the minimizing geodesic.

Operator          Euclidean space                 Riemannian manifold
Subtraction       xy = y − x                      xy = Log_x(y)
Addition          y = x + xy                      y = Exp_x(xy)
Distance          dist(x, y) = ||y − x||          dist(x, y) = ||xy||_x
Gradient descent  x_{t+ε} = x_t − ε C'(x_t)       x_{t+ε} = Exp_{x_t}(−ε C'(x_t))

Exponential map (normal coordinate system):
 Geodesic shooting: Exp_x(v) = γ_{(x,v)}(1)
 Log: find the vector to shoot right (geodesic completeness!)
 Unfolding (Log_x), folding (Exp_x): vector → bipoint (no more equivalence classes)

Statistical Tools: Moments
Fréchet/Karcher mean minimizes the variance:
x̄ = arg min_{y∈M} E[dist(y, x)²], characterized by E[x̄x] = ∫_M x̄z p(z) dz = 0 (if the cut locus C has null probability, P(C) = 0)
Existence and uniqueness: Karcher / Kendall / Le / Afsari.
Gauss-Newton geodesic marching: x̄_{t+1} = exp_{x̄_t}(v) with v = E[x̄_t x]
Covariance (PCA) [higher moments]: Σ = E[x̄x · x̄xᵀ] = ∫_M x̄z · x̄zᵀ p(z) dz
[Oller & Corcuera 95; Bhattacharya & Patrangenaru 2002; Pennec JMIV06, NSIP'99]

Distributions for Parametric Tests
Generalization of the Gaussian density:
 Stochastic heat kernel p(x, y, t) [complex time dependency]
 Wrapped Gaussian [infinite series difficult to compute]
 Maximal entropy knowing the mean and the covariance:
N(y) = k exp(−x̄yᵀ Γ x̄y / 2), with k = (2π)^{−n/2} det(Σ)^{−1/2} (1 + O(σ³) + O(σ/r)) and Γ = Σ^{−1} − (1/3) Ric + O(σ) + O(σ/r)
Mahalanobis D² distance / test: μ²_{x̄}(y) = x̄yᵀ Σ^{−1} x̄y
 Any distribution: E[μ²(x)] = n
 Gaussian: E[μ²(x)] = n + O(σ³) + O(σ²/r)
[Pennec, NSIP'99, JMIV 2006]

Natural Riemannian Metrics on Lie Groups
Lie groups: a smooth manifold G compatible with the group structure (the composition g ∘ h and the inversion g⁻¹ are smooth); left and right translations L_g(f) = g ∘ f, R_g(f) = f ∘ g.
Natural Riemannian metric choices:
 Choose a metric ⟨·, ·⟩_Id at the identity and propagate it to each point g using left (or right) translation: ⟨x, y⟩_g = ⟨DL_g⁻¹ x, DL_g⁻¹ y⟩_Id
 Practical computations use left (or right) translations: Exp_f(x) = f ∘ Exp_Id(DL_f⁻¹ x) and Log_f(g) = DL_f Log_Id(f⁻¹ ∘ g)

Example on 3D Rotations
The space of rotations SO(3):
 Manifold: RᵀR = Id and det(R) = +1
 Lie group: composition R₁ ∘ R₂ = R₁R₂, inversion R⁻¹ = Rᵀ
Metrics on SO(3): the space is compact, so there exists a bi-invariant metric; the left/right-invariant metric induced by the ambient space is ⟨X, Y⟩ = Tr(XᵀY).
Group exponential: one-parameter subgroups are the bi-invariant geodesics starting at Id, given by the matrix exponential and Rodrigues' formula: R = exp(X) and X = log(R).
Geodesics everywhere by left (or right) translation: Log_R(U) = R log(RᵀU) and Exp_R(X) = R exp(RᵀX).
Bi-invariant Riemannian distance: d(R, U) = ||log(RᵀU)|| = θ(RᵀU), the rotation angle of RᵀU.

General Non-Compact and Non-Commutative Case
No bi-invariant mean for 2D rigid body transformations. Metric at Identity:
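The SO(3) Exp/Log maps and the bi-invariant distance d(R, U) = ||log(RᵀU)|| from the slides above can be sketched numerically (a minimal pure-Python illustration with nested-list 3×3 matrices; near rotation angle π the log needs special handling, omitted here, and a real implementation would use a linear algebra library):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def exp_so3(w):
    """Rodrigues' formula: R = I + (sin t / t) K + ((1 - cos t) / t^2) K^2,
    with K = hat(w) and t = |w|."""
    t = math.sqrt(sum(c * c for c in w))
    K = [[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]]
    a, b = (1.0, 0.5) if t < 1e-12 else (math.sin(t) / t, (1 - math.cos(t)) / (t * t))
    K2 = mat_mul(K, K)
    I = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
    return [[I[i][j] + a * K[i][j] + b * K2[i][j] for j in range(3)]
            for i in range(3)]

def log_so3(R):
    """Inverse of Rodrigues: rotation vector w with |w| = rotation angle (angle < pi)."""
    c = max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1) / 2))
    t = math.acos(c)
    if t < 1e-12:
        return [0.0, 0.0, 0.0]
    s = t / (2 * math.sin(t))
    return [s * (R[2][1] - R[1][2]), s * (R[0][2] - R[2][0]), s * (R[1][0] - R[0][1])]

def dist(R, U):
    """Bi-invariant distance: the rotation angle of R^T U."""
    w = log_so3(mat_mul(transpose(R), U))
    return math.sqrt(sum(c * c for c in w))
```

For example, the rotation by 0.5 rad about the z-axis is at distance 0.5 from the identity, regardless of left or right translation of both arguments: this is the bi-invariance the talk builds on.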


Faculty of Science
Horizontal Dimensionality Reduction and Iterated Frame Bundle Development
Stefan Sommer (sommer@diku.dk), Department of Computer Science, University of Copenhagen
August 30, 2013, Slide 1/14

Dimensionality Reduction in Non-Linear Manifolds
• Aim: find subspaces with coordinates that approximate data in non-linear manifolds; not learning non-linear subspaces from data in linear Euclidean spaces (ISOMAP, LLE, etc.)
• Principal Geodesic Analysis (PGA, Fletcher et al., 2004) finds geodesic subspaces: geodesic rays originating from a manifold mean µ ∈ M; the non-linear data space is linearized to TµM
• Geodesic PCA (GPCA, Huckemann et al., 2010) finds principal geodesics: geodesics minimizing residual distances that pass through a principal mean
PGA performs its analysis relative to the data mean: data points on the non-linear manifold are projected to the tangent space TµM at the intrinsic mean µ, where Euclidean PCA is applied. What happens when µ is a poor zero-dimensional descriptor?

Curvature Skews Centered Analysis
[Figure: a bimodal distribution on S² with variance 0.5²; PGA estimates the variance as 1.07², while HCA (Horizontal Component Analysis) estimates it as 0.49²]

HCA Properties
• data-adapted coordinate system r : ℝ^D → M
• preserves distances orthogonal to lower-order components:
  ||x − y|| = d_M(r(x), r(y)) for x = (x¹, …, x^d, 0, …, 0) and y = (x¹, …, x^d, 0, …, 0, y^d̃, 0, …, 0), 1 ≤ d̃ < d
• intrinsic interpretation of covariance:
  cov(X^d, X^d̃) = ∫_{ℝ²} X^d X^d̃ p(X^d, X^d̃) d(X^d, X^d̃) = ∫_{γ^d} ∫_{γ^d̃} ±d_M(µ, r(X^d)) d_M(r(X^d), r(X^d̃)) p(r(X^d, X^d̃)) ds^d̃ ds^d
• orthogonal coordinates/subspaces
• coordinate-wise decorrelation with respect to a curvature-adapted measure

The Frame Bundle
• the manifold together with frames (bases) for the tangent spaces T_pM
• F(M) consists of pairs (p, u), p ∈ M, u a frame for T_pM
• curves in the horizontal part of F(M) correspond to curves in M together with parallel transport of frames

The Subspaces: Iterated Development
• manifolds in general provide no canonical generalization of affine subspaces
• SDEs are defined in the frame bundle using development of curves, w_t = ∫₀^t u_s⁻¹ ẋ_s ds, w_t ∈ ℝ^η, i.e. pull-back to Euclidean space using parallel transported frames u_t
• iterated development constructs subspaces of dimension > 1 (geodesic, polynomial, etc.)
• geodesic developments (multi-step Fermi coordinates) generalize geodesic subspaces Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 6/14 The Subspaces: Iterated Development • manifolds in general provides no canonical generalization of affine subspaces • SDEs are defined in the frame bundle using development of curves wt = t 0 u−1 s ˙xsds , wt ∈ Rη i.e. pull-back to Euclidean space using parallel transported frames ut • iterated development constructs subspaces of dimension > 1 (geodesic, polynomial, etc.) • geodesic developments (multi-step Fermi coordinates) generalize geodesic subspaces Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 6/14 The Subspaces: Iterated Development • manifolds in general provides no canonical generalization of affine subspaces • SDEs are defined in the frame bundle using development of curves wt = t 0 u−1 s ˙xsds , wt ∈ Rη i.e. pull-back to Euclidean space using parallel transported frames ut • iterated development constructs subspaces of dimension > 1 (geodesic, polynomial, etc.) • geodesic developments (multi-step Fermi coordinates) generalize geodesic subspaces Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 6/14 The Subspaces: Iterated Development • manifolds in general provides no canonical generalization of affine subspaces • SDEs are defined in the frame bundle using development of curves wt = t 0 u−1 s ˙xsds , wt ∈ Rη i.e. pull-back to Euclidean space using parallel transported frames ut • iterated development constructs subspaces of dimension > 1 (geodesic, polynomial, etc.) 
• geodesic developments (multi-step Fermi coordinates) generalize geodesic subspaces Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 6/14 The Subspaces: Iterated Development • manifolds in general provides no canonical generalization of affine subspaces • SDEs are defined in the frame bundle using development of curves wt = t 0 u−1 s ˙xsds , wt ∈ Rη i.e. pull-back to Euclidean space using parallel transported frames ut • iterated development constructs subspaces of dimension > 1 (geodesic, polynomial, etc.) • geodesic developments (multi-step Fermi coordinates) generalize geodesic subspaces Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 6/14 Iterated Development • V ⊂ Rη linear subspace, V = V1 ⊥ V2, η = dimM • f : V1 → F(M) smooth map (e.g. an immersion) • Df (v1 +tv2) development starting at f(v1) • vector fields W1 ,...,WdimV2 : V → V, columns of W • Euclidean integral curve ˙wt = W(wt )v2 • development Df,W (v1 +tv2) = (xt ,ut ) ∈ F(M) defined by ˙ut = ut ˙wt , xt = πF(M)(ut ) • immersion for small v = v1 +v2 if full V2 rank on W • Wi constant: geodesic development; Wi = Dei p polynomial submanifolds; etc. Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 7/14 Iterated Development • V ⊂ Rη linear subspace, V = V1 ⊥ V2, η = dimM • f : V1 → F(M) smooth map (e.g. 
an immersion) • Df (v1 +tv2) development starting at f(v1) • vector fields W1 ,...,WdimV2 : V → V, columns of W • Euclidean integral curve ˙wt = W(wt )v2 • development Df,W (v1 +tv2) = (xt ,ut ) ∈ F(M) defined by ˙ut = ut ˙wt , xt = πF(M)(ut ) • immersion for small v = v1 +v2 if full V2 rank on W • Wi constant: geodesic development; Wi = Dei p polynomial submanifolds; etc. Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 7/14 Iterated Development • V ⊂ Rη linear subspace, V = V1 ⊥ V2, η = dimM • f : V1 → F(M) smooth map (e.g. an immersion) • Df (v1 +tv2) development starting at f(v1) • vector fields W1 ,...,WdimV2 : V → V, columns of W • Euclidean integral curve ˙wt = W(wt )v2 • development Df,W (v1 +tv2) = (xt ,ut ) ∈ F(M) defined by ˙ut = ut ˙wt , xt = πF(M)(ut ) • immersion for small v = v1 +v2 if full V2 rank on W • Wi constant: geodesic development; Wi = Dei p polynomial submanifolds; etc. Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 7/14 Iterated Development • V ⊂ Rη linear subspace, V = V1 ⊥ V2, η = dimM • f : V1 → F(M) smooth map (e.g. an immersion) • Df (v1 +tv2) development starting at f(v1) • vector fields W1 ,...,WdimV2 : V → V, columns of W • Euclidean integral curve ˙wt = W(wt )v2 • development Df,W (v1 +tv2) = (xt ,ut ) ∈ F(M) defined by ˙ut = ut ˙wt , xt = πF(M)(ut ) • immersion for small v = v1 +v2 if full V2 rank on W • Wi constant: geodesic development; Wi = Dei p polynomial submanifolds; etc. Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 7/14 Iterated Development • V ⊂ Rη linear subspace, V = V1 ⊥ V2, η = dimM • f : V1 → F(M) smooth map (e.g. 
an immersion) • Df (v1 +tv2) development starting at f(v1) • vector fields W1 ,...,WdimV2 : V → V, columns of W • Euclidean integral curve ˙wt = W(wt )v2 • development Df,W (v1 +tv2) = (xt ,ut ) ∈ F(M) defined by ˙ut = ut ˙wt , xt = πF(M)(ut ) • immersion for small v = v1 +v2 if full V2 rank on W • Wi constant: geodesic development; Wi = Dei p polynomial submanifolds; etc. Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 7/14 Iterated Development • V ⊂ Rη linear subspace, V = V1 ⊥ V2, η = dimM • f : V1 → F(M) smooth map (e.g. an immersion) • Df (v1 +tv2) development starting at f(v1) • vector fields W1 ,...,WdimV2 : V → V, columns of W • Euclidean integral curve ˙wt = W(wt )v2 • development Df,W (v1 +tv2) = (xt ,ut ) ∈ F(M) defined by ˙ut = ut ˙wt , xt = πF(M)(ut ) • immersion for small v = v1 +v2 if full V2 rank on W • Wi constant: geodesic development; Wi = Dei p polynomial submanifolds; etc. Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 7/14 Iterated Development • V ⊂ Rη linear subspace, V = V1 ⊥ V2, η = dimM • f : V1 → F(M) smooth map (e.g. an immersion) • Df (v1 +tv2) development starting at f(v1) • vector fields W1 ,...,WdimV2 : V → V, columns of W • Euclidean integral curve ˙wt = W(wt )v2 • development Df,W (v1 +tv2) = (xt ,ut ) ∈ F(M) defined by ˙ut = ut ˙wt , xt = πF(M)(ut ) • immersion for small v = v1 +v2 if full V2 rank on W • Wi constant: geodesic development; Wi = Dei p polynomial submanifolds; etc. Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 7/14 Iterated Development • V ⊂ Rη linear subspace, V = V1 ⊥ V2, η = dimM • f : V1 → F(M) smooth map (e.g. 
an immersion) • Df (v1 +tv2) development starting at f(v1) • vector fields W1 ,...,WdimV2 : V → V, columns of W • Euclidean integral curve ˙wt = W(wt )v2 • development Df,W (v1 +tv2) = (xt ,ut ) ∈ F(M) defined by ˙ut = ut ˙wt , xt = πF(M)(ut ) • immersion for small v = v1 +v2 if full V2 rank on W • Wi constant: geodesic development; Wi = Dei p polynomial submanifolds; etc. Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 7/14 Horizontal Component Analysis • distances measured relative to lower order components • iterative definition of hd given hd−1: data points {xi} • curves: geodesics x hd t (xi ) passing points that are closest to xi on d −1st component hd−1 • projection: πhd (xi ) = argmint dM (x hd t (xi ),xi )2 • transport: derivatives ˙x hd 0 (xi ) connected in hd−1 by parallel transport • orthogonality: x hd t orthogonal to d −1 basis vectors transported horizontally in hd−1 • horizontal component hd : subspace containing curves x hd t (xi ) minimizing reshd−1 (hd ) = N ∑ i=1 dM (xi ,πhd (xi ))2 with d −1th coordinates fixed Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 8/14 Horizontal Component Analysis • distances measured relative to lower order components • iterative definition of hd given hd−1: data points {xi} • curves: geodesics x hd t (xi ) passing points that are closest to xi on d −1st component hd−1 • projection: πhd (xi ) = argmint dM (x hd t (xi ),xi )2 • transport: derivatives ˙x hd 0 (xi ) connected in hd−1 by parallel transport • orthogonality: x hd t orthogonal to d −1 basis vectors transported horizontally in hd−1 • horizontal component hd : subspace containing curves x hd t (xi ) minimizing reshd−1 (hd ) = N ∑ i=1 dM (xi ,πhd (xi ))2 with d −1th coordinates fixed Stefan Sommer (sommer@diku.dk) 
(Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 8/14 Horizontal Component Analysis • distances measured relative to lower order components • iterative definition of hd given hd−1: data points {xi} • curves: geodesics x hd t (xi ) passing points that are closest to xi on d −1st component hd−1 • projection: πhd (xi ) = argmint dM (x hd t (xi ),xi )2 • transport: derivatives ˙x hd 0 (xi ) connected in hd−1 by parallel transport • orthogonality: x hd t orthogonal to d −1 basis vectors transported horizontally in hd−1 • horizontal component hd : subspace containing curves x hd t (xi ) minimizing reshd−1 (hd ) = N ∑ i=1 dM (xi ,πhd (xi ))2 with d −1th coordinates fixed Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 8/14 Horizontal Component Analysis • distances measured relative to lower order components • iterative definition of hd given hd−1: data points {xi} • curves: geodesics x hd t (xi ) passing points that are closest to xi on d −1st component hd−1 • projection: πhd (xi ) = argmint dM (x hd t (xi ),xi )2 • transport: derivatives ˙x hd 0 (xi ) connected in hd−1 by parallel transport • orthogonality: x hd t orthogonal to d −1 basis vectors transported horizontally in hd−1 • horizontal component hd : subspace containing curves x hd t (xi ) minimizing reshd−1 (hd ) = N ∑ i=1 dM (xi ,πhd (xi ))2 with d −1th coordinates fixed Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 8/14 Horizontal Component Analysis • distances measured relative to lower order components • iterative definition of hd given hd−1: data points {xi} • curves: geodesics x hd t (xi ) passing points that are closest to xi on d −1st component hd−1 • projection: πhd (xi 
) = argmint dM (x hd t (xi ),xi )2 • transport: derivatives ˙x hd 0 (xi ) connected in hd−1 by parallel transport • orthogonality: x hd t orthogonal to d −1 basis vectors transported horizontally in hd−1 • horizontal component hd : subspace containing curves x hd t (xi ) minimizing reshd−1 (hd ) = N ∑ i=1 dM (xi ,πhd (xi ))2 with d −1th coordinates fixed Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 8/14 Horizontal Component Analysis • distances measured relative to lower order components • iterative definition of hd given hd−1: data points {xi} • curves: geodesics x hd t (xi ) passing points that are closest to xi on d −1st component hd−1 • projection: πhd (xi ) = argmint dM (x hd t (xi ),xi )2 • transport: derivatives ˙x hd 0 (xi ) connected in hd−1 by parallel transport • orthogonality: x hd t orthogonal to d −1 basis vectors transported horizontally in hd−1 • horizontal component hd : subspace containing curves x hd t (xi ) minimizing reshd−1 (hd ) = N ∑ i=1 dM (xi ,πhd (xi ))2 with d −1th coordinates fixed Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 8/14 Horizontal Component Analysis • distances measured relative to lower order components • iterative definition of hd given hd−1: data points {xi} • curves: geodesics x hd t (xi ) passing points that are closest to xi on d −1st component hd−1 • projection: πhd (xi ) = argmint dM (x hd t (xi ),xi )2 • transport: derivatives ˙x hd 0 (xi ) connected in hd−1 by parallel transport • orthogonality: x hd t orthogonal to d −1 basis vectors transported horizontally in hd−1 • horizontal component hd : subspace containing curves x hd t (xi ) minimizing reshd−1 (hd ) = N ∑ i=1 dM (xi ,πhd (xi ))2 with d −1th coordinates fixed Stefan Sommer (sommer@diku.dk) (Department of 
Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 8/14 Horizontal Component Analysis • distances measured relative to lower order components • iterative definition of hd given hd−1: data points {xi} • curves: geodesics x hd t (xi ) passing points that are closest to xi on d −1st component hd−1 • projection: πhd (xi ) = argmint dM (x hd t (xi ),xi )2 • transport: derivatives ˙x hd 0 (xi ) connected in hd−1 by parallel transport • orthogonality: x hd t orthogonal to d −1 basis vectors transported horizontally in hd−1 • horizontal component hd : subspace containing curves x hd t (xi ) minimizing reshd−1 (hd ) = N ∑ i=1 dM (xi ,πhd (xi ))2 with d −1th coordinates fixed Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 8/14 Horizontal Component Analysis • distances measured relative to lower order components • iterative definition of hd given hd−1: data points {xi} • curves: geodesics x hd t (xi ) passing points that are closest to xi on d −1st component hd−1 • projection: πhd (xi ) = argmint dM (x hd t (xi ),xi )2 • transport: derivatives ˙x hd 0 (xi ) connected in hd−1 by parallel transport • orthogonality: x hd t orthogonal to d −1 basis vectors transported horizontally in hd−1 • horizontal component hd : subspace containing curves x hd t (xi ) minimizing reshd−1 (hd ) = N ∑ i=1 dM (xi ,πhd (xi ))2 with d −1th coordinates fixed Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 8/14 1 find geodesic h1 with d dt h1|t=0 = u1 that 1 minimizes res(h1) = ∑N i=1 dM (xi ,πh1 (xi ))2 2 find u2 ⊥ u1 such that x h2 t (xi) are geodesics • that pass πh1 (xi ) • with derivatives ˙x h2 0 (xi ) equal trans. 
Ph1 u2 • that minimize resh1 (h2) = ∑N i=1 dM (xi ,πh2 (xi ))2 3 find u3 ⊥ {u1,u2} such that x h3 t (xi) are geodesics • that pass πh2 (xi ) • with derivatives ˙x h3 0 (xi ) par. transp. in h2 • that minimize resh2 (h3) = ∑N i=1 dM (xi ,πh3 (xi ))2 4 and so on . . . Parallel Transport and Local Analysis Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 9/14 Sample on S2 , horz.: uniform, vert.: normal −3 −2 −1 0 1 2 3 −3 −2 −1 0 1 2 3 PGA −3 −2 −1 0 1 2 3 −3 −2 −1 0 1 2 3 HCA Parallel trans- port along first component: Conditional Congruency • data/geometry congruency: data can be approximated by geodesics (Huckemann et al.) • one-dimensional concept • conditional congruency: X ˜d |X1 ,...,Xd is congruent • HCA defines a data-adapted coordinate system that provides a conditionally congruent splitting Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 10/14 Conditional Congruency • data/geometry congruency: data can be approximated by geodesics (Huckemann et al.) • one-dimensional concept • conditional congruency: X ˜d |X1 ,...,Xd is congruent • HCA defines a data-adapted coordinate system that provides a conditionally congruent splitting Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 10/14 Conditional Congruency • data/geometry congruency: data can be approximated by geodesics (Huckemann et al.) 
• one-dimensional concept • conditional congruency: X ˜d |X1 ,...,Xd is congruent • HCA defines a data-adapted coordinate system that provides a conditionally congruent splitting Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 10/14 Conditional Congruency • data/geometry congruency: data can be approximated by geodesics (Huckemann et al.) • one-dimensional concept • conditional congruency: X ˜d |X1 ,...,Xd is congruent • HCA defines a data-adapted coordinate system that provides a conditionally congruent splitting Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 10/14 Components May Flip Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 11/14 x2 x3 x4 (p) x1 = 0 slice x4 x3 x1 (q) x2 = 0 slice −1 0 1 −1.5 −1 −0.5 0 0.5 1 1.5 −1.5 −1 −0.5 0 0.5 1 1.5 x3 x 2 x1 (r) HCA visualization Figure: 3-dim manifold 2x2 1 −2x2 2 +x2 3 +x2 4 = 1 in R4 with samples from two Gaussians with largest variance in the x2 direction (0.62 vs. 0.42 ). (a,b) Slices x1 = 0 and x2 = 0. (c) The second HCA horizontal component has largest x2 component (blue vector) whereas the second PGA component has largest x1 component (red vector). 
Corpora Callosa Corpus callosum variation: 3σ1 along h1, 3σ2 along h2 Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 12/14 Corpora Callosa Corpus callosum variation: 3σ1 along h1, 3σ2 along h2 (Loading corporacallosa.mp4) Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 12/14 Summary • Horizontal Component Analysis performs PCA-like dimensionality reduction in Riemannian manifolds • subspaces constructed from iterated frame bundle development • the implied coordinate system • is data adapted • preserves certain pairwise-distances and orthogonality • provides covariance interpretation • decorrelates curvature-adapted measure • provides conditionally congruent components • handles multi-modal distribution with spread over large-curvature areas Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 13/14 References - Sommer: Horizontal Dimensionality Reduction and Iterated Frame Bundle Development, GSI 2013. - Sommer et al.: Optimization over Geodesics for Exact Principal Geodesic Analysis, ACOM, in press. - Sommer et al.: Manifold Valued Statistics, Exact Principal Geodesic Analysis and the Effect of Linear Approximations, ECCV 2010. http://github.com/nefan/smanifold Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 14/14
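The development construction above (integrating $\dot u_t = u_t \dot w_t$ and projecting to $M$) can be illustrated numerically. The following is a minimal sketch, not the author's smanifold code: it rolls a planar curve onto $S^2$ with a first-order integrator and approximates parallel transport of the frame by projecting it onto the new tangent space and re-orthonormalizing; the function name and step scheme are illustrative assumptions.

```python
import numpy as np

def develop_on_sphere(p, u, w):
    """Roll ("develop") a Euclidean curve w (shape (T, 2)) onto the
    sphere S^2 in R^3, starting at p with orthonormal tangent frame u
    (3x2).  Integrates x' = u_t w'_t with a first-order step and
    approximates parallel transport of the frame by projection onto
    the new tangent space followed by Gram-Schmidt."""
    x = p / np.linalg.norm(p)
    f1, f2 = u[:, 0].copy(), u[:, 1].copy()
    path = [x]
    for dw in np.diff(w, axis=0):
        step = f1 * dw[0] + f2 * dw[1]   # frame applied to the increment
        x = x + step
        x = x / np.linalg.norm(x)        # project back onto the sphere
        # approximate parallel transport of the frame to T_x S^2
        f1 = f1 - x * (x @ f1)
        f1 = f1 / np.linalg.norm(f1)
        f2 = f2 - x * (x @ f2) - f1 * (f1 @ f2)
        f2 = f2 / np.linalg.norm(f2)
        path.append(x)
    return np.array(path)

# Developing a straight Euclidean line yields a geodesic: a segment of
# length pi/2 from the north pole along e1 should end near (1, 0, 0).
p = np.array([0.0, 0.0, 1.0])
u = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
t = np.linspace(0.0, np.pi / 2, 2001)
w = np.stack([t, np.zeros_like(t)], axis=1)
path = develop_on_sphere(p, u, w)
print(path[-1])
```

Developing a straight line produces a geodesic, which is the one-dimensional special case of the geodesic subspaces above.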

License: Creative Commons — none (all rights reserved)

Parallel Transport with the Pole Ladder: Application to Deformations of Time Series of Images
Marco Lorenzi, Xavier Pennec — Asclepios research group, INRIA Sophia Antipolis, France — GSI 2013

Paradigms of deformation-based morphometry
- Cross-sectional (subject A at $t_1$ vs. subject B at $t_2$): different topologies, large deformations, biological interpretation is not obvious.
- Longitudinal (within-subject): subtle changes, biologically meaningful.

Combining longitudinal and cross-sectional
- Standard TBM approach (Jacobian determinant analysis): focuses on volume changes only; scalar analysis (statistical power), but no modeling.
- Transporting longitudinal trajectories from subjects to a template: vector transport is not uniquely defined, and theoretical insights are missing.

Diffeomorphic registration
- Stationary Velocity Field (SVF) setting [Arsigny 2006]: $v(x)$ is a stationary velocity field; the Lie group $\mathrm{Exp}(v)$ is a geodesic w.r.t. the Cartan connections (non-metric); the geodesic is defined by the SVF.
- LDDMM setting [Trouvé 1998]: $v(x,t)$ is a time-varying velocity field; the Riemannian $\exp_{id}(v)$ is a metric geodesic w.r.t. the Levi-Civita connection; the geodesic is defined by the initial momentum.
- Transporting trajectories amounts to parallel transport of the initial tangent vectors.

The Schild's ladder: from relativity [Schild, 1970] to image processing. Intuitive application to images via inter-subject registration [Lorenzi et al., IPMI 2011]. For a time series $t_0, t_1, t_2, t_3$, however, it requires evaluating multiple geodesics for each time-point, and parallel transport is not consistently computed among time-points.

The pole ladder: an optimized Schild's ladder
- The construction is carried along the inter-subject geodesic $C$ itself, transporting $A$ through $-A'$ to $A'$.
- Equivalence to Schild's ladder: for a symmetric connection, $B$ is the parallel transport of $A$; the construction is locally linear, so the pole ladder is a Schild's ladder.
- It minimizes the number of geodesics required, and parallel transport is consistently computed among time-points.

Pole ladder in the SVF setting [Lorenzi et al., IPMI 2011]
- $B \approx A + [v, A] + \tfrac{1}{2}[v, [v, A]]$ by the Baker-Campbell-Hausdorff (BCH) formula (Bossa 2007), computed iteratively with steps $v/n$.

Synthetic example
- Ventricles expansion built from a real time series (baseline, time 1, ..., time 4).
- Comparison with Schild's ladder, vector reorientation, conjugate action, and scalar transport.
- Transport consistency: vector transport of the deformation followed by a scalar summary (log-Jacobian determinant, ...) vs. scalar transport of the scalar summary.
- Quantitative analysis: the pole ladder compares well with respect to scalar transport; Schild's ladder leads to high variability.

Application to Alzheimer's disease
- Group-wise statistics and extrapolation of longitudinal trajectories.
- Longitudinal changes in Alzheimer's disease (141 subjects, ADNI data): expansion/contraction maps, Student's t statistic.
- Comparison with standard TBM (pole ladder vs. scalar transport): consistent results, equivalent statistical power.

Conclusions
- A general framework for the parallel transport of deformations (it does not necessarily require the choice of a metric).
- Minimal number of computations for the transport of time series of deformations; efficient solution in the SVF setting.
- Consistent statistical results; multivariate group-wise analysis of longitudinal changes.

Perspectives
- Further investigation of numerical issues (step size).
- Comparison with other numerical methods for parallel transport in diffeomorphic registration (Younes, 2007).
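The BCH transport formula above, $B \approx A + [v,A] + \tfrac12[v,[v,A]]$, is the second-order truncation of $e^{\mathrm{ad}_v}A$. In a matrix Lie algebra the identity $e^{\mathrm{ad}_v}A = \exp(v)\,A\,\exp(-v)$ holds exactly, so the truncation can be checked numerically. The sketch below uses random matrices for illustration; it is not the authors' image-registration implementation, and the helper names are assumptions.

```python
import numpy as np

def bracket(x, y):
    """Matrix commutator [x, y] = xy - yx."""
    return x @ y - y @ x

def bch_transport(v, a):
    """Second-order BCH approximation of the transport of a along
    exp(v): a + [v, a] + 1/2 [v, [v, a]]."""
    return a + bracket(v, a) + 0.5 * bracket(v, bracket(v, a))

def expm(m, terms=30):
    """Matrix exponential by truncated power series (fine for small m)."""
    out, term = np.eye(m.shape[0]), np.eye(m.shape[0])
    for k in range(1, terms):
        term = term @ m / k
        out = out + term
    return out

# On a matrix group the exact counterpart is the adjoint action
# Ad_{exp(v)} a = exp(v) a exp(-v) = e^{ad_v} a; the BCH formula is
# its expansion truncated after the second bracket, so the residual
# is third order in |v|.
rng = np.random.default_rng(0)
v = 0.02 * rng.standard_normal((3, 3))
a = rng.standard_normal((3, 3))
exact = expm(v) @ a @ expm(-v)
approx = bch_transport(v, a)
print(np.abs(exact - approx).max())
```

Splitting $v$ into $n$ sub-steps, as on the slides, shrinks the per-step truncation error in the same third-order fashion.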

Keynote speech 1 (Yann Ollivier)


Objective Improvement in Information-Geometric Optimization Youhei Akimoto Project TAO – INRIA Saclay LRI, Bât. 490, Univ. Paris-Sud 91405 Orsay, France Youhei.Akimoto@lri.fr Yann Ollivier CNRS & Univ. Paris-Sud LRI, Bât. 490 91405 Orsay, France yann.ollivier@lri.fr ABSTRACT Information-Geometric Optimization (IGO) is a unified framework of stochastic algorithms for optimization problems. Given a family of probability distributions, IGO turns the original optimization problem into a new maximization problem on the parameter space of the probability distributions. IGO updates the parameter of the probability distribution along the natural gradient, taken with respect to the Fisher metric on the parameter manifold, aiming at maximizing an adaptive transform of the objective function. IGO recovers several known algorithms as particular instances: for the family of Bernoulli distributions IGO recovers PBIL, for the family of Gaussian distributions the pure rank-
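For the Bernoulli family mentioned in the abstract, the Fisher information $1/(\theta(1-\theta))$ cancels against the ordinary gradient of $\log p(x\mid\theta)$, so the natural gradient reduces to $x - \theta$ and the IGO update becomes a PBIL-style recombination of selected samples. The following is a hedged sketch under that observation; the function name, quantile weighting, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def igo_bernoulli(f, dim, steps=200, pop=20, lr=0.1, seed=0):
    """Illustrative IGO sketch on the Bernoulli family.  For
    Bernoulli(theta) the natural gradient of log p(x|theta) is x -
    theta, so the update recombines the best samples' bits (PBIL-like)."""
    rng = np.random.default_rng(seed)
    theta = np.full(dim, 0.5)
    for _ in range(steps):
        x = (rng.random((pop, dim)) < theta).astype(float)
        order = np.argsort([-f(xi) for xi in x])   # best samples first
        w = np.zeros(pop)
        w[order[: pop // 4]] = 1.0 / (pop // 4)    # top-quartile weights
        theta += lr * (w @ (x - theta))            # natural-gradient step
        theta = theta.clip(0.05, 0.95)             # keep some exploration
    return theta

# Maximizing onemax drives every bit probability toward the upper bound.
onemax = lambda x: x.sum()
theta = igo_bernoulli(onemax, dim=20)
print(theta.min())
```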

ORAL SESSION 2 Deformations in Shape Spaces (Alain Trouvé)


On the Geometry and the Deformation of Shapes Represented by Piecewise Continuous Bézier Curves, with Application to Shape Optimization
Olivier Ruatta — XLIM, UMR 7252, Université de Limoges, CNRS — Geometric Science of Information 2013 (32 slides)

Motivation: shape optimization problems
Let $\Omega \subset \mathcal{P}(\mathbb{R}^2)$ be such that for each $\omega \in \Omega$ the boundary $\partial\omega$ of the region is a regular curve (here, piecewise continuous). Let $F : \Omega \to \mathbb{R}^+$ be a positive real-valued function.

Problem. Find $\omega_0 \in \Omega$ such that $F(\omega_0) \le F(\omega)$ for all $\omega \in \Omega$.

Very often, computing $F(\omega)$ requires solving a system of PDEs. Two problems arise: the cost of computing $F(\omega)$ and its differentiability (and the computation of its derivatives); and the compatibility between the space of shapes and the discretization of $\mathbb{R}^2$ used for the PDE system.

Shape optimization methods
- Geometric gradient techniques (level sets, ...): compute how to deform the boundary of the shape and try to deform it in a coherent way [Hadamard, Pierre, Henrot, Allaire, Jouve, ...].
- Relaxation methods (SIMP, ...): compute a density that represents the support of the shape [Bendsøe, Sigmund, ...].
- Topological gradient: generally for PDEs, remove or add finite elements containing the shape [Masmoudi, Sokolowski, ...].
- Parametric optimization: reduce the shapes to a small space controlled by a few parameters and treat the problem as a parametric optimization problem [Elyssa (Dido in Latin, 4th century BC), ..., Goldberg, ...].

Our approach: combine the best aspects of the first (level sets) and the last (parametric) approaches.

Bézier curves
Let $P_0, \dots, P_d \in \mathbb{R}^2$; we define a family of curves parametrized over $[0, 1]$:
- $B([P_0], t) := P_0$ (degree-0 Bézier curve);
- $B([P_0, P_1], t) := (1-t)\,B([P_0], t) + t\,B([P_1], t) = (1-t)P_0 + tP_1$ (degree-1 Bézier curve);
- ...
- $B([P_0, \dots, P_d], t) := (1-t)\,B([P_0, \dots, P_{d-1}], t) + t\,B([P_1, \dots, P_d], t)$ (degree-$d$ Bézier curve).

These are polynomial curves. The points $P_0, \dots, P_d \in \mathbb{R}^2$ are called the control polygon of the curve defined by $B([P_0, \dots, P_d], t)$.

Bernstein polynomials
Definition. Let $d$ be a positive integer; for all $i \in \{0, \dots, d\}$ we define $b_{i,d}(t) = \binom{d}{i} (1-t)^{d-i} t^i$. The polynomials $b_{0,d}, \dots, b_{d,d}$ are called the Bernstein polynomials of degree $d$.

Proposition. The Bernstein polynomials of degree $d$ form a basis of the vector space of polynomials of degree at most $d$.

Bernstein polynomials and Bézier curves
Theorem. $B([P_0, \dots, P_d], t) = \sum_{i=0}^{d} P_i\, b_{i,d}(t)$.

Corollary. Every parametrized curve with a polynomial parametrization of degree at most $d$ can be represented as a Bézier curve of degree at most $d$.

Definition. We denote by $\mathcal{B}_d$ the space of all Bézier curves of degree at most $d$.

Structure of $\mathcal{B}_d$
We write $E = \mathbb{R}^2$ and consider the map $\Psi_d : E^{d+1} \to \mathcal{B}_d$ defined by $\Psi_d(P_0, \dots, P_d) = B([P_0, \dots, P_d], t)$.

Proposition. $\Psi_d$ is a linear isomorphism between $E^{d+1}$ and $\mathcal{B}_d$.

Let $t = (t_0 = 0 < t_1 < \dots < t_d = 1)$ be a subdivision of $[0, 1]$. We define the sampling map $S_{t,d} : \Gamma \in \mathcal{B}_d \mapsto (\Gamma(t_0), \dots, \Gamma(t_d)) \in E^{d+1}$.

Proposition. $S_{t,d}$ is a linear isomorphism between $\mathcal{B}_d$ and $E^{d+1}$.
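The recursive definition of a Bézier curve and the Bernstein-basis theorem give two ways to evaluate the same curve. A small sketch (illustrative code, not from the talk) checking that they agree:

```python
import numpy as np
from math import comb

def de_casteljau(points, t):
    """Evaluate B([P0,...,Pd], t) by the recursive definition:
    repeated linear interpolation of the control polygon."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bernstein_eval(points, t):
    """Evaluate the same curve via the Bernstein basis:
    sum_i P_i b_{i,d}(t)."""
    pts = np.asarray(points, dtype=float)
    d = len(pts) - 1
    b = np.array([comb(d, i) * (1 - t) ** (d - i) * t ** i
                  for i in range(d + 1)])
    return b @ pts

ctrl = [(0, 0), (1, 2), (3, 3), (4, 0)]
p1 = de_casteljau(ctrl, 0.4)
p2 = bernstein_eval(ctrl, 0.4)
print(p1, p2)  # identical up to rounding
```

Both evaluations also interpolate the endpoints, since $b_{0,d}(0) = b_{d,d}(1) = 1$.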
Evaluation-interpolation. Let $t = (t_0 = 0 \le t_1 \le \cdots \le t_d = 1)$ be a subdivision of $[0, 1]$ and let $P_0, \ldots, P_d \in E$. Define the collocation matrix
$$B_{t,d} := \begin{pmatrix} b_{0,d}(t_0) & \cdots & b_{d,d}(t_0) \\ \vdots & & \vdots \\ b_{0,d}(t_d) & \cdots & b_{d,d}(t_d) \end{pmatrix}.$$
Proposition (evaluation).
$$B_{t,d}\begin{pmatrix} P_0^T \\ \vdots \\ P_d^T \end{pmatrix} = \begin{pmatrix} B([P_0, \ldots, P_d], t_0)^T \\ \vdots \\ B([P_0, \ldots, P_d], t_d)^T \end{pmatrix}.$$

Multi-evaluation. [Figure: for $t = (0, 1/3, 2/3, 1)$, the points $M_0, \ldots, M_3$ with $(M_0^T, \ldots, M_3^T)^T = B_{t,3}(P_0^T, \ldots, P_3^T)^T$ on the cubic curve with control polygon $P_0, \ldots, P_3$.]

Evaluation-interpolation. Let $t = (t_0 = 0 \le t_1 \le \cdots \le t_d = 1)$ be a subdivision of $[0, 1]$ and let $M_0, \ldots, M_d \in E$.
Problem. Find $P_0, \ldots, P_d \in E$ such that $B([P_0, \ldots, P_d], t_i) = M_i$ for all i ∈ {0, ..., d}.
Proposition (interpolation). The points defined by
$$\begin{pmatrix} P_0^T \\ \vdots \\ P_d^T \end{pmatrix} = B_{t,d}^{-1}\begin{pmatrix} M_0^T \\ \vdots \\ M_d^T \end{pmatrix}$$
solve the problem.

Summary 1. We have three spaces: $\mathcal{P}_d \cong E^{d+1}$, the vector space of control polygons; $\mathcal{S}_{t,d} \cong E^{d+1}$, the vector space of samplings of Bézier curves associated with a subdivision t; and $\mathcal{B}_d$, the vector space of degree-d Bézier parametrizations.
Proposition. The following diagram of isomorphisms commutes: $\Psi_d : \mathcal{P}_d \to \mathcal{B}_d$, $E_{t,d} : \mathcal{B}_d \to \mathcal{S}_{t,d}$ and $B_{t,d} : \mathcal{P}_d \to \mathcal{S}_{t,d}$, i.e. $E_{t,d} \circ \Psi_d = B_{t,d}$.

Deformation problem. Let $\Gamma(t) := B([P_0, \ldots, P_d], t)$ be a degree-d Bézier curve such that $E_{t,d}(\Gamma) = M = (M_0^T, \ldots, M_d^T)^T$ and let $\delta M := (\delta M_0^T, \ldots, \delta M_d^T)^T \in T_M\mathcal{S}_{t,d}$. We consider the following problem:
Problem (deformation problem). Denoting $P = (P_0^T, \ldots, P_d^T)^T$, find $\delta P \in T_P\mathcal{P}_d$ such that $\Lambda(t) := B([P_0 + \delta P_0, \ldots, P_d + \delta P_d], t)$ satisfies $\Lambda(t_i) = M_i + \delta M_i$ for all i ∈ {0, ..., d}.
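The evaluation and interpolation propositions can be sketched with the collocation matrix in NumPy (the subdivision and control points are illustrative):

```python
import numpy as np
from math import comb

def bernstein_matrix(ts):
    """Collocation matrix B_{t,d}: entry (k, i) = b_{i,d}(t_k), with d = len(ts) - 1."""
    d = len(ts) - 1
    return np.array([[comb(d, i) * (1 - t) ** (d - i) * t ** i
                      for i in range(d + 1)] for t in ts])

ts = np.linspace(0.0, 1.0, 4)           # subdivision t_0 = 0 < t_1 < t_2 < t_3 = 1
B = bernstein_matrix(ts)
P = np.array([[0, 0], [1, 2], [3, 3], [4, 0]], dtype=float)

M = B @ P                               # evaluation: samples of the curve at the t_k
P_rec = np.linalg.solve(B, M)           # interpolation: P = B^{-1} M
print(np.allclose(P_rec, P))            # True
```

Solving the linear system rather than inverting $B_{t,d}$ explicitly is the usual numerically safer choice.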
Deformation problem. [Figure: control points $P_0, \ldots, P_3$, sampled points $M_0, \ldots, M_3$, and prescribed displacements $\delta M_0, \ldots, \delta M_3$.]

Deformation curve. Proposition (deformation polygon). Taking $\delta P = B_{t,d}^{-1}\,\delta M$, the curve $\Psi_d(P + \delta P)$ is a solution of the deformation problem. $\delta P \in T_P\mathcal{P}_d$ is called the deformation polygon and $\Psi_d(\delta P) \in T_{B([P_0,\ldots,P_d],t)}\mathcal{B}_d$ is called the deformation curve.

Piecewise Bézier curves. Let $P_1 := (P_{1,0}, \ldots, P_{1,d}) \in \mathcal{P}_d$, $P_2 := (P_{2,0}, \ldots, P_{2,d}) \in \mathcal{P}_d$, ..., $P_N := (P_{N,0}, \ldots, P_{N,d}) \in \mathcal{P}_d$ be such that $P_{1,d} = P_{2,0}$, ..., $P_{N-1,d} = P_{N,0}$. We define the parametrization
$$B((P_1, \ldots, P_N), t) = \begin{cases} B([P_{1,0}, \ldots, P_{1,d}], Nt) & \text{if } t \in [0, 1/N[ \\ B([P_{2,0}, \ldots, P_{2,d}], Nt - 1) & \text{if } t \in [1/N, 2/N[ \\ \quad\vdots \\ B([P_{N,0}, \ldots, P_{N,d}], Nt - (N-1)) & \text{if } t \in [\tfrac{N-1}{N}, 1] \end{cases}$$
We denote $\Psi_{N,d}(P_1, \ldots, P_N) := B((P_1, \ldots, P_N), t)$. This is a continuous curve joining $P_{1,0}$ to $P_{N,d}$, and the curve is a loop if $P_{1,0} = P_{N,d}$.

Piecewise Bézier curves. [Figure: two cubic pieces with control polygons $P_{1,0}, \ldots, P_{1,3}$ and $P_{2,0}, \ldots, P_{2,3}$, joined at $P_{1,3} = P_{2,0}$.]

Vector spaces of piecewise Bézier curves. We define:
- the vector space $\mathcal{P}_{N,d} = \{(P_1, \ldots, P_N) \mid P_{1,d} = P_{2,0}, \ldots, P_{N-1,d} = P_{N,0}\} \subset \mathcal{P}_d^N$;
- the vector space $\mathcal{L}_{N,d} = \{(P_1, \ldots, P_N) \mid P_{1,d} = P_{2,0}, \ldots, P_{N-1,d} = P_{N,0},\ P_{1,0} = P_{N,d}\} \subset \mathcal{P}_d^N$;
- the vector space of piecewise Bézier curves (PBC) $\mathcal{B}_{N,d} = \{B((P_1, \ldots, P_N), t) \mid (P_1, \ldots, P_N) \in \mathcal{P}_{N,d}\}$;
- the vector space of piecewise Bézier loops (PBL) $\mathcal{B}^c_{N,d} = \{B((P_1, \ldots, P_N), t) \mid (P_1, \ldots, P_N) \in \mathcal{L}_{N,d}\}$.

Sampling piecewise Bézier curves. Let $t = (t_{1,0} := 0,\ t_{1,1} := \tfrac{1}{Nd},\ \ldots,\ t_{1,d} := \tfrac{1}{N},\ t_{2,0} = \tfrac{1}{N},\ \ldots,\ t_{N-1,d} := \tfrac{N-1}{N},\ t_{N,0} := \tfrac{N-1}{N},\ \ldots,\ t_{N,d} := 1)$ be a multi-regular subdivision and denote $t_i := (t_{i,0}, \ldots, t_{i,d})$. We define the linear map
$$E_{t,N,d} : \lambda(t) \in \mathcal{B}_{N,d} \mapsto (\lambda(t_{1,0}), \ldots, \lambda(t_{N,d})) \in \mathcal{S}_{t,N,d} \subset \mathcal{S}_d^N.$$
In the same way we define
$$E^c_{t,N,d} : \lambda(t) \in \mathcal{B}^c_{N,d} \mapsto (\lambda(t_{1,0}), \ldots, \lambda(t_{N,d})) \in \mathcal{S}^c_{t,N,d} \subset \mathcal{S}_d^N.$$
Finally, we define
$$B_{t,N,d} : (P_{1,0}, \ldots, P_{N,d}) \in \mathcal{P}_{N,d} \mapsto (B_{t_1,d} \times \cdots \times B_{t_N,d})\begin{pmatrix} P_{1,0}^T \\ \vdots \\ P_{N,d}^T \end{pmatrix} \in \mathcal{S}_{t,N,d}.$$

Summary 2. Proposition. The following diagram of isomorphisms commutes: $\Psi_{N,d} : \mathcal{P}_{N,d} \to \mathcal{B}_{N,d}$, $E_{t,N,d} : \mathcal{B}_{N,d} \to \mathcal{S}_{t,N,d}$ and $B_{t,N,d} : \mathcal{P}_{N,d} \to \mathcal{S}_{t,N,d}$, i.e. $E_{t,N,d} \circ \Psi_{N,d} = B_{t,N,d}$.
Remark. $B_{t,N,d}^{-1} = B_{t_1,d}^{-1} \times \cdots \times B_{t_N,d}^{-1}$.

Deformation problem for PBC. Let $\Gamma(t) \in \mathcal{B}_{N,d}$ be such that $E_{t,N,d}(\Gamma) = M := (M_{1,0}^T, \ldots, M_{N,d}^T)^T \in \mathcal{S}_{t,N,d}$ and let $\delta M = (\delta M_{1,0}^T, \ldots, \delta M_{N,d}^T)^T \in T_M\mathcal{S}_{t,N,d}$. We consider the following problem:
Problem (deformation problem for PBC). Denoting $P = (P_{1,0}^T, \ldots, P_{N,d}^T)^T$, find $\delta P \in T_P\mathcal{P}_{N,d}$ such that $\Lambda(t) := B((P_1 + \delta P_1, \ldots, P_N + \delta P_N), t)$ satisfies $\Lambda(t_{i,j}) = M_{i,j} + \delta M_{i,j}$ for all i ∈ {1, ..., N} and j ∈ {0, ..., d}.

Deformation piecewise Bézier curve. Proposition (deformation polygons). Taking $\delta P = B_{t,N,d}^{-1}\,\delta M$, the curve $\Psi_{N,d}(P + \delta P)$ is a solution of the deformation problem for PBC. $\delta P \in T_P\mathcal{P}_{N,d}$ is called the deformation polygons and $\Psi_{N,d}(\delta P) \in T_{B((P_1,\ldots,P_N),t)}\mathcal{B}_{N,d}$ is called the deformation piecewise Bézier curve.

Back to shape optimization. Let ω ⊂ E be such that ∂ω is a piecewise continuous curve and $F : \mathcal{P}(E) \to \mathbb{R}^+$ the objective functional. The geometric gradient $\nabla F(\omega) : M \in \partial\omega \mapsto \nabla F(\omega)(M) \in T_M E$ gives a perturbation of each point of the boundary that decreases the objective functional.
[Figure: the shape ω with boundary ∂ω, the functional value F(ω) ∈ R⁺, and the geometric gradient vector $\nabla F(\omega)(M)$ at a boundary point M.]

Basic idea of the approach. The space of admissible shapes is $\Omega_{N,d} := \{\omega \in \mathcal{P}(E) \mid \partial\omega \in \mathcal{B}^c_{N,d}\}$; let ω ∈ Ω_{N,d} be such that $\partial\omega = B((P_1, \ldots, P_N), t)$. Let $M = (M_{1,0}^T, \ldots, M_{N,d}^T)^T = E_{t,N,d}(B((P_1, \ldots, P_N), t))$. To obtain a better shape we compute
$$\delta M = \begin{pmatrix} \nabla F(\omega)(M_{1,0})^T \\ \vdots \\ \nabla F(\omega)(M_{N,d})^T \end{pmatrix}.$$
Then we compute $\delta P = B_{t,N,d}^{-1}\,\delta M$ and let $\lambda(t) = B((P + \delta P), t)$. We have:
Proposition. $\lambda(t_{i,j}) = M_{i,j} + \nabla F(\omega)(M_{i,j})$ for all i ∈ {1, ..., N} and j ∈ {0, ..., d}.

Basic theoretical contribution. Let $\Omega = \{\omega \in \mathcal{P}(E) \mid \partial\omega \in C^0_p([0,1], E)\}$ and let $F : \Omega \to \mathbb{R}^+$ be a smooth function such that $\nabla F(\omega) : \partial\omega \to TE$ is everywhere well defined. To every $\gamma \in \mathcal{B}_{N,d}$ we associate "the" shape $\bar\gamma$ such that $\partial\bar\gamma = \gamma$.
Proposition. For all integers N and d and every compatible subdivision t, we associate to F a vector field $V_F : \mathcal{B}_{N,d} \to T\mathcal{B}_{N,d}$ by
$$B((P), t) \mapsto \Psi_{N,d}\left(B_{t,N,d}^{-1}\begin{pmatrix} (\nabla F(\overline{B(P,t)})(B((P), t_{1,0})))^T \\ \vdots \\ (\nabla F(\overline{B(P,t)})(B((P), t_{N,d})))^T \end{pmatrix}\right).$$

Optimal shapes and fixed points. Proposition. Let ω ∈ Ω be such that $\nabla F(\omega) \equiv 0$. Then, for all N and d and every compatible subdivision t, there is $\gamma \in \mathcal{B}_{N,d}$ satisfying $V_F(\gamma) = 0$. In other words, every optimum of F induces at least one fixed point of $V_F$ on $\mathcal{B}_{N,d}$.

Meta-algorithm for shape optimization.
Input: an initial shape ω s.t. $\partial\omega = B((P), t) \in \mathcal{B}_{N,d}$.
Output: the control polygon P of the boundary of a local minimum of F.
λ ← B((P), t)
while criterion not satisfied do
  δP ← $B_{t,N,d}^{-1}\big(\nabla F(\bar\lambda)(E_{t,N,d}(\lambda))\big)$
  P ← P + δP
  λ ← $\Psi_{N,d}(P)$
end while

Snake-like algorithm for omnidirectional image segmentation. Image segmentation can be interpreted as a shape optimization problem using the "snake" approach. The geometric gradient is built from: a "balloon" force making the contour expand; and the gradient of the intensity of the image (vanishing at the contours). We use a classical approach: a Canny filter to detect contours. This problem is used to detect free space for an autonomous robot with a catadioptric sensor.

Snake-like algorithm for classical image segmentation. Joint work with Ouiddad Labbani-I. and Pauline Merveilleux-O. of Université de Picardie - Jules Verne.

Snake-like algorithm for omnidirectional image segmentation. Joint work with Ouiddad Labbani-I. and Pauline Merveilleux-O. of Université de Picardie - Jules Verne.
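The meta-algorithm can be sketched numerically for a single Bézier piece (N = 1). The descent field here is a hypothetical stand-in for the geometric gradient (it pulls each boundary sample toward the unit circle); in a real application $\nabla F$ would come from a PDE solve:

```python
import numpy as np
from math import comb

def bernstein_matrix(ts):
    d = len(ts) - 1
    return np.array([[comb(d, i) * (1 - t) ** (d - i) * t ** i
                      for i in range(d + 1)] for t in ts])

def descent_direction(M):
    # Hypothetical -grad F: push each sample toward the circle |x| = 1.
    r = np.linalg.norm(M, axis=1, keepdims=True)
    return (1.0 - r) * M / r

d = 3
ts = np.linspace(0.0, 1.0, d + 1)
B = bernstein_matrix(ts)
Binv = np.linalg.inv(B)
P = np.array([[2.0, 0.0], [2.0, 2.0], [0.0, 2.0], [0.0, 3.0]])

for _ in range(50):                    # "while criterion not satisfied do"
    M = B @ P                          # E_{t,d}: sample the current curve
    dM = 0.5 * descent_direction(M)    # delta M from the geometric gradient
    P = P + Binv @ dM                  # delta P = B^{-1} delta M;  P <- P + delta P

print(np.linalg.norm(B @ P, axis=1))   # samples end up near radius 1
```

Because the update acts linearly on samples ($B B^{-1} = I$), each sample moves exactly by its prescribed displacement, which is the content of the deformation proposition.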

Keynote speech 2 (Hirohiko Shima)

Creative Commons license: none (all rights reserved).

Geometry of Hessian Structures. Hirohiko Shima, h-shima@c-able.ne.jp, Yamaguchi University, 2013/8/29. Hirohiko Shima (Yamaguchi University), Geometry of Hessian Structures, 2013/8/29.

Outline: 1. Hessian Structures; 2. Hessian Structures and Kählerian Structures; 3. Dual Hessian Structures; 4. Hessian Curvature Tensor; 5. Regular Convex Cones; 6. Hessian Structures and Affine Differential Geometry; 7. Hessian Structures and Information Geometry; 8. Invariant Hessian Structures.

Preface. In 1964 Prof. Koszul was sent to Japan by the French Government and gave lectures at Osaka University. I was a student in those days and attended the lectures together with the late Professors Matsushima and Murakami. The topic of the lectures was a theory of flat manifolds with a flat connection D and a closed 1-form α such that Dα is positive definite. α being a closed 1-form, it is locally expressed as α = dφ, and so Dα = Ddφ is just a Hessian metric from our viewpoint. This is the ultimate origin of the notion of Hessian structures and the starting point of my research.

1. Hessian structures. Definition (Hessian metric). Let M be a manifold with a flat connection D. A Riemannian metric g on M is said to be a Hessian metric if g can be locally expressed by
$$g = Dd\varphi, \qquad g_{ij} = \frac{\partial^2\varphi}{\partial x^i\,\partial x^j},$$
where $\{x^1, \cdots, x^n\}$ is an affine coordinate system w.r.t. D. Then (D, g) is a Hessian structure on M, and (M, D, g) a Hessian manifold. The function φ is called a potential of (D, g).

Definition (difference tensor γ). Let γ be the difference tensor between the Levi-Civita connection ∇ for g and the flat connection D:
$$\gamma = \nabla - D, \qquad \gamma_X Y = \nabla_X Y - D_X Y.$$
In affine coordinates, $\gamma^i_{jk}$ (the components of γ) coincide with $\Gamma^i_{jk}$ (the Christoffel symbols for g).

Proposition (characterizations of Hessian metrics). Let (M, D) be a flat manifold and g a Riemannian metric on M. The following conditions are equivalent:
(1) g is a Hessian metric;
(2) $(D_X g)(Y, Z) = (D_Y g)(X, Z)$ (Codazzi equation), i.e. the covariant tensor Dg is symmetric;
(3) $g(\gamma_X Y, Z) = g(Y, \gamma_X Z)$.

2. Hessian Structures and Kählerian Structures. Definition (Kählerian metric). A Hermitian metric $g = \sum_{i,j} g_{i\bar j}\,dz^i\,d\bar z^j$ on a complex manifold is called a Kählerian metric if g is expressed by a complex Hessian,
$$g_{i\bar j} = \frac{\partial^2\psi}{\partial z^i\,\partial\bar z^j},$$
where $\{z^1, \cdots, z^n\}$ is a holomorphic coordinate system. Cheng and Yau called Hessian metrics affine Kählerian metrics.

Proposition. Let TM be the tangent bundle over a Hessian manifold (M, D, g). Then TM is a complex manifold with Kählerian metric $g^T = \sum_{i,j=1}^n g_{ij}\,dz^i\,d\bar z^j$, where $z^i = x^i + \sqrt{-1}\,dx^i$.

Example (tangent bundle of a paraboloid).
$$\Omega = \Big\{x \in \mathbb{R}^n \ \Big|\ x^n - \frac{1}{2}\sum_{i=1}^{n-1}(x^i)^2 > 0\Big\} \ \text{(paraboloid)}, \qquad \varphi = \log\Big\{x^n - \frac{1}{2}\sum_{i=1}^{n-1}(x^i)^2\Big\}^{-1},$$
and g = Ddφ is a Hessian metric on Ω. $T\Omega \cong \Omega + \sqrt{-1}\,\mathbb{R}^n \subset \mathbb{C}^n$ is the tube domain over Ω. TΩ ∼
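The paraboloid example can be checked numerically: a finite-difference sketch (the helper `hessian` and the sample point are illustrative) verifying that g = Ddφ is symmetric and positive definite on Ω, i.e. a genuine Riemannian metric:

```python
import numpy as np

def phi(x):
    # Potential of the Hessian metric on the paraboloid domain:
    # phi = -log( x_n - (1/2) * sum_{i<n} x_i^2 ), defined where the argument > 0.
    return -np.log(x[-1] - 0.5 * np.sum(x[:-1] ** 2))

def hessian(f, x, h=1e-5):
    """Central-difference Hessian g_ij = d^2 f / dx_i dx_j."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

x = np.array([0.3, -0.2, 2.0])         # a point of Omega: 2.0 - 0.065 > 0
g = hessian(phi, x)
print(np.linalg.eigvalsh(g))            # all eigenvalues positive
```

Symmetry of the Hessian in affine coordinates is exactly the Codazzi condition (2) above for the flat connection D.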

ORAL SESSION 3 Differential Geometry in Signal Processing (Michel Berthier)


A Riemannian Fourier Transform via Spin Representations. Geometric Science of Information 2013. T. Batard, M. Berthier.

Outline of the talk: the Fourier transform for multidimensional signals (examples); three simple ideas; the Riemannian Fourier transform via spin representations; applications to filtering.

The Fourier transform for multidimensional signals. The problem: how to define a Fourier transform for a signal φ : R^d → R^n that does not reduce to componentwise Fourier transforms and that takes into account the (local) geometry of the graph associated with the signal? Framework of the talk: the signal φ : Ω ⊂ R² → R^n is a grey-level image, n = 1, or a color image, n = 3. In the latter case, we want to deal with the full color information in a genuinely non-marginal way. Many propositions already exist (without geometric considerations):

• The T. Ell and S. J. Sangwine transform:
$$\mathcal{F}_\mu\varphi(U) = \int_{\mathbb{R}^2} \varphi(X)\exp(-\mu\langle X, U\rangle)\,dX$$
where φ : R² → H₀ is a color image and µ is a pure unitary quaternion encoding the grey axis. It decomposes as
$$\mathcal{F}_\mu\varphi = A_\parallel \exp[\mu\theta_\parallel] + A_\perp \exp[\mu\theta_\perp]\,\nu$$
where ν is a unitary quaternion orthogonal to µ. This allows one to define an amplitude and a phase in the chrominance and in the luminance.

• The T. Bülow transform:
$$\mathcal{F}_{ij}\varphi(U) = \int_{\mathbb{R}^2} \exp(-2i\pi x_1u_1)\,\varphi(X)\,\exp(-2j\pi u_2x_2)\,dX$$
where φ : R² → R. It decomposes as
$$\mathcal{F}_{ij}\varphi(U) = \mathcal{F}_{cc}\varphi(U) - i\mathcal{F}_{sc}\varphi(U) - j\mathcal{F}_{cs}\varphi(U) + k\mathcal{F}_{ss}\varphi(U).$$
This allows one to analyse the symmetries of the signal with respect to the x and y variables.
• The M. Felsberg transform:
$$\mathcal{F}_{e_1e_2e_3}\varphi(U) = \int_{\mathbb{R}^2} \exp(-2\pi e_1e_2e_3\langle U, X\rangle)\,\varphi(X)\,dX$$
where $\varphi(X) = \varphi(x_1e_1 + x_2e_2) = \varphi(x_1, x_2)e_3$ is a real-valued function defined on R² (a grey-level image). The coefficient $e_1e_2e_3$ is the pseudoscalar of the Clifford algebra $\mathbb{R}_{3,0}$. This transform is well adapted to the monogenic signal.

• The F. Brackx et al. transform:
$$\mathcal{F}_\pm\varphi(U) = \left(\frac{1}{\sqrt{2\pi}}\right)^n \int_{\mathbb{R}^n} \exp\!\Big(\frac{i\pi}{2}\Gamma_U\Big)\exp(-i\langle U, X\rangle)\,\varphi(X)\,dX$$
where $\Gamma_U$ is the angular Dirac operator. For $\varphi : \mathbb{R}^2 \to \mathbb{R}_{0,2}\otimes\mathbb{C}$,
$$\mathcal{F}_\pm\varphi(U) = \frac{1}{2\pi}\int_{\mathbb{R}^2} \exp(\pm U\wedge X)\,\varphi(X)\,dX$$
where $\exp(\pm U\wedge X)$ is a bivector.

Three simple ideas. Idea 1: the abstract Fourier transform is defined through the action of a group.
• Shift theorem: $\mathcal{F}\varphi_\alpha(u) = e^{2i\pi\alpha u}\,\mathcal{F}\varphi(u)$, where $\varphi_\alpha(x) = \varphi(x + \alpha)$. Here the group involved is the group of translations of R.
The action is given by $(\alpha, x) \mapsto x + \alpha := \tau_\alpha(x)$. The mapping (group morphism) $\chi_u : \tau_\alpha \mapsto e^{2i\pi u\alpha} = \chi_u(\alpha) \in S^1$ is a so-called character of the group (R, +). The Fourier transform reads
$$\mathcal{F}\varphi(u) = \int_{\mathbb{R}} \chi_u(-x)\,\varphi(x)\,dx.$$

• More precisely: by means of $\chi_u$, every element of the group is represented as a unit complex number that acts by multiplication on the values of the function. Every u gives a representation, and the Fourier transform is defined on the set of representations. If the group G is abelian, we only deal with the group morphisms from G to $S^1$ (characters).

• Some transforms: for G = (Rⁿ, +) we recover the usual Fourier transform; G = SO(2, R) corresponds to the theory of Fourier series; for G = Z/nZ we obtain the discrete Fourier transform. In the non-abelian case one has to deal with the equivalence classes of unitary irreducible representations (Pontryagin dual); some of these irreducible representations are infinite-dimensional. Applications include generalized Fourier descriptors with the group of motions of the plane, shearlets, ...

• The problem: find a good way to represent the group of translations (R², +) in order to make it act naturally on the values (in Rⁿ) of a multidimensional function.

Idea 2: the vectors of Rⁿ can be considered as generalized numbers.
• Usual identifications: $X = (x_1, x_2) \in \mathbb{R}^2 \leftrightarrow z = x_1 + ix_2 \in \mathbb{C}$ and $X = (x_1, x_2, x_3, x_4) \in \mathbb{R}^4 \leftrightarrow q = x_1 + ix_2 + jx_3 + kx_4 \in \mathbb{H}$. The fields C and H are the Clifford algebras $\mathbb{R}_{0,1}$ (of the vector space R with the quadratic form $Q(x) = -x^2$) and $\mathbb{R}_{0,2}$ (of the vector space R² with the quadratic form $Q(x_1, x_2) = -x_1^2 - x_2^2$).
• Clifford algebras: the vector space Rⁿ with the quadratic form $Q_{p,q}$ is embedded in an algebra $\mathbb{R}_{p,q}$ of dimension $2^n$ that contains scalars, vectors and, more generally, multivectors such as the bivector $u \wedge v = \frac{1}{2}(uv - vu)$.

• The spin groups: the group Spin(n) is the group of elements of $\mathbb{R}_{0,n}$ that are products $x = n_1n_2\cdots n_{2k}$ of an even number of unit vectors of Rⁿ.
• Some identifications: Spin(2) ≃ S¹, Spin(3) ≃ H₁, Spin(4) ≃ H₁ × H₁.
• Natural idea: replace the group morphisms from (R², +) to S¹, the characters, by group morphisms from (R², +) to Spin(n), the spin characters.

• The problem: compute the spin characters, i.e. the group morphisms from (R², +) to Spin(n), and find meaningful representation spaces for the action of the spin characters.

• Spin(3) characters:
$$\chi_{u_1,u_2,B} : (x_1, x_2) \mapsto \exp\frac{1}{2}\left[B\,A\begin{pmatrix}x_1\\x_2\end{pmatrix}\right] = \exp\frac{1}{2}\big[(x_1u_1 + x_2u_2)B\big]$$
where $A = (u_1\ u_2)$ is the matrix of frequencies and B = ef, with e and f two orthonormal vectors of R³.

• Spin(4) and Spin(6) characters:
$$(x_1, x_2) \mapsto \exp\frac{1}{2}\left[(B_1\ B_2)\,A\begin{pmatrix}x_1\\x_2\end{pmatrix}\right], \qquad (x_1, x_2) \mapsto \exp\frac{1}{2}\left[(B_1\ B_2\ B_3)\,A\begin{pmatrix}x_1\\x_2\end{pmatrix}\right]$$
where A is a 2 × 2, resp. 2 × 3, real matrix and $B_i = e_if_i$ for i = 1, 2, resp. i = 1, 2, 3, with $(e_1, e_2, f_1, f_2)$, resp. $(e_1, e_2, e_3, f_1, f_2, f_3)$, an orthonormal basis of R⁴, resp. R⁶.

Idea 3: the spin characters are parametrized by bivectors.
• Fundamental remark: the spin characters are, as usual, parametrized by frequencies, the entries of the matrix A. But they are also parametrized by bivectors: B, or B₁ and B₂, or B₁, B₂ and B₃, depending on the context.
• How to involve the geometry?
It seems natural to parametrize the spin characters by the bivector corresponding to the tangent plane of the image graph, more precisely by the field of bivectors corresponding to the fibre bundle of the image graph.

• Several possibilities for dealing with representation spaces for the action of the spin characters:
- using Spin(3) characters and the generalized Weierstrass representation of surfaces (T. Friedrich): in "Quaternion and Clifford Fourier Transforms and Wavelets" (E. Hitzer and S. J. Sangwine, Eds), Trends in Mathematics, Birkhäuser, 2013;
- using Spin(4) and Spin(6) characters and the so-called standard representations of the spin groups: in IEEE Journal of Selected Topics in Signal Processing, Special Issue on Differential Geometry in Signal Processing, Vol. 7, Issue 4, 2013.

The Riemannian Fourier transform. The spin representations of Spin(n) are defined through complex representations of Clifford algebras. They do not "descend" to the orthogonal group SO(n, R) (since they send −1 to −Identity, contrary to the standard representations). These are the representations used in physics. The complex spin representation of Spin(3) is the group morphism $\zeta_3 : \mathrm{Spin}(3) \to \mathbb{C}(2)$ obtained by restricting to $\mathrm{Spin}(3) \subset (\mathbb{R}_{3,0}\otimes\mathbb{C})^0$ a complex irreducible representation of $\mathbb{R}_{3,0}$. A color image is considered as a section
$$\sigma_\varphi : (x_1, x_2) \mapsto \sum_{k=1}^{3}\big(0, \varphi_k(x_1, x_2)\big)\otimes g_k$$
of the spinor bundle $P_{\mathrm{Spin}}(E^3(\Omega)) \times_{\zeta_3} \mathbb{C}^2$, where $E^3(\Omega) = \Omega\times\mathbb{R}^3$ and $(g_1, g_2, g_3)$ is the canonical basis of R³.
The Riemannian Fourier transform. Dealing with spinor bundles allows varying spin characters, and the most natural choice for the field of bivectors $B := B(x_1, x_2)$ generalizing the field of tangent planes is
$$B = \gamma_1\, g_1g_2 + \gamma_2\, g_1g_3 + \gamma_3\, g_2g_3$$
with
$$\gamma_1 = \frac{1}{\delta}, \qquad \gamma_2 = \frac{\sqrt{\sum_{k=1}^3 \varphi_{k,x_2}^2}}{\delta}, \qquad \gamma_3 = -\frac{\sqrt{\sum_{k=1}^3 \varphi_{k,x_1}^2}}{\delta}, \qquad \delta = \sqrt{1 + \sum_{j=1}^{2}\sum_{k=1}^{3} \varphi_{k,x_j}^2},$$
so that $\gamma_1^2 + \gamma_2^2 + \gamma_3^2 = 1$. The operator B· acting on the sections of $S(E^3(\Omega))$, where · denotes the Clifford multiplication, is represented by the 2 × 2 complex matrix field
$$B\cdot = \begin{pmatrix} i\gamma_1 & -\gamma_2 - i\gamma_3 \\ \gamma_2 - i\gamma_3 & -i\gamma_1 \end{pmatrix}.$$
Since B² = −1, this operator has the two eigenvalue fields i and −i. Consequently, every section σ of $S(E^3(\Omega))$ can be decomposed as $\sigma = \sigma^B_+ + \sigma^B_-$, where
$$\sigma^B_+ = \frac{1}{2}(\sigma - iB\cdot\sigma), \qquad \sigma^B_- = \frac{1}{2}(\sigma + iB\cdot\sigma).$$

The Riemannian Fourier transform of $\sigma_\varphi$ is given by
$$\mathcal{F}_B\sigma_\varphi(u_1, u_2) = \int_{\mathbb{R}^2} \chi_{u_1,u_2,B(x_1,x_2)}(-x_1, -x_2)\cdot\sigma_\varphi(x_1, x_2)\,dx_1\,dx_2.$$
The decomposition of a section $\sigma_\varphi$ associated with a color image leads to
$$\varphi(x_1, x_2) = \int_{\mathbb{R}^2}\sum_{k=1}^{3}\left[\varphi_{k+}(u_1, u_2)\,e_{u_1,u_2}(x_1, x_2)\sqrt{\frac{1-\gamma_1}{2}} + \varphi_{k-}(u_1, u_2)\,e_{-u_1,-u_2}(x_1, x_2)\sqrt{\frac{1+\gamma_1}{2}}\right]\otimes g_k\,du_1\,du_2$$
where
$$\varphi_{k+} = \varphi_k\sqrt{\frac{1-\gamma_1}{2}}, \qquad \varphi_{k-} = \varphi_k\sqrt{\frac{1+\gamma_1}{2}}.$$

Low-pass filtering. [Figure: left: original; centre: + component; right: − component.]
Low-pass filtering. [Figure: low-pass filtering on the + component for Gaussian variances 10000, 1000, 100, 10, and variance → 0.]
Low-pass filtering. [Figure: low-pass filtering on the − component for Gaussian variances 10000, 1000, 100, 10, and variance → 0.]
Thank you for your attention!
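The algebraic facts used above ($B^2 = -1$ and the ± eigendecomposition of spinor sections) can be checked numerically. A small sketch with made-up image derivative values (the variable names are illustrative):

```python
import numpy as np

# Partial derivatives of a 3-channel image at one pixel (hypothetical values).
phi_x1 = np.array([0.2, -0.1, 0.4])
phi_x2 = np.array([0.3, 0.0, -0.2])

delta = np.sqrt(1.0 + np.sum(phi_x1**2) + np.sum(phi_x2**2))
g1 = 1.0 / delta
g2 = np.sqrt(np.sum(phi_x2**2)) / delta
g3 = -np.sqrt(np.sum(phi_x1**2)) / delta

# Matrix of the Clifford action B. on the spinor fibre C^2.
B = np.array([[1j * g1,      -g2 - 1j * g3],
              [g2 - 1j * g3, -1j * g1     ]])
assert np.allclose(B @ B, -np.eye(2))          # B^2 = -1

sigma = np.array([1.0 + 0.5j, -0.3j])          # an arbitrary spinor value
s_plus  = 0.5 * (sigma - 1j * B @ sigma)
s_minus = 0.5 * (sigma + 1j * B @ sigma)
assert np.allclose(s_plus + s_minus, sigma)    # sigma = sigma_+ + sigma_-
assert np.allclose(B @ s_plus,  1j * s_plus)   # eigenvalue +i
assert np.allclose(B @ s_minus, -1j * s_minus) # eigenvalue -i
print("B acts as +i / -i on the two components")
```

The normalization by δ is what makes $\gamma_1^2 + \gamma_2^2 + \gamma_3^2 = 1$ and hence $B^2 = -1$ pointwise.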

Keynote speech 3 (Giovanni Pistone)


GSI2013 - Geometric Science of Information, Paris, 28-30 August 2013. Dimensionality reduction for classification of stochastic fibre radiographs. C.T.J. Dodson¹ and W.W. Sampson², School of Mathematics¹ and School of Materials², University of Manchester, UK. ctdodson@manchester.ac.uk

Abstract. Dimensionality reduction helps to identify small numbers of essential features of stochastic fibre networks for classification of image pixel density datasets from experimental radiographic measurements of commercial samples and simulations. Typical commercial macro-fibre networks use finite-length fibres suspended in a fluid, from which they are continuously deposited onto a moving bed to make a continuous web; the fibres can cluster to differing degrees, depending primarily on the fluid turbulence and the fibre dimensions and flexibility. Here we use the information geometry of trivariate Gaussian spatial distributions of pixel density among first and second neighbours to reveal features related to the sizes and density of fibre clusters.

Introduction. Much analytic work has been done on modelling the statistical geometry of stochastic fibre networks and their behaviour with regard to strength and fluid ingress or transfer [1, 5, 7]. Using complete sampling by square cells, their areal density distribution is typically well represented by a log-gamma or a (truncated) Gaussian distribution whose variance decreases monotonically with increasing cell size; the rate of decay depends on fibre and fibre-cluster dimensions. Clustering of fibres is well approximated by Poisson processes of Poisson clusters of differing density and size. A Poisson fibre network is a standard reference structure for any given size distribution of fibres; its statistical geometry is well understood for finite and infinite fibres.

Figure 1. Electron micrographs of four stochastic fibrous materials.
Top left: nonwoven carbon fibre mat; top right: glass fibre filter; bottom left: electrospun nylon nanofibrous network (courtesy S.J. Eichhorn and D.J. Scurr); bottom right: paper using wood cellulose fibres, typically flat and ribbon-like, of length 1 to 2 mm and width 0.02 to 0.03 mm.

Figure 2. Areal density radiographs of three paper networks made from natural wood cellulose fibres, of order 1 mm in length, with constant mean density but different distributions of fibres. Each image represents a square region of side length 5 cm; darker regions correspond to higher coverage. The left image is similar to that expected for a Poisson process of the same fibres, so typical real samples exhibit clustering of fibres.

Spatial statistics. We use the information geometry of trivariate Gaussian spatial distributions of pixel density, with covariances among first and second neighbours, to reveal features related to the sizes and density of fibre clusters, which could arise in one, two or three dimensions. (The graphic shows a grey-level barcode for the ordered sequence of the 20 amino acids in a Saccharomyces cerevisiae genome, a 1-dimensional stochastic texture.) For isotropic spatial processes, which we consider here, the variables are means over shells of first and second neighbours, respectively, which share the population mean with the central pixel. For anisotropic networks, the neighbour groups would be split into more, orthogonal, new variables to pick up the spatial anisotropy in the available spatial directions.

Typical sample data. Figure 3. Trivariate distribution of areal density values for a typical newsprint sample. Left: source radiograph; centre: histogram of pixel densities β̃ᵢ, averages of first neighbours β̃₁,ᵢ and of second neighbours β̃₂,ᵢ; right: 3D scatter plot of β̃ᵢ, β̃₁,ᵢ and β̃₂,ᵢ.
Information geodesic distances between multivariate Gaussians. What we know analytically is the geodesic distance between two multivariate Gaussians, $f_A$, $f_B$, on the same number n of variables, in two particular cases [2]: $D_\mu(f_A, f_B)$ when they have a common mean µ but different covariances $\Sigma_A, \Sigma_B$, and $D_\Sigma(f_A, f_B)$ when they have a common covariance Σ but different means $\mu_A, \mu_B$. The general case is not known analytically, but for the purpose of studying the stochastic textures arising from areal density arrays of samples of stochastic fibre networks, a satisfactorily discriminating approximation is
$$D(f_A, f_B) \approx D_\mu(f_A, f_B) + D_\Sigma(f_A, f_B).$$

Information geodesic distance between multivariate Gaussians [2]:
(1) $\mu_A \ne \mu_B$, $\Sigma_A = \Sigma_B = \Sigma$; $f_A = (n, \mu_A, \Sigma)$, $f_B = (n, \mu_B, \Sigma)$:
$$D_\mu(f_A, f_B) = \sqrt{(\mu_A - \mu_B)^T \cdot \Sigma^{-1} \cdot (\mu_A - \mu_B)}. \quad (1)$$
(2) $\mu_A = \mu_B = \mu$, $\Sigma_A \ne \Sigma_B$; $f_A = (n, \mu, \Sigma_A)$, $f_B = (n, \mu, \Sigma_B)$:
$$D_\Sigma(f_A, f_B) = \sqrt{\frac{1}{2}\sum_{j=1}^{n}\log^2(\lambda_j)}, \quad (2)$$
with $\{\lambda_j\} = \mathrm{Eig}(\Sigma_A^{-1/2}\cdot\Sigma_B\cdot\Sigma_A^{-1/2})$.

From the form of $D_\Sigma(f_A, f_B)$ in (2) it may be seen that an approximate monotonic relationship arises with a more easily computed symmetrized log-trace function given by
$$\Delta_\Sigma(f_A, f_B) = \log\left\{\frac{1}{2n}\Big[\mathrm{Tr}(\Sigma_A^{-1/2}\cdot\Sigma_B\cdot\Sigma_A^{-1/2}) + \mathrm{Tr}(\Sigma_B^{-1/2}\cdot\Sigma_A\cdot\Sigma_B^{-1/2})\Big]\right\}. \quad (3)$$
This is illustrated by the plot of $D_\Sigma(f_A, f_B)$ from equation (2) against $\Delta_\Sigma(f_A, f_B)$ from equation (3) in Figure 4 for 185 trivariate Gaussian covariance matrices. For comparing relative proximity, this is a better measure near zero than the symmetrized Kullback-Leibler distance [6] in those multivariate Gaussian cases tested so far, and it may be quicker for handling large batch processes.

Figure 4. Plot of $D_\Sigma(f_A, f_B)$ from (2) against $\Delta_\Sigma(f_A, f_B)$ from (3) for 185 trivariate Gaussian covariance matrices.

Dimensionality reduction for data sets. 1. Obtain mutual 'information distances' D(i, j) among the members of the data set of textures X₁, X₂, ..., X_N, each with 250×250 pixel density values. 2.
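Equation (2) can be sketched directly in NumPy (the covariance matrices below are made up for illustration):

```python
import numpy as np

def d_sigma(SA, SB):
    """D_Sigma(fA, fB) = sqrt( (1/2) sum_j log^2 lambda_j ),
    {lambda_j} = Eig( SA^{-1/2} SB SA^{-1/2} ), common-mean Gaussians."""
    w, V = np.linalg.eigh(SA)                      # SA symmetric positive definite
    SA_inv_half = V @ np.diag(w ** -0.5) @ V.T     # SA^{-1/2}
    lam = np.linalg.eigvalsh(SA_inv_half @ SB @ SA_inv_half)
    return np.sqrt(0.5 * np.sum(np.log(lam) ** 2))

SA = np.array([[2.0, 0.3, 0.0], [0.3, 1.0, 0.1], [0.0, 0.1, 1.5]])
SB = np.array([[1.0, 0.0, 0.2], [0.0, 2.0, 0.0], [0.2, 0.0, 1.0]])
print(d_sigma(SA, SB))
```

Note that $D_\Sigma$ is symmetric (the eigenvalues for the swapped arguments are the reciprocals, and $\log^2$ is unchanged) and scales as $\sqrt{n/2}\,|\log c|$ between Σ and cΣ.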
The array of N × N differences D(i, j) is a symmetric matrix with zero diagonal. It is centralized by subtracting the row and column means and then adding back the grand mean, giving CD(i, j). 3. The centralized matrix CD(i, j) is again symmetric with zero-sum rows and columns. We compute its N eigenvalues ECD(i), which are necessarily real, and find the N corresponding N-dimensional eigenvectors VCD(i). 4. Make a 3 × 3 diagonal matrix A of the three eigenvalues of largest absolute magnitude and a 3 × N matrix B of the corresponding eigenvectors. The matrix product A · B yields a 3 × N matrix, and its transpose is an N × 3 matrix T, which gives us N coordinate values (xᵢ, yᵢ, zᵢ) to embed the N samples in 3-space.

Figure 5. $D_\Sigma(f_A, f_B)$ as a cubic-smoothed surface (left) and contour plot (right): trivariate Gaussian information distances among 16 datasets of 1 mm pixel density differences between a Poisson network and simulated networks from 1 mm fibres, with the same mean density but different clustering. Embedding: subgroups show numbers of fibres in clusters and cluster densities.

Figure 6. $D_\Sigma(f_A, f_B)$ as a cubic-smoothed surface (left) and contour plot (right), for trivariate Gaussian information distances among 16 datasets of 1 mm pixel density arrays for simulated networks made from 1 mm fibres, each network with the same mean density but with different clustering. Embedding: subgroups show numbers of fibres in clusters and cluster densities; the solitary point is an unclustered Poisson network.

Figure 7. $D_\Sigma(f_A, f_B)$ as a cubic-smoothed surface (left) and contour plot (right), for trivariate Gaussian information distances among 16 simulated Poisson networks made from 1 mm fibres, with different mean densities, using pixels at 1 mm scale.
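Steps 2-4 amount to a double-centring followed by a spectral embedding. A minimal sketch, assuming a generic symmetric distance matrix with zero diagonal (the data here are synthetic, not from the paper):

```python
import numpy as np

def embed_3d(D):
    """Steps 2-4: double-centre the distance matrix and embed in 3-space
    using the three eigenvalues of largest absolute magnitude."""
    CD = D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()  # centralize
    w, V = np.linalg.eigh(CD)                                     # real spectrum
    idx = np.argsort(-np.abs(w))[:3]                              # top-3 |eigenvalue|
    A, B = np.diag(w[idx]), V[:, idx].T                           # 3x3 and 3xN
    return (A @ B).T                                              # N x 3 coordinates

# Synthetic symmetric 'information distance' matrix with zero diagonal.
rng = np.random.default_rng(0)
X = rng.random((6, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
T = embed_3d(D)
print(T.shape)  # (6, 3)
```

Each row of T gives the (x, y, z) coordinates of one sample in the 3-space embedding.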
Second row: embedding of the same Poisson network data, showing the effect of mean network density.

Figure 8. Embedding using 182 trivariate Gaussian distributions for samples from a data set of radiographs of commercial papers. The embedding separates different forming methods into subgroups.

References
[1] K. Arwini and C.T.J. Dodson. Information Geometry Near Randomness and Near Independence. Lecture Notes in Mathematics, Springer-Verlag, New York, Berlin, 2008. Chapter 9, with W.W. Sampson: Stochastic Fibre Networks, pp. 161-194.
[2] C. Atkinson and A.F.S. Mitchell. Rao's distance measure. Sankhya: The Indian Journal of Statistics 48, A, 3 (1981) 345-365.
[3] K.M. Carter, R. Raich and A.O. Hero. Learning on statistical manifolds for clustering and visualization. In 45th Allerton Conference on Communication, Control, and Computing, Monticello, Illinois, 2007. https://wiki.eecs.umich.edu/global/data/hero/images/c/c6/Kmcarter-learnstatman.pdf
[4] K.M. Carter. Dimensionality reduction on statistical manifolds. PhD thesis, University of Michigan, 2009. http://tbayes.eecs.umich.edu/kmcarter/thesis
[5] M. Deng and C.T.J. Dodson. Paper: An Engineered Stochastic Structure. Tappi Press, Atlanta, 1994.
[6] F. Nielsen, V. Garcia and R. Nock. Simplifying Gaussian mixture models via entropic quantization. In Proc. 17th European Signal Processing Conference, Glasgow, Scotland, 24-28 August 2009, pp. 2012-2016.
[7] W.W. Sampson. Modelling Stochastic Fibre Materials with Mathematica. Springer-Verlag, New York, Berlin, 2009.

Authors

Yann Ollivier

Creative Commons: None (All rights reserved)

Objective Improvement in Information-Geometric Optimization

Youhei Akimoto, Project TAO, INRIA Saclay, LRI, Bât. 490, Univ. Paris-Sud, 91405 Orsay, France, Youhei.Akimoto@lri.fr
Yann Ollivier, CNRS & Univ. Paris-Sud, LRI, Bât. 490, 91405 Orsay, France, yann.ollivier@lri.fr

ABSTRACT. Information-Geometric Optimization (IGO) is a unified framework of stochastic algorithms for optimization problems. Given a family of probability distributions, IGO turns the original optimization problem into a new maximization problem on the parameter space of the probability distributions. IGO updates the parameter of the probability distribution along the natural gradient, taken with respect to the Fisher metric on the parameter manifold, aiming at maximizing an adaptive transform of the objective function. IGO recovers several known algorithms as particular instances: for the family of Bernoulli distributions IGO recovers PBIL, for the family of Gaussian distributions the pure rank-
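The Bernoulli instance mentioned at the end of the abstract can be made concrete. Below is a minimal, self-contained sketch of an IGO-style natural-gradient update for the Bernoulli family on the OneMax objective; the function name, population size, learning rate and quantile weights are my own choices, not the paper's. For Bernoulli parameters the inverse Fisher metric cancels the score's denominator, so the natural-gradient step on the selection-weighted objective reduces to a PBIL-like rule theta <- theta + eta * sum_k w_k (x_k - theta).

```python
import numpy as np

def igo_bernoulli(f, n, iters=300, pop=20, lr=0.1, seed=0):
    """IGO update for the Bernoulli family: move theta along the natural
    gradient of a rank-based transform of f (recovers a PBIL-like rule)."""
    rng = np.random.default_rng(seed)
    theta = np.full(n, 0.5)
    for _ in range(iters):
        X = (rng.random((pop, n)) < theta).astype(float)  # sample population
        order = np.argsort([-f(x) for x in X])            # best individuals first
        w = np.zeros(pop)
        w[order[: pop // 4]] = 1.0 / (pop // 4)           # top-quartile selection weights
        # natural gradient of the weighted objective w.r.t. theta: sum_k w_k (x_k - theta)
        theta += lr * (w @ (X - theta))
        theta = np.clip(theta, 0.05, 0.95)                # keep away from the boundary
    return theta

theta = igo_bernoulli(lambda x: x.sum(), n=10)            # OneMax objective
print(theta.round(2))
```

With these settings theta drifts towards the all-ones optimum; the quantile weights are a crude stand-in for the adaptive transform of the objective used in IGO.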

Hirohiko Shima

Creative Commons: None (All rights reserved)

Geometry of Hessian Structures
Hirohiko Shima (h-shima@c-able.ne.jp), Yamaguchi University, 2013/8/29

Outline:
1. Hessian Structures
2. Hessian Structures and Kählerian Structures
3. Dual Hessian Structures
4. Hessian Curvature Tensor
5. Regular Convex Cones
6. Hessian Structures and Affine Differential Geometry
7. Hessian Structures and Information Geometry
8. Invariant Hessian Structures

Preface. In 1964 Prof. Koszul was sent to Japan by the French Government and gave lectures at Osaka University. I was a student in those days and attended the lectures together with the late Professors Matsushima and Murakami. The topic of the lectures was a theory of flat manifolds with a flat connection D and a closed 1-form α such that Dα is positive definite. α being a closed 1-form, it is locally expressed as α = dφ, and so Dα = Ddφ is just a Hessian metric from our viewpoint. This is the ultimate origin of the notion of Hessian structures and the starting point of my research.

1. Hessian structures.
Definition (Hessian metric). Let M be a manifold with a flat connection D. A Riemannian metric g on M is said to be a Hessian metric if g can be locally expressed as g = Ddφ, that is,
$g_{ij} = \frac{\partial^2 \varphi}{\partial x^i \partial x^j}$,
where $\{x^1, \dots, x^n\}$ is an affine coordinate system w.r.t. D. Then (D, g) is a Hessian structure on M, (M, D, g) is a Hessian manifold, and the function φ is called a potential of (D, g).

Definition (difference tensor γ). Let γ be the difference tensor between the Levi-Civita connection ∇ of g and the flat connection D:
$\gamma = \nabla - D$, i.e. $\gamma_X Y = \nabla_X Y - D_X Y$.
In affine coordinates, $\gamma^i_{jk}$ (the components of γ) equal $\Gamma^i_{jk}$ (the Christoffel symbols of g).

Proposition (characterizations of Hessian metrics). Let (M, D) be a flat manifold and g a Riemannian metric on M. The following conditions are equivalent:
(1) g is a Hessian metric;
(2) $(D_X g)(Y, Z) = (D_Y g)(X, Z)$ (the Codazzi equation, i.e. the covariant tensor Dg is symmetric);
(3) $g(\gamma_X Y, Z) = g(Y, \gamma_X Z)$.

2. Hessian structures and Kählerian structures.
Definition (Kählerian metric). A Hermitian metric $g = \sum_{i,j} g_{i\bar j}\, dz^i d\bar z^j$ on a complex manifold is called a Kählerian metric if g is expressed by a complex Hessian,
$g_{i\bar j} = \frac{\partial^2 \psi}{\partial z^i \partial \bar z^j}$,
where $\{z^1, \dots, z^n\}$ is a holomorphic coordinate system. Cheng and Yau called Hessian metrics affine Kählerian metrics.

Proposition. Let TM be the tangent bundle over a Hessian manifold (M, D, g). Then TM is a complex manifold with Kählerian metric $g^T = \sum_{i,j=1}^n g_{ij}\, dz^i d\bar z^j$, where $z^i = x^i + \sqrt{-1}\,\xi^i$ with $\xi^i$ the fibre coordinates.

Example (tangent bundle of a paraboloid).
$\Omega = \{ x \in \mathbb{R}^n \mid x^n - \tfrac12 \sum_{i=1}^{n-1} (x^i)^2 > 0 \}$ (paraboloid),
$\varphi = \log \{ x^n - \tfrac12 \sum_{i=1}^{n-1} (x^i)^2 \}^{-1}$,
g = Ddφ: a Hessian metric on Ω.
$T\Omega \cong \Omega + \sqrt{-1}\,\mathbb{R}^n \subset \mathbb{C}^n$: the tube domain over Ω.
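The paraboloid example is easy to verify symbolically: the Hessian of the potential $\varphi = -\log(x^n - \tfrac12\sum (x^i)^2)$ is positive definite on Ω, so g = Ddφ is indeed a Riemannian (Hessian) metric there. A small check for n = 2; this sketch is my own illustration, not from the slides.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
phi = -sp.log(y - x**2 / 2)            # potential of the paraboloid example, Omega = {y - x^2/2 > 0}
H = sp.hessian(phi, (x, y))            # g_ij = d^2 phi / dx^i dx^j in affine coordinates

# evaluate at an interior point of Omega, e.g. (1, 2): y - x^2/2 = 3/2 > 0
H0 = H.subs({x: 1, y: 2})
det = sp.simplify(H.det())             # equals 1/(y - x^2/2)^3, positive on Omega
print(det, H0.is_positive_definite)
```

The determinant 1/(y - x²/2)³ and trace are positive wherever y - x²/2 > 0, so the metric is positive definite on all of Ω, not just at the sampled point.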

Christopher TJ Dodson, William W. Sampson

Creative Commons: None (All rights reserved)

GSI2013 - Geometric Science of Information, Paris, 28-30 August 2013

Dimensionality reduction for classification of stochastic fibre radiographs
C.T.J. Dodson¹ and W.W. Sampson², School of Mathematics¹ and School of Materials², University of Manchester, UK. ctdodson@manchester.ac.uk

Abstract. Dimensionality reduction helps to identify small numbers of essential features of stochastic fibre networks for classification of image pixel density datasets from experimental radiographic measurements of commercial samples and simulations. Typical commercial macro-fibre networks use finite-length fibres suspended in a fluid, from which they are continuously deposited onto a moving bed to make a continuous web; the fibres can cluster to differing degrees, depending primarily on the fluid turbulence and on fibre dimensions and flexibility. Here we use the information geometry of trivariate Gaussian spatial distributions of pixel density among first and second neighbours to reveal features related to the sizes and densities of fibre clusters.

Introduction. Much analytic work has been done on modelling the statistical geometry of stochastic fibre networks and their behaviour with regard to strength and fluid ingress or transfer [1, 5, 7]. Using complete sampling by square cells, their areal density distribution is typically well represented by a log-gamma or a (truncated) Gaussian distribution whose variance decreases monotonically with increasing cell size; the rate of decay depends on fibre and fibre-cluster dimensions. Clustering of fibres is well approximated by Poisson processes of Poisson clusters of differing density and size. A Poisson fibre network is a standard reference structure for any given size distribution of fibres; its statistical geometry is well understood for finite and infinite fibres.

Figure 1. Electron micrographs of four stochastic fibrous materials.
Top left: nonwoven carbon fibre mat; top right: glass fibre filter; bottom left: electrospun nylon nanofibrous network (courtesy S.J. Eichhorn and D.J. Scurr); bottom right: paper using wood cellulose fibres, typically flat ribbon-like, of length 1 to 2 mm and width 0.02 to 0.03 mm.

Figure 2. Areal density radiographs of three paper networks made from natural wood cellulose fibres, of order 1 mm in length, with constant mean density but different distributions of fibres. Each image represents a square region of side length 5 cm; darker regions correspond to higher coverage. The left image is similar to that expected for a Poisson process of the same fibres, so typical real samples exhibit clustering of fibres.

Spatial statistics. We use the information geometry of trivariate Gaussian spatial distributions of pixel density, with covariances among first and second neighbours, to reveal features related to the sizes and densities of fibre clusters, which could arise in one, two or three dimensions. (The accompanying graphic shows a grey-level barcode for the ordered sequence of the 20 amino acids in a yeast genome, Saccharomyces cerevisiae SC1: a 1-dimensional stochastic texture.) For the isotropic spatial processes we consider here, the variables are means over shells of first and second neighbours, respectively, which share the population mean with the central pixel. For anisotropic networks the neighbour groups would be split into more, orthogonal, new variables to pick up the spatial anisotropy in the available spatial directions.

Typical sample data. Figure 3. Trivariate distribution of areal density values for a typical newsprint sample. Left: source radiograph; centre: histogram of pixel densities $\tilde\beta_i$, averages of first neighbours $\tilde\beta_{1,i}$ and second neighbours $\tilde\beta_{2,i}$; right: 3D scatter plot of $\tilde\beta_i$, $\tilde\beta_{1,i}$ and $\tilde\beta_{2,i}$.
Information geodesic distances between multivariate Gaussians. What is known analytically is the geodesic distance between two multivariate Gaussians $f_A$, $f_B$ of the same number $n$ of variables in two particular cases [2]: $D_\mu(f_A, f_B)$ when they have a common mean $\mu$ but different covariances $\Sigma_A$, $\Sigma_B$, and $D_\Sigma(f_A, f_B)$ when they have a common covariance $\Sigma$ but different means $\mu_A$, $\mu_B$. The general case is not known analytically, but for the purpose of studying the stochastic textures arising from areal density arrays of samples of stochastic fibre networks, a satisfactorily discriminating approximation is

$D(f_A, f_B) \approx D_\mu(f_A, f_B) + D_\Sigma(f_A, f_B)$.

Information geodesic distance between multivariate Gaussians [2]:
(1) $\mu_A \neq \mu_B$, $\Sigma_A = \Sigma_B = \Sigma$: for $f_A = (n, \mu_A, \Sigma)$, $f_B = (n, \mu_B, \Sigma)$,
$D_\mu(f_A, f_B) = \sqrt{(\mu_A - \mu_B)^T \cdot \Sigma^{-1} \cdot (\mu_A - \mu_B)}$.  (1)
(2) $\mu_A = \mu_B = \mu$, $\Sigma_A \neq \Sigma_B$: for $f_A = (n, \mu, \Sigma_A)$, $f_B = (n, \mu, \Sigma_B)$,
$D_\Sigma(f_A, f_B) = \sqrt{\tfrac{1}{2} \sum_{j=1}^{n} \log^2 \lambda_j}$,  (2)
with $\{\lambda_j\} = \mathrm{Eig}(\Sigma_A^{-1/2} \cdot \Sigma_B \cdot \Sigma_A^{-1/2})$.

From the form of $D_\Sigma(f_A, f_B)$ in (2) it may be seen that an approximate monotonic relationship arises with a more easily computed symmetrized log-trace function

$\Delta_\Sigma(f_A, f_B) = \log\left[ \frac{1}{2n} \left( \mathrm{Tr}(\Sigma_A^{-1/2} \cdot \Sigma_B \cdot \Sigma_A^{-1/2}) + \mathrm{Tr}(\Sigma_B^{-1/2} \cdot \Sigma_A \cdot \Sigma_B^{-1/2}) \right) \right]$.  (3)

This is illustrated by the plot of $D_\Sigma(f_A, f_B)$ from equation (2) against $\Delta_\Sigma(f_A, f_B)$ from equation (3) in Figure 4, for 185 trivariate Gaussian covariance matrices. For comparing relative proximity, this is a better measure near zero than the symmetrized Kullback-Leibler distance [6] in those multivariate Gaussian cases so far tested, and it may be quicker for handling large batch processes.

Figure 4. Plot of $D_\Sigma(f_A, f_B)$ from (2) against $\Delta_\Sigma(f_A, f_B)$ from (3) for 185 trivariate Gaussian covariance matrices.

Dimensionality reduction for data sets.
1. Obtain mutual 'information distances' D(i, j) among the members of the data set of textures X1, X2, ..., XN, each with 250 × 250 pixel density values.
2.
The array of N × N distances D(i, j) is a symmetric matrix with zero diagonal. This is centralized by subtracting row and column means and then adding back the grand mean, to give CD(i, j).
3. The centralized matrix CD(i, j) is again symmetric, so its N eigenvalues ECD(i) are necessarily real; we find the N corresponding N-dimensional eigenvectors VCD(i).
4. Make a 3 × 3 diagonal matrix A of the three eigenvalues of largest absolute magnitude and a 3 × N matrix B of the corresponding eigenvectors. The matrix product A · B yields a 3 × N matrix, and its transpose is an N × 3 matrix T, which gives N coordinate values (xi, yi, zi) to embed the N samples in 3-space.

Figure 5. $D_\Sigma(f_A, f_B)$ as a cubic-smoothed surface (left) and contour plot (right): trivariate Gaussian information distances among 16 datasets of 1 mm pixel density differences between a Poisson network and simulated networks from 1 mm fibres, with the same mean density but different clustering. Embedding: subgroups show numbers of fibres in clusters and cluster densities.

Figure 6. $D_\Sigma(f_A, f_B)$ as a cubic-smoothed surface (left) and contour plot (right), for trivariate Gaussian information distances among 16 datasets of 1 mm pixel density arrays for simulated networks made from 1 mm fibres, each network with the same mean density but with different clustering. Embedding: subgroups show numbers of fibres in clusters and cluster densities; the solitary point is an unclustered Poisson network.

Figure 7. $D_\Sigma(f_A, f_B)$ as a cubic-smoothed surface (left) and as a contour plot (right), for trivariate Gaussian information distances among 16 simulated Poisson networks made from 1 mm fibres, with different mean densities, using pixels at 1 mm scale.
Second row: embedding of the same Poisson network data, showing the effect of mean network density.

Figure 8. Embedding using 182 trivariate Gaussian distributions for samples from a data set of radiographs of commercial papers. The embedding separates different forming methods into subgroups.

References
[1] K. Arwini and C.T.J. Dodson. Information Geometry Near Randomness and Near Independence. Lecture Notes in Mathematics. Springer-Verlag, New York, Berlin, 2008. Chapter 9 (with W.W. Sampson), Stochastic Fibre Networks, pp. 161-194.
[2] C. Atkinson and A.F.S. Mitchell. Rao's distance measure. Sankhya: Indian Journal of Statistics 48, A, 3 (1981) 345-365.
[3] K.M. Carter, R. Raich and A.O. Hero. Learning on statistical manifolds for clustering and visualization. In 45th Allerton Conference on Communication, Control, and Computing, Monticello, Illinois, 2007. https://wiki.eecs.umich.edu/global/data/hero/images/c/c6/Kmcarter-learnstatman.pdf
[4] K.M. Carter. Dimensionality reduction on statistical manifolds. PhD thesis, University of Michigan, 2009. http://tbayes.eecs.umich.edu/kmcarter/thesis
[5] M. Deng and C.T.J. Dodson. Paper: An Engineered Stochastic Structure. Tappi Press, Atlanta, 1994.
[6] F. Nielsen, V. Garcia and R. Nock. Simplifying Gaussian mixture models via entropic quantization. In Proc. 17th European Signal Processing Conference, Glasgow, Scotland, 24-28 August 2009, pp. 2012-2016.
[7] W.W. Sampson. Modelling Stochastic Fibre Materials with Mathematica. Springer-Verlag, New York, Berlin, 2009.
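The four embedding steps above are essentially classical multidimensional scaling on the information-distance matrix. A minimal sketch of the procedure (the toy distance matrix below is a placeholder of mine, not the paper's radiograph data):

```python
import numpy as np

def embed_3d(D):
    """Steps 2-4: double-centre the symmetric distance array, then use the
    three eigenvectors of largest |eigenvalue| as 3-space coordinates."""
    CD = D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()  # centralized matrix
    evals, evecs = np.linalg.eigh(CD)          # symmetric => real eigenvalues
    top = np.argsort(-np.abs(evals))[:3]       # three largest in absolute magnitude
    A = np.diag(evals[top])                    # 3 x 3 diagonal matrix
    B = evecs[:, top].T                        # 3 x N eigenvector matrix
    return (A @ B).T                           # T: N x 3 embedding coordinates

# toy example: pairwise distances between 5 'textures'
rng = np.random.default_rng(1)
P = rng.random((5, 3))
D = np.linalg.norm(P[:, None] - P[None], axis=-1)   # symmetric, zero diagonal
T = embed_3d(D)
print(T.shape)
```

Step 1 (the information distances themselves) would be computed from equations (1)-(3); here any symmetric zero-diagonal distance array can be fed in.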

Giovanni Pistone

Creative Commons: None (All rights reserved)

Nonparametric Information Geometry
http://www.giannidiorestino.it/GSI2013-talk.pdf
Giovanni Pistone, de Castro Statistics Initiative, Moncalieri, Italy. August 30, 2013

Abstract. The differential-geometric structure of the set of positive densities on a given measure space has raised the interest of many mathematicians after the discovery by C.R. Rao of the geometric meaning of the Fisher information. Most of the research is focused on parametric statistical models. In a series of papers by the author and coworkers a particular version of the nonparametric case has been discussed. It consists of a minimalistic structure modeled according to the theory of exponential families: given a reference density, other densities are represented by the centered log-likelihood, which is an element of an Orlicz space. These mappings give a system of charts of a Banach manifold. It has been observed that, while the construction is natural, the practical applicability is limited by the technical difficulty of dealing with such a class of Banach spaces. It has recently been suggested to replace the exponential function with other functions of similar behaviour but polynomial growth at infinity, in order to obtain more tractable Banach spaces, e.g. Hilbert spaces. We first give a review of our theory, with special emphasis on the specific issues of the infinite-dimensional setting. In a second part we discuss two specific topics, differential equations and the metric connection. The position of this line of research with respect to other approaches is briefly discussed.

References in:
• GP, GSI2013 Proceedings. A few typos corrected in arXiv:1306.0480;
• GP, arXiv:1308.5312.

• If µ1, µ2 are equivalent measures on the same sample space, a statistical model has two representations $L_1(x; \theta)\,\mu_1(dx) = L_2(x; \theta)\,\mu_2(dx)$.
• Fisher's score is a valid option: $s(x; \theta) = \frac{d}{d\theta} \ln L_i(x; \theta)$, i = 1, 2, and $E_\theta[s_\theta] = 0$.
• Each density q equivalent to p is of the form
$q(x) = \frac{e^{v(x)}\, p(x)}{E_p[e^v]} = \exp\left(v(x) - \ln E_p[e^v]\right) p(x)$,
where v is a random variable such that $E_p[e^v] < +\infty$.
• To avoid borderline cases, we actually require $E_p[e^{\theta v}] < +\infty$ for θ in an open interval I ⊃ [0, 1].
• Finally, we require $E_p[v] = 0$.

Plan. Part I: Exponential manifold. Part II: Vector bundles. Part III: Deformed exponential.

Part I: Exponential manifold.

Sets of densities. Definition: $\mathcal{P}_1$ is the set of real random variables f such that $\int f\, d\mu = 1$; $\mathcal{P}_\geq$ is the convex set of probability densities; $\mathcal{P}_>$ is the convex set of strictly positive probability densities: $\mathcal{P}_> \subset \mathcal{P}_\geq \subset \mathcal{P}_1$.
• We define the (differential) geometry of these spaces in a way meant to be a nonparametric generalization of Information Geometry.
• We try to avoid explicit parameterizations of the statistical models and therefore use a parameter-free presentation of differential geometry.
• We construct a manifold modeled on an Orlicz space.
• We look for applications that are intrinsically nonparametric: Statistical Physics, Information Theory, Optimization, Filtering.

Banach manifold. Definition:
1. Let P be a set, E ⊂ P a subset, B a Banach space. A 1-to-1 mapping s : E → B is a chart if the image s(E) = S ⊂ B is open.
2. Two charts s1 : E1 → B1, s2 : E2 → B2, both defined on E1 ∩ E2, are compatible if s1(E1 ∩ E2) is an open subset of B1 and the change-of-chart mapping $s_2 \circ s_1^{-1} : s_1(E_1 \cap E_2) \to s_2(E_1 \cap E_2)$ is smooth.
3. An atlas is a set of compatible charts.
• Condition 2 implies that the model spaces B1 and B2 are isomorphic.
• In our case: P = $\mathcal{P}_>$; the atlas has a chart $s_p$ for each $p \in \mathcal{P}_>$ such that $s_p(p) = 0$, and two domains $E_{p_1}$, $E_{p_2}$ are either equal or disjoint.

Charts on $\mathcal{P}_>$: the model space is an Orlicz Φ-space. If Φ(y) = cosh y − 1, the Orlicz Φ-space $L^\Phi(p)$ is the vector space of all random variables u such that $E_p[\Phi(\alpha u)]$ is finite for some α > 0.

Properties of the Φ-space:
1.
$u \in L^\Phi(p)$ if, and only if, the moment generating function $\alpha \mapsto E_p[e^{\alpha u}]$ is finite in a neighborhood of 0.
2. The set $S_{\leq 1} = \{ u \in L^\Phi(p) : E_p[\Phi(u)] \leq 1 \}$ is the closed unit ball of a Banach space with norm
$\|u\|_p = \inf\{ \rho > 0 : E_p[\Phi(u/\rho)] \leq 1 \}$.
3. $\|u\|_p = 1$ if either $E_p[\Phi(u)] = 1$, or $E_p[\Phi(u)] < 1$ and $E_p[\Phi(u/\rho)] = \infty$ for ρ < 1. If $\|u\|_p > 1$ then $\|u\|_p \leq E_p[\Phi(u)]$; in particular, $\lim_{\|u\|_p \to \infty} E_p[\Phi(u)] = \infty$.

Example: boolean state space.
• In the case of a finite state space, the moment generating function is finite everywhere, but its computation can be challenging.
• Boolean case: $\Omega = \{+1, -1\}^n$, uniform density $p(x) = 2^{-n}$, x ∈ Ω. A generic real function on Ω has the form $u(x) = \sum_{\alpha \in L} \hat u(\alpha)\, x^\alpha$, with $L = \{0,1\}^n$, $x^\alpha = \prod_{i=1}^n x_i^{\alpha_i}$, $\hat u(\alpha) = 2^{-n} \sum_{x \in \Omega} u(x)\, x^\alpha$.
• The moment generating function of u under the uniform density p is
$E_p[e^{tu}] = \sum_{B \in \mathcal{B}(\hat u)} \prod_{\alpha \in B^c} \cosh(t \hat u(\alpha)) \prod_{\alpha \in B} \sinh(t \hat u(\alpha))$,
where $\mathcal{B}(\hat u)$ are those B ⊂ Supp û such that $\sum_{\alpha \in B} \alpha = 0 \bmod 2$.
• $E_p[\Phi(tu)] = \sum_{B \in \mathcal{B}_0(\hat u)} \prod_{\alpha \in B^c} \cosh(t \hat u(\alpha)) \prod_{\alpha \in B} \sinh(t \hat u(\alpha)) - 1$, where $\mathcal{B}_0(\hat u)$ are those B ⊂ Supp û such that $\sum_{\alpha \in B} \alpha = 0 \bmod 2$ and $\sum_{\alpha \in \mathrm{Supp}\, \hat u} \alpha = 0$.

Example: the sphere is not smooth in general.
• $p(x) \propto (a+x)^{-3/2} e^{-x}$, x, a > 0.
• For the random variable u(x) = x, the function
$E_p[\Phi(\alpha u)] = \frac{1}{e^a\, \Gamma(-\tfrac12, a)} \int_0^\infty (a+x)^{-3/2}\, \frac{e^{-(1-\alpha)x} + e^{-(1+\alpha)x}}{2}\, dx - 1$
is convex and lower semi-continuous in α ∈ R, finite for α ∈ [−1, 1], infinite otherwise, hence not smooth.

Isomorphism of $L^\Phi$ spaces. Theorem: $L^\Phi(p) = L^\Phi(q)$ as Banach spaces if $\int p^{1-\theta} q^\theta\, d\mu$ is finite on an open neighborhood I of [0, 1]. This is an equivalence relation p ≍ q, and we denote by $\mathcal{E}(p)$ the class containing p. The two spaces have equivalent norms.
Proof. Assume $u \in L^\Phi(p)$ and consider the convex function $C : (s, \theta) \mapsto \int e^{su} p^{1-\theta} q^\theta\, d\mu$. The restriction $s \mapsto C(s, 0) = \int e^{su} p\, d\mu$ is finite on an open neighborhood $J_p$ of 0; the restriction $\theta \mapsto C(0, \theta) = \int p^{1-\theta} q^\theta\, d\mu$ is finite on the open set I ⊃ [0, 1];
hence there exists an open interval $J_q \ni 0$ where $s \mapsto C(s, 1) = \int e^{su} q\, d\mu$ is finite.

e-charts. Definition (e-chart): for each $p \in \mathcal{P}_>$, consider the chart $s_p : \mathcal{E}(p) \to L^\Phi_0(p)$ given by
$q \mapsto s_p(q) = \log\frac{q}{p} + D(p \| q) = \log\frac{q}{p} - E_p\left[\log\frac{q}{p}\right]$.
For $u \in L^\Phi_0(p)$ let $K_p(u) = \ln E_p[e^u]$ be the cumulant generating function of u, and let $S_p$ be the interior of its proper domain. Define
$e_p : S_p \ni u \mapsto e^{u - K_p(u)} \cdot p$.
Then $e_p \circ s_p$ is the identity on $\mathcal{E}(p)$ and $s_p \circ e_p$ is the identity on $S_p$.

Theorem (Exponential manifold). $\{ s_p : \mathcal{E}(p) \mid p \in \mathcal{P}_> \}$ is an affine atlas on $\mathcal{P}_>$.

Cumulant functional.
• The divergence $q \mapsto D(p \| q)$ is represented in the chart centered at p by $K_p(u) = \log E_p[e^u]$, where $q = e^{u - K_p(u)} \cdot p$, $u \in B_p = L^\Phi_0(p)$.
• $K_p : B_p \to \mathbb{R}_\geq \cup \{+\infty\}$ is convex and its proper domain $\mathrm{Dom}(K_p)$ contains the open unit ball of $B_p$.
• $K_p$ is infinitely Gâteaux-differentiable on the interior $S_p$ of its proper domain and analytic on the unit ball of $B_p$.
• For all $v, v_1, v_2, v_3 \in B_p$ the first derivatives are:
$dK_p(u)\, v = E_q[v]$,
$d^2 K_p(u)(v_1, v_2) = \mathrm{Cov}_q(v_1, v_2)$,
$d^3 K_p(u)(v_1, v_2, v_3) = \mathrm{Cov}_q(v_1, v_2, v_3)$.

Change of coordinates. The following statements are equivalent:
1. $q \in \mathcal{E}(p)$; 2. p ≍ q; 3. $\mathcal{E}(p) = \mathcal{E}(q)$; 4. $\ln\frac{q}{p} \in L^\Phi(p) \cap L^\Phi(q)$.
Moreover:
1. If $p, q \in \mathcal{E}(p) = \mathcal{E}(q)$, the change of coordinates
$s_q \circ e_p(u) = u - E_q[u] + \ln\frac{p}{q} - E_q\left[\ln\frac{p}{q}\right]$
is the restriction of an affine continuous mapping.
2. $u \mapsto u - E_q[u]$ is an affine transport from $B_p = L^\Phi_0(p)$ onto $B_q = L^\Phi_0(q)$.

Summary. If p ≍ q, then $\mathcal{E}(p) = \mathcal{E}(q)$ and $L^\Phi(p) = L^\Phi(q)$.
• $B_p = L^\Phi_0(p)$, $B_q = L^\Phi_0(q)$.
• $S_p = S_q$ and $s_q \circ s_p^{-1} : S_p \to S_q$ is affine:
$s_q \circ s_p^{-1}(u) = u - E_q[u] + \ln\frac{p}{q} - E_q\left[\ln\frac{p}{q}\right]$.
• The tangent map is $d(s_q \circ s_p^{-1})(v) = v - E_q[v]$ (it does not depend on p).

Duality. Young pair (N-function):
• $\phi^{-1} = \phi_*$;
• $\Phi(x) = \int_0^{|x|} \phi(u)\, du$;
• $\Phi_*(y) = \int_0^{|y|} \phi_*(v)\, dv$;
• $|xy| \leq \Phi(x) + \Phi_*(y)$.
Examples of Young pairs $(\phi_*(u),\ \phi(v),\ \Phi_*(x),\ \Phi(y))$:
• $\ln(1+u)$, $e^v - 1$, $(1+|x|)\ln(1+|x|) - |x|$, $e^{|y|} - 1 - |y|$;
• $\sinh^{-1} u$, $\sinh v$, $|x|\sinh^{-1}|x| - \sqrt{1+x^2} + 1$, $\cosh y - 1$.
• $L^{\Phi_*}(p) \times L^\Phi(p) \ni (v, u) \mapsto \langle u, v \rangle_p = E_p[uv]$;
• $\langle u, v \rangle_p \leq 2\, \|u\|_{\Phi,p}\, \|v\|_{\Phi_*,p}$;
• $(L^{\Phi_*}(p))' = L^\Phi(p)$, because $\Phi_*(ax) \leq a^2 \Phi_*(x)$ for a > 1 (the ∆2 condition).

m-charts. For each $p \in \mathcal{P}_>$, consider a second type of chart on $f \in \mathcal{P}_1$:
$\eta_p : f \mapsto \eta_p(f) = f/p - 1$.
Definition (Mixture manifold). The chart is defined for all $f \in \mathcal{P}_1$ such that $f/p - 1$ belongs to ${}^*B_p = L^{\Phi_*}_0(p)$. The atlas $(\eta_p : {}^*\mathcal{E}(p))$, $p \in \mathcal{P}_>$, defines a manifold on $\mathcal{P}_1$. If the sample space is not finite, such a map defines charts neither on $\mathcal{P}_>$ nor on $\mathcal{P}_\geq$.

Example: N(µ, Σ), det Σ ≠ 0, I.
$\mathcal{G} = \left\{ (2\pi)^{-n/2} (\det\Sigma)^{-1/2} \exp\left( -\tfrac12 (x-\mu)^T \Sigma^{-1} (x-\mu) \right) : \mu \in \mathbb{R}^n,\ \Sigma \in \mathrm{Sym}^+_n \right\}$.
With $f_0$ the standard normal density:
$\ln\frac{f(x)}{f_0(x)} = -\tfrac12 \ln(\det\Sigma) - \tfrac12 (x-\mu)^T \Sigma^{-1} (x-\mu) + \tfrac12 x^T x = \tfrac12 x^T (I - \Sigma^{-1}) x + \mu^T \Sigma^{-1} x - \tfrac12 \mu^T \Sigma^{-1} \mu - \tfrac12 \ln(\det\Sigma)$,
$E_{f_0}\left[\ln\frac{f}{f_0}\right] = \tfrac12 (n - \mathrm{Tr}\,\Sigma^{-1}) - \tfrac12 \mu^T \Sigma^{-1} \mu - \tfrac12 \ln(\det\Sigma)$,
$u(x) = \ln\frac{f(x)}{f_0(x)} - E_{f_0}\left[\ln\frac{f}{f_0}\right] = \tfrac12 x^T (I - \Sigma^{-1}) x + \mu^T \Sigma^{-1} x - \tfrac12 (n - \mathrm{Tr}\,\Sigma^{-1})$,
$K_{f_0}(u) = -\tfrac12 (n - \mathrm{Tr}\,\Sigma^{-1}) + \tfrac12 \mu^T \Sigma^{-1} \mu + \tfrac12 \ln(\det\Sigma)$.

Example: N(µ, Σ), det Σ ≠ 0, II. $\mathcal{G}$ as a sub-manifold of $\mathcal{P}_>$:
$\mathcal{G} = \left\{ x \mapsto e^{u(x) - K(u)} f_0(x) \mid u \in H_{1,2} \cap S_{f_0} \right\}$,
where $H_{1,2}$ is the Hermite space of total degree 1 and 2, that is, the vector space generated by the Hermite polynomials $X_1, \dots, X_n$, $(X_1^2 - 1), \dots, (X_n^2 - 1)$, $X_1 X_2, \dots, X_{n-1} X_n$. If the matrix S, $S_{ii} = \beta_{ii} - \tfrac12$, $S_{ij}$
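On a finite state space the exponential chart and the cumulant functional can be checked numerically: $q = e^{u - K_p(u)} \cdot p$ is again a density, and $dK_p(u)v = E_q[v]$. A small sketch (finite-dimensional only, so none of the Orlicz-space subtleties appear; the particular p, u, v below are arbitrary choices of mine):

```python
import numpy as np

p = np.full(4, 0.25)                         # uniform reference density on 4 states

def K(u):
    """Cumulant generating functional K_p(u) = log E_p[e^u]."""
    return np.log((p * np.exp(u)).sum())

u = np.array([0.3, -0.1, 0.2, -0.4])
u -= (p * u).sum()                           # u in B_p: E_p[u] = 0
q = np.exp(u - K(u)) * p                     # e_p(u): a new strictly positive density

v = np.array([1.0, 2.0, -1.0, 0.5])
v -= (p * v).sum()                           # direction in B_p
eps = 1e-6                                   # central difference for dK_p(u)v
dK = (K(u + eps * v) - K(u - eps * v)) / (2 * eps)
print(q.sum(), dK, (q * v).sum())            # q sums to 1; dK matches E_q[v]
```

The same two identities (normalization and first derivative) are exactly what makes $K_p$ represent the divergence and $e_p$ invert the chart $s_p$ in the text above.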

Sheng YI

Creative Commons: None (All rights reserved)

A Subspace Learning of Dynamics on a Shape Manifold: A Generative Modeling Approach
Sheng Yi* and H. Krim, VISSTA, ECE Dept., NCSU, Raleigh NC 27695 (*GE Research Center, NY). Thanks to AFOSR. 10/15/13.

Outline: motivation; statement of the problem; highlight of key issues and brief review; proposed model and solution; experiments.

Problem statement. Given shape sequences X(t) and low-dimensional representations Z(t), we look for a subspace that preserves geometrical properties of the data in the original space.

Related work:
• Point-wise subspace learning: PCA, MDS, LLE, ISOMAP, Hessian LLE, Laplacian Eigenmaps, Diffusion Maps, LTSA [T. Wittman, "Manifold Learning Techniques: So Which is the Best?", UCLA].
• Curve-wise subspace learning: Whitney embedding [D. Aouada and H. Krim, IEEE Trans. IP, 2010].
• Shape manifolds:
  - Kendall's shape space: based on landmarks.
  - Klassen et al.'s shape space: functional representation; concise description of the tangent space and fast implementation.
  - Michor & Mumford's shape space: focus on parameterization; complex description of the tangent space and heavy computation.
  - Trouvé and Younes: diffeomorphism approach.

Contribution summary:
• The proposed subspace learning is invertible (original sequence → subspace sequence → reconstructed sequence).
• The parallel transport of representative frames defined by a metric on the shape manifold preserves curvatures in the subspace.
• Ability to apply ambient-space calculus instead of relying essentially on manifold calculus.

Shape representation (from curve to shape, Klassen et al.). A closed planar curve $\alpha(s) = (x(s), y(s)) \in \mathbb{R}^2$ is arc-length parameterized so that $\frac{\partial \alpha}{\partial s} = (\cos\theta(s), \sin\theta(s))$ for a direction function θ(s) (simple closed curves, modulo similarity transformations), subject to
Closure: $\int_0^{2\pi} \cos\theta(s)\, ds = 0$, $\int_0^{2\pi} \sin\theta(s)\, ds = 0$;
Rotation: $\frac{1}{2\pi} \int_0^{2\pi} \theta(s)\, ds = \pi$.

Dynamic modeling on a manifold. The core idea: a process $X_t$ on M is driven by a process Z(t) on $\mathbb{R}^n$ through a moving frame $\{V_i\}$:
$dX_t = \sum_{i=1}^{\dim(M)} V_i(X_t)\, dZ_i(t) \in T_{X_t} M$.

Parallel transport: tangents along the curve are parallel-transported between $X_0$ and $X_1$ on M [Yi et al., IEEE IP, 2012].

The core idea: adaptively select the frame so as to represent $dX_t$ in a lower-dimensional space; PCA is applied to vectors parallel-transported to a common tangent space, yielding a driving process $dZ_i(t)$ in $\mathbb{R}^{\dim(M)}$.

Formulation of curves on the shape manifold. In the original space: a shape curve on the shape manifold, with vectors spanning its tangent spaces [Yi et al., IEEE IP, 2012] and a Euclidean driving process. In a subspace: vectors spanning a subspace and a driving process in that subspace.

Core idea:
• Restrict the selection of V to parallel frames on the manifold.
• Advantages of a parallel moving frame: angles between tangent vectors are preserved, so with the same sampling scheme curvatures are preserved as well; given the initial frame and initial location on the manifold, the original curve can be reconstructed.
• Find an $L^2$-optimal V; the Euclidean distance is used here because the comparison takes place within a tangent space of the manifold.
• Given a parallel transport on the shape manifold, under some mild assumptions the solution is obtained as a PCA.

Parallel transport flow chart. By the definition of parallel transport, a discrete approximation
of the derivative is taken; the tangent space of the shape manifold is normal to b1, b2, b3, which yields a linear system.

Experiments. Data:
• Moshe Blank et al., ICCV 2005, http://www.wisdom.weizmann.ac.il/~vision/SpaceTimeActions.html
• Kimia shape database: Sharvit, D. et al., Content-Based Access of Image and Video Libraries, 1998.
Actions: walk, run, jump, gallop sideways, bend, one-hand wave, two-hands wave, jump in place, jumping jack, skip.

Results: reconstruction experiment (PCA in Euclidean space vs. the proposed method); further reconstructions and embeddings; other embedding results; experiments on curvature preservation; generative reconstruction.

Conclusions:
• A low-dimensional embedding of a parallelly transported shape flow is proposed.
• A learning-based inference framework is achieved.
• A generative model for various shape-based activities is obtained.
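The Klassen et al. direction-function representation used above is easy to exercise numerically: for the unit circle one may take θ(s) = s, which satisfies both the closure constraints and the rotation-index constraint, and integrating the direction field recovers a closed curve. A sketch (discretization size is my own choice):

```python
import numpy as np

M = 1000
s = np.linspace(0, 2 * np.pi, M, endpoint=False)
ds = 2 * np.pi / M

theta = s.copy()                     # unit circle: alpha'(s) = (cos s, sin s)

# constraints of the direction-function representation
closure = (np.sum(np.cos(theta)) * ds,       # should be ~ 0
           np.sum(np.sin(theta)) * ds)       # should be ~ 0
rotation = np.sum(theta) * ds / (2 * np.pi)  # should be ~ pi (rotation index 1)

# reconstruct the curve by integrating the direction field
x = np.cumsum(np.cos(theta)) * ds
y = np.cumsum(np.sin(theta)) * ds
gap = np.hypot(x[-1] - x[0], y[-1] - y[0])   # closed curve: endpoints nearly meet
print(closure, rotation, gap)
```

For an arbitrary θ the closure integrals are generally nonzero; Klassen et al. project onto the constraint set, which is the step that makes the representation a (pre-)shape manifold rather than a linear function space.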

Xavier Pennec

Creative Commons: None (All rights reserved)

Bi-invariant Means on Lie Groups with Cartan-Schouten Connections
Xavier Pennec, Asclepios team, INRIA Sophia-Antipolis Méditerranée, France. GSI, August 2013.

Computational Anatomy: design mathematical methods and algorithms to model and analyze the anatomy.
• Statistics of organ shapes across subjects in species, populations, diseases: mean shape; shape variability (covariance).
• Model organ development across time (heart-beat, growth, ageing): predictive (vs. descriptive) models of evolution; correlation with clinical variables.

Statistical analysis of geometric features from noisy geometric measures:
• Tensors, covariance matrices; curves, tracts; surfaces; transformations (rigid, affine, locally affine, diffeomorphisms).
Goal: deal with noise consistently on these non-Euclidean manifolds; a consistent statistical (and computing) framework.

Statistical analysis of the scoliotic spine. Data: 307 scoliotic patients from Montreal's St-Justine Hospital; 3D geometry from multi-planar X-rays; articulated model: 17 relative poses of successive vertebrae. Statistics: the main translation variability is axial (growth?); the main rotational variability is around the anterior-posterior axis; the 4 first variation modes are related to King's classes [J. Boisvert et al., ISBI'06, AMDO'06 and IEEE TMI 27(4), 2008].

Morphometry through deformations [D'Arcy Thompson 1917, Grenander & Miller]:
Measure of deformation: an observation is a "random" deformation of a reference template; the deterministic template captures anatomical invariants [atlas ~ mean]; the random deformations capture geometrical variability [covariance matrix].

Hierarchical deformation model: varying deformation atoms for each subject at the subject level; a spatial structure of the anatomy common to all subjects at the population level; Aff(3)-valued trees [Seiler, Pennec, Reyes, Medical Image Analysis 16(7):1371-1384, 2012].

Outline: Riemannian frameworks on Lie groups; Lie groups as affine connection spaces; a glimpse of applications in infinite dimensions; conclusion and challenges.

Riemannian geometry is a powerful structure to build consistent statistical computing algorithms:
• Shape spaces & directional statistics [Kendall StatSci 89; Small 96; Dryden & Mardia 98].
• Numerical integration, dynamical systems & optimization [Helmke & Moore 1994; Hairer et al. 2002]; matrix Lie groups [Owren BIT 2000; Mahony JGO 2002]; optimization on matrix manifolds [Absil, Mahony, Sepulchre, 2008].
• Information geometry (statistical manifolds) [Amari 1990 & 2000; Kass & Vos 1997; Oller & Corcuera Ann. Stat. 1995; Bhattacharya & Patrangenaru, Ann. Stat. 2003 & 2005].
• Statistics for image analysis: rigid body transformations [Pennec PhD96]; general Riemannian manifolds [Pennec JMIV98, NSIP99, JMIV06]; PGA for M-Reps [Fletcher IPMI03, TMI04]; planar curves [Klassen & Srivastava PAMI 2003].
• Geometric computing: subdivision schemes [Rahman, ..., Donoho, Schröder SIAM MMS 2005].
The geometric framework: Riemannian manifolds. A Riemannian metric gives a dot product on each tangent space, hence speed and length of curves; geodesics are length-minimizing curves and define the Riemannian distance. Basic operators (Euclidean space vs. Riemannian manifold), writing $\overrightarrow{xy}$ for the vector from x to y:
• Subtraction: $\overrightarrow{xy} = y - x$ vs. $\overrightarrow{xy} = \mathrm{Log}_x(y)$;
• Addition: $y = x + \overrightarrow{xy}$ vs. $y = \mathrm{Exp}_x(\overrightarrow{xy})$;
• Distance: $\mathrm{dist}(x, y) = \|y - x\|$ vs. $\mathrm{dist}(x, y) = \|\overrightarrow{xy}\|_x$;
• Gradient descent: $x_{t+\varepsilon} = x_t - \varepsilon\, \nabla C(x_t)$ vs. $x_{t+\varepsilon} = \mathrm{Exp}_{x_t}(-\varepsilon\, \nabla C(x_t))$.
The exponential map (normal coordinate system) unfolds ($\mathrm{Log}_x$) and folds ($\mathrm{Exp}_x$) the manifold: a vector becomes a bipoint (no more equivalence class). Geodesic shooting: $\mathrm{Exp}_x(v) = \gamma_{(x,v)}(1)$; Log: find the vector to shoot right (geodesic completeness!).

Statistical tools: moments. The Fréchet / Karcher mean minimizes the variance:
$\bar x = \arg\min_{y \in M} E[\mathrm{dist}(y, x)^2]$,
and, provided the cut locus has null probability, it is characterized by
$E[\overrightarrow{\bar x x}] = \int_M \mathrm{Log}_{\bar x}(z)\, p(z)\, dM(z) = 0$.
Existence and uniqueness: Karcher / Kendall / Le / Afsari. Gauss-Newton geodesic marching: $\bar x_{t+1} = \mathrm{Exp}_{\bar x_t}(v)$ with $v = E[\mathrm{Log}_{\bar x_t}(x)]$.
Covariance (for PCA) [and higher moments]:
$\Sigma = E[\overrightarrow{\bar x x}\, \overrightarrow{\bar x x}^T] = \int_M \mathrm{Log}_{\bar x}(z)\, \mathrm{Log}_{\bar x}(z)^T\, p(z)\, dM(z)$.
[Oller & Corcuera 95; Bhattacharya & Patrangenaru 2002; Pennec, JMIV06, NSIP'99]

Distributions for parametric tests. Generalization of the Gaussian density:
• stochastic heat kernel p(x, y, t) [complex time dependency];
• wrapped Gaussian [infinite series difficult to compute];
• maximal entropy knowing the mean and the covariance:
$N(y) = k \exp\left(-\tfrac12\, \overrightarrow{\bar x y}^T\, \Gamma\, \overrightarrow{\bar x y}\right)$,
where the normalization constant and the concentration matrix admit curvature expansions of the form $k = (2\pi)^{-n/2} \det(\Sigma)^{-1/2} (1 + O(\sigma^3) + O(\sigma/r))$ and $\Gamma = \Sigma^{-1} - \tfrac13 \mathrm{Ric} + O(\sigma) + O(\sigma/r)$.
Mahalanobis D² distance / test:
$\mu^2_{\bar x}(y) = \overrightarrow{\bar x y}^T\, \Sigma^{-1}\, \overrightarrow{\bar x y}$.
• For any distribution: $E[\mu^2_{\bar x}(x)] = n$.
• For a Gaussian: $\mu^2_{\bar x}$ follows a $\chi^2_n$ law up to curvature corrections of order $O(\sigma^3)$.
[Pennec, NSIP'99, JMIV 2006]
30, 2013 Natural Riemannian Metrics on Lie Groups Lie groups: Smooth manifold G compatible with group structure  Composition g o h and inversion g-1 are smooth  Left and Right translation Lg(f) = g o f Rg (f) = f o g Natural Riemannian metric choices  Chose a metric at Id: Id  Propagate at each point g using left (or right) translation g = < DLg (-1) .x , DLg (-1) .y >Id  Practical computations using left (or right) translations X. Pennec - GSI, Aug. 30, 2013 12   g)(f.LogDL(g)Logfgx).DL(ExpfxExp 1)( Idffff 1)(    Id 13 Example on 3D rotations Space of rotations SO(3):  Manifold: RT.R=Id and det(R)=+1  Lie group ( R1 o R2 = R1.R2 & Inversion: R(-1) = RT ) Metrics on SO(3): compact space, there exists a bi-invariant metric  Left / right invariant / induced by ambient space = Tr(XT Y) Group exponential  One parameter subgroups = bi-invariant Geodesic starting at Id  Matrix exponential and Rodrigue’s formula: R=exp(X) and X = log(R)  Geodesic everywhere by left (or right) translation LogR(U) = R log(RT U) / ExpR(X) = R exp(RT X) / Bi-invariant Riemannian distance  d(R,U) = ||log(RT U)|| = q( RT U ) X. Pennec - GSI, Aug. 30, 2013 14 General Non-Compact and Non-Commutative case No Bi-invariant Mean for 2D Rigid Body Transformations  Metric at Identity:
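The Exp/Log correspondences and the Gauss-Newton geodesic marching for the Fréchet/Karcher mean from the slides above can be sketched numerically on the unit sphere S², where both maps have closed forms. A minimal NumPy illustration; the function names are my own, not from the slides, and on a general manifold Exp/Log would come from a geodesic integrator instead:

```python
import numpy as np

def exp_map(x, v):
    """Riemannian exponential on S^2: shoot the geodesic from x
    with initial velocity v (v tangent at x, i.e. v . x = 0)."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def log_map(x, y):
    """Riemannian logarithm on S^2: tangent vector at x pointing to y,
    whose norm equals the geodesic distance dist(x, y)."""
    p = y - np.dot(x, y) * x              # project y onto T_x S^2
    norm_p = np.linalg.norm(p)
    if norm_p < 1e-12:
        return np.zeros_like(x)
    theta = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    return theta * p / norm_p

def karcher_mean(points, iters=50, tol=1e-10):
    """Gauss-Newton geodesic marching: x <- Exp_x(mean_i Log_x(x_i))."""
    x = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        v = np.mean([log_map(x, p) for p in points], axis=0)
        x = exp_map(x, v)
        if np.linalg.norm(v) < tol:
            break
    return x
```

For a cluster of points symmetric about the north pole, the iteration converges to the pole, matching the characterization E[Log_x̄(x)] = 0.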
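The SO(3) formulas on slide 13 (Rodrigues' formula and the bi-invariant distance d(R, U) = ||log(R^T U)||) can be checked with a short NumPy sketch; the helper names are mine, and the log assumes rotation angles below π:

```python
import numpy as np

def hat(w):
    """so(3) hat map: 3-vector -> skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues' formula: matrix exponential of hat(w)."""
    t = np.linalg.norm(w)
    W = hat(w)
    if t < 1e-12:
        return np.eye(3) + W
    return np.eye(3) + (np.sin(t) / t) * W + ((1 - np.cos(t)) / t**2) * (W @ W)

def so3_log(R):
    """Inverse of Rodrigues' formula: rotation vector of R (angle < pi)."""
    t = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if t < 1e-12:
        return np.zeros(3)
    return (t / (2.0 * np.sin(t))) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def bi_invariant_dist(R, U):
    """d(R, U) = ||log(R^T U)||: the rotation angle separating R and U."""
    return np.linalg.norm(so3_log(R.T @ U))
```

Because d(R, U) depends only on R^T U, left-multiplying both arguments by any rotation Q leaves the distance unchanged, which is the bi-invariance stated on the slide.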

Stefan Sommer

License: none (all rights reserved)

Faculty of Science
Horizontal Dimensionality Reduction and Iterated Frame Bundle Development
Stefan Sommer (sommer@diku.dk), Department of Computer Science, University of Copenhagen
August 30, 2013. Slide 1/14

Dimensionality Reduction in Non-Linear Manifolds (Slide 2/14)
• Aim: find subspaces with coordinates that approximate data in non-linear manifolds
• Not learning non-linear subspaces from data in linear Euclidean spaces (ISOMAP, LLE, etc.)
• Principal Geodesic Analysis (PGA, Fletcher et al., 2004)
 • finds geodesic subspaces: geodesic rays originating from a manifold mean µ ∈ M
 • the non-linear data space is linearized to T_µM
• Geodesic PCA (GPCA, Huckemann et al., 2010)
 • finds principal geodesics: geodesics minimizing residual distances that pass through the principal mean

PGA: analysis relative to the data mean. Slide build-up: data points on the non-linear manifold; intrinsic mean µ; tangent space T_µM; projection of the data points to T_µM; Euclidean PCA in the tangent space. What happens when µ is a poor zero-dimensional descriptor?

Curvature Skews Centered Analysis (Slide 3/14)
Bimodal distribution on S², var. 0.5². Figure: PGA, est. var. 1.07²; HCA, est. var.
0.492 HCA - Horizontal Com- ponent Analysis HCA Properties • data-adapted coordinate system r : RD → M • preserves distances orthogonal to lower order components: x −y = dM(r(x),r(y)) x = (x1 ,...,xd ,0,...,0), y = (x1 ,...,xd ,0,...,0,y ˜d ,0,...,0), 1 ≤ ˜d < d • intrinsic interpretation of covariance: cov(Xd ,X ˜d ) = R2 Xd X ˜d p(Xd ,X ˜d )d(Xd ,X ˜d ) = γd γ˜d ±dM(µ,r(Xd ))dM(r(Xd ),r(X ˜d ))p(r(Xd ,X ˜d ))ds ˜d dsd • orthogonal coordinates/subspaces • coordinate-wise decorrelation with respect to curvature-adapted measure Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 4/14 HCA Properties • data-adapted coordinate system r : RD → M • preserves distances orthogonal to lower order components: x −y = dM(r(x),r(y)) x = (x1 ,...,xd ,0,...,0), y = (x1 ,...,xd ,0,...,0,y ˜d ,0,...,0), 1 ≤ ˜d < d • intrinsic interpretation of covariance: cov(Xd ,X ˜d ) = R2 Xd X ˜d p(Xd ,X ˜d )d(Xd ,X ˜d ) = γd γ˜d ±dM(µ,r(Xd ))dM(r(Xd ),r(X ˜d ))p(r(Xd ,X ˜d ))ds ˜d dsd • orthogonal coordinates/subspaces • coordinate-wise decorrelation with respect to curvature-adapted measure Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 4/14 HCA Properties • data-adapted coordinate system r : RD → M • preserves distances orthogonal to lower order components: x −y = dM(r(x),r(y)) x = (x1 ,...,xd ,0,...,0), y = (x1 ,...,xd ,0,...,0,y ˜d ,0,...,0), 1 ≤ ˜d < d • intrinsic interpretation of covariance: cov(Xd ,X ˜d ) = R2 Xd X ˜d p(Xd ,X ˜d )d(Xd ,X ˜d ) = γd γ˜d ±dM(µ,r(Xd ))dM(r(Xd ),r(X ˜d ))p(r(Xd ,X ˜d ))ds ˜d dsd • orthogonal coordinates/subspaces • coordinate-wise decorrelation with respect to curvature-adapted measure Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and 
Iterated Frame Bundle Development Slide 4/14 HCA Properties • data-adapted coordinate system r : RD → M • preserves distances orthogonal to lower order components: x −y = dM(r(x),r(y)) x = (x1 ,...,xd ,0,...,0), y = (x1 ,...,xd ,0,...,0,y ˜d ,0,...,0), 1 ≤ ˜d < d • intrinsic interpretation of covariance: cov(Xd ,X ˜d ) = R2 Xd X ˜d p(Xd ,X ˜d )d(Xd ,X ˜d ) = γd γ˜d ±dM(µ,r(Xd ))dM(r(Xd ),r(X ˜d ))p(r(Xd ,X ˜d ))ds ˜d dsd • orthogonal coordinates/subspaces • coordinate-wise decorrelation with respect to curvature-adapted measure Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 4/14 HCA Properties • data-adapted coordinate system r : RD → M • preserves distances orthogonal to lower order components: x −y = dM(r(x),r(y)) x = (x1 ,...,xd ,0,...,0), y = (x1 ,...,xd ,0,...,0,y ˜d ,0,...,0), 1 ≤ ˜d < d • intrinsic interpretation of covariance: cov(Xd ,X ˜d ) = R2 Xd X ˜d p(Xd ,X ˜d )d(Xd ,X ˜d ) = γd γ˜d ±dM(µ,r(Xd ))dM(r(Xd ),r(X ˜d ))p(r(Xd ,X ˜d ))ds ˜d dsd • orthogonal coordinates/subspaces • coordinate-wise decorrelation with respect to curvature-adapted measure Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 4/14 HCA Properties • data-adapted coordinate system r : RD → M • preserves distances orthogonal to lower order components: x −y = dM(r(x),r(y)) x = (x1 ,...,xd ,0,...,0), y = (x1 ,...,xd ,0,...,0,y ˜d ,0,...,0), 1 ≤ ˜d < d • intrinsic interpretation of covariance: cov(Xd ,X ˜d ) = R2 Xd X ˜d p(Xd ,X ˜d )d(Xd ,X ˜d ) = γd γ˜d ±dM(µ,r(Xd ))dM(r(Xd ),r(X ˜d ))p(r(Xd ,X ˜d ))ds ˜d dsd • orthogonal coordinates/subspaces • coordinate-wise decorrelation with respect to curvature-adapted measure Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and 
Iterated Frame Bundle Development Slide 4/14 HCA Properties • data-adapted coordinate system r : RD → M • preserves distances orthogonal to lower order components: x −y = dM(r(x),r(y)) x = (x1 ,...,xd ,0,...,0), y = (x1 ,...,xd ,0,...,0,y ˜d ,0,...,0), 1 ≤ ˜d < d • intrinsic interpretation of covariance: cov(Xd ,X ˜d ) = R2 Xd X ˜d p(Xd ,X ˜d )d(Xd ,X ˜d ) = γd γ˜d ±dM(µ,r(Xd ))dM(r(Xd ),r(X ˜d ))p(r(Xd ,X ˜d ))ds ˜d dsd • orthogonal coordinates/subspaces • coordinate-wise decorrelation with respect to curvature-adapted measure Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 4/14 The Frame Bundle • the manifold and frames (bases) for the tangent spaces TpM • F(M) consists of pairs (p,u), p ∈ M, u frame for TpM • curves in the horizontal part of F(M) correspond to curves in M and parallel transport of frames Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 5/14 The Frame Bundle • the manifold and frames (bases) for the tangent spaces TpM • F(M) consists of pairs (p,u), p ∈ M, u frame for TpM • curves in the horizontal part of F(M) correspond to curves in M and parallel transport of frames Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 5/14 The Frame Bundle • the manifold and frames (bases) for the tangent spaces TpM • F(M) consists of pairs (p,u), p ∈ M, u frame for TpM • curves in the horizontal part of F(M) correspond to curves in M and parallel transport of frames Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 5/14 The Frame Bundle • the manifold and frames (bases) for the 
tangent spaces T_pM
• F(M) consists of pairs (p, u), p ∈ M, u a frame for T_pM
• curves in the horizontal part of F(M) correspond to curves in M and parallel transport of frames

The Subspaces: Iterated Development (Slide 6/14)
• manifolds in general provide no canonical generalization of affine subspaces
• SDEs are defined in the frame bundle using development of curves:
  w_t = ∫_0^t u_s^{−1} ẋ_s ds, w_t ∈ R^η,
  i.e. pull-back to Euclidean space using parallel-transported frames u_t
• iterated development constructs subspaces of dimension > 1 (geodesic, polynomial, etc.)
• geodesic developments (multi-step Fermi coordinates) generalize geodesic subspaces

Iterated Development (Slide 7/14)
• V ⊂ R^η linear subspace, V = V_1 ⊥ V_2, η = dim M
• f : V_1 → F(M) smooth map (e.g. an immersion)
• D_f(v_1 + t v_2): development starting at f(v_1)
• vector fields W_1, …, W_{dim V_2} : V → V, the columns of W
• Euclidean integral curve ẇ_t = W(w_t) v_2
• development D_{f,W}(v_1 + t v_2) = (x_t, u_t) ∈ F(M) defined by u̇_t = u_t ẇ_t, x_t = π_{F(M)}(u_t)
• immersion for small v = v_1 + v_2 if W has full V_2 rank
• W_i constant: geodesic development; W_i = D_{e_i} p: polynomial submanifolds; etc.

Horizontal Component Analysis (Slide 8/14)
• distances measured relative to lower-order components
• iterative definition of h_d given h_{d−1}; data points {x_i}
• curves: geodesics x_t^{h_d}(x_i) passing the points closest to x_i on the (d−1)st component h_{d−1}
• projection: π_{h_d}(x_i) = argmin_t d_M(x_t^{h_d}(x_i), x_i)²
• transport: derivatives ẋ_0^{h_d}(x_i) connected in h_{d−1} by parallel transport
• orthogonality: x_t^{h_d} orthogonal to the d−1 basis vectors transported horizontally in h_{d−1}
• horizontal component h_d: the subspace containing the curves x_t^{h_d}(x_i) minimizing
  res_{h_{d−1}}(h_d) = ∑_{i=1}^N d_M(x_i, π_{h_d}(x_i))²
  with the (d−1)th coordinates fixed

1. find geodesic h_1 with (d/dt) h_1|_{t=0} = u_1 that minimizes res(h_1) = ∑_{i=1}^N d_M(x_i, π_{h_1}(x_i))²
2. find u_2 ⊥ u_1 such that x_t^{h_2}(x_i) are geodesics
 • that pass π_{h_1}(x_i)
 • with derivatives ẋ_0^{h_2}(x_i) equal trans.
Ph1 u2 • that minimize resh1 (h2) = ∑N i=1 dM (xi ,πh2 (xi ))2 3 find u3 ⊥ {u1,u2} such that x h3 t (xi) are geodesics • that pass πh2 (xi ) • with derivatives ˙x h3 0 (xi ) par. transp. in h2 • that minimize resh2 (h3) = ∑N i=1 dM (xi ,πh3 (xi ))2 4 and so on . . . Parallel Transport and Local Analysis Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 9/14 Sample on S2 , horz.: uniform, vert.: normal −3 −2 −1 0 1 2 3 −3 −2 −1 0 1 2 3 PGA −3 −2 −1 0 1 2 3 −3 −2 −1 0 1 2 3 HCA Parallel trans- port along first component: Conditional Congruency • data/geometry congruency: data can be approximated by geodesics (Huckemann et al.) • one-dimensional concept • conditional congruency: X ˜d |X1 ,...,Xd is congruent • HCA defines a data-adapted coordinate system that provides a conditionally congruent splitting Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 10/14 Conditional Congruency • data/geometry congruency: data can be approximated by geodesics (Huckemann et al.) • one-dimensional concept • conditional congruency: X ˜d |X1 ,...,Xd is congruent • HCA defines a data-adapted coordinate system that provides a conditionally congruent splitting Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 10/14 Conditional Congruency • data/geometry congruency: data can be approximated by geodesics (Huckemann et al.) 
• one-dimensional concept • conditional congruency: X ˜d |X1 ,...,Xd is congruent • HCA defines a data-adapted coordinate system that provides a conditionally congruent splitting Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 10/14 Conditional Congruency • data/geometry congruency: data can be approximated by geodesics (Huckemann et al.) • one-dimensional concept • conditional congruency: X ˜d |X1 ,...,Xd is congruent • HCA defines a data-adapted coordinate system that provides a conditionally congruent splitting Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 10/14 Components May Flip Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 11/14 x2 x3 x4 (p) x1 = 0 slice x4 x3 x1 (q) x2 = 0 slice −1 0 1 −1.5 −1 −0.5 0 0.5 1 1.5 −1.5 −1 −0.5 0 0.5 1 1.5 x3 x 2 x1 (r) HCA visualization Figure: 3-dim manifold 2x2 1 −2x2 2 +x2 3 +x2 4 = 1 in R4 with samples from two Gaussians with largest variance in the x2 direction (0.62 vs. 0.42 ). (a,b) Slices x1 = 0 and x2 = 0. (c) The second HCA horizontal component has largest x2 component (blue vector) whereas the second PGA component has largest x1 component (red vector). 
Corpora Callosa Corpus callosum variation: 3σ1 along h1, 3σ2 along h2 Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 12/14 Corpora Callosa Corpus callosum variation: 3σ1 along h1, 3σ2 along h2 (Loading corporacallosa.mp4) Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 12/14 Summary • Horizontal Component Analysis performs PCA-like dimensionality reduction in Riemannian manifolds • subspaces constructed from iterated frame bundle development • the implied coordinate system • is data adapted • preserves certain pairwise-distances and orthogonality • provides covariance interpretation • decorrelates curvature-adapted measure • provides conditionally congruent components • handles multi-modal distribution with spread over large-curvature areas Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 13/14 References - Sommer: Horizontal Dimensionality Reduction and Iterated Frame Bundle Development, GSI 2013. - Sommer et al.: Optimization over Geodesics for Exact Principal Geodesic Analysis, ACOM, in press. - Sommer et al.: Manifold Valued Statistics, Exact Principal Geodesic Analysis and the Effect of Linear Approximations, ECCV 2010. http://github.com/nefan/smanifold Stefan Sommer (sommer@diku.dk) (Department of Computer Science, University of Copenhagen) — Horizontal Dimensionality Reduction and Iterated Frame Bundle Development Slide 14/14
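The first step of the construction above (finding h_1) can be sketched numerically. The toy below is an assumption-laden stand-in, not the talk's frame-bundle implementation: it works on S^2 with a known base point and a grid search over tangent directions, using the closed-form projection onto great circles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data on S^2, spread anisotropically in the tangent plane at mu
# (largest variance along e1, so the first component should align with e1).
mu = np.array([0.0, 0.0, 1.0])                    # base point (assumed known)
e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
coeffs = rng.normal(size=(50, 2)) * np.array([0.6, 0.1])

def sphere_exp(p, v):
    """Exponential map on the unit sphere."""
    n = np.linalg.norm(v)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * v / n

X = np.array([sphere_exp(mu, a * e1 + b * e2) for a, b in coeffs])

def residual(theta):
    """res(h1) for the great circle through mu with direction
    u(theta) = cos(theta) e1 + sin(theta) e2: sum of squared geodesic
    distances from the data to their projections onto the circle."""
    u = np.cos(theta) * e1 + np.sin(theta) * e2
    P = np.outer(mu, mu) + np.outer(u, u)         # projector onto span{mu, u}
    proj = X @ P
    proj /= np.linalg.norm(proj, axis=1, keepdims=True)
    d = np.arccos(np.clip(np.sum(X * proj, axis=1), -1.0, 1.0))
    return np.sum(d ** 2)

thetas = np.linspace(0.0, np.pi, 360)
theta1 = thetas[np.argmin([residual(t) for t in thetas])]
print("first component residual:", residual(theta1))
```

With the anisotropic sample above, the minimizing direction stays close to e1, and its residual is far below that of the orthogonal direction.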

Marco Lorenzi, Xavier Pennec

License: none (all rights reserved).

Parallel Transport with the Pole Ladder: Application to Deformations of Time Series of Images. Marco Lorenzi, Xavier Pennec, Asclepios research group, INRIA Sophia Antipolis, France. GSI 2013.

Paradigms of deformation-based morphometry:
• Cross-sectional (Sub A at t1, Sub B at t2): different topologies, large deformations; biological interpretation is not obvious.
• Longitudinal (within-subject): subtle changes, biologically meaningful.

Combining longitudinal and cross-sectional analysis (Sub A, Sub B, template):
• Standard TBM approach (Jacobian determinant analysis): focuses on volume changes only; scalar analysis (statistical power); no modeling.
• Longitudinal trajectories: vector transport is not uniquely defined; theoretical insights are missing.

Diffeomorphic registration:
• Stationary Velocity Field (SVF) setting [Arsigny 2006]: v(x) is a stationary velocity field; the Lie group Exp(v) is a geodesic wrt the Cartan connections (non-metric); the geodesic is defined by the SVF.
• LDDMM setting [Trouvé, 1998]: v(x, t) is a time-varying velocity field; the Riemannian exp_id(v) is a metric geodesic wrt the Levi-Civita connection; the geodesic is defined by its initial momentum.
Transporting trajectories: parallel transport of initial tangent vectors.

Schild's ladder [Schild, 1970], from relativity to image processing: given a curve through points P0, P1, P2, … and a vector A at P0, build a geodesic parallelogram P0, P'0, P1, P'1 to transport A to A' along the curve.

Schild's ladder, intuitive application to images: transport the longitudinal deformation T0 along the inter-subject registration [Lorenzi et al., IPMI 2011]. For a time series t0, t1, t2, t3:
• multiple geodesics must be evaluated for each time-point
• parallel transport is not consistently computed among time-points

The Pole Ladder: an optimized Schild's ladder in which the inter-subject geodesic C itself is used as the diagonal, transporting A to -A' (and hence A').

Pole ladder, equivalence to Schild's ladder: for a symmetric connection, B is the parallel transport of A; the construction is locally linear, so the pole ladder coincides with the Schild's ladder. For a time series t0, t1, t2, t3:
• the number of geodesics required is minimized
• parallel transport is consistently computed among time-points

Pole ladder in the SVF setting [Lorenzi et al., IPMI 2011]: B ≈ A + [v, A] + ½ [v, [v, A]] by the Baker-Campbell-Hausdorff (BCH) formula (Bossa 2007); iterative computation with steps v/n.

Synthetic example: ventricle expansion from a real time series (baseline, Time 1, …, Time 4). Comparison with:
• Schild's ladder
• vector reorientation
• conjugate action
• scalar transport
Transport consistency: a deformation can be transported as a vector measure (vector transport) or summarized by a scalar (scalar transport, e.g. the log-Jacobian determinant).
Quantitative analysis:
• the pole ladder compares well with respect to scalar transport
• Schild's ladder leads to high variability

Application to Alzheimer's disease: group-wise statistics and extrapolation of longitudinal trajectories. Longitudinal changes in Alzheimer's disease (141 subjects, ADNI data): expansion/contraction maps, Student's t statistic. Comparison with standard TBM (pole ladder vs. scalar transport): consistent results, equivalent statistical power.

Conclusions:
• a general framework for the parallel transport of deformations (it does not necessarily require the choice of a metric)
• minimal number of computations for the transport of time series of deformations
• efficient solution in the SVF setting
• consistent statistical results
• multivariate group-wise analysis of longitudinal changes
Perspectives:
• further investigation of numerical issues (step size)
• comparison with other numerical methods for parallel transport in diffeomorphic registration (Younes, 2007)

Thank you. GSI 2013
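The BCH formula on the slides can be checked numerically. The sketch below is a stand-in that uses matrix commutators in place of Lie brackets of stationary velocity fields; it verifies that A + [v, A] + ½[v, [v, A]] is the second-order truncation of the exact adjoint transport e^{ad_v} A for small v.

```python
import numpy as np

def ad(v, a):
    """Lie bracket stand-in: the matrix commutator [v, a] = va - av."""
    return v @ a - a @ v

def transport_bch(v, a):
    """Second-order BCH approximation from the slides:
    B ≈ A + [v, A] + 1/2 [v, [v, A]]."""
    return a + ad(v, a) + 0.5 * ad(v, ad(v, a))

def transport_exact(v, a, terms=25):
    """Adjoint transport Ad_{exp(v)} a = e^{ad_v} a, summed as a series."""
    out, term = a.copy(), a.copy()
    for k in range(1, terms):
        term = ad(v, term) / k
        out = out + term
    return out

rng = np.random.default_rng(1)
v = 0.01 * rng.normal(size=(3, 3))   # small "velocity"
a = rng.normal(size=(3, 3))          # "deformation" to transport
err = np.linalg.norm(transport_bch(v, a) - transport_exact(v, a))
print("truncation error:", err)
```

The error shrinks as the cube of the size of v, which is why the iterative scheme on the slides splits v into n small steps v/n.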

Olivier Ruatta

License: none (all rights reserved).

On the geometry and the deformation of shapes represented by piecewise continuous Bézier curves, with application to shape optimization. Olivier Ruatta, XLIM, UMR 7252, Université de Limoges / CNRS. Geometric Science of Information 2013. (Slides: "Free Forms for Shapes Optimisation", 32 slides.)

Motivation: shape optimization problems. Let Ω ⊂ P(R^2) be such that for each ω ∈ Ω the boundary ∂ω of this "region" is a regular curve (i.e. piecewise continuous here). Let F : Ω → R+ be a positive real-valued function.
Problem. Find ω0 ∈ Ω such that F(ω0) ≤ F(ω) for all ω ∈ Ω.
Very often, computing F(ω) requires solving a system of PDEs. Two problems arise:
• the cost of computing F(ω) and its differentiability (and the computation of its derivatives),
• the compatibility of the shape space with the discretization of R^2 used for the system of PDEs.

Shape optimization methods:
• Geometric gradient techniques (level sets, …): compute how to deform the boundary of the shape and deform it in a coherent way [Hadamard, Pierre, Henrot, Allaire, Jouve, …].
• Relaxation methods (SIMP, …): compute a density representing the support of the shape [Bendsøe, Sigmund, …].
• Topological gradient: generally for PDEs, remove or add finite elements containing the shape [Masmoudi, Sokolowski, …].
• Parametric optimization: reduce the shapes to a small space controlled by few parameters and treat the problem as a parametric optimization problem [Elissa (Dido in Latin, 4th century BC), …, Goldberg, …].
Our approach: mix the best aspects of the first (level sets) and the last (parametric) approaches.

Bézier curves. Let P_0, …, P_d ∈ R^2; we define a family of curves parametrized over [0, 1]:
• B([P_0], t) := P_0 (degree 0 Bézier curve),
• B([P_0, P_1], t) := (1 - t) B([P_0], t) + t B([P_1], t) = (1 - t) P_0 + t P_1 (degree 1 Bézier curve),
• …
• B([P_0, …, P_d], t) := (1 - t) B([P_0, …, P_{d-1}], t) + t B([P_1, …, P_d], t) (degree d Bézier curve).
These are polynomial curves. The points P_0, …, P_d ∈ R^2 are called the control polygon of the curve defined by B([P_0, …, P_d], t).

Bernstein polynomials.
Definition. Let d be a positive integer; for all i ∈ {0, …, d} we define b_{i,d}(t) = C(d, i) (1 - t)^{d-i} t^i. The polynomials b_{0,d}, …, b_{d,d} are called the Bernstein polynomials of degree d.
Proposition. The Bernstein polynomials of degree d form a basis of the vector space of polynomials of degree at most d.

Bernstein polynomials and Bézier curves.
Theorem. B([P_0, …, P_d], t) = Σ_{i=0}^d P_i b_{i,d}(t).
Corollary. Every curve with a polynomial parametrization of degree at most d can be represented as a Bézier curve of degree at most d.
Definition. B_d denotes the space of all Bézier curves of degree at most d.

Structure of B_d. Write E = R^2 and consider the map Ψ_d : E^{d+1} → B_d defined by Ψ_d(P_0, …, P_d) = B([P_0, …, P_d], t).
Proposition. Ψ_d is a linear isomorphism between E^{d+1} and B_d.
Let t = (t_0 = 0 < t_1 < ⋯ < t_d = 1) be a subdivision of [0, 1]. We define the sampling map S_{t,d} : Γ(t) ∈ B_d → (Γ(t_0), …, Γ(t_d)) ∈ E^{d+1}.
Proposition. S_{t,d} is a linear isomorphism between B_d and E^{d+1}.
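The recursive definition and the Bernstein form can be checked against each other; a minimal sketch (plain numpy, a cubic curve in R^2) evaluates the same curve both ways.

```python
import numpy as np
from math import comb

def de_casteljau(points, t):
    """Evaluate B([P0,...,Pd], t) by the recursive definition
    B([P0..Pd], t) = (1-t) B([P0..P_{d-1}], t) + t B([P1..Pd], t)."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bernstein_eval(points, t):
    """Evaluate the same curve via the Bernstein basis: sum_i Pi b_{i,d}(t)."""
    pts = np.asarray(points, dtype=float)
    d = len(pts) - 1
    b = np.array([comb(d, i) * (1 - t) ** (d - i) * t ** i for i in range(d + 1)])
    return b @ pts

P = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
for t in (0.0, 0.25, 0.5, 1.0):
    assert np.allclose(de_casteljau(P, t), bernstein_eval(P, t))
print(de_casteljau(P, 0.5))  # -> [2.  1.5]
```

The agreement of the two evaluators is exactly the theorem above.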
Evaluation-Interpolation. Let t = (t_0 = 0 ≤ t_1 ≤ ⋯ ≤ t_d = 1) be a subdivision of [0, 1] and let P_0, …, P_d ∈ E. Define the collocation matrix B_{t,d} whose entry (i, j) is b_{j,d}(t_i).
Proposition (Evaluation). Applying B_{t,d} to the stacked rows (P_0^T, …, P_d^T) yields the stacked rows (B([P_0, …, P_d], t_0)^T, …, B([P_0, …, P_d], t_d)^T).

Multi-evaluation example: for t = (0, 1/3, 2/3, 1), the samples (M_0^T, M_1^T, M_2^T, M_3^T) are obtained by applying B_{t,3} to (P_0^T, P_1^T, P_2^T, P_3^T). (Figure: control polygon P_0, …, P_3 and sampled points M_0, …, M_3.)

Evaluation-Interpolation. Let t = (t_0 = 0 ≤ t_1 ≤ ⋯ ≤ t_d = 1) be a subdivision of [0, 1] and let M_0, …, M_d ∈ E.
Problem. Find P_0, …, P_d ∈ E such that B([P_0, …, P_d], t_i) = M_i for all i ∈ {0, …, d}.
Proposition (Interpolation). The points defined by (P_0^T, …, P_d^T) = B_{t,d}^{-1} (M_0^T, …, M_d^T) solve the problem.

Summary 1. We have three spaces:
• P_d ≅ E^{d+1}, the vector space of control polygons,
• S_{t,d} ≅ E^{d+1}, the vector space of samplings of Bézier curves associated to a subdivision t,
• B_d, the vector space of degree d Bézier parametrizations.
Proposition. The diagram of isomorphisms commutes: sampling the curve Ψ_d(P) at t (the evaluation map E_{t,d}) equals applying B_{t,d} to the control polygon P.

Deformation problem. Let Γ(t) := B([P_0, …, P_d], t) be a degree d Bézier curve such that E_{t,d}(Γ) = M = (M_0^T, …, M_d^T), and let δM = (δM_0^T, …, δM_d^T) ∈ T_M S_{t,d}. We consider the following problem:
Problem (Deformation problem). Writing P = (P_0^T, …, P_d^T), find δP ∈ T_P P_d such that Λ(t) := B([P_0 + δP_0, …, P_d + δP_d], t) satisfies Λ(t_i) = M_i + δM_i for all i ∈ {0, …, d}.
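The Evaluation and Interpolation propositions amount to applying the collocation matrix B_{t,d} and its inverse; a minimal numpy sketch:

```python
import numpy as np
from math import comb

def bernstein_matrix(ts):
    """Collocation matrix B_{t,d}: entry (i, j) = b_{j,d}(t_i)."""
    d = len(ts) - 1
    return np.array([[comb(d, j) * (1 - t) ** (d - j) * t ** j
                      for j in range(d + 1)] for t in ts])

ts = np.linspace(0.0, 1.0, 4)       # subdivision t0 = 0 < ... < t3 = 1, d = 3
M = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 1.0], [3.0, 0.0]])  # samples

B = bernstein_matrix(ts)
P = np.linalg.solve(B, M)           # control points: P = B^{-1} M

# Round trip: evaluating the interpolating curve at the nodes recovers M.
assert np.allclose(B @ P, M)
```

Solving the linear system rather than forming the explicit inverse is the usual numerically preferable choice.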
(Figure: deformation problem, with control points P_0, …, P_3, samples M_0, …, M_3, and prescribed displacements δM_0, …, δM_3.)

Deformation curve.
Proposition (Deformation polygon). Taking δP = B_{t,d}^{-1} δM, the curve Ψ_d(P + δP) solves the deformation problem. δP ∈ T_P P_d is called the deformation polygon, and Ψ_d(δP) ∈ T_{B([P_0,…,P_d],t)} B_d is called the deformation curve.

Piecewise Bézier curves. Let P_1 := (P_{1,0}, …, P_{1,d}) ∈ P_d, P_2 := (P_{2,0}, …, P_{2,d}) ∈ P_d, …, P_N := (P_{N,0}, …, P_{N,d}) ∈ P_d be such that P_{1,d} = P_{2,0}, …, P_{N-1,d} = P_{N,0}. We define the parametrization
B((P_1, …, P_N), t) =
  B([P_{1,0}, …, P_{1,d}], N t)             if t ∈ [0, 1/N[,
  B([P_{2,0}, …, P_{2,d}], N t - 1)         if t ∈ [1/N, 2/N[,
  …
  B([P_{N,0}, …, P_{N,d}], N t - (N - 1))   if t ∈ [(N-1)/N, 1].
We write Ψ(P_1, …, P_N) := B((P_1, …, P_N), t). This is a continuous curve joining P_{1,0} to P_{N,d}, and it is a loop if P_{1,0} = P_{N,d}. (Figure: two cubic pieces joined at P_{1,3} = P_{2,0}.)

Vector spaces of piecewise Bézier curves. We define:
• the vector space P_{N,d} = {(P_1, …, P_N) | P_{1,d} = P_{2,0}, …, P_{N-1,d} = P_{N,0}} ⊂ P_d^N,
• the vector space L_{N,d} = {(P_1, …, P_N) | P_{1,d} = P_{2,0}, …, P_{N-1,d} = P_{N,0}, P_{1,0} = P_{N,d}} ⊂ P_d^N,
• the vector space of piecewise Bézier curves (PBC) B_{N,d} = {B((P_1, …, P_N), t) | (P_1, …, P_N) ∈ P_{N,d}},
• the vector space of piecewise Bézier loops (PBL) B^c_{N,d} = {B((P_1, …, P_N), t) | (P_1, …, P_N) ∈ L_{N,d}}.

Sampling piecewise Bézier curves. Let t = (t_{1,0} := 0, t_{1,1} := 1/(Nd), …, t_{1,d} := 1/N, t_{2,0} = 1/N, …, t_{N-1,d} := (N-1)/N, t_{N,0} := (N-1)/N, …, t_{N,d} := 1) be a multi-regular subdivision and write t_i := (t_{i,0}, …, t_{i,d}). We define the linear maps
E_{t,N,d} : λ(t) ∈ B_{N,d} → (λ(t_{1,0}), …, λ(t_{N,d})) ∈ S_{t,N,d} ⊂ S_d^N,
E^c_{t,N,d} : λ(t) ∈ B^c_{N,d} → (λ(t_{1,0}), …, λ(t_{N,d})) ∈ S^c_{t,N,d} ⊂ S_d^N.
Finally, we define B_{t,N,d} : (P_{1,0}, …, P_{N,d}) ∈ P_{N,d} → (B_{t_1,d} × ⋯ × B_{t_N,d}) (P_{1,0}^T, …, P_{N,d}^T) ∈ S_{t,N,d}.

Summary 2.
Proposition. The diagram of isomorphisms Ψ_{N,d} : P_{N,d} → B_{N,d}, followed by E_{t,N,d} : B_{N,d} → S_{t,N,d}, equals B_{t,N,d} : P_{N,d} → S_{t,N,d}.
Remark. B_{t,N,d}^{-1} = B_{t_1,d}^{-1} × ⋯ × B_{t_N,d}^{-1}.

Deformation problem for PBC. Let Γ(t) ∈ B_{N,d} be such that E_{t,N,d}(Γ) = M := (M_{1,0}^T, …, M_{N,d}^T) ∈ S_{t,N,d}, and let δM = (δM_{1,0}^T, …, δM_{N,d}^T) ∈ T_M S_{t,N,d}. We consider the following problem:
Problem (Deformation problem for PBC). Writing P = (P_{1,0}^T, …, P_{N,d}^T), find δP ∈ T_P P_{N,d} such that Λ(t) := B((P + δP), t) satisfies Λ(t_{i,j}) = M_{i,j} + δM_{i,j} for all i ∈ {1, …, N} and j ∈ {0, …, d}.

Deformation piecewise Bézier curve.
Proposition (Deformation polygons). Taking δP = B_{t,N,d}^{-1} δM, the curve Ψ_{N,d}(P + δP) solves the deformation problem for PBC. δP ∈ T_P P_{N,d} is called the deformation polygons, and Ψ_{N,d}(δP) ∈ T_{B((P_1,…,P_N),t)} B_{N,d} is called the deformation piecewise Bézier curve.

Back to shape optimization. Let ω ⊂ E be such that ∂ω is a piecewise continuous curve and let F : P(E) → R+ be the objective functional. The geometric gradient ∇F(ω) : M ∈ ∂ω → ∇F(ω)(M) ∈ T_M E gives a perturbation of each point of the boundary that decreases the objective functional.
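The deformation proposition δP = B_{t,d}^{-1} δM can be verified on a single Bézier segment (the piecewise case applies the same solve block by block, per the remark on B_{t,N,d}^{-1}); a minimal sketch:

```python
import numpy as np
from math import comb

def bernstein_matrix(ts):
    """Collocation matrix: entry (i, j) = b_{j,d}(t_i)."""
    d = len(ts) - 1
    return np.array([[comb(d, j) * (1 - t) ** (d - j) * t ** j
                      for j in range(d + 1)] for t in ts])

ts = np.linspace(0.0, 1.0, 4)
B = bernstein_matrix(ts)

P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])   # control polygon
M = B @ P                                                        # samples M_i = Gamma(t_i)
dM = np.array([[0.0, 0.1], [0.0, -0.2], [0.0, 0.3], [0.0, 0.0]])  # prescribed displacements

dP = np.linalg.solve(B, dM)    # deformation polygon: dP = B^{-1} dM

# The deformed curve interpolates the displaced samples at the nodes.
assert np.allclose(B @ (P + dP), M + dM)
```

The key point is linearity: the same collocation matrix maps control-point increments to sample increments.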
(Figure: the boundary ∂ω, the objective F(ω) ∈ R+, and the gradient vector ∇F(ω)(M) at a boundary point M.)

Basic idea of the approach. The space of admissible shapes is Ω_{N,d} := {ω ∈ P(E) | ∂ω ∈ B^c_{N,d}}; let ω ∈ Ω_{N,d} with ∂ω = B((P_1, …, P_N), t). Let M = (M_{1,0}^T, …, M_{N,d}^T) = E_{t,N,d}(B((P_1, …, P_N), t)). To obtain a better shape we compute δM = (∇F(ω)(M_{1,0})^T, …, ∇F(ω)(M_{N,d})^T), then δP = B_{t,N,d}^{-1} δM, and set λ(t) = B((P + δP), t). We have:
Proposition. λ(t_{i,j}) = M_{i,j} + ∇F(ω)(M_{i,j}) for all i ∈ {1, …, N} and j ∈ {0, …, d}.

Basic theoretical contribution. Let Ω = {ω ∈ P(E) | ∂ω ∈ C^0_p([0, 1], E)} and let F : Ω → R+ be a smooth function such that ∇F(ω) : ∂ω → TE is everywhere well defined. To every γ ∈ B_{N,d} we associate "the" shape γ̄ such that ∂γ̄ = γ.
Proposition. For all integers N and d and every compatible subdivision t, we associate to F a vector field V_F : B_{N,d} → T B_{N,d} by
B((P), t) → Ψ_{N,d}( B_{t,N,d}^{-1} ( (∇F(B̄(P, t))(B((P), t_{1,0})))^T, …, (∇F(B̄(P, t))(B((P), t_{N,d})))^T ) ).

Optimal shapes and fixed points.
Proposition. Let ω ∈ Ω be such that ∇F(ω) ≡ 0. Then for all N and d and every compatible subdivision t, there is γ ∈ B_{N,d} satisfying V_F(γ) = 0. In other words, every optimum of F induces at least one fixed point of V_F over B_{N,d}.

Meta-algorithm for shape optimization.
Input: an initial shape ω such that ∂ω = B((P), t) ∈ B_{N,d}.
Output: the control polygon P of the boundary of a local minimum of F.
λ ← B((P), t)
while criterion not satisfied do
  δP ← B_{t,N,d}^{-1}( ∇F(λ̄) applied to the samples E_{t,N,d}(λ) )
  P ← P + δP
  λ ← Ψ_{N,d}(P)
end while

Snake-like algorithm for image segmentation. Image segmentation can be interpreted as a shape optimization problem using the "snake" approach. The geometric gradient is built from:
• a "balloon" force making the contour expand,
• the gradient of the intensity of the image (vanishing at the contours).
We use a classical approach: a Canny filter to detect contours. This problem is used to detect free space for an autonomous robot with a catadioptric sensor.
Snake-like algorithms for classical and omnidirectional image segmentation: joint work with Ouiddad Labbani-I. and Pauline Merveilleux-O. of Université de Picardie - Jules Verne.
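The meta-algorithm can be sketched on a toy objective. The gradient field below (pulling each sampled boundary point toward the unit circle) is an invented stand-in for the PDE-driven geometric gradient ∇F, and the loop runs on a single Bézier segment rather than a closed piecewise curve; the loop structure is the one above.

```python
import numpy as np
from math import comb

def bernstein_matrix(ts):
    d = len(ts) - 1
    return np.array([[comb(d, j) * (1 - t) ** (d - j) * t ** j
                      for j in range(d + 1)] for t in ts])

def grad_field(M, step=0.5):
    """Toy geometric gradient: move each sample toward the unit circle,
    a descent direction for F = sum_i (|M_i| - 1)^2."""
    r = np.linalg.norm(M, axis=1, keepdims=True)
    return -step * (r - 1.0) * M / r

ts = np.linspace(0.0, 1.0, 5)
B = bernstein_matrix(ts)
Binv = np.linalg.inv(B)
P = np.array([[2.0, 0.0], [2.0, 1.5], [0.2, 2.2], [-1.5, 1.5], [-2.0, 0.0]])

for _ in range(50):              # while criterion not satisfied do ...
    M = B @ P                    # sample the current curve: E_{t,N,d}(lambda)
    dP = Binv @ grad_field(M)    # deformation polygon: B^{-1}(gradient at samples)
    P = P + dP                   # update the control polygon

M = B @ P
print(np.linalg.norm(M, axis=1))  # sampled radii, all close to 1
```

Because the collocation matrix is invertible, the sampled points evolve exactly by M ← M + ∇F(M), so each sample radius converges geometrically to 1.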

Michel Berthier

License: none (all rights reserved).

A Riemannian Fourier Transform via Spin Representations. T. Batard, M. Berthier. Geometric Science of Information 2013.

Outline of the talk:
• The Fourier transform for multidimensional signals: examples
• Three simple ideas
• The Riemannian Fourier transform via spin representations
• Applications to filtering

The Fourier transform for multidimensional signals. The problem: how to define a Fourier transform for a signal φ : R^d → R^n that does not reduce to componentwise Fourier transforms and that takes into account the (local) geometry of the graph associated to the signal? Framework of the talk: the signal φ : Ω ⊂ R^2 → R^n is a grey-level image (n = 1) or a color image (n = 3). In the latter case, we want to deal with the full color information in a genuinely non-marginal way. Many propositions already exist (without geometric considerations):

• The T. Ell and S. J. Sangwine transform:
F_μ φ(U) = ∫_{R^2} φ(X) exp(-μ ⟨X, U⟩) dX,
where φ : R^2 → H_0 is a color image and μ is a pure unitary quaternion encoding the grey axis. Then
F_μ φ = A_∥ exp[μ θ_∥] + A_⊥ exp[μ θ_⊥] ν,
where ν is a unitary quaternion orthogonal to μ. This allows one to define an amplitude and a phase in the chrominance and in the luminance.

• The T. Bülow transform:
F_{ij} φ(U) = ∫_{R^2} exp(-2iπ x_1 u_1) φ(X) exp(-2jπ u_2 x_2) dX,
where φ : R^2 → R. Then
F_{ij} φ(U) = F_{cc} φ(U) - i F_{sc} φ(U) - j F_{cs} φ(U) + k F_{ss} φ(U).
This allows one to analyse the symmetries of the signal with respect to the x and y variables.
• The M. Felsberg transform:
F_{e1e2e3} φ(U) = ∫_{R^2} exp(-2π e_1e_2e_3 ⟨U, X⟩) φ(X) dX,
where φ(X) = φ(x_1 e_1 + x_2 e_2) = φ(x_1, x_2) e_3 is a real-valued function defined on R^2 (a grey-level image). The coefficient e_1e_2e_3 is the pseudoscalar of the Clifford algebra R_{3,0}. This transform is well adapted to the monogenic signal.

• The F. Brackx et al. transform:
F_± φ(U) = (1/√(2π))^n ∫_{R^n} exp(i (π/2) Γ_U) exp(-i ⟨U, X⟩) φ(X) dX,
where Γ_U is the angular Dirac operator. For φ : R^2 → R_{0,2} ⊗ C,
F_± φ(U) = (1/2π) ∫_{R^2} exp(±U ∧ X) φ(X) dX,
where exp(±U ∧ X) is a bivector.

Three simple ideas.

Idea 1: the abstract Fourier transform is defined through the action of a group.
• Shift theorem: F φ_α(u) = e^{2iπαu} F φ(u), where φ_α(x) = φ(x + α). Here the involved group is the group of translations of R. The action is given by (α, x) → x + α =: τ_α(x). The mapping (group morphism) χ_u : τ_α → e^{2iπuα} = χ_u(α) ∈ S^1 is a so-called character of the group (R, +). The Fourier transform then reads
F φ(u) = ∫_R χ_u(-x) φ(x) dx.
• More precisely: by means of χ_u, every element of the group is represented as a unit complex number that acts by multiplication on the values of the function. Every u gives a representation, and the Fourier transform is defined on the set of representations. If the group G is abelian, we only deal with the group morphisms from G to S^1 (the characters).
• Some transforms:
- G = (R^n, +): we recover the usual Fourier transform.
- G = SO(2, R): this corresponds to the theory of Fourier series.
- G = Z/nZ: we obtain the discrete Fourier transform.
- In the non-abelian case one has to deal with the equivalence classes of unitary irreducible representations (the Pontryagin dual); some of these are infinite dimensional. Applications include generalized Fourier descriptors with the group of motions of the plane, shearlets, …
• The problem: find a good way to represent the group of translations (R^2, +) in order to make it act naturally on the values (in R^n) of a multidimensional function.

Idea 2: the vectors of R^n can be considered as generalized numbers.
• Usual identifications:
X = (x_1, x_2) ∈ R^2 ↔ z = x_1 + i x_2 ∈ C,
X = (x_1, x_2, x_3, x_4) ∈ R^4 ↔ q = x_1 + i x_2 + j x_3 + k x_4 ∈ H.
The fields C and H are the Clifford algebras R_{0,1} (of the vector space R with the quadratic form Q(x) = -x^2) and R_{0,2} (of the vector space R^2 with the quadratic form Q(x_1, x_2) = -x_1^2 - x_2^2).
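The character viewpoint is easy to check in the discrete case G = Z/nZ: the sketch below computes the Fourier transform by pairing the signal with the characters χ_u(-x) = e^{-2iπux/n} and verifies the shift theorem.

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)
phi = rng.normal(size=n)
x = np.arange(n)

# Characters of G = Z/nZ: chi_u(x) = exp(2i*pi*u*x/n).
# Fourier transform: F(phi)(u) = sum_x chi_u(-x) phi(x).
F = np.array([np.sum(np.exp(-2j * np.pi * u * x / n) * phi) for u in range(n)])
assert np.allclose(F, np.fft.fft(phi))   # matches the standard DFT

# Shift theorem: the transform of phi_a(x) = phi(x + a) is chi_u(a) F(phi)(u).
a = 3
phi_shift = np.roll(phi, -a)             # phi_shift[x] = phi[(x + a) mod n]
F_shift = np.fft.fft(phi_shift)
assert np.allclose(F_shift, np.exp(2j * np.pi * a * x / n) * F)
```

Replacing the unit complex numbers χ_u(x) by Spin(n)-valued characters is exactly the generalization pursued in the talk.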
• Clifford algebras : the vector space Rn with the quadratic form Qp,q is embedded in an algebra Rp,q of dimension 2n that contains scalars, vectors and more generally multivectors such as the bivector u ∧ v = 1 2 (uv − vu) (5) Three simple ideas ..2 The vectors of Rn can be considered as generalized numbers. • The spin groups : the group Spin(n) is the group of elements of R0,n that are products x = n1n2 · · · n2k (3) of an even number of unit vectors of Rn. • Some identifications : Spin(2) ≃ S1 (4) Spin(3) ≃ H1 (5) Spin(4) ≃ H1 × H1 (6) • Natural idea : replace the group morphisms from (R2, +) to S1 , the cha- racters, by group morphisms from (R2, +) to Spin(n), the spin characters. Three simple ideas ..2 The vectors of Rn can be considered as generalized numbers. • The problem : Compute the spin characters, i.e. the group morphisms from (R2, +) to Spin(n) Find meaningful representation spaces for the action of the spin characters Three simple ideas ..2 The vectors of Rn can be considered as generalized numbers. • Spin(3) characters : χu1,u2,B : (x1, x2) −→ exp 1 2 [ B A ( x1 x2 )] = exp 1 2 [(x1u1 + x2u2)B] (3) where A = (u1 u2) is the matrix of frequencies and B = ef with e and f two orthonormal vectors of R3. .. Spinor Fourier Three simple ideas ..2 The vectors of Rn can be considered as generalized numbers. • Spin(4) and Spin(6) characters : (x1, x2) −→ exp 1 2 [ (B1 B2) A ( x1 x2 )] (3) (x1, x2) −→ exp 1 2 [ (B1 B2 B3) A ( x1 x2 )] (4) where A is a 2 × 2, resp. 2 × 3, real matrix and Bi = eifi for i = 1, 2, resp. i = 1, 2, 3, with (e1, e2, f1, f2), resp. (e1, e2, e3, f1, f2, f3), an orthonormal basis of R4, resp. R6. Three simple ideas ..3 The spin characters are parametrized by bivectors. • Fundamental remark : the spin characters are as usual parametrized by frequencies, the entries of the matrix A. But they are also parametrized by bivectors, B, B1 and B2, B1, B2 and B3, depending on the context. • How to involve the geometry ? 
It seems natural to parametrize the spin characters by the bivector corresponding to the tangent plane of the image graph — more precisely, by the field of bivectors corresponding to the fiber bundle of the image graph.

Three simple ideas — 3. The spin characters are parametrized by bivectors.
• Several possibilities for dealing with representation spaces for the action of the spin characters:
– Using $Spin(3)$ characters and the generalized Weierstrass representation of surfaces (T. Friedrich): in "Quaternion and Clifford Fourier Transforms and Wavelets" (E. Hitzer and S.J. Sangwine, Eds.), Trends in Mathematics, Birkhäuser, 2013.
– Using $Spin(4)$ and $Spin(6)$ characters and the so-called standard representations of the spin groups: in IEEE Journal of Selected Topics in Signal Processing, Special Issue on Differential Geometry in Signal Processing, Vol. 7, Issue 4, 2013.

The Riemannian Fourier transform

The spin representations of $Spin(n)$ are defined through complex representations of Clifford algebras. They do not "descend" to the orthogonal group $SO(n, \mathbb{R})$ (since they send $-1$ to $-\mathrm{Id}$, contrary to the standard representations). These are the representations used in physics.

The complex spin representation of $Spin(3)$ is the group morphism

$\zeta_3 : Spin(3) \longrightarrow \mathbb{C}(2)$

obtained by restricting to $Spin(3) \subset (\mathbb{R}_{3,0} \otimes \mathbb{C})^0$ a complex irreducible representation of $\mathbb{R}_{3,0}$.

A color image is considered as a section

$\sigma_\varphi : (x_1, x_2) \mapsto \sum_{k=1}^{3} \big(0, \varphi_k(x_1, x_2)\big) \otimes g_k$

of the spinor bundle

$P_{Spin}(E^3(\Omega)) \times_{\zeta_3} \mathbb{C}^2$

where $E^3(\Omega) = \Omega \times \mathbb{R}^3$ and $(g_1, g_2, g_3)$ is the canonical basis of $\mathbb{R}^3$.
The Riemannian Fourier transform

Dealing with spinor bundles allows varying spin characters. The most natural choice for the field of bivectors $B := B(x_1, x_2)$ generalizing the field of tangent planes is

$B = \gamma_1\, g_1 g_2 + \gamma_2\, g_1 g_3 + \gamma_3\, g_2 g_3$

with

$\gamma_1 = \frac{1}{\delta}, \qquad \gamma_2 = \frac{\sqrt{\sum_{k=1}^3 \varphi_{k,x_2}^2}}{\delta}, \qquad \gamma_3 = -\frac{\sqrt{\sum_{k=1}^3 \varphi_{k,x_1}^2}}{\delta}, \qquad \delta = \sqrt{1 + \sum_{j=1}^{2}\sum_{k=1}^{3} \varphi_{k,x_j}^2}$

(the square root in $\delta$ is what makes $\gamma_1^2 + \gamma_2^2 + \gamma_3^2 = 1$, hence $B^2 = -1$). The operator $B\cdot$ acting on the sections of $S(E^3(\Omega))$, where $\cdot$ denotes the Clifford multiplication, is represented by the $2 \times 2$ complex matrix field

$B\cdot = \begin{pmatrix} i\gamma_1 & -\gamma_2 - i\gamma_3 \\ \gamma_2 - i\gamma_3 & -i\gamma_1 \end{pmatrix}$

Since $B^2 = -1$, this operator has the two eigenvalue fields $i$ and $-i$. Consequently, every section $\sigma$ of $S(E^3(\Omega))$ can be decomposed into $\sigma = \sigma_B^+ + \sigma_B^-$, where

$\sigma_B^+ = \tfrac{1}{2}(\sigma - iB\cdot\sigma), \qquad \sigma_B^- = \tfrac{1}{2}(\sigma + iB\cdot\sigma)$

The Riemannian Fourier transform of $\sigma_\varphi$ is given by

$\mathcal{F}_B \sigma_\varphi(u_1, u_2) = \int_{\mathbb{R}^2} \chi_{u_1,u_2,B(x_1,x_2)}(-x_1, -x_2) \cdot \sigma_\varphi(x_1, x_2)\, dx_1 dx_2$

The decomposition of a section $\sigma_\varphi$ associated to a color image leads to

$\varphi(x_1, x_2) = \int_{\mathbb{R}^2} \sum_{k=1}^{3} \Big[ \varphi_{k+}(u_1, u_2)\, e_{u_1,u_2}(x_1, x_2) \sqrt{\tfrac{1-\gamma_1}{2}} + \varphi_{k-}(u_1, u_2)\, e_{-u_1,-u_2}(x_1, x_2) \sqrt{\tfrac{1+\gamma_1}{2}} \Big] \otimes g_k\, du_1 du_2$

where

$\varphi_{k+} = \varphi_k \sqrt{\tfrac{1-\gamma_1}{2}}, \qquad \varphi_{k-} = \varphi_k \sqrt{\tfrac{1+\gamma_1}{2}}$

Low-pass filtering

Figure: left: original; center: + component; right: − component.
Figure: low-pass filtering on the + component (panels: variance 10000, 1000, 100, 10, and variance → 0).
Figure: low-pass filtering on the − component (same variance sequence).

Thank you for your attention!
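The eigen-decomposition $\sigma = \sigma_B^+ + \sigma_B^-$ above can be sanity-checked numerically from the $2 \times 2$ matrix form of $B\cdot$; a minimal editor-added sketch, with arbitrary unit-norm $\gamma$ values (not taken from any image):

```python
import math

def matvec(M, v):
    """2x2 complex matrix applied to a vector of C^2."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Arbitrary gammas with gamma1^2 + gamma2^2 + gamma3^2 = 1, so that B^2 = -1.
g1, g2 = 0.6, 0.48
g3 = -math.sqrt(1.0 - g1 * g1 - g2 * g2)

B = [[1j * g1, -g2 - 1j * g3],
     [g2 - 1j * g3, -1j * g1]]

sigma = [1.0 + 2.0j, -0.5 + 0.25j]            # a sample section value in C^2
Bs = matvec(B, sigma)
sigma_plus = [0.5 * (s - 1j * b) for s, b in zip(sigma, Bs)]   # (1/2)(sigma - i B.sigma)
sigma_minus = [0.5 * (s + 1j * b) for s, b in zip(sigma, Bs)]  # (1/2)(sigma + i B.sigma)
```

One checks that the two parts recompose $\sigma$ and that $B\cdot$ acts on them as multiplication by $i$ and $-i$ respectively, which is the projector identity behind the $\pm$ components shown in the filtering figures.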

Aurélien Schutz, Lionel Bombrun, Yannick Berthoumieu

Creative Commons license: none (all rights reserved).

K-Centroids-Based Supervised Classification of Texture Images using the SIRV modeling
Aurélien Schutz, Lionel Bombrun, Yannick Berthoumieu
IMS Laboratory, CNRS UMR 5218, Groupe Signal
Schutz, A. and Bombrun, L. and Berthoumieu, Y. (Labo IMS), K-CB of images with SIRV, 28-30 August 2013.

Outline: Introduction — Bayesian classification and information geometry — Textured images — K-CB and results — Conclusion and perspectives.

Database classification
• Database classification: musical genres, image databases, textured image databases, video databases.
• Propositions:
– Information geometry: centroid $\bar\theta_i$ [Choy2007]; [Fisher1925], [Burbea1982], [Pennec1999], [Banerjee2005], [Amari2007], [Nielsen2009].
– Bayesian framework of classification: intrinsic prior $p(\theta \mid H_i)$ [Bayes1763], [Whittaker1915], [Robert1996], [Bernardo2003].
• Prior capability of handling the intra-class diversity.
Bayesian decision
• Data space $X$ where $x \sim P$; parameter space $\Theta$; $P_\Theta$ a prior parametric model on $\Theta$; a Riemannian manifold with $G$ the Fisher information matrix; $N_c$ classes, with decision space $D = \{H_i\}_{i=1}^{N_c}$.
• Prerequisites: likelihood $p(x \mid \theta, H_i)$, prior $p(\theta \mid H_i)$, 0-1 loss $L$.
• Decision rule on $X$ (high computational complexity):

$X_i = \Big\{ x \ \Big|\ \hat H_i = \arg\min_{H_j \in D} -\log \int_{\Theta_j} p(x \mid \theta, H_j)\, p(\theta \mid H_j)\, d\theta \Big\}$

• Decision rule on $\Theta$: minimizing the conditional risk (Duda, Bayesian decision theory).

Intra-class parametric prior $p(\theta \mid H_i)$
• $\bar\theta_i$ is the centroid of class $i = 1, \ldots, N_c$.
Definition of an intrinsic prior based on the Jeffrey divergence
• Prior following a Gaussian distribution on the manifold $\Theta$:

$p(\theta \mid H_i) = Z_i \exp\Big(-\tfrac{1}{2}\, \gamma_{\bar\theta_i,\theta}(1)^T\, C_i\, \gamma_{\bar\theta_i,\theta}(1)\Big)$

Pennec, X., "Probabilities and statistics on Riemannian manifolds: basic tools for geometric measurements," NSIP, 1999.
• Proposition: intrinsic prior as a Gaussian distribution on the manifold $\Theta$, with $\lambda_i = (\bar\theta_i, \sigma_i^2)$:

$p(\theta \mid \lambda_i, H_i) \propto \frac{|G(\bar\theta_i)|^{1/2}}{(\sigma_i \sqrt{2\pi})^d} \exp\Big(-\frac{1}{2\sigma_i^2}\, J\big(p(\cdot \mid \theta), p(\cdot \mid \bar\theta_i)\big)\Big)$

• Jeffrey divergence:

$J\big(p(\cdot \mid \theta), p(\cdot \mid \bar\theta_i)\big) = \int_X \big(p(x \mid \theta) - p(x \mid \bar\theta_i)\big) \log\frac{p(x \mid \theta)}{p(x \mid \bar\theta_i)}\, dx$

Fisher, R.A., "Theory of statistical estimation," Proc. Cambridge Phil. Soc., 22, pp. 700–725, 1925.
Burbea, J. and Rao, C.R., "Entropy differential metric, distance and divergence measures in probability spaces: a unified approach," Journal of Multivariate Analysis, 4, pp. 575–596, 1982.

Optimal decision on $\Theta$
• Decision on $X$ based on empirical Bayes:

$X_i = \Big\{ x \ \Big|\ H_i = \arg\min_{H_j \in D} -\log \int_{\Theta_j} p(x \mid \theta, H_j)\, p(\theta \mid H_j)\, d\theta \Big\}$

Kass, R.E. and Steffey, D., "Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes models)," 1989.
Miyata, Y., "Fully exponential Laplace approximations using asymptotic modes," Journal of the American Statistical Association, 2004.
• Proposition (decision on $X$, Laplace approximation):

$x \in X_i \iff \hat\lambda_i = \arg\min_{\lambda_j \in D} \frac{d}{2} \log(2\sigma_j^2 + 1) + \frac{1}{2\sigma_j^2}\, J\big(p(\cdot \mid \hat\theta(x)), p(\cdot \mid \bar\theta_j)\big)$

• $\hat\theta$ can be the maximum likelihood estimator for $p(x \mid \theta, H_i)$ [Miyata2004].
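To make the Laplace-approximated decision rule concrete, here is a toy editor-added sketch for scalar, zero-mean Gaussian likelihoods, where the Jeffrey divergence has the closed form $J = \tfrac12(\sigma^2/\sigma'^2 + \sigma'^2/\sigma^2) - 1$; the class parameters below are made-up numbers, not taken from the paper:

```python
import math

def jeffrey_gauss(s1, s2):
    """Jeffrey divergence between zero-mean Gaussians N(0, s1^2) and N(0, s2^2)."""
    return 0.5 * (s1 ** 2 / s2 ** 2 + s2 ** 2 / s1 ** 2) - 1.0

def decide(theta_hat, classes, d=1):
    """argmin_j  d/2 * log(2*sigma_j^2 + 1) + J(theta_hat, centroid_j) / (2*sigma_j^2),
    where each class is a pair (centroid parameter, intra-class dispersion sigma_j^2)."""
    def cost(c):
        centroid, sigma2 = c
        return (0.5 * d * math.log(2.0 * sigma2 + 1.0)
                + jeffrey_gauss(theta_hat, centroid) / (2.0 * sigma2))
    return min(range(len(classes)), key=lambda k: cost(classes[k]))
```

The log term penalises very diffuse classes; with equal dispersions the rule reduces to nearest-centroid in Jeffrey divergence.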
Space/scale decomposition
• Real/complex wavelets, Gabor filters, steerable filters, bandelets, grouplets, dual-tree wavelets.
Mallat, S., "A theory for multiresolution signal decomposition: the wavelet representation," IEEE PAMI, 1989.
Do, M. and Vetterli, M., "Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance," IEEE IP, 2002.
Choy, S.-K. and Tong, C.-S., "Supervised texture classification using characteristic generalized Gaussian density," Journal of Mathematical Imaging and Vision, 2007.

Stochastic models for the likelihood $p(x \mid \theta, H_i)$
• Spherically Invariant Random Vector (SIRV): $x = g\sqrt{\tau}$, where $g$ is multivariate Gaussian with covariance $\Sigma$ and $\tau$ follows a Weibull distribution with parameter $a$.
• Joint distribution, with $y = (\tau, g)$ and $\theta = (\Sigma, a)$:

$p(y \mid \theta) = p_G(g \mid \Sigma)\, p_w(\tau \mid a)$

• Separability of the Jeffrey divergence:

$J\big(p(\cdot \mid \theta), p(\cdot \mid \theta')\big) = J\big(p_G(\cdot \mid \Sigma), p_G(\cdot \mid \Sigma')\big) + J\big(p_w(\cdot \mid a), p_w(\cdot \mid a')\big)$

Bombrun, L., Lasmar, N.-E., Berthoumieu, Y. and Verdoolaege, G., "Multivariate texture retrieval using the SIRV representation and the geodesic distance," 2011.
Centroid $\bar\theta_i$ computation
• State of the art (exponential families; centred multivariate Gaussian):

$\bar\Sigma_{R,i} = \Big(\frac{1}{N_i} \sum_{n=1}^{N_i} \Sigma_n^{-1}\Big)^{-1} \qquad \text{and} \qquad \bar\Sigma_{L,i} = \frac{1}{N_i} \sum_{n=1}^{N_i} \Sigma_n$

Banerjee, A., Merugu, S., Dhillon, I. and Ghosh, J., "Clustering with Bregman divergences," 2005.
Nielsen, F. and Nock, R., "Sided and symmetrized Bregman centroids," 2009.
• Steepest-descent algorithm for the Weibull centroid:
Dekker, T.J., "Finding a zero by means of successive linear interpolation," 1969.
Brent, R.P., "An algorithm with guaranteed convergence for finding a zero of a function," 1971.
• Proposition: separated estimation of each centroid,

$\bar\theta_i = \Big( (1-\epsilon)\,\bar\Sigma_{R,i} + \epsilon\,\bar\Sigma_{L,i},\ \arg\min_{a \in \mathbb{R}^+} \frac{1}{N_i} \sum_{n=1}^{N_i} J\big(p_w(\cdot \mid a_n), p_w(\cdot \mid a)\big) \Big)$

Unique centroid versus multiple centroids (K-CB)
Varma, M. and Zisserman, A., "A statistical approach to texture classification from single images," 2005.
• Several centroids per class, $(\bar\theta_{i,k})_{k=1}^{K}$; K-CB likelihood with binary weights $w_k$:

$p_m\big(\theta \mid (H_{i,k})_{k=1}^K\big) = \sum_{k=1}^{K} w_k\, Z_{i,k} \exp\Big(-\frac{1}{2\sigma_i^2}\, J\big(p(\cdot \mid \theta), p(\cdot \mid \bar\theta_{i,k})\big)\Big)$
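In the scalar case, the right- and left-sided Gaussian centroids above reduce to a harmonic-style mean and an arithmetic mean; a minimal editor-added sketch, including the $\epsilon$-interpolation from the proposition (inputs are hypothetical):

```python
def sided_centroids(values):
    """Right-sided (inverse of the mean of inverses) and left-sided
    (arithmetic mean) centroids; scalar stand-ins for Sigma_R and Sigma_L."""
    n = len(values)
    right = n / sum(1.0 / v for v in values)
    left = sum(values) / n
    return right, left

def interpolated_centroid(values, eps):
    """(1 - eps) * Sigma_R + eps * Sigma_L, as in the proposition."""
    right, left = sided_centroids(values)
    return (1.0 - eps) * right + eps * left
```

By the arithmetic-harmonic mean inequality the right-sided centroid never exceeds the left-sided one, so the interpolation sweeps monotonically between the two sided solutions.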
Algorithms for K-CB: K-means (hard C-means)
• Proposition:
1. Assignment of the parametric vector $\theta$:

$\Theta_{i,k} = \Big\{ \theta \ \Big|\ \hat\theta_{i,k} = \arg\min_{\bar\theta_{i,l} \in H_i} \frac{1}{2\sigma_i^2}\, J\big(p(\cdot \mid \theta), p(\cdot \mid \bar\theta_{i,l})\big) \Big\}$

2. Update of $\bar\theta_{i,k}$:

$\bar\theta_{i,k} = \arg\min_{\bar\theta \in \Theta} \int_{\Theta_{i,k}} \frac{1}{2\sigma_i^2}\, J\big(p(\cdot \mid \theta), p(\cdot \mid \bar\theta)\big)\, d\theta$

Textured image databases
• Vision Texture database (VisTex); Brodatz database.

VisTex, SIRV Weibull, Jeffrey divergence

Figure: average kappa index (%) versus number of training samples (2/16 to 14/16), comparing 1-CB [1], 3-CB and 1-NN.

Spatial neigh. | Database | K  | 1-NN (N_Tr = K) | 1-CB [1] (N_Tr = N_Sa/2) | K-CB (N_Tr = N_Sa/2)
3 × 3          | VisTex   | 3  | 83.7 % ±2.0     | 90.4 % ±1.3              | 96.8 % ±1.2
3 × 3          | Brodatz  | 10 | 50.6 % ±2.6     | 79.9 % ±1.5              | 96.2 % ±1.2
1 × 1          | VisTex   | 3  | 78.7 % ±2.3     | 72.7 % ±2.0              | 88.9 % ±1.7
1 × 1          | Brodatz  | 10 | 65.8 % ±2.7     | 70 % ±1                  | 97 % ±2

[1] Choy, S.K. and Tong, C.S., "Supervised texture classification using characteristic generalized Gaussian density," Journal of Mathematical Imaging and Vision 29 (Aug. 2007), 35–47.
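The assignment/update alternation above is plain hard C-means run in parameter space; a generic editor-added sketch with a pluggable divergence (the update step uses the arithmetic mean as a stand-in for the divergence-specific minimiser of the paper):

```python
def k_centroids(params, K, divergence, n_iter=20):
    """Hard C-means on scalar parameters: assign each theta to the centroid
    minimising the divergence, then re-estimate each centroid from its members."""
    centroids = list(params[:K])             # naive initialisation
    for _ in range(n_iter):
        groups = [[] for _ in range(K)]
        for theta in params:                 # assignment step
            k = min(range(K), key=lambda k: divergence(theta, centroids[k]))
            groups[k].append(theta)
        centroids = [sum(g) / len(g) if g else centroids[k]  # update step
                     for k, g in enumerate(groups)]
    return centroids, groups
```

Swapping the squared difference for the closed-form Jeffrey divergence of the Gaussian or Weibull sub-models would reproduce the per-class clustering described on the slide.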
Conclusions and perspectives
• Conclusion:
1. Bayesian classification theory combined with information geometry.
2. Concentrated Gaussian distribution as the prior $p(\theta \mid H_i)$: it is intrinsic when $p(\theta)$ or $L$ depends on the Fisher information matrix $G(\theta)$; the decision rule is performed on $\Theta$.
3. K-centroids-based (K-CB) classification: when the intra-class diversity is too high, a class is given $K$ centroids, obtained by running K-means within each class. Numerical application: K-CB performance is close to 1-NN performance, at low computational complexity.
• Perspectives:
1. K-CB with the Possibilistic Fuzzy C-Means (PFCM) algorithm.
2. Adapting the number of centroids per class.

Brodatz, P., Textures: A Photographic Album for Artists and Designers, 1966.

Julien Ah-Pine

Creative Commons license: none (all rights reserved).

A general framework for comparing heterogeneous binary relations
Julien Ah-Pine (julien.ah-pine@eric.univ-lyon2.fr)
University of Lyon - ERIC Lab
GSI 2013, Paris, 28/08/2013
J. Ah-Pine (University of Lyon), A gen. fram. for compar. heterog. BR, GSI 2013.

Outline
1 Introduction
2 Kendall's general coefficient Γ
3 Another view of Kendall's Γ: relational matrices; reinterpreting Kendall's Γ using RM; the weighted indeterminacy deviation principle
4 Extending Kendall's Γ for heterogeneous BR: heterogeneous BR; a geometrical framework; similarities of order t > 0
5 A numerical example
6 Conclusion

Binary relations (BR)
• A binary relation (BR) $R$ over a finite set $A = \{a, \ldots, i, j, \ldots, n\}$ of $n$ items is a subset of $A \times A$. If $(i, j) \in R$ we say "i is in relation with j for R", denoted $iRj$.
• Equivalence relations (ER) are reflexive, symmetric and transitive BR.
• Order relations (OR) are of different types: preorders, partial orders and total (linear, complete) orders.
– With ties and missing values: preorders (reflexive, transitive BR).
– With no ties but missing values: partial orders (reflexive, antisymmetric, transitive BR).
– With no ties and no missing values: total orders (reflexive, antisymmetric, transitive and complete BR).

Equivalence relations and qualitative variables
• ER are related to qualitative (nominal categorical) variables.
• Example, color of eyes: x = (Brown, Brown, Blue, Blue, Green) for items (a, b, c, d, e).
• $X$ is the ER "has the same color of eyes as"; it can be represented by a graph and its adjacency matrix (AM), denoted $X$, with $X_{ij} = 1$ if $iXj$ and $X_{ij} = 0$ otherwise:

        a b c d e
    a [ 1 1 . . . ]
    b [ 1 1 . . . ]
    c [ . . 1 1 . ]
    d [ . . 1 1 . ]
    e [ . . . . 1 ]

Order relations and quantitative variables
• OR are related to quantitative variables.
• Example, ranking of items: x = (1, 2, 4, 3, 5) for items (a, b, c, d, e).
• $X$ is the OR "has a lower rank than"; its AM $X$ is again such that $X_{ij} = 1$ if $iXj$ and $X_{ij} = 0$ otherwise:

        a b c d e
    a [ 1 1 1 1 1 ]
    b [ . 1 1 1 1 ]
    c [ . . 1 . 1 ]
    d [ . . 1 1 1 ]
    e [ . . . . 1 ]

How to compare the relationships between BR?
• We are given two variables of measurements x and y of the same kind (both qualitative or both quantitative).
• How can we measure the proximity between the BR underlying the two variables?
• How should we deal with heterogeneity?
– When ER have different numbers of categories and different distributions? For example: x = (A, A, B, B, C); y = (D, D, D, D, E).
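The two adjacency (relational) matrices above can be built mechanically from the variables; a small editor-added sketch reproducing both examples:

```python
def er_matrix(labels):
    """RM of the equivalence relation 'has the same label as'."""
    return [[1 if a == b else 0 for b in labels] for a in labels]

def or_matrix(ranks):
    """RM of the (reflexive) order relation 'has a lower rank than'."""
    return [[1 if a <= b else 0 for b in ranks] for a in ranks]
```

Running these on the eye-color and ranking examples reproduces exactly the two matrices displayed on the slides.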
– When OR are of different types? For example: x = (1, 2, 4, 3, 5); y = (1, 1, 1, 4, 5).

Kendall's general coefficient Γ
• In statistics, Kendall [Kendall(1948)] proposed a general correlation coefficient in order to define a broad family of association measures between x and y:

$\Gamma(x, y) = \frac{\sum_{i,j} X_{ij} Y_{ij}}{\sqrt{\sum_{i,j} X_{ij}^2\ \sum_{i,j} Y_{ij}^2}} \qquad (1)$

where $X$ and $Y$ are two square matrices derived from x and y.

Particular cases of Γ
• Particular cases of Γ are given in [Vegelius and Janson(1982), Kendall(1948)].
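Equation (1) is straightforward to evaluate once the score matrices are chosen; the editor-added sketch below uses the Kendall $\tau_a$ scores ($+1$ on concordant ordered pairs, $-1$ on discordant ones):

```python
import math

def gamma_coefficient(X, Y):
    """Kendall's general coefficient: sum(X*Y) / sqrt(sum(X^2) * sum(Y^2))."""
    n = len(X)
    num = sum(X[i][j] * Y[i][j] for i in range(n) for j in range(n))
    dx = sum(v * v for row in X for v in row)
    dy = sum(v * v for row in Y for v in row)
    return num / math.sqrt(dx * dy)

def tau_scores(x):
    """Score matrix for Kendall's tau_a: +1 if x_i < x_j, -1 if x_i > x_j, 0 on ties."""
    return [[(xi < xj) - (xi > xj) for xj in x] for xi in x]
```

With these scores, Γ equals 1 for identical rankings and −1 for reversed ones, matching the usual range of a correlation coefficient.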
Particular cases of Γ for ER:

  Tchuprow's T:  $X_{ij} = \frac{n}{n_u^x} - 1$ if $x_i = x_j$;  $-1$ if $x_i \neq x_j$
  J-index:       $X_{ij} = p^x - 1$ if $x_i = x_j$;  $-1$ if $x_i \neq x_j$

where $n_u^x$ is the number of items in category $u$ of x and $p^x$ is the number of categories of x.

Particular cases of Γ for OR:

  Kendall's $\tau_a$:   $X_{ij} = 1$ if $x_i < x_j$;  $-1$ if $x_i > x_j$
  Spearman's $\rho_a$:  $X_{ij} = x_i - x_j$

For Spearman's $\rho_a$, $x_i$ is the rank of item $i$.

Another view of Kendall's Γ — Relational matrices (RM) and some properties
• The AM of BR have particular properties; they are more specifically called relational matrices (RM) by Marcotorchino in the Relational Analysis approach [Marcotorchino and Michaud(1979)].
• For instance, the relational properties of $X$ can be expressed as linear equations or inequalities in $X$:
– reflexivity: $\forall i,\ X_{ii} = 1$;
– symmetry: $\forall i, j,\ X_{ij} - X_{ji} = 0$;
– antisymmetry: $\forall i, j,\ X_{ij} + X_{ji} \leq 1$;
– completeness (totality): $\forall i \neq j,\ X_{ij} + X_{ji} \geq 1$;
– transitivity: $\forall i, j, k,\ X_{ij} + X_{jk} - X_{ik} \leq 1$.

Reinterpreting Kendall's Γ using RM
• Several works have used RM to reformulate association measures in order to better understand their differences [Marcotorchino(1984-85), Ghashghaie(1990), Najah Idrissi(2000), Ah-Pine and Marcotorchino(2010)].
• In this work, we propose to reinterpret Kendall's Γ in terms of RM, which emphasizes the so-called weighted indeterminacy deviation principle.
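The linear characterisations of reflexivity, symmetry and transitivity can be checked mechanically on any 0/1 matrix; a small editor-added sketch, tested on the example matrices from the earlier slides:

```python
def is_reflexive(X):
    return all(X[i][i] == 1 for i in range(len(X)))

def is_symmetric(X):
    n = len(X)
    return all(X[i][j] == X[j][i] for i in range(n) for j in range(n))

def is_transitive(X):
    """Transitivity as the linear constraint X_ij + X_jk - X_ik <= 1."""
    n = len(X)
    return all(X[i][j] + X[j][k] - X[i][k] <= 1
               for i in range(n) for j in range(n) for k in range(n))

def is_equivalence(X):
    return is_reflexive(X) and is_symmetric(X) and is_transitive(X)
```

The eye-color matrix passes all three tests, while the ranking matrix is reflexive and transitive but not symmetric, as an order relation should be.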
1 We give the definition of the opposite of an ER and of an OR.
2 We introduce Λ, our formulation of Kendall's Γ in terms of RM of BR, RM of opposites of BR, and weighting schemes.
3 We show how Λ yields well-known association measures.
4 We explain the weighted indeterminacy deviation principle.
Opposite relation of an ER and of an OR
• We introduce the opposite relation $\bar X$ of an ER or an OR $X$.
• If x is a categorical variable then $\bar X$ is the complement relation: $\bar X_{ij}$

Patricia Conde Céspedes

Creative Commons license: none (all rights reserved).

Comparison of linear modularization criteria of networks using relational metric
Patricia Conde Céspedes
LSTA, Paris 6 — August 2013
Thesis supervised by J.-F. Marcotorchino (Thales Scientific Director)
Conde Céspedes, P. (LSTA, Paris 6), Comparison of linear modularization criteria of networks using relational metric, August 2013.

Outline
1 Introduction and objective
2 Mathematical relational representations of criteria
3 Algorithm and some results
4 Comparison of criteria
5 Applications
6 Conclusions

Description of the problem
• Objective: compare the partitions found by different linear criteria.
• Nowadays, networks are found everywhere (biology, computer programming, marketing, etc.); practical applications include cyber-marketing and cyber-security.
• It is difficult to analyse a network directly because of its size, so we need to decompose it into clusters or modules, i.e. to modularize it.
• Different modularization criteria have been formulated in different contexts over the last few years, and we need to compare them.

Graph partition
• Definition of module or community.
Mathematical relational modeling
• Modularizing a graph $G(V, E)$ ⇔ defining an equivalence relation on $V$.
• Let $X$ be a square matrix of order $N = |V|$ defining an equivalence relation on $V$ as follows:

$x_{ii'} = \begin{cases} 1 & \text{if } i \text{ and } i' \text{ are in the same cluster} \\ 0 & \text{otherwise} \end{cases} \qquad \forall (i, i') \in V \times V \qquad (1)$

• We present a modularization criterion as a linear function to optimize:

$\max_X F(A, X) \qquad (2)$

subject to the constraints of an equivalence relation:
– binarity: $x_{ii'} \in \{0, 1\}$ (3)
– reflexivity: $x_{ii} = 1\ \forall i$
– symmetry: $x_{ii'} - x_{i'i} = 0\ \forall (i, i')$
– transitivity: $x_{ii'} + x_{i'i''} - x_{ii''} \leq 1\ \forall (i, i', i'')$

Properties verified by linear criteria
• Every linear criterion is separable, as it can be written in the general form (the data can be separated from the variables):

$F(X) = \sum_{i=1}^{N} \sum_{i'=1}^{N} \phi(a_{ii'})\, x_{ii'} + \text{constant} \qquad (4)$

where $a_{ii'}$ is the general term of the adjacency matrix $A$ and $\phi(a_{ii'})$ is a function of the adjacency matrix only.
• Besides, the criterion is balanced if it can be written in the form:

$F(X) = \sum_{i=1}^{N} \sum_{i'=1}^{N} \phi(a_{ii'})\, x_{ii'} + \sum_{i=1}^{N} \sum_{i'=1}^{N} \bar\phi(a_{ii'})\, \bar x_{ii'} \qquad (5)$

where $\bar x_{ii'} = 1 - x_{ii'}$ represents the opposite relation of $X$, noted $\bar X$, and $\phi(a_{ii'}) \geq 0$, $\bar\phi(a_{ii'}) \geq 0$ are non-negative functions verifying $\sum_{i,i'} \phi_{ii'} > 0$ and $\sum_{i,i'} \bar\phi_{ii'} > 0$.
As we will see later, the functions φ and φ̄ behave as "costs".

The property of linear balance. Given a graph, let κ denote the number of clusters in the optimal partition:
- If \bar{\phi}_{ii'} = 0 for all (i, i'), all the nodes are clustered together, so κ = 1.
- If \phi_{ii'} = 0 for all (i, i'), all the nodes are separated, so κ = N.
- If \sum_{i=1}^{N} \sum_{i'=1}^{N} \phi_{ii'} = \sum_{i=1}^{N} \sum_{i'=1}^{N} \bar{\phi}_{ii'}, the criterion is a null model and therefore has a resolution limit.

Existing linear criteria:

Zahn-Condorcet (1785, 1964):
F_{ZC}(X) = \sum_{i=1}^{N} \sum_{i'=1}^{N} \left( a_{ii'} x_{ii'} + \bar{a}_{ii'} \bar{x}_{ii'} \right)

Owsiński-Zadrożny (1986), with 0 < α < 1:
F_{OZ}(X) = \sum_{i=1}^{N} \sum_{i'=1}^{N} \left( (1-\alpha) a_{ii'} x_{ii'} + \alpha \bar{a}_{ii'} \bar{x}_{ii'} \right)

Newman-Girvan (2004):
F_{NG}(X) = \frac{1}{2M} \sum_{i=1}^{N} \sum_{i'=1}^{N} \left( a_{ii'} - \frac{a_{i.} a_{.i'}}{2M} \right) x_{ii'}

Three new linear criteria (2013):

Deviation to uniformity:
F_{UNIF}(X) = \frac{1}{2M} \sum_{i=1}^{N} \sum_{i'=1}^{N} \left( a_{ii'} - \frac{2M}{N^2} \right) x_{ii'}

Deviation to indetermination:
F_{DI}(X) = \frac{1}{2M} \sum_{i=1}^{N} \sum_{i'=1}^{N} \left( a_{ii'} - \frac{a_{i.}}{N} - \frac{a_{.i'}}{N} + \frac{2M}{N^2} \right) x_{ii'}

Balanced modularity:
F_{BM}(X) = \sum_{i=1}^{N} \sum_{i'=1}^{N} \left[ (a_{ii'} - P_{ii'}) x_{ii'} + (\bar{a}_{ii'} - \bar{P}_{ii'}) \bar{x}_{ii'} \right],
where P_{ii'} = \frac{a_{i.} a_{.i'}}{2M} and \bar{P}_{ii'} = \frac{(N - a_{i.})(N - a_{.i'})}{N^2 - 2M}.

Interpretation of the new linear criteria: [Figures: uniformity structure; indetermination structure; duality between the independence and indetermination structures.]

Some properties of the new criteria. Whereas the Newman-Girvan modularity is based on the "deviation from independence" structure, the DI criterion is based on the "deviation from indetermination" structure. All three new criteria are null models, like the Newman-Girvan modularity, and the balanced modularity is a balanced version of it. If all the nodes had the same degree, d_{av} = \frac{\sum_{i} a_{i.}}{N} = \frac{2M}{N}, all the new criteria would behave exactly as the Newman-Girvan modularity does: F_{NG} \equiv F_{UNIF} \equiv F_{BM} \equiv F_{DI}.

Algorithm and some results

The Louvain algorithm is easy to adapt to separable criteria.
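The claim that all criteria coincide when every node has the same degree can be checked numerically: on a d-regular graph, a_{i.} a_{.i'}/2M = d²/(Nd) = d/N and 2M/N² = d/N, so the Newman-Girvan and uniformity null terms agree for every pair. A minimal sketch (function names are ours):

```python
def adjacency_cycle(n):
    """Adjacency matrix of the n-cycle, a 2-regular graph."""
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        a[i][(i + 1) % n] = a[(i + 1) % n][i] = 1
    return a

def f_ng(a, labels):
    """Newman-Girvan modularity of the partition encoded by labels."""
    n = len(a)
    two_m = sum(map(sum, a))
    deg = [sum(row) for row in a]
    return sum(a[i][j] - deg[i] * deg[j] / two_m
               for i in range(n) for j in range(n)
               if labels[i] == labels[j]) / two_m

def f_unif(a, labels):
    """Deviation-to-uniformity value of the same partition."""
    n = len(a)
    two_m = sum(map(sum, a))
    return sum(a[i][j] - two_m / n ** 2
               for i in range(n) for j in range(n)
               if labels[i] == labels[j]) / two_m

a = adjacency_cycle(6)          # every node has degree 2
labels = [0, 0, 0, 1, 1, 1]     # split the cycle into two arcs
print(abs(f_ng(a, labels) - f_unif(a, labels)) < 1e-12)  # True
```

On a non-regular graph the two values differ, which is precisely what separates the criteria in practice.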
Number of clusters κ found per criterion (Louvain algorithm), on two data sets: Jazz (N = 198, M = 2742) and Internet (N = 69949, M = 351380):

Function                      | Jazz κ        | Internet κ
Zahn-Condorcet                | 38            | 40123
Owsiński-Zadrożny             | 6 (α = 2%)    | 456 (α < 1%)
Newman-Girvan                 | 4             | 46
Deviation to uniformity       | 20            | 173
Deviation to indetermination  | 6             | 45
Balanced modularity           | 5             | 46

How can these differences be explained?

Comparison of criteria

Impact of merging two clusters. Suppose we want to merge two clusters C1 and C2 of the network, of sizes n1 and n2 respectively, connected by l edges and with average degrees d¹_av and d²_av respectively.

What is the contribution of merging the two clusters to the value of each criterion? The contribution C of merging the two clusters is:

C = \sum_{i \in C_1} \sum_{i' \in C_2} \left( \phi_{ii'} - \bar{\phi}_{ii'} \right) \quad (6)

The objective is to compare the function φ(·) to the function φ̄(·):
- If C > 0 the criterion merges the two clusters: the contribution is a gain.
- If C < 0 the criterion separates the two clusters: the contribution is a cost.

Here l is the number of edges between clusters C1 and C2, and l̄ is the number of missing edges between them.
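A toy agglomerative sketch (ours, not the adapted Louvain implementation used for the table above) shows how a positive contribution drives merging; here the contribution is the Zahn-Condorcet balance l − l̄ = 2l − n1·n2:

```python
def greedy_merge(n, edges):
    """Repeatedly merge the cluster pair with the largest positive contribution."""
    clusters = [{i} for i in range(n)]
    adj = {frozenset(e) for e in edges}

    def links(c1, c2):
        return sum(1 for u in c1 for v in c2 if frozenset((u, v)) in adj)

    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Zahn-Condorcet contribution: 2l - n1*n2
                c = 2 * links(clusters[i], clusters[j]) - len(clusters[i]) * len(clusters[j])
                if c > 0 and (best is None or c > best[0]):
                    best = (c, i, j)
        if best is None:
            return clusters
        _, i, j = best
        clusters[i] |= clusters[j]
        del clusters[j]

# two triangles joined by a single edge stay separate under Zahn-Condorcet
tri2 = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(len(greedy_merge(6, tri2)))  # 2
```

The single bridge edge gives contribution 2·1 − 3·3 = −7 < 0, so the two triangles are never merged.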
Contribution of merging two clusters for the Zahn-Condorcet criterion:

C_{ZC} = l - \bar{l} = 2l - n_1 n_2 \quad (7)

The Zahn-Condorcet criterion requires that the connections within the merged cluster outnumber the absent connections, i.e. the number of connections l between C1 and C2 must be at least half of the n1·n2 possible connections between the two subgraphs. This criterion has no resolution limit, since the contribution depends only on local properties (l, l̄, n1, n2) and not on the size of the network. With this criterion we obtain many small clusters or cliques, some of them single nodes.

Contribution for the Owsiński-Zadrożny criterion:

C_{OZ} = l - \alpha n_1 n_2, \qquad 0 < \alpha < 1 \quad (8)

Because the Zahn-Condorcet criterion is so demanding that it produces many small clusters, the Owsiński-Zadrożny criterion lets the user choose the minimum required fraction α of within-cluster edges; this coefficient sets the balance between φ and φ̄. For α = 0.5 the Owsiński-Zadrożny criterion coincides with the Zahn-Condorcet criterion. Like Zahn-Condorcet, it has no resolution limit: the contribution depends only on local properties (l, l̄, n1, n2), not on the size of the network.
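The two local contributions above are one-liners; a hedged sketch (function names ours) makes the α = 0.5 equivalence concrete:

```python
def c_zc(l, n1, n2):
    """Zahn-Condorcet contribution: l - l_bar, with l_bar = n1*n2 - l."""
    return 2 * l - n1 * n2

def c_oz(l, n1, n2, alpha):
    """Owsinski-Zadrozny contribution with user-chosen fraction alpha."""
    return l - alpha * n1 * n2

# two clusters of 4 nodes each, 10 of the 16 possible inter-cluster edges present
print(c_zc(10, 4, 4) > 0)                          # True: 10 > 16/2, so ZC merges
print(2 * c_oz(10, 4, 4, 0.5) == c_zc(10, 4, 4))   # True: alpha = 0.5 recovers ZC
```

Since C_OZ with α = 0.5 equals C_ZC/2, the two contributions always share the same sign, hence the same merge decisions.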
Contribution for the Newman-Girvan criterion:

C_{NG} = l - \frac{n_1 n_2 \, d^1_{av} d^2_{av}}{2M} \quad (9)

The contribution depends on the degree distributions of the clusters. This criterion has a resolution limit, since the contribution depends on a global property of the whole network, M. The optimal partition contains no single-node clusters.

Contribution for the deviation to uniformity criterion:

C_{UNIF} = l - n_1 n_2 \frac{2M}{N^2} \quad (10)

This criterion is the particular case of the Zahn-Condorcet (or Owsiński-Zadrożny) criterion with α = 2M/N², which can be interpreted as the global density of edges δ. To merge the two clusters we need l/(n1 n2) > δ: the fraction of within-cluster edges must exceed the global edge density δ. This criterion has a resolution limit, since the contribution depends on global properties of the whole network (M and N).

Contribution for the deviation to indetermination criterion:

C_{DI} = l - n_1 n_2 \left( \frac{d^1_{av}}{N} + \frac{d^2_{av}}{N} - \frac{2M}{N^2} \right) \quad (11)

The contribution depends on the degree distributions of the clusters and on their sizes. This criterion has a resolution limit, since the contribution depends on global properties of the whole network (M and N). It favours big clusters with high average degree and small clusters with low average degree.
Consequently, the degree distribution of each cluster obtained by this criterion tends to be more homogeneous than that of the clusters obtained by optimizing the Newman-Girvan criterion.

Contribution for the balanced modularity criterion:

C_{BM} = 2l + n_1 n_2 \frac{(N - d^1_{av})(N - d^2_{av})}{N^2 - 2M} - n_1 n_2 - \frac{n_1 n_2 \, d^1_{av} d^2_{av}}{2M} \quad (12)

The contribution depends on the degree distributions and on the sizes of the clusters. This criterion has a resolution limit, since the contribution depends on global properties of the whole network (M and N). Depending on δ and d_av, it behaves like a regulator between the Newman-Girvan criterion and the deviation to indetermination criterion: on one hand, the within-cluster degree distribution is more homogeneous than that found with the Newman-Girvan criterion; on the other hand, it is more heterogeneous than that found with the deviation to indetermination criterion.

Summary by criterion:

Criterion                     | Characteristics of the clustering
Zahn-Condorcet                | Clusters have a fraction of within-cluster edges greater than 50%.
Owsiński-Zadrożny             | A generalization of the ZC criterion in which the user sets the minimum fraction of within-cluster edges.
Deviation to uniformity       | The OZ criterion with α = δ, the global density of edges among the nodes.
Newman-Girvan                 | Has a resolution limit; the optimal clustering contains no isolated nodes.
Deviation to indetermination  | Within-cluster degree distribution more homogeneous than that found by the Newman-Girvan criterion.
Balanced modularity           | Behaves like a regulator between the NG and DI criteria.
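The resolution-limit rows of the summary can be seen numerically: with identical local data (l, n1, n2 and average degrees), the Newman-Girvan and deviation-to-indetermination contributions flip sign as the global quantities M and N grow, while the Zahn-Condorcet contribution never changes. A hedged sketch (function names ours):

```python
def c_zc(l, n1, n2):
    return 2 * l - n1 * n2                      # local only: no resolution limit

def c_ng(l, n1, n2, d1, d2, m):
    return l - n1 * n2 * d1 * d2 / (2 * m)      # depends on global M

def c_di(l, n1, n2, d1, d2, m, n):
    return l - n1 * n2 * (d1 / n + d2 / n - 2 * m / n ** 2)  # depends on M and N

# one edge between two 5-node clusters of average degree 4
l, n1, n2, d1, d2 = 1, 5, 5, 4, 4
for m, n in ((50, 100), (5000, 10000)):
    print(c_ng(l, n1, n2, d1, d2, m) > 0,
          c_di(l, n1, n2, d1, d2, m, n) > 0)
# → False False   (small network: both separate the clusters)
# → True True     (large network: both merge them, the resolution limit)
```

The same pair of weakly connected clusters is kept apart in a small network but merged in a large one, which is exactly the resolution-limit phenomenon.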
Applications

Data: Zachary karate club network, a network of friendships between the 34 members of a karate club at a US university. N = 34 nodes, M = 78 edges, d_av = 4.6 and δ = 0.13.

Number of clusters per criterion (Louvain algorithm):

Criterion                     | Number of clusters | Single nodes
Zahn-Condorcet                | 19                 | 12
Owsiński-Zadrożny (α = 0.2)   | 7                  | 3
Deviation to uniformity       | 6                  | 2
Newman-Girvan                 | 4                  | 0
Deviation to indetermination  | 4                  | 0
Balanced modularity           | 4                  | 0

The partitions found with Newman-Girvan, deviation to indetermination and balanced modularity are identical.

[Figures: density of within-cluster edges; partitions obtained by the criteria.]

The Jazz network: a network of jazz musicians, N = 198, M = 2742, d_av ≈ 27.7 and δ = 0.14.
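A quick check of the global statistics quoted for both data sets, using the slide's own definitions d_av = 2M/N and δ = 2M/N² (the helper name is ours):

```python
def global_stats(n, m):
    """Average degree 2M/N and edge density delta = 2M/N^2, as on the slides."""
    return 2 * m / n, 2 * m / n ** 2

print(global_stats(34, 78))     # karate club: d_av ≈ 4.59, delta ≈ 0.13
print(global_stats(198, 2742))  # jazz: d_av ≈ 27.7, delta ≈ 0.14
```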
The clusters found by three criteria:

Newman-Girvan:
n_j | d^j_av | σ_j  | cv_j
62  | 32.3   | 18.5 | 0.57
53  | 30.5   | 16.2 | 0.53
61  | 20.3   | 14.1 | 0.69
22  | 28.4   | 20.1 | 0.71

Balanced modularity:
n_j | d^j_av | σ_j  | cv_j
60  | 33.1   | 18.2 | 0.55
53  | 31.3   | 16.3 | 0.52
61  | 20.3   | 14.1 | 0.69
23  | 26.0   | 19.4 | 0.75
1   | 1.0    | 0    | 0

Deviation to indetermination:
n_j | d^j_av | σ_j  | cv_j
63  | 19.8   | 14.2 | 0.71
63  | 33.7   | 16.0 | 0.48
18  | 13.8   | 5.2  | 0.37
51  | 36.4   | 17.7 | 0.49
2   | 2.5    | 2.1  | 0.85
1   | 1.0    | 0    | 0

Here n_j is the size of cluster j, d^j_av its average degree, σ_j the standard deviation of its degree distribution, and cv_j the corresponding coefficient of variation.

[Figure: coefficient of variation of the within-cluster degree for the three criteria.]

Conclusions

We presented 6 modularization criteria in Relational Analysis notation, which allowed us to easily calculate their contribution (cost or gain) when merging two clusters. We analysed important characteristics of the different criteria and compared the differences in the partitions each one produces. Although the 3 criteria we introduced have nearly the same properties, they differ mainly in how they depend on the degree distribution, on the sizes of the clusters, and on global characteristics of the graph.

Thanks for your attention!


On Prime-valent Symmetric Bicirculants and Cayley Snarks
Klavdija Kutnar, University of Primorska, Paris, 2013
Joint work with Ademir Hujdurović and Dragan Marušič.

Snarks. A snark is a connected, bridgeless cubic graph with chromatic index equal to 4; a non-snark is a bridgeless cubic 3-edge-colorable graph. The Petersen graph is a snark. The Blanuša snarks are not vertex-transitive.

Vertex-transitive graphs. An automorphism of a graph X = (V, E) is an isomorphism of X with itself; each automorphism α of X is thus a permutation of the vertex set V which preserves adjacency. A graph is vertex-transitive if its automorphism group acts transitively on its vertices.

Cayley graphs. A vertex-transitive graph is a Cayley graph if its automorphism group has a regular subgroup. Given a group G and a subset S of G \ {1} such that S = S⁻¹, the Cayley graph Cay(G, S) has vertex set G and edges of the form {g, gs} for all g ∈ G and s ∈ S. Cay(G, S) is connected if and only if G = ⟨S⟩.

Example: the Cayley graph Cay(Z₇, {±1, ±2}) on the left-hand side and the Petersen graph on the right-hand side. [Figure]

Are there any other snarks amongst vertex-transitive graphs, in particular Cayley graphs?

Theorem (Nedela, Škoviera, Combin., 2001). If there exists a Cayley snark, then there is a Cayley snark Cay(G, {a, x, x⁻¹}) where x has odd order, a² = 1, and G = ⟨a, x⟩ is either a non-abelian simple group, or G has a unique non-trivial proper normal subgroup H which is either simple non-abelian or the direct product of two isomorphic non-abelian simple groups, with |G : H| = 2.
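Both central objects on these slides can be built straight from their definitions. The sketch below (helper names are ours) constructs the circulant Cay(Z₇, {±1, ±2}) and confirms by exhaustive backtracking that the Petersen graph admits no proper 3-edge-colouring, i.e. that it is a snark:

```python
def cayley_graph_zn(n, gens):
    """Cay(Z_n, S) with S = gens ∪ -gens (so S = S^-1), edges {g, g+s}."""
    s = {g % n for g in gens} | {-g % n for g in gens}
    return {g: sorted((g + x) % n for x in s) for g in range(n)}

def petersen_edges():
    outer = [(i, (i + 1) % 5) for i in range(5)]           # outer 5-cycle
    spokes = [(i, i + 5) for i in range(5)]                # spokes
    inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]   # inner pentagram
    return outer + spokes + inner

def three_edge_colorable(edges):
    """Backtracking search for a proper 3-edge-colouring."""
    colour = {}

    def clashes(e, c):
        u, v = e
        return any(colour[f] == c for f in colour if u in f or v in f)

    def solve(k):
        if k == len(edges):
            return True
        for c in range(3):
            if not clashes(edges[k], c):
                colour[edges[k]] = c
                if solve(k + 1):
                    return True
                del colour[edges[k]]
        return False

    return solve(0)

cay = cayley_graph_zn(7, [1, 2])
print(all(len(nb) == 4 for nb in cay.values()))  # True: Cay(Z_7, {±1,±2}) is 4-regular
print(three_edge_colorable(petersen_edges()))    # False: chromatic index 4, a snark
```

The exhaustive search is feasible here because the Petersen graph has only 15 edges; the same routine 3-colours any non-snark cubic graph such as K₄.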
Theorem (Potočnik, JCTB, 2004). The Petersen graph is the only vertex-transitive snark containing a solvable transitive subgroup of automorphisms.

The hunt for vertex-transitive/Cayley snarks is essentially a special case of the Lovász question regarding Hamiltonian paths/cycles: in a cubic graph, the existence of a Hamiltonian cycle implies that the graph is 3-edge-colorable (the cycle has even length and can be 2-edge-coloured, and the remaining edges form a perfect matching taking the third colour), and thus a non-snark. The hamiltonicity problem is hard; the snark problem is hard too, but should be easier to deal with. Compare: the Coxeter graph is not a snark (easy) vs the Coxeter graph is not Hamiltonian (harder); Hamiltonian cycles in cubic Cayley graphs (hard) vs Cayley snarks (still hard, but easier).

Types of cubic Cayley graphs Cay(G, S):
- Type 1: S consists of 3 involutions. No snarks; nothing is known about Hamiltonian cycles except YES in the case when two of the involutions commute (Cherkassov, Sjerve).
- Type 2: S = {a, x, x⁻¹}, where a² = 1 and x is of even order. No snarks; nothing is known about Hamiltonian cycles.
- Type 3: S = {a, x, x⁻¹}, where a² = 1 and x is of odd order. See the next slides.

Partial results for Type 3 graphs. A (2, s, t)-generated group is a group G = ⟨a, x | a²
