
GSI2015 (2015)

Poincaré's equations for Cosserat shells - application to locomotion of cephalopods
Federico Renda, Frederic Boyer (GSI2015)
In 1901 Henri Poincaré proposed a new set of equations for mechanics. These equations are a generalization of the Lagrange equations to a system whose configuration space is a Lie group, which is not necessarily commutative. Since then, this result has been extensively refined by Lagrangian reduction theory. In this article, we show the relations between these equations and continuous Cosserat media, i.e. media for which the conventional model of a point particle is replaced by a rigid body of small volume named the microstructure. In particular, we will see that the usual shell balance equations of nonlinear structural dynamics can be easily derived from Poincaré's result. This framework is illustrated through the simulation of a simplified model of cephalopod swimming.
Information Algebras and their Applications
Matilde Marcolli (GSI2015)
In this lecture we will present joint work with Ryan Thorngren on thermodynamic semirings and entropy operads, with Nicolas Tedeschi on Birkhoff factorization in thermodynamic semirings, ongoing work with Marcus Bintz on tropicalization of Feynman graph hypersurfaces and Potts model hypersurfaces, and their thermodynamic deformations, and ongoing work by the author on applications of thermodynamic semirings to models of morphology and syntax in Computational Linguistics.
Generalized EM algorithms for minimum divergence estimation
Diaa Al Mohamad, Michel Broniatowski (GSI2015)
Minimum divergence estimators are derived through the dual form of the divergence in parametric models. These estimators generalize the classical maximum likelihood ones. Models with unobserved data, such as mixture models, can be estimated with EM algorithms, which are proved to converge to stationary points of the likelihood function under general assumptions. This paper presents an extension of the EM algorithm based on minimization of the dual approximation of the divergence between the empirical measure and the model, using a proximal-type algorithm. The algorithm converges to the stationary points of the empirical criterion under general conditions pertaining to the divergence and the model. Robustness properties of this algorithm are also presented. We provide another proof of convergence of the EM algorithm in a two-component Gaussian mixture. Simulations on Gaussian and Weibull mixtures are performed to compare the results with the MLE.
Spherical parameterization for genus zero surfaces using Laplace-Beltrami eigenfunctions
Guillaume Auzias, Julien Lefèvre (GSI2015)
In this work, we propose a fast and simple approach to obtain a spherical parameterization of a certain class of closed surfaces without holes. Our approach relies on empirical findings that can be mathematically investigated, to a certain extent, by using the Laplace-Beltrami operator and associated geometrical tools. The mapping proposed here is defined by considering only the first three non-trivial eigenfunctions of the Laplace-Beltrami operator. Our approach requires a topological condition on those eigenfunctions: each of them must have exactly two nodal domains. We show the efficiency of the approach through numerical experiments performed on cortical surface meshes.
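As a rough, hypothetical sketch of such a mapping (not the authors' pipeline): assuming a precomputed sparse cotangent stiffness matrix L and lumped mass matrix M for a genus-zero mesh, one can extract the first three non-trivial Laplace-Beltrami eigenfunctions and project them radially onto the unit sphere.

    import numpy as np
    import scipy.sparse.linalg as sla

    def spherical_map(L, M):
        """Map mesh vertices to the unit sphere using the first three
        non-trivial eigenfunctions of the Laplace-Beltrami operator.

        L: sparse cotangent stiffness matrix, M: sparse lumped mass matrix.
        Returns an (n_vertices, 3) array of unit vectors."""
        # Smallest eigenpairs of the generalized problem L x = lambda M x;
        # a tiny negative shift keeps the shift-inverted matrix non-singular.
        vals, vecs = sla.eigsh(L, k=4, M=M, sigma=-1e-8, which="LM")
        order = np.argsort(vals)
        # Drop the constant eigenfunction (eigenvalue ~ 0), keep the next three.
        phi = vecs[:, order[1:4]]
        # Project each vertex radially onto the unit sphere.
        return phi / np.linalg.norm(phi, axis=1, keepdims=True)

The topological condition stated in the abstract (each of the three eigenfunctions having exactly two nodal domains) is what makes such a normalized map a sensible spherical parameterization.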
Texture classification using Rao's distance on the space of covariance matrices
Lionel Bombrun, Salem Said, Yannick Berthoumieu (GSI2015)
The current paper introduces new prior distributions on the zero-mean multivariate Gaussian model, with the aim of applying them to the classification of populations of covariance matrices. These new prior distributions are entirely based on the Riemannian geometry of the multivariate Gaussian model. More precisely, the proposed Riemannian Gaussian distribution has two parameters, the centre of mass Ȳ and the dispersion parameter σ. Its density with respect to the Riemannian volume is proportional to exp(−d²(Y; Ȳ)), where d²(Y; Ȳ) is the square of Rao's Riemannian distance. We derive its maximum likelihood estimators and propose an experiment on the VisTex database for the classification of texture images.
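For reference, on zero-mean multivariate Gaussians Rao's distance between covariance matrices reduces to the affine-invariant distance on symmetric positive-definite matrices; the sketch below uses the common 1/2 normalisation of the Fisher metric, which may differ by a constant from the convention used in the paper.

    import numpy as np
    from scipy.linalg import eigvalsh

    def rao_distance(S1, S2):
        """Rao's Riemannian distance between two SPD covariance matrices:
        sqrt(1/2 * sum_i log(lambda_i)^2), where lambda_i are the
        generalized eigenvalues of the pencil (S2, S1), i.e. the
        spectrum of S1^{-1} S2."""
        lam = eigvalsh(S2, S1)
        return np.sqrt(0.5 * np.sum(np.log(lam) ** 2))

    # Example: distance between two 2x2 covariance matrices.
    A = np.array([[2.0, 0.3], [0.3, 1.0]])
    B = np.eye(2)
    print(rao_distance(A, B))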
Laplace's rule of succession in information geometry
Yann Ollivier (GSI2015)
When observing data x_1, …, x_t modelled by a probabilistic distribution p_θ(x), the maximum likelihood (ML) estimator θ_ML = arg max_θ ∑_{i=1}^t ln p_θ(x_i) cannot, in general, safely be used to predict x_{t+1}. For instance, for a Bernoulli process, if only “tails” have been observed so far, the probability of “heads” is estimated to 0. (Thus, for the standard log-loss scoring rule, this results in infinite loss the first time “heads” appears.)
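A toy numerical illustration of the Bernoulli example above, and of the add-one fix that gives the paper its title; it illustrates only this point, not the paper's information-geometric analysis.

    import math

    def ml_estimate(heads, t):
        """Maximum likelihood estimate of P(heads) after t observations."""
        return heads / t

    def laplace_estimate(heads, t):
        """Laplace's rule of succession: (heads + 1) / (t + 2)."""
        return (heads + 1) / (t + 2)

    t, heads = 10, 0                    # ten observations, all "tails"
    p_ml = ml_estimate(heads, t)        # 0.0
    p_lap = laplace_estimate(heads, t)  # 1/12
    # Log-loss incurred the first time "heads" actually appears:
    print(math.inf if p_ml == 0 else -math.log(p_ml))   # infinite for ML
    print(-math.log(p_lap))                             # finite for Laplace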
Riemannian trust regions with finite-difference Hessian approximations are globally convergent
Nicolas Boumal (GSI2015)
The Riemannian trust-region algorithm (RTR) is designed to optimize differentiable cost functions on Riemannian manifolds. It proceeds by iteratively optimizing local models of the cost function. When these models are exact up to second order, RTR boasts a quadratic convergence rate to critical points. In practice, building such models requires computing the Riemannian Hessian, which may be challenging. A simple idea to alleviate this difficulty is to approximate the Hessian using finite differences of the gradient. Unfortunately, this is a nonlinear approximation, which breaks the known convergence results for RTR. We propose RTR-FD: a modification of RTR which retains global convergence when the Hessian is approximated using finite differences. Importantly, RTR-FD reduces gracefully to RTR if a linear approximation is used. This algorithm is available in the Manopt toolbox.
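To fix ideas, the finite-difference approximation replaces exact Hessian-vector products by a forward difference of gradients. The sketch below shows only the Euclidean special case (on a manifold the step x + t·v becomes a retraction and the two gradients must be compared through a vector transport); it illustrates the underlying idea, not the Manopt implementation of RTR-FD.

    import numpy as np

    def approx_hessian_vec(grad, x, v, h=1e-6):
        """Finite-difference approximation of the Hessian-vector product:
        Hess f(x)[v] ~ (grad f(x + t v) - grad f(x)) / t, with t scaled
        so that the actual step has norm h (Euclidean case)."""
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        t = h / nv
        return (grad(x + t * v) - grad(x)) / t

    # Check on f(x) = 0.5 * x^T A x, whose exact Hessian is A.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    grad = lambda x: A @ x
    x0, v = np.array([1.0, -1.0]), np.array([0.5, 2.0])
    print(approx_hessian_vec(grad, x0, v), A @ v)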
Non-convex relaxation of optimal transport for color transfer between images
Julien Rabin, Nicolas Papadakis (GSI2015)
Optimal transport (OT) is a major statistical tool to measure similarity between features or to match and average features. However, OT requires some relaxation and regularization to be robust to outliers. With relaxed methods, as one feature can be matched to several others, significant interpolation between different features arises. This is not an issue for comparison purposes, but it involves strong and unwanted smoothing for transfer applications. We thus introduce a new regularized method based on a non-convex formulation that minimizes transport dispersion by enforcing the one-to-one matching of features. The interest of the approach is demonstrated for color transfer purposes.
Image processing in the semidiscrete group of rototranslations
Dario Prandi, Jean-Paul Gauthier, Ugo Boscain (GSI2015)
It is well known, since [12], that cells in the primary visual cortex V1 do much more than merely signal position in the visual field: most cortical cells signal the local orientation of a contrast edge or bar; they are tuned to a particular local orientation. This orientation tuning has been given a mathematical interpretation in a sub-Riemannian model by Petitot, Citti, and Sarti [6,14]. According to this model, the primary visual cortex V1 lifts grey-scale images, given as functions f : ℝ² → [0, 1], to functions Lf defined on the projectivized tangent bundle of the plane PTℝ² = ℝ² × ℙ¹. Recently, in [1], the authors presented a promising semidiscrete variant of this model where the Euclidean group of rototranslations SE(2), which is the double covering of PTℝ², is replaced by SE(2,N), the group of translations and discrete rotations. In particular, in [15], an implementation of this model allowed for state-of-the-art image inpainting results. In this work, we review the inpainting results and introduce an application of the semidiscrete model to image recognition. We remark that both of these applications deeply exploit the Moore structure of SE(2,N), which guarantees that its unitary representations behave similarly to those of a compact group. This yields nice properties of the Fourier transform on SE(2,N), which can be exploited to obtain numerical advantages.
Evolution Equations with Anisotropic Distributions and Diffusion PCA
Stefan Sommer (GSI2015)
This paper presents derivations of evolution equations for the family of paths that are used in the Diffusion PCA framework for approximating the data likelihood. These paths, which are formally interpreted as most probable paths, generalize geodesics by extremizing an energy functional on the space of differentiable curves on a manifold with connection. We discuss how the paths arise as projections of geodesics for a (non-bracket-generating) sub-Riemannian metric on the frame bundle. Evolution equations in coordinates for both the metric and cometric formulations of the sub-Riemannian geometry are derived. We furthermore show how rank-deficient metrics can be mixed with an underlying Riemannian metric, and we use the construction to show how the evolution equations can be implemented on finite-dimensional LDDMM landmark manifolds.
TS-GNPR Clustering Random Walk Time Series
Frank Nielsen, Gautier Marti, Philippe Donnat, Philippe Very (GSI2015)
We present in this paper a novel non-parametric approach for clustering independent identically distributed stochastic processes. We introduce a pre-processing step consisting in mapping multivariate independent and identically distributed samples from random variables to a generic non-parametric representation which factorizes the dependency and the marginal distributions apart without losing any information. An associated metric is defined in which the balance between dependency and distribution information is controlled by a single parameter. This mixing parameter can be learned or tuned by a practitioner; such use is illustrated on the case of clustering financial time series. Experiments, implementation and results obtained on public financial time series are available online at http://www.datagrapple.com .
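As a simplified stand-in for the pre-processing described above (not the authors' exact GNPR representation or metric), one can split each sample into a rank part, which carries the dependence information, and a sorted-values part, which carries the marginal distribution, and mix the two distances with a single parameter theta:

    import numpy as np
    from scipy.stats import rankdata

    def split_representation(x):
        """Split a 1-D sample into a dependence part (normalized ranks,
        i.e. empirical copula coordinates) and a distribution part
        (sorted values)."""
        u = rankdata(x) / (len(x) + 1)
        m = np.sort(x)
        return u, m

    def mixed_distance(x, y, theta=0.5):
        """Distance mixing dependence and marginal information; theta = 1
        compares only ranks, theta = 0 only marginals (illustrative only)."""
        ux, mx = split_representation(x)
        uy, my = split_representation(y)
        d_dep = np.mean((ux - uy) ** 2)
        d_marg = np.mean((mx - my) ** 2)
        return np.sqrt(theta * d_dep + (1.0 - theta) * d_marg)

    rng = np.random.default_rng(0)
    a, b = rng.normal(size=500), rng.standard_t(df=3, size=500)
    print(mixed_distance(a, b, theta=0.5))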
Probability density estimation on the hyperbolic space applied to radar processing
Emmanuel Chevallier, Frédéric Barbaresco, Jesús Angulo (GSI2015)
The two main techniques of probability density estimation on symmetric spaces are reviewed in the hyperbolic case. For computational reasons we chose to focus on kernel density estimation, and we provide the expression of the Pelletier estimator on the hyperbolic space. The method is applied to the density estimation of reflection coefficients derived from radar observations.
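For context: in this radar setting the reflection coefficients lie in the unit disk, which is commonly equipped with the Poincaré (hyperbolic) metric; kernel estimators on the hyperbolic space are then built from hyperbolic distances such as the standard Poincaré-disk distance below. This is background we are assuming from the application area, not an excerpt from the paper.

    import numpy as np

    def poincare_disk_distance(z, w):
        """Hyperbolic distance between two points of the open unit disk,
        represented as complex numbers with |z| < 1 (Poincare-disk model)."""
        num = 2.0 * abs(z - w) ** 2
        den = (1.0 - abs(z) ** 2) * (1.0 - abs(w) ** 2)
        return np.arccosh(1.0 + num / den)

    # Two reflection coefficients inside the unit disk.
    z1, z2 = 0.3 + 0.4j, -0.1 + 0.2j
    print(poincare_disk_distance(z1, z2))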
The nonlinear Bernstein-Schrödinger equation in Economics
Alfred Galichon, Scott Kominers, Simon Weber (GSI2015)
In this paper we relate the Equilibrium Assignment Problem (EAP), which underlies several economic models, to a system of nonlinear equations that we call the “nonlinear Bernstein-Schrödinger system”, which is well known in the linear case but whose nonlinear extension does not seem to have been studied. We apply this connection to derive an existence result for the EAP and an efficient computational method.
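For background on the linear case mentioned above: linear Bernstein-Schrödinger-type systems are classically solved by iterative proportional fitting (Sinkhorn-style scaling). The sketch below shows that scaling loop purely as a point of reference; it is not the paper's method for the nonlinear system.

    import numpy as np

    def ipfp(K, p, q, n_iter=500):
        """Iterative proportional fitting: find positive vectors a, b such
        that pi = diag(a) K diag(b) has row sums p and column sums q."""
        a, b = np.ones_like(p), np.ones_like(q)
        for _ in range(n_iter):
            a = p / (K @ b)       # enforce the row-margin equations
            b = q / (K.T @ a)     # enforce the column-margin equations
        return a[:, None] * K * b[None, :]

    K = np.exp(-np.abs(np.arange(4)[:, None] - np.arange(3)[None, :]))
    p = np.array([0.3, 0.3, 0.2, 0.2])
    q = np.array([0.5, 0.3, 0.2])
    pi = ipfp(K, p, q)
    print(pi.sum(axis=1), pi.sum(axis=0))   # approximately p and q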
Geometry of Goodness-of-Fit Testing in High Dimensional Low Sample Size Modelling
Frank Critchley, Germain Van Bever, Paul Marriott, Radka Sabolova (GSI2015)
We introduce a new approach to goodness-of-fit testing in the high dimensional, sparse extended multinomial context. The paper takes a computational information geometric approach, extending classical higher order asymptotic theory. We show why the Wald statistic – equivalently, the Pearson χ² and score statistics – is unworkable in this context, but that the deviance has a simple, accurate and tractable sampling distribution even for moderate sample sizes. Issues of uniformity of asymptotic approximations across model space are discussed. A variety of important applications and extensions are noted.
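For concreteness, with observed counts n_i out of n trials and model probabilities π_i, the statistics being contrasted are the textbook Pearson chi-square and deviance (standard definitions, not notation taken from the paper):

    X^2 = \sum_i \frac{(n_i - n\pi_i)^2}{n\pi_i},
    \qquad
    G^2 = 2 \sum_i n_i \log \frac{n_i}{n\pi_i},

with the convention 0 log 0 = 0. In the sparse extended multinomial setting many of the n_i are zero, which is the regime where, according to the abstract, the Pearson-type statistics become unworkable while the deviance remains well behaved.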
Stochastic PDE projection on manifolds: Assumed-Density and Galerkin Filters
Damiano Brigo, John Armstrong (GSI2015)
We review the manifold projection method for stochastic nonlinear filtering in a more general setting than in our previous paper in Geometric Science of Information 2013. We still use a Hilbert space structure on a space of probability densities to project the infinite dimensional stochastic partial differential equation for the optimal filter onto a finite dimensional exponential or mixture family, respectively, with two different metrics, the Hellinger distance and the L2 direct metric. This reduces the problem to finite dimensional stochastic differential equations. In this paper we summarize a previous equivalence result between Assumed Density Filters (ADF) and Hellinger/Exponential projection filters, and introduce a new equivalence between Galerkin method based filters and Direct metric/Mixture projection filters. This result allows us to give a rigorous geometric interpretation to ADF and Galerkin filters. We also discuss the different finite-dimensional filters obtained when projecting the stochastic partial differential equation for either the normalized (Kushner-Stratonovich) or a specific unnormalized (Zakai) density of the optimal filter.
The extremal index for a random tessellation
Nicolas Chenavier (GSI2015)
Let m be a random tessellation in ℝ^d, d ≥ 1, observed in the window W_ρ = ρ^(1/d)[0, 1]^d, ρ > 0, and let f be a geometrical characteristic. We investigate the asymptotic behaviour of the maximum of f(C) over all cells C ∈ m with nucleus in W_ρ as ρ goes to infinity. When the normalized maximum converges, we show that its asymptotic distribution depends on the so-called extremal index. Two examples of extremal indices are provided for Poisson-Voronoi and Poisson-Delaunay tessellations.
Some geometric consequences of the Schrödinger problem
Christian Leonard (GSI2015)
This note presents a short review of the Schrödinger problem and of the first steps that might lead to interesting consequences in terms of geometry. We stress the analogies between this entropy minimization problem and the renowned optimal transport problem, in the search for a theory of lower-bounded curvature for metric spaces, including discrete graphs.
Finite polylogarithms, their multiple analogues and the Shannon entropy
Herbert Gangl, Philippe Elbaz-Vincent (GSI2015)
We show that the entropy function – and hence the finite 1-logarithm – behaves a lot like certain derivations. We recall its cohomological interpretation as a 2-cocycle and also deduce 2n-cocycles for any n. Finally, we give some identities for finite multiple polylogarithms together with number-theoretic applications.
Extension of information geometry to non-statistical systems: some examples
Ben Anthonis, Jan Naudts, Michel Broniatowski (GSI2015)
Our goal is to extend information geometry to situations where statistical modeling is not obvious. The setting is that of modeling experimental data. Quite often the data are not of a statistical nature. Sometimes also the model is not a statistical manifold. An example of the former is the description of the Bose gas in the grand canonical ensemble. An example of the latter is the modeling of quantum systems with density matrices. Conditional expectations in the quantum context are reviewed. The border problem is discussed: through conditioning the model point shifts to the border of the differentiable manifold.
Biased estimators on quotient spaces
Nina Miolane, Xavier Pennec (GSI2015)
Usual statistics are defined, studied and implemented on Euclidean spaces. But what about statistics on other mathematical spaces, like manifolds with additional properties: Lie groups, quotient spaces, stratified spaces, etc.? How can we describe the interaction between statistics and geometry? The structure of quotient spaces in particular is widely used to model data, for example every time one deals with shape data. These can be shapes of constellations in Astronomy, shapes of human organs in Computational Anatomy, shapes of skulls in Palaeontology, etc. Given this broad field of applications, statistics on shapes – and more generally on observations belonging to quotient spaces – have been studied since the 1980s. However, most theories model the variability in the shapes but do not take into account the noise on the observations themselves. In this paper, we show that statistics on quotient spaces are biased and even inconsistent when one takes the noise into account. In particular, some algorithms of template estimation in Computational Anatomy are biased and inconsistent. Our development thus gives a first theoretical geometric explanation of an experimentally observed phenomenon. A biased estimator is not necessarily a problem: in statistics, a general rule of thumb is that a bias can be neglected, for example, when it represents less than 0.25 of the variance of the estimator. We can also think of neglecting the bias when it is low compared to the signal we estimate. In view of the applications, we thus characterize geometrically the situations in which the bias can be neglected and those in which it must be corrected.
Color Texture Discrimination using the Principal Geodesic Distance on a Multivariate Generalized Gaussian Manifold
Aqsa Shabbir, Geert Verdoolaege (GSI2015)
We present a new texture discrimination method for textured color images in the wavelet domain. In each wavelet subband, the correlation between the color bands is modeled by a multivariate generalized Gaussian distribution with fixed shape parameter (Gaussian, Laplacian). On the corresponding Riemannian manifold, the shape of texture clusters is characterized by means of principal geodesic analysis, specifically by the principal geodesic along which the cluster exhibits its largest variance. Then, the similarity of a texture to a class is defined in terms of the Rao geodesic distance on the manifold from the texture’s distribution to its projection on the principal geodesic of that class. This similarity measure is used in a classification scheme, referred to as principal geodesic classification (PGC). It is shown to perform significantly better than several other classifiers.
Standard Divergence in Manifold of Dual Affine Connections
Nihat Ay, Shun-Ichi Amari (GSI2015)
A divergence function defines a Riemannian metric G and dually coupled affine connections (∇, ∇*) with respect to it in a manifold M. When M is dually flat, a canonical divergence is known, which is uniquely determined from {G, ∇, ∇*}. We search for a standard divergence for a general non-flat M. It is introduced by the magnitude of the inverse exponential map, where the α = −1/3 connection plays a fundamental role. The standard divergence is different from the canonical divergence.
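For background (a standard fact recalled here, not a result of this paper): when M is dually flat with potential ψ, dual potential ψ*, and dual affine coordinates θ and η, the canonical divergence mentioned above takes the Bregman-type form

    D(p \,\|\, q) = \psi(\theta_p) + \psi^{*}(\eta_q) - \theta_p \cdot \eta_q ,

and the paper's standard divergence is proposed as a replacement for this quantity when M is not dually flat.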
Generalized Pareto Distributions, Image Statistics and Autofocusing in Automated Microscopy
Reiner Lenz (GSI2015)
We introduce the generalized Pareto distributions as a statistical model to describe thresholded edge-magnitude image filter results. Compared to the more common Weibull or generalized extreme value distributions these distributions have at least two important advantages: the use of a high threshold value ensures that only the most important edge points enter the statistical analysis, and the estimation is computationally more efficient since a much smaller number of data points has to be processed. The generalized Pareto distributions with a common threshold of zero form a two-dimensional Riemannian manifold with the metric given by the Fisher information matrix. We compute the Fisher matrix for shape parameters greater than -0.5 and show that the determinant of its inverse is a product of a polynomial in the shape parameter and the squared scale parameter. We apply this result by using the determinant as a sharpness function in an autofocus algorithm. We test the method on a large database of microscopy images with given ground-truth focus results. We found that for the vast majority of the focus sequences the results are in the correct focal range. Cases where the algorithm fails are specimens with too few objects and sequences where contributions from different layers result in a multi-modal sharpness curve. Using the geometry of the manifold of generalized Pareto distributions, more efficient autofocus algorithms can be constructed, but these optimizations are not included here.
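A minimal sketch of how such a sharpness score might be computed. It assumes the determinant of the inverse Fisher matrix takes the classical asymptotic-covariance form σ²(1 + ξ)²(1 + 2ξ) for shape ξ > −0.5 and scale σ; this matches the abstract's description (a polynomial in the shape parameter times the squared scale parameter) but is our guess at the exact expression, and the 0.9 quantile threshold is likewise only illustrative.

    import numpy as np
    from scipy.stats import genpareto

    def gpd_sharpness(edge_magnitudes, quantile=0.9):
        """Sharpness score from thresholded edge magnitudes: fit a
        generalized Pareto distribution to the exceedances over a high
        threshold and return sigma^2 * (1 + xi)^2 * (1 + 2 * xi)."""
        u = np.quantile(edge_magnitudes, quantile)      # high threshold
        exceedances = edge_magnitudes[edge_magnitudes > u] - u
        xi, _, sigma = genpareto.fit(exceedances, floc=0)
        return sigma ** 2 * (1 + xi) ** 2 * (1 + 2 * xi)

    # Toy usage: larger edge magnitudes (a "sharper" image) give a larger score.
    rng = np.random.default_rng(1)
    print(gpd_sharpness(rng.exponential(0.5, 10000)),
          gpd_sharpness(rng.exponential(2.0, 10000)))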
Universal, non-asymptotic confidence sets for circular means
Florian Kelma, Johannes Wieditz, Thomas Hotz (GSI2015)
Based on Hoeffding’s mass concentration inequalities, nonasymptotic confidence sets for circular means are constructed which are universal in the sense that they require no distributional assumptions. These are then compared with asymptotic confidence sets in simulations and for a real data set.
Barycentric Subspaces and Affine Spans in Manifolds
Xavier Pennec (GSI2015)
This paper addresses the generalization of Principal Component Analysis (PCA) to Riemannian manifolds. Current methods like Principal Geodesic Analysis (PGA) and Geodesic PCA (GPCA) minimize the distance to a “geodesic subspace”. This allows one to build sequences of nested subspaces which are consistent with a forward component analysis approach. However, these methods cannot be adapted to a backward analysis and they are not symmetric in the parametrization of the subspaces. We propose in this paper a new and more general family of subspaces in manifolds: barycentric subspaces are implicitly defined as the locus of points which are weighted means of k + 1 reference points. Depending on the generalization of the mean that we use, we obtain the Fréchet/Karcher barycentric subspaces (FBS/KBS) or the affine span (with the exponential barycenter). This definition restores the full symmetry between all parameters of the subspaces, contrary to geodesic subspaces, which intrinsically privilege one point. We show that this definition locally defines a submanifold of dimension k and that it generalizes geodesic subspaces in some sense. Like PGA, barycentric subspaces allow the construction of a forward nested sequence of subspaces which contains the Fréchet mean. However, the definition also allows the construction of a backward nested sequence which may not contain the mean. As this definition relies on points and does not explicitly refer to tangent vectors, it can be extended to non-Riemannian geodesic spaces. For instance, principal subspaces may naturally span several strata in stratified spaces, which is not the case with more classical generalizations of PCA.
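To make "weighted means of k + 1 reference points" concrete (our paraphrase of the weighted Fréchet construction, not the paper's exact wording): given reference points x_0, …, x_k on M, the Fréchet barycentric subspace collects the minimizers of the weighted variance over all normalized weights,

    \mathrm{FBS}(x_0,\dots,x_k)
      = \Big\{ \arg\min_{y \in M} \sum_{i=0}^{k} \lambda_i \, d^2(y, x_i)
        \;:\; \lambda \in \mathbb{R}^{k+1},\ \textstyle\sum_i \lambda_i = 1 \Big\},

with the Karcher variant obtained from local minimizers and the affine span from exponential barycenters, i.e. critical points of the same weighted criterion.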
A common symmetrization framework for random linear algorithms
Alain Sarlette (GSI2015)
This paper highlights some more examples of maps that follow a recently introduced “symmetrization” structure behind the average consensus algorithm. We review among others some generalized consensus settings and coordinate descent optimization.
Entropy and structure of the thermodynamical systems
Géry de Saxcé (GSI2015)
With respect to the concept of affine tensor, we analyse in this work the underlying geometric structure of the theories of Lie group statistical mechanics and relativistic thermodynamics of continua, formulated by Souriau independently of each other. We reveal the link between them in the classical Galilean context. These geometric structures of thermodynamics are rich, and we think they might be a source of inspiration for the geometric theory of information based on the concept of entropy.
Histograms of images valued in the manifold of colours endowed with perceptual metrics
Emmanuel Chevallier, Ivar Farup, Jesús Angulo (GSI2015)
We address here the problem of perceptual colour histograms. The Riemannian structure of perceptual distances is measured through standard sets of ellipses, such as MacAdam ellipses. We propose an approach based on local Euclidean approximations that makes it possible to take into account the Riemannian structure of perceptual distances without introducing computational complexity during the construction of the histogram.
Block-Jacobi methods with Newton-steps and non-unitary joint matrix diagonalization
Hao Shen, Martin Kleinsteuber (GSI2015)
In this work, we consider block-Jacobi methods with Newton steps in each subspace search and prove their local quadratic convergence to a local minimum with non-degenerate Hessian under some orthogonality assumptions on the search directions. Moreover, such a method is exemplified for non-unitary joint matrix diagonalization, where we present a block-Jacobi-type method on the oblique manifold with guaranteed local quadratic convergence.