GSI2017, 07/11/2017, DOI: 10.23723/17410/22581
Optimization in the Space of Smooth Shapes

Kathrin Welker
Trier University, Department of Mathematics, 54286 Trier, Germany
welker@uni-trier.de

Abstract. The theory of shape optimization problems constrained by partial differential equations is connected with the differential-geometric structure of the space of smooth shapes.

1 Introduction

A lot of real-world problems can be reformulated as shape optimization problems which are constrained by partial differential equations (PDEs), see, for instance, [6, 16]. The subject of shape optimization is covered by several fundamental monographs (cf. [5, 20]). In recent work, PDE constrained shape optimization problems are embedded in the framework of optimization on shape spaces. Finding a shape space and an associated metric is a challenging task, and different approaches lead to various models. One possible approach is to define shapes as elements of a Riemannian manifold, as proposed in [11]. In [12], a survey of various suitable inner products is given, e.g., the curvature-weighted metric and the Sobolev metric. From a theoretical point of view this is attractive because algorithmic ideas from [1] can be combined with approaches from differential geometry. In [17], shape optimization is considered as optimization on a Riemannian shape manifold which contains smooth shapes, i.e., shapes with infinitely differentiable boundaries. We consider exactly this manifold in the following.

A well-established approach in shape optimization is to deal with shape derivatives in so-called Hadamard form, i.e., in the form of integrals over the surface, as well as with intrinsic shape metrics (cf. [13, 20]). Major effort in shape calculus has been devoted to such surface expressions (cf. [5, 20]), which are often very tedious to derive. Along the way, volume formulations appear as an intermediate step. Recently, it has been shown that this intermediate formulation has numerical advantages, see, for instance, [3, 6, 14]. In [9], practical advantages of volume shape formulations have also been demonstrated; e.g., they require weaker smoothness assumptions. Furthermore, the derivation as well as the implementation of volume formulations require less manual and programming work. However, volume integral forms of shape derivatives require an outer metric on the domain surrounding the shape boundary. In [18], both points of view are harmonized by deriving a metric from an outer metric. Efficient shape optimization algorithms based on this metric, which reduce the analytical effort so far involved in the derivation of shape derivatives, are proposed in [18, 19].

The main aim of this paper is to explain how shape calculus can be combined with geometric concepts of the space of smooth shapes and to outline how this combination results in efficient optimization techniques. This paper reports on ongoing work and has the following structure. A short overview of basic concepts in shape optimization is given in Section 2. Afterwards, in Section 3, we not only introduce the space of smooth shapes, but also consider the surface and volume form of shape derivatives and summarize, for each formulation, the way from shape derivatives to entire optimization algorithms in this shape space.

2 A brief introduction to shape optimization

In this section, we set up the notation and terminology of basic shape optimization concepts. For a detailed introduction to shape calculus, we refer to [5, 20].
Shape optimization deals with shape functionals, which are defined as functions $J \colon \mathcal{A} \to \mathbb{R}$, $\Omega \mapsto J(\Omega)$, with $\mathcal{A} \subset \{\Omega \colon \Omega \subset D\}$, where $D$ denotes a non-empty subset of $\mathbb{R}^d$. One of the main focuses of shape optimization is to solve shape optimization problems. A shape optimization problem is given by $\min_\Omega J(\Omega)$, where $J$ is a shape functional. When $J$ depends on a solution of a PDE, we call the shape optimization problem PDE constrained.

To solve PDE constrained shape optimization problems, we need their shape derivatives: Let $D$ be as above. Moreover, let $\{F_t\}_{t \in [0,T]}$ be a family of mappings $F_t \colon \overline{D} \to \mathbb{R}^d$ such that $F_0 = \mathrm{id}$, where $\overline{D}$ denotes the closure of $D$ and $T > 0$. This family transforms the domain $\Omega$ into new perturbed domains $\Omega_t := F_t(\Omega) = \{F_t(x) \colon x \in \Omega\}$ with $\Omega_0 = \Omega$, and the boundary $\Gamma$ of $\Omega$ into new perturbed boundaries $\Gamma_t := F_t(\Gamma) = \{F_t(x) \colon x \in \Gamma\}$ with $\Gamma_0 = \Gamma$. Such a transformation can be described, e.g., by the perturbation of identity, which is defined by $F_t(x) := x + tV(x)$, where $V$ denotes a sufficiently smooth vector field.

Definition 1. Let $D \subset \mathbb{R}^d$ be open, where $d \geq 2$ is a natural number. Moreover, let $k \in \mathbb{N} \cup \{\infty\}$, let $\Omega \subset D$ be measurable and let $\Omega_t$ denote the perturbed domains defined above. The Eulerian derivative of a shape functional $J$ at $\Omega$ in direction $V \in C_0^k(D, \mathbb{R}^d)$ is defined by
$$DJ(\Omega)[V] := \lim_{t \to 0^+} \frac{J(\Omega_t) - J(\Omega)}{t}. \tag{1}$$
If the Eulerian derivative (1) exists for all directions $V \in C_0^k(D, \mathbb{R}^d)$ and the mapping $G(\Omega) \colon C_0^k(D, \mathbb{R}^d) \to \mathbb{R}$, $V \mapsto DJ(\Omega)[V]$ is linear and continuous, the expression $DJ(\Omega)[V]$ is called the shape derivative of $J$ at $\Omega$ in direction $V \in C_0^k(D, \mathbb{R}^d)$. In this case, $J$ is called shape differentiable of class $C^k$ at $\Omega$.

There are several options to prove shape differentiability of shape functionals, e.g., the min-max approach [5], the chain rule approach [20], the Lagrange method of Céa [4] and the rearrangement method [7]. Note that there are cases where the method of Céa fails (cf. [15]). A nice overview of these approaches is given in [21]. In many cases, the shape derivative arises in two equivalent notational forms:
$$DJ_\Omega[V] := \int_\Omega RV(x) \, dx \quad \text{(volume formulation)} \tag{2}$$
$$DJ_\Gamma[V] := \int_\Gamma r(s) \, \langle V(s), n(s) \rangle \, ds \quad \text{(surface formulation)} \tag{3}$$
Here $r \in L^1(\Gamma)$ and $R$ is a differential operator acting linearly on the vector field $V$ with $DJ_\Omega[V] = DJ(\Omega)[V] = DJ_\Gamma[V]$. Surface expressions of shape derivatives are often very tedious to derive. Along the way, volume formulations appear as an intermediate step. These volume expressions are preferable over surface forms, not only because of the analytical effort saved, but also because of the additional regularity assumptions which usually have to be required in order to transform volume into surface forms, as well as because of the programming effort saved.
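As a concrete illustration of the two forms (2) and (3), consider the standard textbook example of the area functional $J(\Omega) = \int_\Omega 1 \, dx$ (this example is not taken from the paper). Expanding $\det(I + tDV) = 1 + t \operatorname{div} V + O(t^2)$ in the transformation rule yields
$$DJ(\Omega)[V] = \int_\Omega \operatorname{div} V(x) \, dx = \int_\Gamma \langle V(s), n(s) \rangle \, ds,$$
so here $RV = \operatorname{div} V$ and $r \equiv 1$. For $\Omega$ the unit disk and $V(x) = x$, both forms evaluate to $2\pi$: the volume form gives $\int_\Omega 2 \, dx = 2\pi$ and the surface form gives the perimeter of $S^1$. The following minimal sketch, which assumes nothing beyond NumPy and uses purely illustrative names, checks this value against the difference quotient in (1):

```python
# Numerical check of the Eulerian derivative (1) for the area functional on
# the unit disk, using the perturbation of identity F_t(x) = x + t*V(x) with
# V(x) = x, so that Omega_t is the disk of radius 1 + t.
import numpy as np

def polygon_area(pts):
    """Shoelace formula for a closed polygon given as an (N, 2) array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # boundary of Omega

def J(t):
    """Area of the perturbed domain Omega_t = F_t(Omega)."""
    return polygon_area((1.0 + t) * circle)

for t in (1e-2, 1e-4, 1e-6):
    print(t, (J(t) - J(0.0)) / t)  # approaches DJ(Omega)[V] = 2*pi = 6.2832...
```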
In the next section, it is outlined how shape calculus, and in particular surface as well as volume shape derivatives, can be combined with geometric concepts of the space of smooth shapes.

3 Shape calculus combined with geometric concepts of the space of smooth shapes

In this section, we analyze the connection of Riemannian geometry on the space of smooth shapes to shape optimization. Moreover, we summarize the way from shape derivatives to entire optimization algorithms in the space of smooth shapes for both surface and volume shape derivative formulations. First, we introduce the space of smooth shapes.

In [11], the set of all two-dimensional smooth shapes is characterized by
$$B_e(S^1, \mathbb{R}^2) := \mathrm{Emb}(S^1, \mathbb{R}^2) / \mathrm{Diff}(S^1),$$
i.e., the orbit space of $\mathrm{Emb}(S^1, \mathbb{R}^2)$ under the action by composition from the right by the Lie group $\mathrm{Diff}(S^1)$. Here $\mathrm{Emb}(S^1, \mathbb{R}^2)$ denotes the set of all embeddings from the unit circle $S^1$ into the plane $\mathbb{R}^2$, and $\mathrm{Diff}(S^1)$ is the set of all diffeomorphisms from $S^1$ into itself. In [8], it is proven that $B_e(S^1, \mathbb{R}^2)$ is a smooth manifold. For the sake of completeness it should be mentioned that the shape space $B_e(S^1, \mathbb{R}^2)$ together with appropriate inner products is even a Riemannian manifold. In [12], a survey of various suitable inner products is given. Note that the shape space $B_e(S^1, \mathbb{R}^2)$ and its theoretical results can be generalized to higher dimensions (cf. [10]). The tangent space $T_c B_e(S^1, \mathbb{R}^2)$ is isomorphic to the set of all smooth normal vector fields along $c \in B_e(S^1, \mathbb{R}^2)$, i.e.,
$$T_c B_e(S^1, \mathbb{R}^2) \cong \{ h \colon h = \alpha n, \ \alpha \in C^\infty(S^1) \}, \tag{4}$$
where $n$ denotes the exterior unit normal field to the shape boundary $c$ such that $n(\theta) \perp c_\theta(\theta)$ for all $\theta \in S^1$, and $c_\theta = \frac{\partial c}{\partial \theta}$ denotes the circumferential derivative.

Due to the Hadamard Structure Theorem given in [20, Theorem 2.27], there exists a scalar distribution $r$ on the boundary $\Gamma$ of the domain $\Omega$ under consideration. If we assume $r \in L^1(\Gamma)$, the shape derivative can be expressed on the boundary $\Gamma$ of $\Omega$ (cf. (3)). The distribution $r$ is often called the shape gradient. Note, however, that gradients always depend on the scalar product chosen on the space under consideration; more precisely, $r$ is the usual $L^2$ shape gradient. If we want to optimize on the shape manifold $B_e$, we have to find a representation of the shape gradient with respect to an appropriate inner product. This representation is called the Riemannian shape gradient and is required to formulate optimization methods in $B_e$.

In order to deal with surface formulations of shape derivatives in optimization techniques, the Sobolev metric, for example, is an appropriate inner product. Of course, there are many further metrics on $B_e$ (cf. [12]), but the Sobolev metric is the most suitable choice for our applications. One reason for this is that the Riemannian shape gradient with respect to $g^1$ acts as a Laplace-Beltrami smoothing of the usual $L^2$ shape gradient (cf. Definition 2). Thus, in the following, we consider the first Sobolev metric $g^1$ on the shape space $B_e$. It is given by
$$g^1 \colon T_c B_e(S^1, \mathbb{R}^2) \times T_c B_e(S^1, \mathbb{R}^2) \to \mathbb{R}, \quad (h, k) \mapsto \int_{S^1} \langle (I - A \Delta_c) \alpha, \beta \rangle \, ds,$$
where $h = \alpha n$, $k = \beta n$ denote elements of the tangent space $T_c B_e(S^1, \mathbb{R}^2)$, $A > 0$ and $\Delta_c$ denotes the Laplace-Beltrami operator on the surface $c$. For the definition of the Sobolev metric $g^1$ in higher dimensions we refer to [2].

Now, we have to detail the Riemannian shape gradient with respect to $g^1$. The shape derivative can be expressed as
$$DJ_\Gamma[V] = \int_\Gamma \alpha r \, ds \tag{5}$$
if $V\vert_{\partial\Omega} = \alpha n$. In order to get an expression of the Riemannian shape gradient with respect to the Sobolev metric $g^1$, we look at the isomorphism (4). Due to this isomorphism, a tangent vector $h \in T_\Gamma B_e$ is given by $h = \alpha n$ with $\alpha \in C^\infty(\Gamma)$. This leads to the following definition.

Definition 2. The Riemannian shape gradient of a shape differentiable objective function $J$ in terms of the Sobolev metric $g^1$ is given by $\mathrm{grad}(J) = qn$ with $(I - A\Delta_\Gamma) q = r$, where $\Gamma \in B_e$, $A > 0$, $q \in C^\infty(\Gamma)$ and $r$ is the $L^2$ shape gradient given in (3).

The Riemannian shape gradient is required to formulate optimization methods in the shape space $B_e$.
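Definition 2 amounts to one elliptic solve along the boundary. The following minimal sketch, in which the discretization, the parameter $A$ and the test data are illustrative assumptions rather than anything from the paper, computes $q$ from a rough $L^2$ shape gradient $r$ on a discretized unit circle, where the Laplace-Beltrami operator reduces to the periodic second derivative with respect to arclength:

```python
# Sketch of Definition 2 on a discretized unit circle: solve the smoothing
# equation (I - A*Delta) q = r for the coefficient q of grad(J) = q*n.
import numpy as np

N = 256
h = 2.0 * np.pi / N                   # arclength step on the unit circle
A = 0.1                               # metric parameter A > 0

# Periodic second-difference matrix: Laplace-Beltrami operator on S^1
idx = np.arange(N)
lap = np.zeros((N, N))
lap[idx, idx] = -2.0
lap[idx, (idx + 1) % N] = 1.0
lap[idx, (idx - 1) % N] = 1.0
lap /= h ** 2

theta = idx * h
r = np.sign(np.sin(3.0 * theta))      # rough, nonsmooth L^2 shape gradient

q = np.linalg.solve(np.eye(N) - A * lap, r)   # (I - A*Delta) q = r
```

Since $(I - A\Delta_\Gamma)^{-1}$ damps high frequencies, $q$ is a smoothed version of $r$; this is the Laplace-Beltrami smoothing referred to above.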
In the setting of PDE constrained shape optimization problems, a Lagrange-Newton method is obtained by applying a Newton method to find stationary points of the Lagrangian of the optimization problem. In contrast to this method, which requires the Hessian in each iteration, quasi-Newton methods only need an approximation of the Hessian. Such an approximation is realized, e.g., by a limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) update. In a limited-memory BFGS method, a representation of the shape gradient with respect to the Sobolev metric $g^1$ has to be computed and applied as a Dirichlet boundary condition in the linear elasticity mesh deformation. We refer to [17] for the limited-memory BFGS method in $B_e$. In Figure 1, the entire optimization algorithm for the limited-memory BFGS case is summarized. Note that this method boils down to a steepest descent method by omitting the computation of the BFGS update; a steepest descent method only needs the gradient, not the Hessian, in each iteration.

1. Evaluate measurements.
2. Solve the state and adjoint equation of the optimization problem.
3. Assemble the Dirichlet boundary condition of linear elasticity:
   (a) compute the Riemannian shape gradient with respect to $g^1$;
   (b) compute a limited-memory BFGS update.
4. Solve the linear elasticity equation with zero source term.
5. Apply the resulting deformation to the FE mesh.

Fig. 1. Entire optimization algorithm based on surface expressions and $g^1$.

One possible approach to use volume formulations of shape derivatives is to consider Steklov-Poincaré metrics. In order to define these metrics, let us consider a compact domain $\Omega \subset X \subset \mathbb{R}^d$ with $\Omega \neq \emptyset$ and $C^\infty$ boundary $\Gamma := \partial\Omega$, where $X$ denotes a bounded domain with Lipschitz boundary $\Gamma^{\mathrm{out}} := \partial X$. In particular, this means $\Gamma \in B_e(S^{d-1}, \mathbb{R}^d)$. In this setting, the Steklov-Poincaré metric is defined by
$$g^S \colon H^{1/2}(\Gamma) \times H^{1/2}(\Gamma) \to \mathbb{R}, \quad (\alpha, \beta) \mapsto \int_\Gamma \alpha(s) \, [(S^{pr})^{-1} \beta](s) \, ds.$$
Here $S^{pr}$ denotes the projected Poincaré-Steklov operator, which is given by
$$S^{pr} \colon H^{-1/2}(\Gamma) \to H^{1/2}(\Gamma), \quad \alpha \mapsto (\gamma_0 U)^T n,$$
where $\gamma_0 \colon H^1_0(X, \mathbb{R}^d) \to H^{1/2}(\Gamma, \mathbb{R}^d)$, $U \mapsto U\vert_\Gamma$, and $U \in H^1_0(X, \mathbb{R}^d)$ solves the Neumann problem
$$a(U, V) = \int_\Gamma \alpha \, (\gamma_0 V)^T n \, ds \quad \forall V \in H^1_0(X, \mathbb{R}^d)$$
with $a(\cdot, \cdot)$ being a symmetric and coercive bilinear form. Due to the isomorphism (4) and the expression (5), we can state the connection of $B_e$ with respect to $g^S$ to shape calculus:

Definition 3. Let $r$ denote the $L^2$ shape gradient given in (3). Moreover, let $S^{pr}$ and $\gamma_0$ be as above. A representation $h \in T_\Gamma B_e \cong C^\infty(\Gamma)$ of the shape gradient in terms of $g^S$ is determined by
$$g^S(\phi, h) = (r, \phi)_{L^2(\Gamma)} \quad \forall \phi \in C^\infty(\Gamma),$$
which is equivalent to
$$\int_\Gamma \phi(s) \, [(S^{pr})^{-1} h](s) \, ds = \int_\Gamma r(s) \, \phi(s) \, ds \quad \forall \phi \in C^\infty(\Gamma). \tag{6}$$

The definition of the shape gradient with respect to the Steklov-Poincaré metric enables the formulation of optimization methods in $B_e$ which involve volume formulations of shape derivatives. From (6) we get $h = S^{pr} r = (\gamma_0 U)^T n$, where $U \in H^1_0(X, \mathbb{R}^d)$ solves
$$a(U, V) = \int_\Gamma r \, (\gamma_0 V)^T n \, ds = DJ_\Gamma[V] = DJ_\Omega[V] \quad \forall V \in H^1_0(X, \mathbb{R}^d).$$
In this way, we get the gradient representation $h$ and the mesh deformation $U$ all at once. In each iteration of the optimization algorithm, we have to solve the so-called deformation equation $a(U, V) = b(V)$ for all test functions $V$, where $b(\cdot)$ is a linear form given by $b(V) := DJ_{\mathrm{vol}}(\Omega)[V] + DJ_{\mathrm{surf}}(\Omega)[V]$. Here $J_{\mathrm{surf}}(\Omega)$ denotes the parts of the objective function leading to surface shape derivative expressions; they are incorporated as Neumann boundary conditions. The parts of the objective function leading to volume shape derivative expressions are denoted by $J_{\mathrm{vol}}(\Omega)$. Note that, from a theoretical point of view, the volume and surface shape derivative formulations have to be equal to each other for all test functions; since the surface expression (3) vanishes for test functions whose support does not meet $\Gamma$, $DJ_{\mathrm{vol}}[V]$ is assembled only for test functions $V$ whose support includes $\Gamma$.
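To make the deformation equation concrete, the following is a minimal sketch written for legacy FEniCS, assuming the dolfin and mshr packages are available. The annular hold-all domain, the elasticity parameters and the boundary density $r$ are placeholder assumptions rather than the setting of [18, 19]; for simplicity, only the region between $\Gamma$ and $\Gamma^{\mathrm{out}}$ is meshed, so the integral over $\Gamma$ becomes an ordinary boundary term, and only a surface contribution $DJ_{\mathrm{surf}}$ is assembled (a volume contribution $DJ_{\mathrm{vol}}$ would enter as an additional cell integral on the right-hand side):

```python
# Hedged sketch of the deformation equation a(U, V) = b(V) from the
# Steklov-Poincare approach; all concrete choices below are placeholders.
from fenics import *
from mshr import Circle, generate_mesh

# Hold-all region: an annulus whose inner circle plays the role of Gamma
domain = Circle(Point(0.0, 0.0), 2.0) - Circle(Point(0.0, 0.0), 1.0)
mesh = generate_mesh(domain, 32)

W = VectorFunctionSpace(mesh, "CG", 1)    # vector H^1 space on the region
U, V = TrialFunction(W), TestFunction(W)

# Symmetric, coercive bilinear form a(.,.): linear elasticity
mu, lam = Constant(1.0), Constant(0.1)
def sigma(u):
    eps = sym(grad(u))
    return 2.0 * mu * eps + lam * tr(eps) * Identity(2)
a = inner(sigma(U), sym(grad(V))) * dx

# b(V): surface shape derivative contribution as a Neumann term; r stands in
# for the L^2 shape gradient on Gamma. The measure ds runs over the whole
# boundary, but the Dirichlet condition below deactivates Gamma_out.
n = FacetNormal(mesh)
r = Expression("sin(3.0 * atan2(x[1], x[0]))", degree=2)
b_form = r * dot(V, n) * ds

# Clamp the outer boundary Gamma_out (zero trace there, as for H^1_0)
bc = DirichletBC(W, Constant((0.0, 0.0)),
                 "on_boundary && x[0]*x[0] + x[1]*x[1] > 2.25")

U_sol = Function(W)
solve(a == b_form, U_sol, bc)
# U_sol is the mesh deformation; its normal trace on Gamma is the gradient
# representation h = (gamma_0 U)^T n from (6), obtained in the same solve.
```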
Figure 2 summarizes the entire optimization algorithm in the setting of the Steklov-Poincaré metric and, thus, in the case of volume shape derivative expressions. This algorithm is very attractive from a computational point of view: the computation of a representation of the shape gradient with respect to the chosen inner product of the tangent space is moved into the mesh deformation itself. The elliptic operator is used both as an inner product and as a mesh deformation, which leads to only one linear system that has to be solved. In shape optimization one usually computes a descent direction as a deformation of the variable boundary; in contrast to the algorithm based on the Sobolev metric (cf. Figure 1), the metric used here interprets the descent direction as a volumetric force within the FE grid. Note that this method also boils down to a steepest descent method by omitting the computation of the BFGS update. For more details about this approach, and in particular the implementation details, we refer to [18, 19].

1. Evaluate measurements.
2. Solve the state and adjoint equation of the optimization problem.
3. Assemble the deformation equation:
   (a) assemble $DJ_\Omega[V]$ as a source term, only for $V$ with $\Gamma \cap \mathrm{supp}(V) \neq \emptyset$;
   (b) assemble the derivative contributions which are in surface formulation into the right-hand side in the form of Neumann boundary conditions.
4. Solve the deformation equation.
5. Compute a limited-memory BFGS update.
6. Apply the resulting deformation to the FE mesh.

Fig. 2. Entire optimization algorithm based on volume formulations and $g^S$.

4 Conclusion

The differential-geometric structure of the space of smooth shapes is applied to the theory of PDE constrained shape optimization problems. In particular, Riemannian shape gradients with respect to the Sobolev metric and the Steklov-Poincaré metric are defined. If we consider Sobolev metrics, we have to deal with surface formulations of shape derivatives. An intermediate and equivalent result in the process of deriving surface forms is the volume expression, which is preferable over surface forms. One possible approach to use volume forms is to consider Steklov-Poincaré metrics. The gradients with respect to both $g^1$ and $g^S$ open the door to formulating optimization algorithms in the space of smooth shapes.

Acknowledgement

This work has been partly supported by the German Research Foundation within the priority program SPP 1962 under contract number Schu804/15-1.

References

1. P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2008.
2. M. Bauer, P. Harms, and P.W. Michor. Sobolev metrics on shape space of surfaces. Journal of Geometric Mechanics, 3(4):389-438, 2011.
3. M. Berggren. A unified discrete-continuous sensitivity analysis method for shape optimization. In W. Fitzgibbon et al., editors, Applied and Numerical Partial Differential Equations, volume 15 of Computational Methods in Applied Sciences, pages 25-39. Springer, 2010.
4. J. Céa. Conception optimale ou identification de formes, calcul rapide de la dérivée directionnelle de la fonction coût. RAIRO Modélisation mathématique et analyse numérique, 20(3):371-402, 1986.
5. M.C. Delfour and J.-P. Zolésio. Shapes and Geometries: Metrics, Analysis, Differential Calculus, and Optimization, volume 22 of Advances in Design and Control. SIAM, 2nd edition, 2011.
6. P. Gangl, A. Laurain, H. Meftahi, and K. Sturm. Shape optimization of an electric motor subject to nonlinear magnetostatics. SIAM Journal on Scientific Computing, 37(6):B1002-B1025, 2015.
7. K. Ito, K. Kunisch, and G.H. Peichl. Variational approach to shape derivatives. ESAIM: Control, Optimisation and Calculus of Variations, 14(3):517-539, 2008.
8. A. Kriegl and P.W. Michor. The Convenient Setting of Global Analysis, volume 53 of Mathematical Surveys and Monographs. American Mathematical Society, 1997.
9. A. Laurain and K. Sturm. Domain expression of the shape derivative and application to electrical impedance tomography. Technical Report No. 1863, Weierstraß-Institut für angewandte Analysis und Stochastik, Berlin, 2013.
10. P.W. Michor and D. Mumford. Vanishing geodesic distance on spaces of submanifolds and diffeomorphisms. Documenta Mathematica, 10:217-245, 2005.
11. P.W. Michor and D. Mumford. Riemannian geometries on spaces of plane curves. Journal of the European Mathematical Society, 8(1):1-48, 2006.
12. P.W. Michor and D. Mumford. An overview of the Riemannian metrics on spaces of curves using the Hamiltonian approach. Applied and Computational Harmonic Analysis, 23(1):74-113, 2007.
13. B. Mohammadi and O. Pironneau. Applied Shape Optimization for Fluids. Oxford University Press, 2001.
14. A. Paganini. Approximative shape gradients for interface problems. In A. Pratelli and G. Leugering, editors, New Trends in Shape Optimization, volume 166 of International Series of Numerical Mathematics, pages 217-227. Springer, 2015.
15. O. Pantz. Sensibilité de l'équation de la chaleur aux sauts de conductivité. Comptes Rendus Mathématique de l'Académie des Sciences, 341(5):333-337, 2005.
16. S. Schmidt, E. Wadbro, and M. Berggren. Large-scale three-dimensional acoustic horn optimization. SIAM Journal on Scientific Computing, 38(6):B917-B940, 2016.
17. V.H. Schulz, M. Siebenborn, and K. Welker. Structured inverse modeling in parabolic diffusion problems. SIAM Journal on Control and Optimization, 53(6):3319-3338, 2015.
18. V.H. Schulz, M. Siebenborn, and K. Welker. Efficient PDE constrained shape optimization based on Steklov-Poincaré type metrics. SIAM Journal on Optimization, 26(4):2800-2819, 2016.
19. M. Siebenborn and K. Welker. Computational aspects of multigrid methods for optimization in shape spaces. Submitted to SIAM Journal on Scientific Computing, 2016.
20. J. Sokolowski and J.-P. Zolésio. Introduction to Shape Optimization, volume 16 of Springer Series in Computational Mathematics. Springer, 1992.
21. K. Sturm. Shape differentiability under non-linear PDE constraints. In A. Pratelli and G. Leugering, editors, New Trends in Shape Optimization, volume 166 of International Series of Numerical Mathematics, pages 271-300. Springer, 2015.