An introduction to internal stabilization of infinite-dimensional linear systems

01/10/2017
Authors: A. Quadrat
Publication: e-STA 2004-1
DOI: 10.23723/545:2004-1/20065


A. Quadrat (INRIA Sophia Antipolis, CAFE project, 2004 route des lucioles, BP 93, 06902 Sophia Antipolis cedex, France. Alban.Quadrat@sophia.inria.fr)

Abstract— In these notes, we give a short introduction to the fractional representation approach to analysis and synthesis problems [12], [14], [17], [28], [29], [50], [71], [77], [78]. In particular, using algebraic analysis (commutative algebra, module theory, homological algebra, Banach algebras), we shall give necessary and sufficient conditions for a plant to be internally stabilizable or to admit (weakly) left/right/doubly coprime factorizations. Moreover, we shall explicitly characterize all the rings A of SISO stable plants such that every plant − defined by means of a transfer matrix with entries in the quotient field K = Q(A) of A − satisfies one of the previous properties (e.g. internal stabilization, (weakly) doubly coprime factorizations). Using the previous results, we shall show how to parametrize all stabilizing controllers of an internally stabilizable plant which does not necessarily admit a doubly coprime factorization. Finally, we shall give some necessary and sufficient conditions for a plant to be strongly stabilizable (i.e. stabilizable by a stable controller) and prove that every internally stabilizable MIMO plant over A = H∞(C+) is strongly stabilizable.

Index Terms— Fractional representation approach to analysis and synthesis problems, internal stabilization, (weak) left/right/doubly coprime factorizations, parametrization of all stabilizing controllers, strong/simultaneous/robust stabilization, algebraic analysis, module theory, theory of fractional ideals, homological algebra, Banach algebras, stable range, H∞(C+).

I. A BRIEF INTRODUCTION

For the twentieth anniversary of the paper "Algebraic and topological aspects of feedback stabilization" by M. Vidyasagar, H. Schneider and B. A. Francis, published in IEEE Transactions on Automatic Control (August 1982) [77], we want to present in these notes certain of its main ideas as well as to give a personal overview of their recent progress. The impact of this paper, as well as of the book [78], on present research [79], [83] is difficult to evaluate. However, we can easily say that certain ideas of [77], [78] (fractional representation of systems, internal stabilization, Youla-Kučera parametrization of the stabilizing controllers, strong and simultaneous stabilizations, graph approach to plants, graph topology, margins of robustness...) have been at the core of the successful development of H∞-control for finite-dimensional linear systems [20], [25], [29] in the nineties. We refer to [2], [41] for nice surveys of stabilization problems for finite-dimensional systems. The question of whether certain of the previous results can be extended to infinite-dimensional linear systems (e.g. delay systems, partial differential equations, convolution systems) was naturally asked in [17], [77] (see also the last chapter of [78]). However, the larger the class of systems becomes, the more difficult it is to give a general answer concerning these problems (internal stabilization, existence of doubly coprime factorizations, parametrization of the stabilizing controllers...). Hence, certain parts of the program developed in [17], [77] for infinite-dimensional linear systems are still in progress nowadays (see e.g.
[14], [28], [44], [47], [56], [57], [58], [60], [69], [71], [73] and the references therein). In these notes, we shall mainly focus on the following general questions [77], [78]: 1) Do there exist necessary and sufficient conditions for internal stabilization? 2) When can we parametrize all stabilizing controllers of a plant by means of the well-known Youla-Kučera parametrization? 3) Can we characterize all the rings A of single input single output (SISO) stable plants such that every transfer matrix − defined by a matrix with entries in the quotient field K = Q(A) of A − is internally stabilizable? For lack of space, we shall not have the possibility to develop certain results such as the equivalence of external and internal closed-loop stability [14], [44], the graph approach to plants (see [28], [83] and the references therein), the graph topology, margins of robustness [14], and H2- or H∞-optimal controllers [14], [29], [50]. Moreover, we shall only use an input-output approach to systems via transfer matrices as it is developed in [11], [12], [17], [50], [77], [78]. We refer to [14], [44] for the link between the frequency-domain approach and the state-space one (e.g. stabilizable and detectable state-space systems, Pritchard-Salamon class of systems). More generally, we refer the reader to the following nice references [14], [44], [50], [79], [83] for complementary information and bibliographies. Throughout this paper, we shall denote by A a commutative integral domain [31], [66] (namely a ring with an identity which satisfies ∀ a, b ∈ A: a b = b a, and a b = 0, b ≠ 0 ⇒ a = 0), the group of units of A by U(A) = {a ∈ A | ∃ b ∈ A : a b = 1} and the quotient field of A by: K = Q(A) = {a/b | 0 ≠ b, a ∈ A}. By convention, every vector of An is a row vector. Moreover, Aq×p will denote the set of q × p matrices with entries in A, GLp(A) = {U ∈ Ap×p | ∃ V ∈ Ap×p : U V = V U = Ip} the group of invertible p × p matrices of Ap×p and Ip its identity. If R ∈ Aq×p, then RT will denote the transposed matrix and (a1, . . . , an) the ideal A a1 + . . . + A an of A. Finally, p, q and r will always be three positive integers satisfying p = q + r, and ≜ will mean 'by definition'. II. THE FRACTIONAL REPRESENTATION APPROACH TO SYSTEMS "...As soon as I read this, my immediate reaction was 'What is so difficult about handling that case? All one has to do is to write the unstable part as a ratio of two stable rational functions!' Without exaggeration, I can say that the idea occurred to me within no more than 10 min. So there it is − the best idea I have had in my entire research career, and it took less than 10 min. All the thousands of hours I have spent thinking about problems in control theory since have not resulted in any ideas as good as this one. I don't think I know what the 'moral of this story' really is!", M. Vidyasagar [79]. The fractional representation approach to systems is an input-output theory based on the idea that the algebraic structure of a class of single input single output (SISO) plants needs to be a ring if we want to put two systems in series (×) and in parallel (+) [84]. Moreover, in the seventies, M. Vidyasagar [76] and C. Desoer and his coauthors [15] introduced the idea of considering an integral domain A of SISO stable plants in order to represent an unstable plant as a ratio of two stable plants, i.e. as an element of the quotient field of A, namely K = Q(A) = {n/d | 0 ≠ d, n ∈ A} (see [79] for a historical survey).
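The following small computation illustrates this idea on the standard plant p = 1/(s − 1), which is worked out as Example 2.2 below. It is only a sketch in Python/sympy, not part of the original development, and the membership test in_RH_inf is our own naive helper for the ring RH∞ of proper stable rational functions:

```python
# Sketch: writing an unstable plant p as a ratio n/d of two elements of
# A = RH_infinity (see Example 2.2 below).  The helper in_RH_inf is ours.
from sympy import symbols, cancel, fraction, degree, roots, re, simplify

s = symbols('s')

def in_RH_inf(f):
    """Naive RH_inf membership test: proper and all poles in Re(s) < 0."""
    num, den = fraction(cancel(f))
    proper = degree(num, s) <= degree(den, s)
    stable = all(re(pole) < 0 for pole in roots(den, s))
    return proper and stable

p = 1 / (s - 1)          # unstable plant: pole at s = 1 in C_+
n = 1 / (s + 1)          # stable "numerator"
d = (s - 1) / (s + 1)    # stable "denominator"
assert not in_RH_inf(p)
assert in_RH_inf(n) and in_RH_inf(d)
assert simplify(n / d - p) == 0   # p = n/d, a ratio of stable plants
```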
Examples of integral domains of SISO stable plants, usually encountered in the literature, are the following ones. Example 2.1: • The ring of proper stable real rational functions [41], [78] RH∞ = {n/d | 0 6= d, n ∈ R[s], deg n ≤ deg d, d(s? ) = 0 ⇒ Re (s? ) < 0}. (1) A transfer function p belongs to RH∞ iff p is the transfer function of an exponentially stable time-invariant finite- dimensional SISO linear system. • The Hardy algebra of bounded holomorphic functions on the open right half plane C+ = {s ∈ C | Re s > 0}, i.e. H∞(C+) = {f ∈ H(C+) | sup s∈C+ |f(s)| < +∞}, (2) where H(C+) denotes the ring of holomorphic functions in C+ [14], [84]. A transfer function p belongs to H∞(C+) iff k p k∞= sup 06=u∈H2(C+) k p u k2 k u k2 < +∞, where H2(C+) is the Hilbert space of the holomorphic functions in C+ which are bounded w.r.t. the norm: k f k2 2= sup Re x>0 Z +∞ −∞ |f(x + i y)|2 dy. Let us recall that H2(C+) = L(L2(R+), where L(·) denotes the Laplace transform. Hence, p belongs to H∞(C+) iff p is the transfer function of a L2(R+)-stable time-invariant infinite-dimensional SISO system [14]. • The Wiener algebra defined by A = {h(t) = f(t) + P+∞ i=0 ai δt−ti | f ∈ L1(R+), (ai)i≥0 ∈ l1(Z+), 0 = t0 < t1 < t2 < . . . , ti ∈ R+} (3) where h is bounded w.r.t. the norm: k h kA = k f kL1(R+) + k (ai)i≥0 kl1(Z+) = R +∞ 0 |f(t)| dt + P+∞ i=0 |ai|. Then, h belongs to A iff h is the impulse response of a L∞(R+)-stable time-invariant infinite-dimensional SISO linear system (BIBO stability) [11], [14]. Let us also consider the integral domain  = {L(f) | f ∈ A} of transfer functions of BIBO stable time-invariant infinite- dimensional SISO linear systems [11], [14] . • Let W+ be the commutative integral domain of holomor- phic functions on the unit disc D = {z ∈ C | |z| ≤ 1} whose Taylor series converge absolutely: W+ = {(ai)i≥0, ai ∈ k = R, C | +∞ X i=0 |ai| < +∞}. (4) Then, p ∈ W+ iff p is the unit-pulse response of a BIBO-stable causal digital filter, i.e. W+ is the algebra of the bounded input bounded output (BIBO) causal digital filters [78]: • Let MDn be the ring of structural stable multidimensional linear systems, namely MDn = {n/d | 0 6= d, n ∈ R[z1, . . . , zn], d(z) = 0 ⇒ z ∈ Cn \D n }, (5) where D n = {z ∈ Cn | |z| ≤ 1} is the closed unit polydisc of Cn . See [43] and the references therein. See [8], [9], [30], [34], [45], [82] for other examples of rings used in stabilization problems. Example 2.2: Let us consider A = RH∞ and the transfer function p = 1/(s − 1). We easily check that p / ∈ A because p has a pole in 1 ∈ C+ (unstable pole). However, we have p ∈ K = Q(A) = R(s) because p = n/d, where: ( n = 1/(s + 1) ∈ A, d = (s − 1)/(s + 1) ∈ A. Testing the stability of a transfer function p ∈ K = Q(A) becomes a membership problem: testing whether or not p ∈ A. By extension, a multi input multi output (MIMO) system is defined by means of a transfer matrix P whose entries belong to the quotient field K = Q(A) of a certain integral domain A of SISO stable plants. Hence, if we have P ∈ Kq×r , then we can always write P as P = D−1 N = Ñ D̃−1 , where: ( R = (D : −N) ∈ Aq×p , R̃ = (ÑT : D̃T )T ∈ Ap×r . 3 For instance, we can always take D = d Iq and D̃ = d Ir, where d is the product of the denominators of all the entries of P. Example 2.3: Let us consider A = H∞(C+), K = Q(A) and the following transfer matrix with entries in K: P = e−s s−1 e−s (s−1)2 ! . 
(6) We easily see that we have P = D−1 N = Ñ D̃−1 , where R = (D : −N) ∈ A2×3 and R̃ = (ÑT : D̃T )T ∈ A3×1 are for instance defined by:                        R =   (s−1)2 (s+1)2 0 −(s−1) e−s (s+1)2 0 (s−1)2 (s+1)2 − e−s (s+1)2   , R̃ =      (s−1) e−s (s+1)2 e−s (s+1)2 (s−1)2 (s+1)2      . (7) In the fractional representation approach, instead of the transfer matrix y = P u, we usually prefer to study the system D y − N u = 0, i.e. R z = 0, where P = D−1 N ∈ Kq×r , R = (D : −N) ∈ Aq×p and z = (yT : uT )T . The idea to only consider the input and output variables together, i.e. without any separation between the inputs and the outputs, is similar to the module theory and the behavioural approaches to linear multidimensional systems (see [13], [22], [52], [53] and references therein). Hence, the structural properties of the plant, defined by P, can be studied by means of the linear system R z = 0 whose coefficients belong to a ring A. This can be achieved using linear algebra over the ring A (e.g. testing the existence of a right/left/doubly coprime factorization, invariant factors, equivalences...). However, lin- ear algebra over a ring is a part of the module theory [5], [6], [7], [31], [66]. Therefore, it seems to be quite natural to introduce module theory into the study of linear systems. This idea is quite old and R. E. Kalman seems to be the first person who has used module theory in linear control theory during the sixties (see [38] and the references therein). Since this pioneering work, module theory has been more and more used in linear control theory (see [13], [22], [23], [24], [53] and the references therein). But, as surprising as it might be, module theory has only recently been introduced into fractional representation approach to analysis and synthesis problems in the pioneering work of V. R. Sule [73] (see also [69]) and, up to our knowledge, has only been developed since then in [47], [54], [55], [56], [58], [59], [61]. Let us recall the definition of an A-module (see [5], [31], [66] for more informations). Definition 2.1: An A-module M over a ring A is a set M with two operations, namely an addition + : M × M → M, defined by (m1, m2) → m1 + m2, and a scalar multiplication A × M → M, defined by (a, m) → a m, which satisfy 1) m1 + m2 = m2 + m1, 2) (m1 + m2) + m3 = m1 + (m2 + m3), 3) ∃ 0 ∈ M, ∀ m ∈ M : m + 0 = m, 4) ∀ m ∈ M, ∃ (−m) ∈ M : m + (−m) = 0, 5) a (m1 + m2) = a m1 + a m2, 6) (a + b) m = a m + b m, 7) (a b) m = a (b m), 8) 1 m = m, for all m, m1, m2, m3 ∈ M and a, b ∈ A. A submodule N of an A-module M is a subset N of M which also satisfies 1, 2, 3, 4 and: ∀ a ∈ A : a N = {a n | n ∈ N} ⊆ N. Hence, an A-module shares the same definition as a k-vector space with the only distinction that the scalars belong to a ring A in the case of a module whereas they belong to a field k (i.e. a commutative ring such that every non-zero element has an inverse for the product) in the case of a vector space. This small difference implies huge ones in the respective theories (module theory and linear algebra over a field) that can be easily understood if we notice that an A-module has generally no basis. Indeed, if we want to obtain a basis of a k-vector space defined by a non minimal family of generators, we need to invert certain coefficients of k to obtain an independent subfamily of generators, i.e. a basis. 
But, if the scalars belong to a ring A instead of a field k, they generally do not admit inverses in A, and thus, we cannot generally obtain a basis from a family of generators. Example 2.4: 1) If A is a commutative ring, then, for all n ∈ Z+, An is an A-module: ∀ λ1, λ2 ∈ An , ∀ a1, a2 ∈ A : a1 λ1 + a2 λ2 ∈ An . Let ei be the vector of An defined by 1 in the ith component and 0 for all the others. Then, {e1, . . . , en} is a basis of An because every λ = (λ1 : . . . : λn) ∈ An can be uniquely written as λ = Pn i=1 λi ei. This basis is called the canonical basis of An . 2) If f : M −→ N is an A-morphism, namely an A-linear application from the A-module M to the A-module N, i.e. ∀ λ1, λ2 ∈ M, ∀ a1, a2 ∈ A: f(a1 λ1 + a2 λ2) = a1 f(λ1) + a2 f(λ2), then      ker f = {m ∈ M | f(m) = 0}, im f = {n ∈ N | ∃ m ∈ M : n = f(m)}, coker f = N/im f, − where N/im f is the quotient A-module obtained by identifying two elements n1 and n2 of N if there exists m ∈ M such that n1 − n2 = f(m) − are three A- modules [5], [31], [66]. 3) Let H2(C+) be the Hardy space of holomorphic func- tions in the open right half plane C+ which are bounded with respect to the norm: k f k2 , sup x∈R+ ( Z +∞ −∞ |f(x + i y)|2 dy)1/2 . 4 It is well known that H2(C+) is a Hilbert space [14] and, by a theorem of Paley-Wiener, every function of H2(C+) is the Laplace transform of a unique function of L2(R+) [14]. Finally, H2(C+) has a natural structure of an H∞(C+)-module defined by: ∀ f, g ∈ H2, ∀ h, k ∈ H∞ : h f + k g ∈ H2. Exercise 2.1: 1) Prove 2 of Example 2.4 (Hints for the structure of A-module of coker f: if n1 and n2 are identified in N/im f, i.e. there exists m ∈ M such that n1 − n2 = f(m), we say that n1 and n2 belong to the same equivalence class and we denote this class by π(n1) = π(n2) ∈ N/im f. Then, we have an A- morphism π : N → N/im f, defined by mapping any element n ∈ N into its equivalence class π(n), called the quotient map. The structure of A-module of coker f is defined by: ∀ a ∈ A, ∀ n ∈ N : a π(n) , π(a n). Check that a π(n) does not depend on the choice of n, i.e. if π(n1) = π(n2) = π(n), then a π(n1) = a π(n2)). 2) Prove that Lp(R+) is an A-module for 1 ≤ p ≤ +∞, H2(C+) is an Â-module (see (3) for the definitions of A and Â) and H2 is an RH∞-module (see (1) for the definition of RH∞) (Hints: show that if f ∈ A, g ∈ Lp(R+), k ∈ Â, l ∈ RH∞ and h ∈ H2(C+), then f ? g ∈ Lp(R+), k h ∈ H2(C+) and l h ∈ H2(C+). See [16] for informations and details). III. WEAKLY DOUBLY COPRIME FACTORIZATIONS A. Definitions A useful tool for time-invariant finite-dimensional linear systems (A = RH∞ or k[s], k = R, C) is the concept of coprime factorization. The coprime factorization of a rational matrix goes back to the work of H. H. Rosenbrock [65] and has played since then a major role in analysis and syn- thesis problems (controllability, observability, stabilizability, detectability, Youla-Kučera parametrization of all stabilizing controllers, graph topology, equivalences...). This technique has been popularized by the book of M. Vidyasagar [78]. However, contrary to finite-dimensional systems, the transfer matrix of more general systems (delays systems, systems of partial differential equations, convolution equations...) gener- ally does not admit a coprime factorization [12], [14], [17], [44], [77], [78], [82]. Intuitively, this comes from the fact that the algebraic properties of the rings H∞(C+), A and Â... are more complex than the ones of RH∞. 
For finite- dimensional systems (A = RH∞ or k[s], k = R, C), one can prove that there exists only one concept of primeness, but, for more sophisticated rings as H∞(C+) or Â, this fact is no longer true. We are going to introduce the concept of weak primeness which plays a major role in these notes. This concept generalizes the one introduced by M. C. Smith for H∞(C+) in the important contribution [71]. Definition 3.1: • [56] A matrix R ∈ Aq×p is weakly left- prime if we have Kq R ∩ Ap , {λ ∈ Ap | ∃ µ ∈ Kq : λ = µ R} = Aq R , {λ ∈ Ap | ∃ ν ∈ Aq : λ = ν R}, i.e. if a row vector µ ∈ Kq is such that µ R ∈ Ap , then there exists ν ∈ Aq satisfying: µ R = ν R. (8) • R is weakly right-prime if RT is weakly left-prime. Exercise 3.1: Show that, if R has a full row rank, namely the q rows of R are A-linear independent, then R is weakly left-prime iff, if there exists µ ∈ Kq such that µ R ∈ Aq , then µ ∈ Aq (Hints: factorize (8) by R and use the fact that R has full row rank to obtain µ = ν ∈ Aq ). Example 3.1: Let us consider the matrix R defined by (7). The matrix R is not weakly left-prime because we have  s+1 s−1 : 0  (s−1)2 (s+1)2 0 −(s−1) e−s (s+1)2 0 (s−1)2 (s+1)2 − e−s (s+1)2 ! =  s−1 s+1 : 0 : − e−s s+1  ∈ A3 and the vector (s+1 s−1 : 0) belongs to K2 but not to A2 . Definition 3.2: • A couple (a, b) of elements of A has a common divisor c ∈ A if there exist a0 , b0 ∈ A such that: ( a = a0 c, b = b0 c. If there exists a common divisor c of a and b which satisfies that, for every other divisor c0 of a and b, there exists d ∈ A such that c = d c0 , then c is called the greatest common divisor of a and b and is denoted by [a, b]. A greatest common divisor is defined up to an invertible element, i.e. up to an element of U(A). • A ring A is a greatest common divisor domain (gcdd) if every couple (a, b) of elements of A has a greatest common divisor [a, b]. Proposition 3.1: [71] If A is a greatest common divisor domain, then a full row rank matrix R ∈ Aq×p (⇒ 0 < q ≤ p) is weakly left-prime iff 1 is a greatest common divisor of all the q × q minors of R. Exercise 3.2: 1) We shall see in Theorem 3.4 that H∞(C+) is a greatest common divisor domain. Prove that the matrix R defined by (7) is not weakly left-prime because (s−1)2 (s+1)2 is a common divisor of the 2×2 minors of R. 2) Check that 1 is a greatest common divisor of 1 s+1 and e−s ∈ H∞(C+). Similar problem for s−1 s+1 and e−s . 3) Find a common divisor of the two elements 1−e−s s+1 and s s+1 of A = H∞(C+) (Hint: 1 s+1 is not a common divisor of the two elements because s / ∈ A but use the fact that 1−e−s s ∈ A). Definition 3.3: • A transfer matrix P ∈ Kq×r admits a weakly left-coprime factorization if there exists a weakly 5 left-prime matrix R = (D : −N) ∈ Aq×r such that D ∈ Aq×q has full rank (i.e. det D 6= 0) and: P = D−1 N. • Dually, P ∈ Kq×r admits a weakly right-coprime factor- ization if there exists a weakly right-prime matrix R̃ = (ÑT : D̃T )T ∈ Ap×r such that the matrix D̃ ∈ Ar×r has full rank (i.e. det D̃ 6= 0) and: P = Ñ D̃−1 . • P ∈ Kq×r admits a weakly doubly coprime factorization if P has a weakly left-coprime factorization as well as a weakly right-coprime factorization. Let us note that the matrix R defined by (7) was obtained by removing all the denominators of P. In Example 3.1, we saw that R was not weakly left-prime. Hence, the procedure which consists in writing all the entries of a transfer matrix over a common denominator generally leads to matrices which are not weakly left/right-prime. 
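The minor criterion of Proposition 3.1 can be checked symbolically on the matrix R of (7). The following sketch (Python/sympy; the verification is ours, not the paper's) computes the three 2 × 2 minors of R and divides them by the candidate common divisor (s − 1)^2/(s + 1)^2; the printed quotients are easily seen to belong to A = H∞(C+) (holomorphic and bounded on Re s > 0), so 1 is not a greatest common divisor of the minors and R is not weakly left-prime:

```python
# Sketch: the 2x2 minors of the matrix R of (7) share the common divisor
# (s-1)^2/(s+1)^2 in A = H_infinity(C_+), so R fails Proposition 3.1.
from sympy import symbols, Matrix, exp, simplify

s = symbols('s')
R = Matrix([
    [(s - 1)**2 / (s + 1)**2, 0, -(s - 1) * exp(-s) / (s + 1)**2],
    [0, (s - 1)**2 / (s + 1)**2, -exp(-s) / (s + 1)**2],
])

c = (s - 1)**2 / (s + 1)**2   # candidate common divisor of the minors
for i, j in [(0, 1), (0, 2), (1, 2)]:
    minor = R.extract([0, 1], [i, j]).det()
    print(simplify(minor / c))
# Quotients: (s-1)^2/(s+1)^2, -exp(-s)/(s+1)^2, (s-1)*exp(-s)/(s+1)^2,
# all of which belong to H_infinity(C_+).
```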
Moreover, we shall show that this concept of weak primeness is the weakest existing concept of primeness, and thus, if P = D−1 N = Ñ D̃−1 is not a weakly doubly coprime factorization, then it is also not a doubly coprime factorization. Hence, if we want to compute effectively a doubly coprime factorization of a transfer matrix (when it exists), then we first need to have an algorithm which computes a weakly doubly coprime factorization of a transfer matrix (if such a factorization also exists). B. Transfer matrices & Torsion-freeness In order to understand when a transfer matrix P ∈ Kq×r admits a weakly doubly coprime factorization, we need to introduce the concepts of a torsion element of an A-module and of a torsion/torsion-free A-module. All the A-modules which will be considered in the rest of this paper are finitely generated, namely are defined by means of a finite family of generators [7], [31], [66]. Definition 3.4: • An A-module M is free if it admits a basis, or equivalently, if M ∼ = Ar with r ∈ Z+. Then, r is called the rank of the A-module M. • The torsion submodule t(M) of an A-module M is defined: t(M) = {m ∈ M | ∃ 0 6= a ∈ A, a m = 0}. An element of t(M) is called a torsion element. An A- module M is torsion-free if t(M) = 0, or equivalently, M/t(M) = M and M is a torsion module if t(M) = M. • If N is a submodule of an A-module M, then we call the A-closure of N in M the A-module defined by: N = {m ∈ M | ∃ 0 6= a ∈ A, a m ∈ N}. Remark 3.1: Let us notice that the concept of a torsion element of a k-vector space (k is a field) is trivial: if m is a torsion element of k-vector space, then there exists 0 6= a ∈ k such that a m = 0. But, using the fact that 0 6= a ∈ k and k is a field, then a−1 ∈ k, and thus: a m = 0 ⇒ a−1 (a m) = m = 0. Hence, every k-vector space is torsion-free, i.e. this concept is only interesting for A-modules. More generally, every k-vector space is a free k-module. If R ∈ Aq×p , then we define the A-morphism .R (see 2 of Example 2.4) by: Aq .R −→ Ap (λ1 : . . . : λq) −→ (λ1 : . . . : λq) R. From 2 of Example 2.4, we know that im .R = Aq R and coker .R = Ap /Aq R are two A-modules. These two A- modules will be of very common use in all these notes. The A-module Aq R , {λ ∈ Ap | ∃ ν ∈ Aq : λ = ν R} is defined by the A-linear combination of the rows of R. Let us give an interpretation of the A-module Ap /Aq R. From Example 2.4, we know that Aq (resp. Ap ) is a free A-module having a canonical basis denoted by {e1, . . . , eq} (resp. {f1, . . . , fp}). Let us denote by zi = π(fi) the equivalence class of fi in Ap /Aq R (see 1 of Exercise 2.1). For i = 1, . . . , q, we have: ei R = (Ri1 : . . . : Rip) = p X j=1 Rij fj ∈ Aq R ⇒ π(ei R) = 0. Using the structure of A-module of M = Ap /Aq R and the A-morphism π : Ap → M (see 1 of Exercise 2.1), then, in Ap /Aq R, for i = 1, . . . , q, we have: π(ei R) = π( p X j=1 Rij fj) = p X j=1 Rij π(fj) = p X j=1 Rij zj = 0. (9) Thus, M is defined by the system R z = 0 and all the A-linear combinations of its equations, where z = (z1 : . . . : zp) is a vector of formal variables which correspond to the generators of M (they do not belong to any “functional space”). Example 3.2: Let us reconsider the matrix R ∈ A2×3 defined by (7) (A = H∞(C+)). Let us call y1 (resp. y2) the class of f1 (resp. f2) and u the class of f3 in M = A3 /A2 R. 
We find that the A-module M is defined by the system    (s−1)2 (s+1)2 y1 − (s−1) e−s (s+1)2 u = 0, (s−1)2 (s+1)2 y2 − e−s (s+1)2 u = 0, as well as the A-linear combinations of these two equations. We can check that the element z = (s−1) (s+1) y1 − e−s (s+1) u of M (i.e. class of  (s−1) (s+1) : 0 : − e−s (s+1)  ∈ A3 in M) satisfies the equation (s−1) (s+1) z = 0, i.e. m is a torsion element of M. Lemma 3.1: [56] Let us consider R ∈ Aq×p and the A- module M = Ap /Aq R. Then, we have: 1) The A-closure of the A-module Aq R in Ap is: Aq R = Kq R ∩ Ap . 2) t(M) = (Kq R ∩ Ap )/Ap R = Aq R/Aq R. 3) M/t(M) = Ap /(Kq R ∩ Ap ) = Ap /Aq R. 4) M = Ap /Aq R is a torsion-free A-module (t(M) = 0) iff R is weakly left-prime. Exercise 3.3: Prove Lemma 3.1. See [56] for the answers. 6 A transfer matrix P ∈ Kq×r has lots of different fractional representations of the form P = D−1 N = Ñ D̃−1 , where: ( R = (D : −N) ∈ Aq×p , R̃ = (ÑT : D̃T )T ∈ Ap×r . In the next proposition, we show that the concepts of A- closure and torsion submodule allow us to capture the intrinsic informations of these different representations. Proposition 3.2: [56] If a transfer matrix P ∈ Kq×r can be written as P = D−1 1 N1 = D−1 2 N2, P = Ñ1 D̃−1 1 = Ñ2 D̃−1 2 , with ( Ri = (Di : −Ni) ∈ Aq×p , R̃i = (ÑT i : D̃T i )T ∈ Ap×r , i = 1, 2, then we have: 1) Aq R1 = Aq R2, 2) Ar R̃T 1 = Ar R̃T 2 , 3) Ap RT 1 ∼ = Ap RT 2 ∼ = Ñi/t(Ñi) = Ap /Ar R̃T i 4) Ap R̃1 ∼ = Ap R̃2 ∼ = Mi/t(Mi) = Ap /Ar R̃T i , with the notations: ( Mi = Ap /Aq Ri, Ñi = Ap /Ar R̃T i , i = 1, 2. Example 3.3: Let us consider A =  and: p = e−s /(s − 1) ∈ K = Q(A). There are different ways to obtain a fractional representation of p: for instance, we have p = n1/d1 = n2/d2 with:          n1 = e−s /(s + 1) ∈ A, d1 = (s − 1)/(s + 1) ∈ A, n2 = (e−s (s − 1))/(s + 1)2 ∈ A, d2 = (s − 1)2 /(s + 1)2 ∈ A. If we denote by ( R1 = (d1 : n1) ∈ A1×2 , R2 = (d2 : n2) ∈ A1×2 , then we have:        I1 = A2 RT 1 = (d1, n1), I2 = A2 RT 2 = (d2, n2) = ([d2, n2] d1, [d2, n2] n1) =  s−1 s+1  I1. If we define the following A-morphisms        φ : I1 −→ I2, φ(a) = (s−1) (s+1) a, ∀ a ∈ I1, ψ : I2 −→ I1 ψ(b) = (s+1) (s−1) b = c1 d1 + c2 n1, ∀ b = (s−1) (s+1) (c1 d1 + c2 n1) ∈ I2, and we easily check that φ◦ψ = idI2 and ψ ◦φ = idI1 , which proves that I1 ∼ = I2. Corollary 3.1: [56] The structural (intrinsic) properties of a transfer matrix P = D−1 N = Ñ D̃−1 ∈ Kq×r , where ( R = (D : −N) ∈ Aq×p , R̃ = (ÑT : D̃T )T ∈ Ap×r , only depend on the A-modules Aq R and Ar R̃T and, up to an isomorphism, on the A-modules Ap RT and Ap R̃. C. Algorithm The next theorem gives necessary and sufficient conditions for a transfer matrix to admit a weakly left/right-coprime factorization. Theorem 3.1: [56] P = D−1 N = Ñ D̃−1 ∈ Kq×r , where ( R = (D : −N) ∈ Aq×p , R̃ = (ÑT : D̃T )T ∈ Ap×r , admits a weakly left (resp. right) coprime factorization iff Aq R (resp. Ar R̃T ) is a free A-module of rank q (resp. r), or equivalently, iff there exists a full row rank matrix R0 ∈ Aq×p (resp. a full column rank matrix R̃0 ∈ Ap×r ) such that Aq R = Aq R0 (resp. Ar R̃T = Ar R̃0 T ). Exercise 3.4: 1) [58], [59] Prove that P ∈ Kq×r admits a weakly left-coprime factorization iff there exists a non- singular matrix D ∈ Aq×q such that: {λ ∈ Aq | λ P ∈ Ar } = Aq D. Deduce that P = D−1 N is a weakly left-coprime of P. 2) [58], [59] Prove that P ∈ Kq×r admits a weakly right-coprime factorization iff there exists a non-singular matrix D̃ ∈ Ar×r such that: {λ ∈ Ar | λ PT ∈ Aq } = Ar D̃T . 
Prove that P = Ñ D̃−1 is a weakly right-coprime of P. Corollary 3.2: [56] P = D−1 N = Ñ D̃−1 ∈ Kq×r , where ( R = (D : −N) ∈ Aq×p , R̃ = (ÑT : D̃T )T ∈ Ap×r , admits a weakly doubly coprime factorization iff the Aq R and Ar R̃T are two free A-modules of rank q and r. We give an algorithm which computes the A-closure Aq R of an A-module of the form Aq R if a certain hypothesis on the ring A is satisfied, namely A is a coherent ring (see next section). This hypothesis allows us to certify that, for every matrix R ∈ Aq×p , the A-modules ker .RT and ker .R−1 that we need to compute are finitely generated. Algorithm 1: Input: A a coherent ring and R ∈ Aq×p . Output: R0 ∈ Aq0 ×p such that Aq R = Aq0 R0 . 1) Start with R ∈ Aq×p . 2) Transpose R to obtain RT ∈ Ap×q . 3) Find a family of generators of: ker .RT = {λ ∈ Ap | λ RT = 0}. If {λ1, . . . , λm} is a family of generators of ker .RT , then denote by RT −1 ∈ Am×p the matrix whose ith row is λi. 7 4) Tranpose RT −1 in order to obtain R−1 ∈ Ap×m . 5) Find a family of generators of ker .R−1 = {η ∈ Ap | η R−1 = 0}. If {η1, . . . , ηq0 } is a family of generators of ker .R−1, then denote by R0 ∈ Aq0 ×p the matrix whose ith row is ηi. Then, we have: Aq R = Aq0 R0 . Remark 3.2: Let us notice that the previous algorithm was obtained using a concept of homological algebra called exten- sion functor [7], [31], [66]. More precisely, in [56], we proved that we have t(M) ∼ = ext1 A(Aq /Ap RT , A) and gave a proof of the previous algorithm. This result generalizes to a more general situation certain results obtained in [13], [53]. Example 3.4: In Example 3.1, we saw that the matrix R defined by (7) is not weakly left-prime, and thus, that the following fractional representation P =   (s−1)2 (s+1)2 0 0 (s−1)2 (s+1)2   −1   (s−1) e−s (s+1)2 e−s (s+1)2   is not a weakly coprime factorization of the transfer matrix P defined by (6). Let us check whether or not P admits a weakly left-coprime factorization using the previous algorithm (in Theorem 3.3, we shall see that A = H∞(C+) is a coherent ring). 1) We first start with R ∈ A2×3 defined by (7). 2) We compute RT ∈ A3×2 . 3) Let us compute ker .RT = {λ = (λ1 : λ2 : λ3) ∈ A3 | λ RT = 0}. Let λ ∈ ker .RT , i.e.:    (s−1)2 (s+1)2 λ1 − (s−1) e−s (s+1)2 λ3 = 0, (s−1)2 (s+1)2 λ2 − e−s (s+1)2 λ3 = 0. (10) From the first equation, we obtain (s−1) (s+1)  (s−1) (s+1) λ1 − e−s (s+1) λ3  = 0 ⇔ (s−1) (s+1) λ1 − e−s (s+1) λ3 = 0, because A is an integral domain and λi ∈ A. Using the fact h s−1 s+1 , e−s s+1 i = 1, we obtain:      λ1 = e−s s+1 µ, λ3 = s−1 s+1 µ, µ ∈ A. Substituting λ3 in the second equation of (10), we obtain: (s−1) (s+1)  (s−1) (s+1) λ2 − e−s (s+1)2 µ  = 0 ⇔ (s−1) (s+1) λ2 − e−s (s+1)2 µ = 0. Finally, using the fact that h s−1 s+1 , e−s (s+1)2 i = 1, we obtain        λ2 = e−s (s+1)2 µ0 , µ = (s−1) (s+1) µ0 , µ0 ∈ A ⇒          λ1 = (s−1) e−s (s+1)2 µ0 , λ2 = e−s (s+1)2 µ0 , λ3 = (s−1)2 (s+1)2 µ0 . Therefore, we have λ = µ0 RT −1, where: RT −1 =  (s−1) e−s (s+1)2 : e−s (s+1)2 : (s−1)2 (s+1)2  ∈ A1×3 . 4) We transpose RT −1 in order to obtain R−1 ∈ A3×1 . 5) Let us compute ker .R−1 = {η = (η1 : η2 : η3) ∈ A3 | η R−1 = 0}. Let us consider η = (η1 : η2 : η3) ∈ ker .R−1, i.e.: (s−1) e−s (s+1)2 η1 + e−s (s+1)2 η2 + (s−1)2 (s+1)2 η3 = 0 ⇔ (s−1) (s+1)  e−s s+1 η1 + (s−1) (s+1) η3  = − e−s (s+1)2 η2. Using the fact that h s−1 s+1 , e−s (s+1)2 i = 1, we obtain:        e−s (s+1) η1 + (s−1) (s+1) η3 = e−s (s+1)2 ζ1, η2 = −(s−1) (s+1) ζ1, ζ1 ∈ A. 
(11) From the first equation of (11), we deduce that e−s (s+1)  η1 − 1 (s+1) ζ1  = −(s−1) (s+1) η3, and using the fact that h e−s s+1 , s−1 s+1 i = 1, we obtain              η1 = 1 (s+1) ζ1 + (s−1) (s+1) ζ2, η2 = −(s−1) (s+1) ζ1, η3 = − e−s s+1 ζ2, ζ1, ζ2 ∈ A, ⇔ η = (ζ1 : ζ2) R0 , where: R0 =   1 (s+1) −(s−1) (s+1) 0 (s−1) (s+1) 0 − e−s s+1   ∈ A2×3 . (12) 6) We have A2 R = A2 R0 and R0 is a full row rank matrix. Thus, A2 R0 , i.e. A2 R, is a free A-module of rank 2. Hence, by Theorem 3.1, we know that the following fractional representation of P P =   1 (s+1) −(s−1) (s+1) (s−1) (s+1) 0   −1 0 e−s s+1 ! (13) is a weakly left-coprime factorization of P (check that there is no common factor to all the 2×2 minors of the matrix R0 ). Finally, by Lemma 3.1, we know that:      A2 R = A2 R0 = K2 R ∩ A2 , t(M) = A2 R/A2 R = A2 R0 /A2 R, M/t(M) = A2 /A2 R0 . 8 Let us compute a family of generators of the torsion elements of M. We know that the torsion submodule of M is defined by t(M) = A2 R0 /A2 R, and thus, the class of the first row of R0 in t(M) corresponds to the element z1 = 1 (s+1) y1 − (s−1) (s+1) y2 whereas z2 = (s−1) (s+1) y1 − e−s (s+1) u corresponds to the class of the second row of R0 in t(M). We easily check that we have (s−1)2 (s+1)2 z1 = 0 and (s−1) (s+1) z2 = 0 in t(M), and thus, z1 and z2 constitute a family of generators of t(M). Finally, M/t(M) = A2 /A2 R0 is defined by    1 (s+1) y1 − (s−1) (s+1) y2 = 0, (s−1) (s+1) y1 − e−s (s+1) u = 0, as well as all the A-linear combinations of these two equations (see (9) for more details). Exercise 3.5: Let A = H∞(C+), K = Q(A) and let us consider the following transfer matrix: P = e−s s+1 s−1 s+1 0 1 s−1 ! ∈ K2×2 . 1) Show that we have P = D−1 N, where: D =  s−1 s+1 0 0 s−1 s+1  ∈ A2×2 , N = (s−1) e−s (s+1)2 (s−1)2 (s+1)2 0 1 s+1 ! ∈ A2×2 . 2) Check that P = D−1 N is not a weakly left-coprime factorization of P. Can you exhibit a torsion element of the A-module M = A4 /A2 R, R = (D : −N) ∈ A2×4 ? 3) Doing similarly as Example 3.4, show that we have the following weakly left-coprime factorization of P: P =  1 0 0 s−1 s+1 −1 e−s s+1 s−1 s+1 0 1 s+1 ! . 4) Give a family of generators of t(M) of M and the equations which generate M/t(M). 5) Dually, find a weakly right-coprime factorization of P. We can check your computations looking at [56]. D. Sylvester coherent domains Recall that an ideal I of A is just an A-submodule of A [31], [66], i.e. ∀ a1, a2 ∈ I, ∀ b1, b2 ∈ A, we have b1 a1+b2 a2 ∈ I. Definition 3.5: A ring is noetherian if every ideal I of A is finitely generated, namely there exists a finite family {a1, . . . , an} of elements of A such that: I = (a1, . . . , an) , ( n X i=1 bi ai | bi ∈ A ) . Example 3.5: The ring A = RH∞ of proper stable real rational functions is a principal ideal domain [78], namely every ideal of A is generated by means of a single element of A. In particular, A is a noetherian ring. Similarly for A = k[s] with k = R, C. Definition 3.6: A Banach algebra A is a k-algebra (with k = R, C) (namely a ring A which has the structure of a k- module) with a norm k · kA (namely an application k · kA: A → R+ which satisfies • k a kA= 0 ⇔ a = 0, ∀ a ∈ A, • k α a kA=| α |k k a kA, ∀ α ∈ k, ∀ a ∈ A, • k a + b kA ≤ k a kA + k b kA, ∀ a, b ∈ A) which satisfies the following properties: • k a b kA ≤ k a kA k b kA, ∀ a, b ∈ A, • k 1 kA= 1, • A is complete as a k-vector space, namely every Cauchy sequence (an)n≥0 of elements of A (i.e. 
a sequence (an)n≥0 satisfying: ∀  > 0, ∃ N ∈ Z+, ∀ n, m > N : k an − am kA< ) converges (namely, ∃ l ∈ A, ∀  > 0, ∃ N ∈ Z+, ∀ n > N : k an −l kA< ). Example 3.6: The following four examples • (H∞(C+), k f k∞= sups∈C+ |f(s)|), • (A, k g kA=k f kL1(R+) + P+∞ n=0 |an|), • (Â, k ĝ kÂ=k g kA), • (W+, k (an)n≥0 kW+ = P+∞ n=0 |an|), are Banach algebras (see [11], [12], [14], [78] for more details). Theorem 3.2: [68] A noetherian Banach algebra is finitely dimensional (as a k-vector space). Therefore, H∞(C+), A,  and W+ are not noetherian rings, and thus, certain of their ideals are not finitely gen- erated. Hence, it seems that we cannot use the main part of commutative algebra which was developed for noetherian rings in order to study the algebraic properties of these rings. In the fifties, H. Cartan and J. P. Serre developed the concept of a coherent sheaf in order to study analytic and algebraic geometries. This concept is closely related to the concept of a coherent ring which was introduced in commutative algebra by S. U. Chase in 1960. This concept plays a crucial role in these notes. Definition 3.7: • [5], [7], [26], [66] A ring A is coherent if the A-module of the relations (syzygy A-module) of every finitely generated ideal I = (a1, . . . , an) of A, namely S(I) = {r = (r1 : . . . : rn) ∈ An | n X i=1 ri ai = 0}, is finitely generated, i.e. there exist m ∈ Z+ and a matrix R ∈ Am×n such that, ∀ r ∈ S(I), ∃ b = (b1 : . . . : bm) ∈ Am : r = b R, or, equivalently, S(I) = Am R. • [5], [7], [26], [66] A finitely generated ideal I of A which satisfies that the A-module of the relations S(I) is finitely generated is called a finitely presented ideal of A. The class of modules over a coherent ring enjoys very nice algebraic properties (e.g. it is closed by respect to (direct) 9 sums, intersections, quotients, tensor products, morphisms...) which makes every computation of a finitely presented module (i.e. an A-module of the form Ap /Aq R for a certain matrix R ∈ Aq×p and p, q ∈ Z+) very tractable (as in the case of a noetherian ring). Example 3.7: • Any noetherian ring is coherent [7], [26], [66]. In particular, RH∞ and k[s] (k = R, C) are two coherent integral domains. • A coherent ring is not necessarily a noetherian ring. For instance, the ring k[xi, i ≥ 1] of polynomials in an infi- nite number of independent variables xi with coefficients in the field k = R, C is not a noetherian ring but a coherent one [66]. • A Bézout domain, namely an integral domain such that every finitely generated ideal of A is generated by a single element of A, is a coherent ring. For instance, the ring of entire functions in C with coefficients in k = R, C, namely E(k) = {f(s) = P+∞ n=0 an sn | an ∈ k, limn→+∞ |an|1/n = 0}, and E = E(R) ∩ R(s)[e−s ] are two Bézout domains [30], [33], [45], and thus, coherent rings. Exercise 3.6: Show that k[xi, i ≥ 1], with k = R, C, is not a noetherian ring (Hint: consider the ideal P i≥1 A xi and prove that this ideal is not finitely generated). Theorem 3.3: [46] H∞(D), H∞(C+), L∞(T) and L∞(R) are coherent rings, where: ( D = {s ∈ C | |s| < 1}, T = {s ∈ C | |s| = 1}. For all these rings, the algorithm given in section III-C finishes because we can prove that if A is a coherent ring and R ∈ Aq×p , then ker .R = {λ ∈ Aq | λ R = 0} is a finitely generated A-module, i.e. is defined by means a finite family of generators. Let us introduce another concept which will play an important role in the rest of these notes. 
Definition 3.8: [18] An integral domain A is a coherent Sylvester domain if, for every q ∈ Z+ and every column vector RT ∈ Aq , the A-module ker .RT = {λ ∈ Aq | λ R = 0} is a free A-module. Remark 3.3: The previous definition of a coherent Sylvester domain is the simplest one that we know. A more useful but abstract definition (by means of homological algebra) of a coherent Sylvester domain is a projective-free coherent integral domain of weak global dimension w.gl.dim(A) ≤ 2. See VII for more details. For instance, the next examples of coherent Sylvester domains are obtained using this last definition. Example 3.8: • A Bézout domain, namely an integral domain such that every finitely generated ideal I of A has the form I = (a) for a certain element of A, is a coherent Sylvester domain. Since, RH∞ and E are two Bézout domains [30], [45], [78], and thus, they are two coherent Sylvester domains. • In [19], it is shown that A = B[x] is a coherent Sylvester domain iff B is a Bézout domain. In particular, if B is a principal ideal domain, namely an integral domain such that every ideal of B has the form I = (a) for a certain element of A (e.g. B = Z, k[s], k = R, C, RH∞), then A = B[x] is a coherent Sylvester domain. Therefore, A = Z[x] and A = k[s][z] = k[s, z] are two examples of coherent Sylvester domains. Theorem 3.4: [56] H∞(C+) is a coherent Sylvester do- main. Proposition 3.3: [19] Every coherent Sylvester domain is a greatest common divisor domain. Corollary 3.3: H∞(C+) is a greatest common divisor do- main (see [63], [71] for direct proofs). The next result links the existence of a weakly doubly coprime factorization of any transfer matrix − with entries in K = Q(A) − to a coherent Sylvester domain A. Theorem 3.5: [56] We have the following equivalences: • Every transfer matrix − with entries in K = Q(A) − admits a weakly doubly coprime factorization, • A is a coherent Sylvester domain. Corollary 3.4: [56] Every transfer matrix with entries in K = Q(H∞(C+)) admits a weakly doubly coprime factor- ization (see [71] for a direct proof). Hence, Theorem 3.5 generalizes a result on H∞(C+) ob- tained by M. C. Smith [71] to a large class of rings (namely coherent Sylvester domains). Exercise 3.7: Let us consider the ring A = C[x1, x2, x3] of polynomials in x1, x2, x3 whose coefficients belong to C and the following vector R = (x1 : x2 : x3)T ∈ A3 (gradient operator). 1) Prove that ker .R = A3 R1, where the matrix R1 is defined by (curl operator): R1 =   0 −x3 x2 x3 0 −x1 −x2 x1 0   ∈ A3×3 . 2) Prove that ker .R1 = A RT . 3) If f : M → N is any A-morphism, then show that M/ ker f ∼ = im f. Deduce that A3 / ker .R1 ∼ = A3 R1 = ker .R, and thus, ker .R ∼ = A3 /A RT . 4) Using the fact that A3 /A RT is defined by the single equation x1 z1 + x2 z2 + x3 z3 = 0 (divergent operator) and its A-linear combinations, prove that A3 /A RT , and thus, ker .R is not a free A-module (show that A3 /A RT has no basis). Deduce that A is not a coherent Sylvester domain. 5) Deduce that the multidimensional linear system defined by P =  x1 x3 : x2 x3 T ∈ K2×1 has no weakly left- coprime factorization (K = C(x1, x2, x3) is the ring of rational functions in x1, x2 and x3). IV. LEFT/RIGHT/DOUBLY COPRIME FACTORIZATIONS Let us recall the well-known concepts of left/right/doubly coprime factorizations [12], [14], [77], [78]. 10 Definition 4.1: • A matrix R = (D : −N) ∈ Aq×p is left-prime if R has a right-inverse, namely a matrix S = (XT : Y T )T ∈ Ap×q which satisfies R S = D X − N Y = Iq. 
• A transfer matrix P ∈ Kq×r admits a left-coprime factorization if there exists a left-prime matrix R = (D : −N) ∈ Aq×p such that D ∈ Aq×q has full rank (i.e. det D 6= 0) and: P = D−1 N. • A matrix R̃ = (ÑT : D̃T )T ∈ Ap×r is right-prime if R̃ has a left-inverse, namely a matrix S̃ = (−Ỹ : X̃) ∈ Ar×p which satisfies S̃ R̃ = −Ỹ Ñ + X̃D̃ = Ir. • A transfer matrix P ∈ Kq×r admits a right-coprime factorization if there exists a right-prime matrix R̃ = (ÑT : D̃T )T ∈ Ap×r such that D̃ ∈ Ar×r has full rank (i.e. det D̃ 6= 0) and: P = Ñ D̃−1 . • A transfer matrix P ∈ Kq×r admits a doubly coprime factorization if P admits a left and right-coprime factor- ization. In order to give necessary and sufficient conditions of the existence of a left/right/doubly coprime factorization, we need to introduce the following definitions. Definition 4.2: [5], [7], [26], [66] If M is a finitely gener- ated A-module (i.e. M is defined by means of a finite family of generators), then, we have: • M is a stably free A-module if there exist r, s ∈ Z+ such that M ⊕ As ∼ = Ar (⊕ denotes the direct sum). • M is a projective A-module if there exist an A-module P and r ∈ Z+ such that M ⊕ P ∼ = Ar , i.e. M is a direct summand of a free A-module. Let us note that, in this case, P is also a projective A-module. Proposition 4.1: [7], [66] We have the following implica- tions of A-modules: free ⇒ stably free ⇒ projective ⇒ torsion-free. Definition 4.3: [42], [66] We have the following definitions: • A ring A is a projective-free ring if every finitely gener- ated projective A-module is free. • A ring A is a Hermite ring if every finitely generated stably free A-module is free. Let us introduce the Fitting ideals of a finitely presented A-module (namely an A-module of the form Ap /Aq R, for a certain matrix R ∈ Aq×p ). In the next proposition, this concept will give a tractable characterization of the finitely presented projective A-module M = Ap /Aq R in terms of the minors of the matrix R. Definition 4.4: • If R ∈ Aq×p , then we denote by Ii(R) the ideal of A generated by: – all the i × i minors of R, if 1 ≤ i ≤ min {p, q}, – Ii(R) = 0, if i > min {p, q}, – Ii(R) = A, if i ≤ 0. • [31] If R ∈ Aq×p and M = Ap /Aq R, then Ii(R) only depends on M and not on R (the same module M can be defined by means of different matrices). Then, we call the Fitting ideals of M the ideals defined by: Fitti(M) = Ip−i(R), ∀ i ∈ Z. Proposition 4.2: [31] The A-module M = Ap /Aq R is projective iff there exists r ∈ Z+ such that: ( Fittr(M) = 0, Fittr+1(M) = A ⇔ 1 ∈ Fittr+1(M). Example 4.1: Let us consider the matrix R0 ∈ A2×3 defined by (12) and the A-module M0 = A3 /A2 R0 where A = H∞(C+). We have Fitt0(M0 ) = 0 and: Fitt1(M0 ) =  e−s s+1 , (s−1)2 (s+1)2 , (s−1) e−s (s+1)2  ⊆ A. We have the following Bézout identity e−s (s+1) a + (s−1)2 (s+1)2 b = 1 ⇒ Fitt1(M0 ) = A, where          a = 4 e (5 s−3) (s+1) ∈ A, b = (s+25) (s+1) + 4 (5 s−3) (s+1) (2−s−e−(s−1) ) (s−1)2 ∈ A, = (s+1)3 −4 (5 s−3) e−(s−1) (s+1) (s−1)2 , (14) and thus, M0 = A3 /A2 R0 is a projective A-module. Exercise 4.1: Let A = H∞(C+) and let us consider the matrix R0 ∈ A2×4 defined by R0 = 1 0 − e−s s+1 −s−1 s+1 0 s−1 s+1 0 − 1 s+1 ! , which corresponds to the weakly left-coprime factorization of Exercise 3.5. Prove that the finitely presented A-module M0 = A4 /A2 R0 is a projective A-module (Hint: consider the two elements s−1 s+1 and 1 s+1 of Fitt2(M0 ) and prove that 1 is an A-linear combination of them). 
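One possible answer to the hint of Exercise 4.1 can be checked symbolically: the constant coefficients 1 and 2 (our choice, one among many) give an A-linear combination of the two elements (s − 1)/(s + 1) and 1/(s + 1) of Fitt2(M0) equal to 1, so that Fitt2(M0) = A and M0 is a projective A-module by Proposition 4.2. A sketch in Python/sympy:

```python
# Sketch for the hint of Exercise 4.1: 1 is an A-linear combination of
# (s-1)/(s+1) and 1/(s+1), with coefficients 1 and 2 (our choice of
# stable coefficients, not taken from the paper).
from sympy import symbols, simplify

s = symbols('s')
d, n = (s - 1) / (s + 1), 1 / (s + 1)
x, y = 1, 2
assert simplify(x * d + y * n - 1) == 0   # 1 in Fitt_2(M'), so Fitt_2(M') = A
```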
The following theorem gives necessary and sufficient con- ditions for a transfer matrix to admit left/right/doubly coprime factorizations. Theorem 4.1: [56] Let P = D−1 N = Ñ D̃−1 be any fractional representation of the transfer matrix P ∈ Kq×r , where: ( R = (D : −N) ∈ Aq×p , R̃ = (ÑT : D̃T )T ∈ Ap×r . Then, we have: • P admits a left-coprime factorization iff the A-module Aq R is a free A-module of rank q and Ap /Aq R is a stably free A-module. • P admits a right-coprime factorization iff the A-module Ar R̃T is a free A-module of rank r and Ap /Ar R̃T is a stably free A-module. • P admits a doubly coprime factorization iff Aq R and Ar R̃T are two free A-modules of rank respectively q 11 and r and Ap /Aq R and Ap /Ar R̃T are two stably free A-modules. Remark 4.1: If a transfer matrix P admits a left (resp. right or doubly) coprime factorization, then P also admits a weakly left (resp. right or doubly) coprime factorization (see Theorems 3.1 and 4.1). Thus, every left (resp. right or doubly) coprime factorization is a weakly left (resp. right or doubly) coprime factorization. Exercise 4.2: • [58], [59] Prove that P ∈ Kq×r admits a right-coprime factorization iff there exists a non-singular matrix D̃ ∈ Ar×r such that Ap (PT : Ir)T = {λ1 P + λ2 | λ1 ∈ Aq , λ2 ∈ Ar } = Ap D̃−1 . Deduce that P = Ñ D̃−1 , where Ñ , P D̃ ∈ Aq×r , is a right-coprime factorization of P. • [58], [59] Prove that P ∈ Kq×r admits the left-coprime factorization iff there exists a non-singular matrix D ∈ Aq×q such that Ap (Iq : −P)T =  λ1 − λ2 PT | λ1 ∈ Aq , λ2 ∈ Ar = Aq (D−1 )T . Deduce that P = D−1 N, where N , D P ∈ Aq×r , is a left-coprime factorization. Proposition 4.3: [56] If R ∈ Aq×p is a full row rank matrix, then the A-module M = Ap /Aq R is stably free iff the A-module N = Aq /Ap RT = 0, i.e. iff there exists S ∈ Ap×q such that: R S = Iq. Example 4.2: Let us determine whether or not the transfer matrix P defined by (6) admits a left-coprime factorization. In Example 3.4, we proved that A2 R = A2 R0 , where R0 ∈ A2×3 is defined by (12). Hence, the A-module A2 R is a free A- module of rank 2. By Proposition 4.3, A3 /A2 R = A3 /A2 R0 is a stably free A-module iff A2 /A3 R0T = 0. The A-module A2 /A3 R0T is defined by the following equations        1 (s+1) λ1 + (s−1) (s+1) λ2 = 0, −(s−1) (s+1) λ1 = 0, − e−s (s+1) λ2 = 0, (15) as well as their A-linear combinations. If we put a second member µ = (µ1 : µ2 : µ3)T to the equations (15), combining the first two equations, we obtain: (s−1)2 (s+1)2 λ2 = (s−1) (s+1) µ1 + 1 (s+1) µ2. Combining this new equation with the last one of (15), we obtain λ2 = b (s−1) (s+1) µ1 + b 1 (s+1) µ2 − a 1 (s+1) µ3, (16) where a and b are defined by (14). From the first two equations of (15), we also obtain: λ1 + 2 (s−1) (s+1) λ2 = 2 µ1 − µ2 Using this new equation and (16), we obtain: λ1 = 2 (−b (s−1)2 (s+1)2 + 1) µ1 − (2 b (s−1) (s+1)2 + 1) µ2 +2 a (s−1) (s+1)2 µ3, (17) Hence, if µ1 = µ2 = µ3 = 0, then, from (16) and (17), we obtain λ1 = λ2 = 0, i.e. we have A2 /A3 R0T = 0, and thus, A3 /A2 R = A3 /A2 R0 is a stably free A-module. By Theorem 4.1, P admits a left-coprime factorization. We have already done all the computations for such a left-coprime factorization: from (16) and (17), we obtain (λ1 : λ2) = (µ1 : µ2 : µ3) S, where S =     −2 b (s−1)2 (s+1)2 + 2 b (s−1) (s+1) −2 b (s−1) (s+1)2 − 1 b 1 (s+1) 2 a (s−1) (s+1)2 −a 1 (s+1)     ∈ A3×2 , and thus, R S = I2. 
Therefore, (13) is a left-coprime factor- ization of P because we have:   1 (s+1) −(s−1) (s+1) (s−1) (s+1) 0     −2 b (s−1)2 (s+1)2 + 2 b (s−1) (s+1) −2 b (s−1) (s+1)2 − 1 b 1 (s+1)   − 0 e−s (s+1) !  2 a (s−1) (s+1)2 : −a 1 (s+1)  = I2. (18) Exercise 4.3: Doing as in the previous example, show that P = 1 0 0 s−1 s+1 !−1 e−s s+1 s−1 s+1 0 1 s+1 ! ∈ K2×2 is a left-coprime factorization of the transfer matrix P defined in Exercise 3.5 (K = Q(H∞(C+))). Equivalent necessary and sufficient conditions of the exis- tence of left/right/doubly coprime factorizations can be ob- tained. Theorem 4.2: [56] Let P = D−1 N = Ñ D̃−1 be any fractional representation of the transfer matrix P ∈ Kq×r , where: ( R = (D : −N) ∈ Aq×p , R̃ = (ÑT : D̃T )T ∈ Ap×r . Then, we have: • P admits a left-coprime factorization iff Ap /Ar R̃T is a free A-module of rank q. • P admits a right-coprime factorization iff Ap /Aq R is a free A-module of rank r. • P admits a doubly coprime factorization iff Ap /Ar R̃T and Ap /Aq R are two free A-modules of rank respectively q and r. A direct consequence of the last point of Theorem 4.2 is the following corollary first obtained by V. R. Sule in [73]. Corollary 4.1: Let P = D−1 N = Ñ D̃−1 be any frac- tional representation of the transfer matrix P ∈ Kq×r , where: ( R = (D : −N) ∈ Aq×p , R̃ = (ÑT : D̃T )T ∈ Ap×r . 12 Then, P admits a doubly coprime factorization iff the A- modules Ap RT and Ap R̃ are two free A-modules of rank respectively q and r. Exercise 4.4: Using the last point of Theorem 4.2 and 3 and 4 of Proposition 3.2, prove Corollary 4.1. Corollary 4.2: A SISO plant, defined by a transfer function p = n/d ∈ K = Q(A), where 0 6= d, n ∈ A, admits a coprime factorization iff the ideal I = (d, n) of A is a free A-module, i.e. I is a principal ideal of A (namely I = (d, n) is defined by a single element of A). This result was already proved by M. Vidyasagar in [78]. Exercise 4.5: Let us consider R = (d : −n) ∈ A1×2 . Show that the A-module A2 RT is the ideal I = (d, n) of A defined by d and n. Then, using Theorem 4.2 and the result that an ideal I of an integral domain A is free iff I is a principal ideal (prove this result), prove Corollary 4.2. Corollary 4.3: If A is a Hermite ring, namely a ring such that every finitely generated stably free A-module is free (see Definition 4.3), then a transfer matrix P ∈ Kq×r admits a doubly coprime factorization iff P admits a left-coprime factorization or a right-coprime factorization. This result was firstly proved by M. Vidyasagar in [78]. Exercise 4.6: In this exercise, we prove Corollary 4.3. 1) Suppose that the transfer matrix P admits a left-coprime factorization P = D−1 N, R = (D : −N) ∈ Aq×p . Using the first point of Theorem 4.1, deduce that the A-module Aq R = Aq R is free of rank q and the A- module Ap /Aq R = Ap /Aq R is stably free of rank r. 2) Using the definition of a Hermite ring (see Defini- tion 4.3), deduce that Ap /Aq R is a free A-module. 3) Using the second point of Theorem 4.2, deduce that P admits a right-coprime factorization, i.e. P admits a doubly coprime factorization. 4) Do the same by admitting that P admits now a right- coprime factorization. Finally, we have the following theorem which characterizes the class of rings A of SISO stable plants over which every transfer matrix admits a doubly coprime factorization. Theorem 4.3: [78] We have the following equivalences: 1) Every transfer function with entries in K = Q(A) admits a coprime factorization. 
2) Every transfer matrix with entries in K = Q(A) admits a doubly coprime factorization. 3) A is a Bézout domain. Exercise 4.7: 1) Prove that 2 ⇒ 1 ⇒ 3 (use Corol- lary 4.2 for the last implication). 2) Use the following result that A is a Bézout domain iff every finitely generated torsion-free A-module is free [26], Theorem 4.2 and Lemma 3.1 to prove 3 ⇒ 2. Example 4.3: For instance, if A = RH∞ or A = E (two Bézout domains), then every transfer matrix whose entries belong to K = Q(A) admits a doubly coprime factorization. Recall that in a Bézout domain, two elements a, b ∈ A generate an ideal I = (a, b) which satisfies I = ([a, b]) (a Bézout domain is a gcdd by Example 3.8 and Proposition 3.3). Let us recall that we have [14], [16], [78]            ∀ a, b ∈ A = H∞(C+), (a, b) = A ⇔ infs∈C+ (|a(s)| + |b(s)|) > 0, ∀ a, b ∈ A = Â, (a, b) = A ⇔ infs∈C+ (|a(s)| + |b(s)|) > 0, (19) where C+ = {s ∈ C | Re (s) ≥ 0} is the closed right half plane. Therefore, if we take A = H∞(C+) or A = Â, then [ 1 s+1 , e−s ] = 1 (see Exercise 3.2) but the ideal I =  1 s+1 , e−s  ( (1) = A because we have: infs∈C+  1 s+1 + |e−s |  = 0. Indeed, if we take a sequence (xn)n≥0, with xn ∈ R+ and limn→+∞ xn = +∞, then we have: limn→+∞ 1 xn+1 = limn→+∞ |e−xn | = 0. Therefore, A = H∞(C+) and A =  are not Bézout domains. Exercise 4.8: 1) Let us consider the plant defined by the transfer function p = e−s s−1 . Show that p belongs to K = Q(A), where A = H∞(C+) or A = Â, because we have: ( n = e−s s+1 ∈ A, d = s−1 s+1 ∈ A. 2) Using (19), show that the two elements d = s−1 s+1 and n = e−s s+1 of A satisfy that the ideal I = (d, n) is equal to A, and thus, that p admits a coprime factorization. 3) Show that p = n/d is a coprime factorization of p with: (s−1) (s+1)  1 + 2  1−e−(s−1) s−1  +  e−s s+1  2 e = 1. The effective computation of a doubly coprime factorization is generally a difficult task. See [8], [9], [78] for the explicit forms of coprime factorizations for some classes of SISO systems. V. THE FRACTIONAL REPRESENTATION APPROACH TO SYNTHESIS PROBLEMS A. Introduction “The central idea that is used repeatedly in the book is that “of factoring” the transfer matrix of a (not necessarily stable) system as the “ratio” of two stable rational matrices. This idea was first used in a paper published in 1972 (see [76]), but the emphasis there was on analyzing the stability of a given system rather than on the synthesis of control systems as is the case here. It turns out that this seemingly simple stratagem leads to conceptually simple and compu- tationally tractable solutions to many important and interesting problems...”, M. Vidyasagar [78]. In the eighties, the fractional representation approach to synthesis problems was created in order to study in a unique mathematical framework some synthesis problems (e.g. 13 Fig. 1. Closed-loop system internal/strong/simultaneous/robust stabilization, parametriza- tion of all stabilizing controllers, robustness, H2/H∞-optimal controllers) for different classes of time-invariant linear sys- tems (continuous-time, discrete, finite or infinite-dimensional systems) [2], [14], [17], [41], [77], [78]. The main idea of this approach was to give general formulations of different synthesis problems so that a wide variety of classes of systems (e.g. lumped or delay systems, systems of partial differential equations) could be studied using the same concepts and tools. 
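Before entering the details of this approach, let us note that Bézout identities such as the one of Exercise 4.8 above are easy to check symbolically. The following sketch (Python/sympy, our verification rather than part of the original text) confirms the identity d x + n y = 1 behind the coprime factorization of p = e−s/(s − 1); note that x belongs to A because the singularity of (1 − e−(s−1))/(s − 1) at s = 1 is removable:

```python
# Sketch: checking the Bezout identity of Exercise 4.8,
#   d*x + n*y = 1,  d = (s-1)/(s+1),  n = exp(-s)/(s+1),
#   x = 1 + 2*(1 - exp(-(s-1)))/(s-1),  y = 2*e,
# which exhibits a coprime factorization of p = exp(-s)/(s-1).
from sympy import symbols, exp, E, simplify

s = symbols('s')
d = (s - 1) / (s + 1)
n = exp(-s) / (s + 1)
x = 1 + 2 * (1 - exp(-(s - 1))) / (s - 1)  # in A: removable singularity at s = 1
y = 2 * E
assert simplify(d * x + n * y - 1) == 0
```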
V. THE FRACTIONAL REPRESENTATION APPROACH TO SYNTHESIS PROBLEMS

A. Introduction

“The central idea that is used repeatedly in the book is that of “factoring” the transfer matrix of a (not necessarily stable) system as the “ratio” of two stable rational matrices. This idea was first used in a paper published in 1972 (see [76]), but the emphasis there was on analyzing the stability of a given system rather than on the synthesis of control systems as is the case here. It turns out that this seemingly simple stratagem leads to conceptually simple and computationally tractable solutions to many important and interesting problems...”, M. Vidyasagar [78].

In the eighties, the fractional representation approach to synthesis problems was created in order to study within a unique mathematical framework various synthesis problems (e.g. internal/strong/simultaneous/robust stabilization, parametrization of all stabilizing controllers, robustness, H2/H∞-optimal controllers) for different classes of time-invariant linear systems (continuous-time, discrete, finite- or infinite-dimensional systems) [2], [14], [17], [41], [77], [78]. The main idea of this approach was to give general formulations of different synthesis problems so that a wide variety of classes of systems (e.g. lumped or delay systems, systems of partial differential equations) could be studied using the same concepts and tools.

In this approach, synthesis problems are reformulated independently of the classes of systems under consideration, so that general conditions for the solvability of a specific synthesis problem can be obtained. Hence, the verification of the solvability of a synthesis problem for a particular system of a certain class reduces to the verification of an abstract condition in which the parameters are specified. This allows one to separate, as much as possible, the problems coming from the specific synthesis problem from the difficulties arising from the class of systems under consideration. It is then not surprising that the fractional representation approach to synthesis problems is a ring-theoretic approach: algebra develops general (universal) concepts which can be used in very different situations. In particular, it is natural to use module theory and homological algebra in the study of the fractional representation approach to synthesis problems: these two algebraic theories have been developed to understand general features of algebraic structures without specifying a particular ring. Hence, we could fairly say that the fractional representation approach to synthesis problems is a homological algebra approach to stabilization problems.

B. Internal stabilization

Fig. 1. Closed-loop system

Let us consider the closed-loop system defined in Figure 1, where u_2 (resp. u_1) is the reference input (resp. a perturbation), y_1 and y_2 the outputs, and e_1 and e_2 the internal inputs. We have the following equations of the closed-loop system:
\[
\begin{pmatrix} I_q & -P \\ -C & I_r \end{pmatrix}
\begin{pmatrix} e_1 \\ e_2 \end{pmatrix}
= \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}, \qquad
y_1 = e_2 - u_2, \quad y_2 = e_1 - u_1.
\]
The following definition plays a crucial role in all the rest of the paper.

Definition 5.1: [17], [41], [44], [77], [78] Let A be an integral domain of SISO stable plants and K = Q(A) its quotient field. Let P ∈ K^{q×r} be the transfer matrix of a plant and C ∈ K^{r×q} the transfer matrix of a controller. Then, C is called an internally stabilizing controller of P if
\[
H(P, C) = \begin{pmatrix} I_q & -P \\ -C & I_r \end{pmatrix}^{-1}
= \begin{pmatrix} (I_q - P\,C)^{-1} & (I_q - P\,C)^{-1}\,P \\ C\,(I_q - P\,C)^{-1} & I_r + C\,(I_q - P\,C)^{-1}\,P \end{pmatrix} \in A^{p\times p},
\]
i.e. all the entries of the transfer matrix from (u_1 : u_2)^T to (e_1 : e_2)^T are A-stable.

Example 5.1: Let us consider p = 1/(s−1) ∈ K = R(s) given in [37] and A = RH∞. The controller c = −(s−1)/(s+1) proposed in [37] is not a stabilizing controller of p because we have
\[
e_1 = \frac{s+1}{s+2}\,u_1 + \frac{s+1}{(s+2)\,(s-1)}\,u_2, \qquad
e_2 = -\frac{s-1}{s+2}\,u_1 + \frac{s+1}{s+2}\,u_2,
\]
and the transfer function from u_2 to e_1 is not stable (unstable pole at s = 1). Hence, unstable pole-zero cancellations between the plant p and the controller c lead to an instability in the closed loop, i.e. c is not a stabilizing controller of p.
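The computation of Example 5.1 is easily reproduced symbolically; a minimal SymPy sketch (only an illustration of the example above):

```python
# Sketch of Example 5.1: H(p, c) for p = 1/(s-1), c = -(s-1)/(s+1);
# the (1,2) entry keeps the unstable pole s = 1, so c does not
# internally stabilize p.
import sympy as sp

s = sp.symbols('s')
p = 1/(s - 1)
c = -(s - 1)/(s + 1)
H = sp.Matrix([[1, -p], [-c, 1]]).inv().applyfunc(sp.cancel)
print(H[0, 1])   # (s + 1)/((s - 1)*(s + 2)): unstable pole at s = 1
print(H[0, 0], H[1, 0], H[1, 1])   # the other three entries are stable
```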
Proposition 5.1: We have the following results:
• If A = H∞(C_+), then internal stabilizability is equivalent to the fact that the linear operator T_{H(P,C)}, defined by
\[
H_2(\mathbb{C}_+)^p \longrightarrow H_2(\mathbb{C}_+)^p, \qquad u = (u_1 : u_2)^T \longmapsto (e_1 : e_2)^T = H(P, C)\,u,
\]
is bounded [14], [28], namely:
\[
\mathrm{dom}(T_{H(P,C)}) = \{u \in H_2^p \mid H(P, C)\,u \in H_2^p\} = H_2^p.
\]
This means that there is no input u with finite energy, i.e. u ∈ H_2^p, such that the corresponding internal input e = (e_1 : e_2)^T has infinite energy, i.e. e ∉ H_2^p.
• If A = RH∞ or A = Â, then internal stabilizability implies that the linear operator T_{H(P,C)}, defined as above, is bounded [12], [15], [78], namely:
\[
\mathrm{dom}(T_{H(P,C)}) = \{u \in H_2^p \mid H(P, C)\,u \in H_2^p\} = H_2^p.
\]
• If A = 𝒜 (the convolution algebra whose Laplace transform is Â), then internal stabilization implies that the operator T_{H(P,C)}, defined by
\[
L^q(\mathbb{R}_+)^p \longrightarrow L^q(\mathbb{R}_+)^p, \qquad u = (u_1 : u_2)^T \longmapsto (e_1 : e_2)^T = H(P, C) \star u,
\]
is bounded for 1 ≤ q ≤ +∞, namely
\[
\mathrm{dom}(T_{H(P,C)}) = \{u \in L^q(\mathbb{R}_+)^p \mid H(P, C) \star u \in L^q(\mathbb{R}_+)^p\} = L^q(\mathbb{R}_+)^p.
\]
Moreover, if the convolution kernel H(P, C) has a vanishing singular part, then internal stabilization is equivalent to BIBO stability, i.e. to the fact that the previous linear operator is bounded for q = +∞ [12], [14], [15].

The following theorem characterizes internal stabilization in terms of module theory.

Theorem 5.1: [54], [56] A plant defined by a transfer matrix P = D^{-1} N = Ñ D̃^{-1} ∈ K^{q×r}, where
\[
R = (D : -N) \in A^{q\times p}, \qquad \tilde{R} = (\tilde{N}^T : \tilde{D}^T)^T \in A^{p\times r},
\]
is internally stabilized by a controller of the form C = Y X^{-1} (resp. C = X̃^{-1} Ỹ) iff A^p/A^q R (resp. A^p/A^r R̃^T) is a projective A-module.

From Theorem 5.1, we obtain the following algorithm:

Algorithm 2:
Input: A coherent domain A and a matrix R = (D : −N) ∈ A^{q×p}.
Output: Stabilizability or not of P = D^{-1} N ∈ K^{q×r}.
1) Using Algorithm 1, compute $\overline{A^q R}$: we obtain q_0 ∈ Z_+ and R_0 ∈ A^{q_0×p} such that $\overline{A^q R} = A^{q_0} R_0$.
2) For increasing i, check whether or not 1 ∈ Fitt_i(A^p/A^{q_0} R_0).
If there exists i such that 1 ∈ Fitt_i(A^p/A^{q_0} R_0), then P is internally stabilizable, else not.

Remark 5.1: In order to be able to check internal stabilizability effectively, we need to be able to:
• compute the kernel of matrices with entries in A,
• test whether or not 1 belongs to a finitely generated ideal of A.

Example 5.2: Let us reconsider Example 4.2. We proved that the A-module A³/A² R_0 is projective (A = H∞(C_+)), where the matrix R_0 ∈ A^{2×3} is defined by (12). Moreover, in Example 3.4, we proved that $\overline{A^2 R} = A^2 R_0$, where R is defined by (7). Thus, the A-module $A^3/\overline{A^2 R} = A^3/A^2 R_0$ is projective and, by Theorem 5.1, the plant defined by the transfer matrix P (6) is internally stabilized by a certain controller of the form C = Y X^{-1}.

Exercise 5.1: Using Exercises 3.5 and 4.1, prove that the transfer matrix P defined in Exercise 3.5 is internally stabilizable.

Corollary 5.1: [56] If a transfer matrix P ∈ K^{q×r} admits a weakly left (resp. right) coprime factorization of the form P = D^{-1} N (resp. P = Ñ D̃^{-1}), where R = (D : −N) ∈ A^{q×p} (resp. R̃ = (Ñ^T : D̃^T)^T ∈ A^{p×r}), then P is internally stabilizable iff P = D^{-1} N (resp. P = Ñ D̃^{-1}) is a left (resp. right) coprime factorization of P. Moreover, if we have
\[
D\,X - N\,Y = I_q, \qquad S = (X^T : Y^T)^T \in A^{p\times q} \tag{20}
\]
(resp.
\[
\tilde{Y}\,\tilde{N} - \tilde{X}\,\tilde{D} = I_r, \qquad \tilde{S} = (\tilde{Y} : \tilde{X}) \in A^{r\times p}), \tag{21}
\]
then the controller C = Y X^{-1} (resp. C = X̃^{-1} Ỹ) internally stabilizes P.

Exercise 5.2:
1) If P admits a left-coprime factorization (resp. a right-coprime factorization) of the form (20) (resp. (21)), then prove that P is internally stabilized by C = Y X^{-1} (resp. C = X̃^{-1} Ỹ). (Hints: for instance, if P admits the left-coprime factorization (20), then prove that we have I_q − P C = (X D)^{-1}, and thus
\[
(I_q - P\,C)^{-1} = X\,D \in A^{q\times q}, \quad (I_q - P\,C)^{-1}\,P = X\,N \in A^{q\times r}, \quad
C\,(I_q - P\,C)^{-1} = Y\,D \in A^{r\times q}, \quad C\,(I_q - P\,C)^{-1}\,P = Y\,N \in A^{r\times r},
\]
i.e. C internally stabilizes P. See [60] for the explicit computations.)
2) Prove the converse of Corollary 5.1 using the following result: “if P admits a weakly left-coprime factorization P = D^{-1} N, with R = (D : −N) ∈ A^{q×p}, then A^p/A^q R is a projective A-module iff A^p/A^q R is a stably free A-module” (see [56] for a proof of this result).
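For a SISO instance of the hint above, take p = 1/(s−1) over RH∞ with d = (s−1)/(s+1), n = 1/(s+1), x = 1, y = −2 (an illustrative choice, not data from [56] or [60]), so that d x − n y = 1 and c = y/x = −2. A SymPy sketch checking the four closed-loop transfer functions against these formulas:

```python
# SISO check of the hint of Exercise 5.2: if d*x - n*y = 1, then the
# closed-loop transfer functions are d*x, n*x, d*y, n*y, all stable.
import sympy as sp

s = sp.symbols('s')
d, n = (s - 1)/(s + 1), 1/(s + 1)
x, y = sp.Integer(1), sp.Integer(-2)
assert sp.simplify(d*x - n*y - 1) == 0

p, c = n/d, y/x
closed_loop = [1/(1 - p*c), p/(1 - p*c), c/(1 - p*c), p*c/(1 - p*c)]
expected    = [d*x,          n*x,         d*y,         n*y        ]
print([sp.simplify(t - e) for t, e in zip(closed_loop, expected)])
# all zero: c = -2 internally stabilizes p = 1/(s - 1)
```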
Example 5.3: In Example 4.2, we gave a left-coprime factorization (18) of the transfer matrix P defined by (6). Thus, by Corollary 5.1, the controller defined by
\[
C = Y\,X^{-1} =
\begin{pmatrix} \frac{2\,a\,(s-1)}{(s+1)^2} & -\frac{a}{s+1} \end{pmatrix}
\begin{pmatrix}
-\frac{2\,b\,(s-1)^2}{(s+1)^2} + \frac{2\,b\,(s-1)}{s+1} & -\frac{2\,b\,(s-1)}{(s+1)^2} \\[1mm]
-1 & \frac{b}{s+1}
\end{pmatrix}^{-1}
= -\frac{4\,(5s-3)\,e^{-(s-1)}}{(s-1)^2\,(s+1)\,\big((s+1)^3 - 4\,(5s-3)\,e^{-(s-1)}\big)}\,(1 : 2)
\]
internally stabilizes P.

Example 5.4: Let us consider the transfer function p = e^{-√s}/(s−1) arising in the theory of transmission lines [9]. Let A = H∞(C_+) and let us denote:
\[
n = \frac{e^{-\sqrt{s}}}{s+1} \in A, \qquad d = \frac{s-1}{s+1} \in A.
\]
Then, we have p = n/d and [d, n] = 1, which shows that p = n/d is a weakly coprime factorization of p. Hence, p is internally stabilizable iff p admits a coprime factorization, i.e. iff there exist x, y ∈ A such that d x − n y = 1. Hence, the existence of a coprime factorization for p is equivalent to the existence of y ∈ A such that:
\[
x = \frac{1 + y\,\frac{e^{-\sqrt{s}}}{s+1}}{\frac{s-1}{s+1}} = \frac{(s+1) + y\,e^{-\sqrt{s}}}{s-1} \in A.
\]
Therefore, we must try to remove the unstable pole at s = 1 by choosing an appropriate y ∈ A, i.e. y such that:
\[
\big((s+1) + y\,e^{-\sqrt{s}}\big)(1) = 2 + y(1)\,e^{-1} = 0.
\]
If we choose the constant y = y(1) = −2 e ∈ A, then we have:
\[
x = \frac{(s+1) - 2\,e^{1-\sqrt{s}}}{s-1} \in A.
\]
Therefore, c = y/x is a stabilizing controller of p. We refer the reader to [8], [9] for explicit coprime factorizations for some classes of infinite-dimensional linear SISO systems (e.g. differential time-delay or fractional differential systems).
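In Example 5.4, the key step is that the choice y = −2 e makes the singularity of x at s = 1 removable; a two-line SymPy check of this computation (only an illustration of the example):

```python
# Example 5.4: y = -2e removes the unstable pole at s = 1, i.e.
# x = ((s+1) - 2*exp(1 - sqrt(s)))/(s - 1) has a removable singularity.
import sympy as sp

s = sp.symbols('s')
num = (s + 1) - 2*sp.exp(1 - sp.sqrt(s))
print(num.subs(s, 1))               # 0: the numerator vanishes at s = 1
print(sp.limit(num/(s - 1), s, 1))  # 2: a finite value, so x(1) = 2
```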
Corollary 5.2: [56] If A is a projective-free integral domain, then every plant, defined by a transfer matrix with entries in K = Q(A), is internally stabilizable iff it admits a doubly coprime factorization. In particular, Corollary 5.2 holds for coherent Sylvester domains (e.g. H∞(C_+) [71], RH∞ [78]).

Corollary 5.3: The integral domain MD_n, defined in (5), is projective-free [10], [39], and thus, every internally stabilizable plant admits a doubly coprime factorization [59]. Corollary 5.3 answers a conjecture of Z. Lin. See [43] and the references therein, as well as [59], for more details.

Proposition 5.2: [56] The following two assertions are equivalent:
• The A-module A^p/A^q R is projective (R ∈ A^{q×p}).
• The A-module A^p R^T is projective.

Hence, we have the following corollary of Theorem 5.1 and Proposition 5.2, which was first proved by V. R. Sule in [73].

Corollary 5.4: P = D^{-1} N = Ñ D̃^{-1} ∈ K^{q×r}, where R = (D : −N) ∈ A^{q×p} and R̃ = (Ñ^T : D̃^T)^T ∈ A^{p×r}, is internally stabilizable by a controller C = Y X^{-1} (resp. C = X̃^{-1} Ỹ) iff A^p R^T (resp. A^p R̃) is a projective A-module.

In [47], K. Mori developed an algorithm in order to check whether or not an A-module of the form A^p R^T is projective. Alternatively, using the approach developed in these notes, we can first compute the A-closure $\overline{A^q R}$ of the A-module A^q R (see Algorithm 1 of Section III-C) and use Proposition 4.2 to check whether or not $A^p/\overline{A^q R}$ is a projective A-module, i.e. whether or not P is internally stabilizable (see Algorithm 2).

In the next corollary, we give two characterizations of internal stabilizability using only matrices.

Corollary 5.5:
1) [55], [56] P = D^{-1} N ∈ K^{q×r}, where R = (D : −N) ∈ A^{q×p}, is internally stabilizable iff there exists S = (X^T : Y^T)^T ∈ K^{p×q}, with det X ≠ 0, such that:
\[
S\,R = \begin{pmatrix} X\,D & -X\,N \\ Y\,D & -Y\,N \end{pmatrix} \in A^{p\times p}, \qquad
R\,S = D\,X - N\,Y = I_q.
\]
Then, the controller C = Y X^{-1} internally stabilizes P.
2) [55], [56] P = Ñ D̃^{-1}, where R̃ = (Ñ^T : D̃^T)^T ∈ A^{p×r}, is internally stabilizable iff there exists a matrix T = (−Ỹ : X̃) ∈ K^{r×p}, with det X̃ ≠ 0, such that:
\[
\tilde{R}\,T = \begin{pmatrix} -\tilde{N}\,\tilde{Y} & \tilde{N}\,\tilde{X} \\ -\tilde{D}\,\tilde{Y} & \tilde{D}\,\tilde{X} \end{pmatrix} \in A^{p\times p}, \qquad
T\,\tilde{R} = -\tilde{Y}\,\tilde{N} + \tilde{X}\,\tilde{D} = I_r.
\]
Then, the controller C = X̃^{-1} Ỹ internally stabilizes P.

Exercise 5.3: Give the proofs of 1 and 2 of Corollary 5.5 using only matrices. Compare your proofs with [58], [59].

Exercise 5.4: Check that S = (X^T : Y^T)^T ∈ K^{3×2}, defined by
\[
S = \begin{pmatrix}
\frac{b\,(s-1)}{s+1} + \frac{2\,(s+1)}{(s-1)^2} & \frac{2\,b\,(s-1)}{s+1} - \frac{2\,(s-1)}{s+1} \\[1mm]
\frac{b}{s+1} - \frac{s+1}{(s-1)^2} & \frac{2\,b}{s+1} + \frac{s+1}{s-1} \\[1mm]
-\frac{a}{s+1} & -\frac{2\,a}{s+1}
\end{pmatrix},
\]
where a and b are defined by (14), satisfies:
\[
S\,R \in A^{3\times 3}, \qquad R\,S = D\,X - N\,Y = I_2.
\]
Deduce that P is internally stabilized by the controller:
\[
C = Y\,X^{-1} = \begin{pmatrix} -\frac{a}{s+1} & -\frac{2\,a}{s+1} \end{pmatrix}
\begin{pmatrix}
\frac{b\,(s-1)}{s+1} + \frac{2\,(s+1)}{(s-1)^2} & \frac{2\,b\,(s-1)}{s+1} - \frac{2\,(s-1)}{s+1} \\[1mm]
\frac{b}{s+1} - \frac{s+1}{(s-1)^2} & \frac{2\,b}{s+1} + \frac{s+1}{s-1}
\end{pmatrix}^{-1}
= -\frac{4\,(5s-3)\,e^{-(s-1)}}{(s-1)^2\,(s+1)\,\big((s+1)^3 - 4\,(5s-3)\,e^{-(s-1)}\big)}\,(1 : 2).
\]

Corollary 5.6: A SISO plant, defined by a transfer function p = n/d ∈ K = Q(A), where 0 ≠ d, n ∈ A, is internally stabilizable iff the ideal I = (d, n) of A is a projective A-module, i.e. iff there exist x, y ∈ K such that:
\[
d\,x - n\,y = 1, \qquad d\,x,\ d\,y,\ n\,x \in A. \tag{22}
\]
If x ≠ 0 (resp. x = 0), then the controller c = y/x ∈ K (resp. c = 1 − d y ∈ A) internally stabilizes p.

Exercise 5.5: The main purpose of this exercise is to prove Corollary 5.6. See [57] for the proofs.
1) Let us consider the matrix R = (d : −n) ∈ A^{1×2}. Show that A² R^T is the ideal I = (d, n) of A generated by d and n.
2) Using Theorem 5.1 and Corollary 5.4, prove that the plant p = n/d is internally stabilizable iff the ideal I = (d, n) of A is a projective A-module.
3) Using Corollary 5.5, prove that p = n/d is internally stabilizable iff (22) is satisfied for a certain couple (x, y) ∈ K².
4) If x ≠ 0 (resp. x = 0), prove directly that c = y/x (resp. c = 1 − d y), where x, y ∈ K satisfy (22), is a stabilizing controller of p by showing that we have:
\[
H(p, c) = \begin{pmatrix} 1 & -p \\ -c & 1 \end{pmatrix}^{-1}
= \begin{pmatrix} \frac{1}{1 - p\,c} & \frac{p}{1 - p\,c} \\[1mm] \frac{c}{1 - p\,c} & \frac{1}{1 - p\,c} \end{pmatrix} \in A^{2\times 2}. \tag{23}
\]
5) One can show that I = (d, n) is a projective A-module iff I is an invertible ideal of A, namely iff the product
\[
I\,(A : I) \triangleq \left\{ \sum_{i=1}^n a_i\,b_i \;\middle|\; a_i \in I,\ b_i \in A : I \right\}
\]
of I by A : I = {k ∈ K = Q(A) | k d, k n ∈ A} equals A [54], [56]. Recover point 3 using the fact that p = n/d is internally stabilizable iff I = (d, n) is an invertible ideal of A.
6) Prove that c = s/r internally stabilizes p = n/d iff we have the following equality of ideals of A: (d, n)(r, s) = (d r − n s).
7) Prove that:
\[
I\,(A : I) = \{a \in A \mid a\,n \in (d)\} + \{a \in A \mid a\,d \in (n)\}.
\]
Deduce that p is internally stabilizable iff we have {a ∈ A | a n ∈ (d)} + {a ∈ A | a d ∈ (n)} = A (see [54], [56] for a proof). This last result was first proved by S. Shankar and V. R. Sule in [69].
8) Prove that p = n/d admits a weakly coprime factorization iff A : I is a principal fractional ideal of A (see Exercise 5.8 for the definition of fractional ideals).
9) Prove that p = n/d admits a coprime factorization iff I is a principal ideal of A.
10) Prove that p = n/d is strongly (resp. bistably) stabilizable, namely internally stabilizable by means of a stable controller c ∈ A (resp. by a stable controller c whose inverse is also stable [4], [20], [78], i.e. c ∈ U(A)), iff there exists c ∈ A (resp. c ∈ U(A)) such that I = (d − n c).
Exercise 5.6: [57] Let us consider the wave equation:
\[
\begin{cases}
\dfrac{\partial^2 z}{\partial t^2}(x, t) - \dfrac{\partial^2 z}{\partial x^2}(x, t) = 0, \\[1mm]
\dfrac{\partial z}{\partial x}(0, t) = 0, \\[1mm]
\dfrac{\partial z}{\partial x}(1, t) = u(t), \\[1mm]
y(t) = \dfrac{\partial z}{\partial t}(1, t).
\end{cases} \tag{24}
\]
1) Prove that the transfer function of (24) is given by p = (e^s + e^{-s})/(e^s − e^{-s}).
2) Prove that p ∈ K = Q(H∞(C_+)).
3) Using the fact that A = H∞(C_+) is a gcdd (see Corollary 3.3), compute a weakly coprime factorization of p.
4) Prove that p is internally stabilizable and compute a stabilizing controller of p.
5) Determine a coprime factorization of p.
6) Prove that p is bistably stabilizable.

The next theorem gives some explicit characterizations of internal stabilizability using only the transfer matrix P of the system, i.e. without using any fractional representation of P.

Theorem 5.2: [58], [59] P ∈ K^{q×r} is internally stabilizable iff one of the following conditions is satisfied:
1) There exists S = (U^T : V^T)^T ∈ A^{p×q} such that:
\[
S\,P = \begin{pmatrix} U\,P \\ V\,P \end{pmatrix} \in A^{p\times r}, \qquad
(I_q : -P)\,S = U - P\,V = I_q.
\]
Then, C = V U^{-1} is a stabilizing controller of P.
2) There exists T = (−X : Y) ∈ A^{r×p} such that:
\[
P\,T = (-P\,X : P\,Y) \in A^{q\times p}, \qquad
T \begin{pmatrix} P \\ I_r \end{pmatrix} = -X\,P + Y = I_r.
\]
Then, C′ = Y^{-1} X is a stabilizing controller of P.
If P is internally stabilizable, then there exist S ∈ A^{p×q} and T ∈ A^{r×p} satisfying 1 and 2 and such that T S = −X U + Y V = 0, i.e. there exists a stabilizing controller of P of the form C = V U^{-1} = Y^{-1} X.

Exercise 5.7: Check that S = (U^T : V^T)^T ∈ A^{3×2}, defined by
\[
S = \begin{pmatrix}
\frac{2}{s+1} + b\left(\frac{s-1}{s+1}\right)^3 & 2\,b\left(\frac{s-1}{s+1}\right)^3 - \frac{2\,(s-1)}{s+1} \\[1mm]
\frac{b\,(s-1)^2}{(s+1)^3} - \frac{1}{s+1} & \frac{s-1}{s+1} + \frac{2\,b\,(s-1)}{(s+1)^3} \\[1mm]
-\frac{a\,(s-1)^2}{(s+1)^3} & -\frac{2\,a\,(s-1)^2}{(s+1)^3}
\end{pmatrix},
\]
where a and b are defined by (14), satisfies:
\[
S\,(I_2 : -P) \in A^{3\times 3}, \qquad (I_2 : -P)\,S = U - P\,V = I_2.
\]
Deduce that P is internally stabilized by the controller:
\[
C = V\,U^{-1} = \begin{pmatrix} -\frac{a\,(s-1)^2}{(s+1)^3} & -\frac{2\,a\,(s-1)^2}{(s+1)^3} \end{pmatrix}
\begin{pmatrix}
\frac{2}{s+1} + b\left(\frac{s-1}{s+1}\right)^3 & 2\,b\left(\frac{s-1}{s+1}\right)^3 - \frac{2\,(s-1)}{s+1} \\[1mm]
\frac{b\,(s-1)^2}{(s+1)^3} - \frac{1}{s+1} & \frac{s-1}{s+1} + \frac{2\,b\,(s-1)}{(s+1)^3}
\end{pmatrix}^{-1}
= -\frac{4\,(5s-3)\,e^{-(s-1)}}{(s-1)^2\,(s+1)\,\big((s+1)^3 - 4\,(5s-3)\,e^{-(s-1)}\big)}\,(1 : 2).
\]

Corollary 5.7: [59] P ∈ K^{q×r} is internally stabilized by the controller C ∈ K^{r×q} iff one of the following conditions is satisfied:
• The matrix
\[
\Pi_1 = \begin{pmatrix} (I_q - P\,C)^{-1} & -(I_q - P\,C)^{-1}\,P \\ C\,(I_q - P\,C)^{-1} & -C\,(I_q - P\,C)^{-1}\,P \end{pmatrix}
\]
is a projector of A^{p×p}, namely Π_1² = Π_1 ∈ A^{p×p}.
• The matrix
\[
\Pi_2 = \begin{pmatrix} -P\,(I_r - C\,P)^{-1}\,C & P\,(I_r - C\,P)^{-1} \\ -(I_r - C\,P)^{-1}\,C & (I_r - C\,P)^{-1} \end{pmatrix}
\]
is a projector of A^{p×p}, namely Π_2² = Π_2 ∈ A^{p×p}.
Moreover, we have Π_1 + Π_2 = I_p. Corollary 5.7 was already proved for H∞(C_+) in [28].

Remark 5.2: First of all, let us notice that we can prove that Corollary 5.7 is equivalent to the fact that P ∈ K^{q×r} is internally stabilizable iff one of the following conditions is satisfied [58], [59]:
• A^p (P^T : I_r)^T is a projective lattice of K^r, namely a projective A-submodule of K^r of rank r,
• A^p (I_q : −P)^T is a projective lattice of K^q, namely a projective A-submodule of K^q of rank q.
Secondly, in the loop-shaping procedure [20], [29], let us notice that the robustness radius is defined by [20], [25], [29]:
\[
b_{P,C} \triangleq \|\Pi_1\|_\infty^{-1} = \|\Pi_2\|_\infty^{-1}.
\]
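Returning to Corollary 5.7 for a SISO illustration: with p = 1/(s−1) and the stabilizing controller c = −2 (the same illustrative pair as before), Π_1 is an idempotent matrix with stable entries and Π_1 + Π_2 = I_2. A SymPy sketch:

```python
# SISO check of Corollary 5.7 for p = 1/(s-1), c = -2 (illustrative
# choice): Pi1 and Pi2 are complementary projectors with stable entries.
import sympy as sp

s = sp.symbols('s')
p, c = 1/(s - 1), sp.Integer(-2)
w = 1/(1 - p*c)                     # (I - P C)^{-1} = (I - C P)^{-1} here
Pi1 = sp.Matrix([[w, -w*p], [c*w, -c*w*p]]).applyfunc(sp.cancel)
Pi2 = sp.Matrix([[-p*w*c, p*w], [-w*c, w]]).applyfunc(sp.cancel)
print(sp.simplify(Pi1*Pi1 - Pi1))           # zero matrix: a projector
print(sp.simplify(Pi1 + Pi2 - sp.eye(2)))   # zero matrix
```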
Corollary 5.8:
• If P ∈ K^{q×r} admits a left-coprime factorization P = D^{-1} N, D X − N Y = I_q, then S = ((X D)^T : (Y D)^T)^T satisfies 1 of Theorem 5.2, and thus, C = (Y D)(X D)^{-1} = Y X^{-1} is a stabilizing controller of P.
• Similarly, if P ∈ K^{q×r} admits a right-coprime factorization P = Ñ D̃^{-1}, −Ỹ Ñ + X̃ D̃ = I_r, then T = (−D̃ Ỹ : D̃ X̃) satisfies 2 of Theorem 5.2, and thus, C = (D̃ X̃)^{-1}(D̃ Ỹ) = X̃^{-1} Ỹ is a stabilizing controller of P.

Exercise 5.8: This exercise is based on certain results obtained in [55], [57], [62]. We refer the reader to these papers for more details and the solutions.
1) The lattices of K are called the fractional ideals of A. A fractional ideal J of A is an A-submodule of the quotient field K = Q(A) such that there exists 0 ≠ a ∈ A with a J ⊆ A. Let p ∈ K be a transfer function. Prove that J = (1, p) ≜ A + A p is a fractional ideal of A.
2) Prove that p admits a weakly coprime factorization iff the ideal J = (1, p) satisfies that
\[
A : J \triangleq \{k \in K \mid k,\ k\,p \in A\} = \{d \in A \mid d\,p \in A\}
\]
is a principal integral ideal of A, namely has the form A : J = (d), with 0 ≠ d ∈ A. A : J is called the ideal of the denominators of p, whereas (p)(A : J) is the ideal of the numerators of p.
3) Prove that p admits a coprime factorization iff the fractional ideal J = (1, p) is principal.
4) c ∈ K is said to externally stabilize p ∈ K if the transfer function (p c)/(1 − p c) belongs to A. Prove that c ∈ K externally stabilizes p iff we have (1, p c) = (1 − p c).
5) Prove that p is internally stabilizable iff the fractional ideal J = (1, p) is invertible, namely satisfies J (A : J) = A, where the product J (A : J) is defined by:
\[
J\,(A : J) = \{a + b\,p \mid a, b \in A,\ a\,p,\ b\,p \in A\}.
\]
If J is an invertible fractional ideal of A, then A : J is called the inverse of J and is denoted by J^{-1}. Deduce that p is internally stabilizable iff there exist a, b ∈ A which satisfy¹:
\[
a - b\,p = 1, \qquad a\,p \in A. \tag{25}
\]
Then, prove that if a ≠ 0 (resp. a = 0), c = b/a ∈ K (resp. c = 1 − b ∈ A) is a stabilizing controller of p and J^{-1} = (a, b). Finally, if a ≠ 0, then prove that we have:
\[
a = \frac{1}{1 - p\,c} \ \text{(sensitivity transfer function)}, \qquad b = \frac{c}{1 - p\,c}.
\]

¹ While we were completing the paper at the beginning of 2004, we found that a similar characterization of internal stabilizability was obtained in the paper “Feedback, minimax sensitivity, and optimal robustness”, G. Zames, B. A. Francis, IEEE Trans. Autom. Contr., 28 (1983), 585-601, under the form: p is internally stabilizable iff there exists a stable q such that a = 1 − p q and a p = (1 − p q) p are both stable. This characterization corresponds to b = −q, up to the sign convention in the closed-loop system (see Figure 1).

6) Prove directly that c = b/a ∈ K, where 0 ≠ a, b ∈ A satisfy (25), is an internally stabilizing controller of p by showing that we then have (23).
7) Prove that c ∈ K internally stabilizes p ∈ K iff we have the following equality of fractional ideals of A:
\[
(1, p)\,(1, c) = (1 - p\,c). \tag{26}
\]
8) Consider the transfer function p defined in Exercise 4.8. Prove that p is internally stabilizable and that p admits a coprime factorization.
9) Prove that c = −(s−1)/(s+1) ∈ A cannot internally stabilize the plant p = 1/(s−1) (see Example 5.1) using only (26) and the fact that 1 − p c ∈ U(A).
10) Prove that if p admits a weakly coprime factorization and is internally stabilizable, then p admits a coprime factorization.
11) Let c ∈ K be a stabilizing controller of p. Using 3 and (26), prove that c admits a coprime factorization iff p admits a coprime factorization.
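For the plant p = e^{-s}/(s−1) of Exercise 4.8 (point 8 above), a pair (a, b) satisfying (25) can be read off from the coprime factorization computed there, via a = d x and b = d y as in point 5. A short SymPy verification (the script is only illustrative):

```python
# Check of (25) for p = exp(-s)/(s - 1): a - b*p = 1 and a*p is stable
# (the singularity of a*p at s = 1 is removable).
import sympy as sp

s = sp.symbols('s')
p = sp.exp(-s)/(s - 1)
a = ((s + 1) - 2*sp.exp(1 - s))/(s + 1)   # a = 1/(1 - p*c) = d*x
b = -2*sp.E*(s - 1)/(s + 1)               # b = c/(1 - p*c) = d*y
print(sp.simplify(a - b*p))               # 1
print(sp.limit(a*p, s, 1))                # 3/(2e): finite, a*p is in A
```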
The next theorem gives a general parametrization of all stabilizing controllers of an internally stabilizable plant which does not necessarily admit a doubly coprime factorization.

Theorem 5.3: [58], [59] Let P ∈ K^{q×r} be an internally stabilizable plant. Then, all stabilizing controllers of P have the form
\[
C(Q) = (V + Q)\,(U + P\,Q)^{-1} = (Y - Q\,P)^{-1}\,(X - Q), \tag{27}
\]
where C = V U^{-1} = Y^{-1} X is a particular stabilizing controller of P, i.e. we have
\[
U - P\,V = I_q, \qquad Y - X\,P = I_r, \qquad
\begin{pmatrix} U\,P \\ V\,P \end{pmatrix} \in A^{p\times r}, \qquad
(-P\,X : P\,Y) \in A^{q\times p},
\]
and Q is any matrix which belongs to
\[
\Omega = \{L \in A^{r\times q} \mid L\,P \in A^{r\times r},\ P\,L \in A^{q\times q},\ P\,L\,P \in A^{q\times r}\} \tag{28}
\]
such that det(U + P Q) ≠ 0 and det(Y − Q P) ≠ 0.

Let us notice that some attempts to parametrize all stabilizing controllers of an internally stabilizable plant which does not necessarily admit a doubly coprime factorization were made in [48], [73]. Unfortunately, these parametrizations are either not explicit in the free parameters or the set of free parameters is not characterized.

Remark 5.3: The number of free parameters in the parametrization (27) is completely characterized by the projective A-module Ω of rank r q defined by (28). Let us notice that determining the cardinal μ(Ω) of a minimal generating system of an A-module is a well-known and difficult problem in algebra. Some bounds on μ(Ω) were given in [59] for different cases of systems, but the general case is still open. However, for SISO systems, a complete answer is given in the next corollary.

Corollary 5.9: [57] Let p = n/d ∈ K = Q(A) be an internally stabilizable plant.
• All stabilizing controllers of p have the form
\[
c(q_1, q_2) = \frac{y + q_1\,d\,x^2 + q_2\,d\,y^2}{x + q_1\,n\,x^2 + q_2\,n\,y^2},
\]
where c = y/x is a stabilizing controller of p, namely
\[
d\,x - n\,y = 1, \qquad d\,x,\ d\,y,\ n\,x \in A \tag{29}
\]
(see (22)), and q_1, q_2 are any elements of A satisfying x + q_1 n x² + q_2 n y² ≠ 0.
• All stabilizing controllers of p have the form
\[
c(q_1, q_2) = \frac{b + q_1\,a^2 + q_2\,b^2}{a + q_1\,a^2\,p + q_2\,b^2\,p},
\]
where c = b/a is a stabilizing controller of p, namely
\[
a - b\,p = 1, \qquad a,\ b,\ a\,p \in A \tag{30}
\]
(see (25)), and q_1, q_2 are any elements of A satisfying a + q_1 a² p + q_2 b² p ≠ 0.
The parametrizations (29) and (30) have only one free parameter iff p² admits a coprime factorization. If p² = s/r is a coprime factorization of p², then:
• The parametrization (29) becomes
\[
c(q) = \frac{d\,y + q\,r}{d\,x + q\,p\,r},
\]
where q is any element of A such that d x + q p r ≠ 0.
• The parametrization (30) becomes
\[
c(q) = \frac{b + q\,r}{a + q\,p\,r},
\]
where q is any element of A such that a + q p r ≠ 0.

Exercise 5.9: Let A = R[x², x³] be the subring of R[x] generated by x² and x³. Using the fact that every integer n ≥ 2 is of the form n = 2 i + 3 j, we obtain that x^n = (x²)^i (x³)^j ∈ A for n > 1, whereas x ∉ A, which proves that:
\[
A = \left\{ p = \sum_{i=0}^n a_i\,x^i \in \mathbb{R}[x] \;\middle|\; a_1 = 0 \right\}.
\]
In [47], the ring A was used in order to model the set of discrete finite-time delay systems which do not contain the unit time-delay x. For instance, such a system appears in high-speed electronic circuits (see [47] for more details).
1) Let us consider p = (1 − x³)/(1 − x²) ∈ K = Q(A). Using the identity
\[
(1 - x^3)\,(1 + x^3) = (1 - x^2)\,(1 + x^2 + x^4),
\]
prove that p does not admit a weakly coprime factorization, and thus does not admit a coprime factorization.
2) Show that c = (−1 + x²)/(1 + x³) is a stabilizing controller of p. Conclude that there is no Youla-Kučera parametrization of all stabilizing controllers of p.
3) Compute the parametrization of all stabilizing controllers of p. Prove that this parametrization admits two free parameters and that there does not exist a parametrization of all stabilizing controllers of p with only one free parameter.
Reconsider the exercise with p = (1 + i√5)/2 ∈ Q(A) and A = Z[i√5] [1]. For both cases, see [57] for the results.
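The computations behind points 1 and 2 of Exercise 5.9 are easily checked with SymPy; a sketch, under the identification of A with the polynomials having no degree-1 term:

```python
# Exercise 5.9: computations over A = R[x^2, x^3] (no degree-1 term).
import sympy as sp

x = sp.symbols('x')
p = (1 - x**3)/(1 - x**2)
c = (-1 + x**2)/(1 + x**3)

# Point 1: the identity behind the failure of weak coprimeness.
print(sp.expand((1 - x**3)*(1 + x**3) - (1 - x**2)*(1 + x**2 + x**4)))  # 0

# Point 2: the four closed-loop transfer functions are polynomials
# with zero coefficient in degree 1, i.e. they belong to A:
for t in (1/(1 - p*c), p/(1 - p*c), c/(1 - p*c), p*c/(1 - p*c)):
    print(sp.cancel(t))
# (x**3 + 1)/2, (x**4 + x**2 + 1)/2, (x**2 - 1)/2, (x**3 - 1)/2
```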
Corollary 5.10:
• [59] If P ∈ K^{q×r} admits a left-coprime factorization P = D^{-1} N, then:
\[
\Omega = \{L \in A^{r\times q} \mid P\,L \in A^{q\times q}\}\,D.
\]
• [59] If P ∈ K^{q×r} admits a right-coprime factorization P = Ñ D̃^{-1}, then:
\[
\Omega = \tilde{D}\,\{L \in A^{r\times q} \mid L\,P \in A^{r\times r}\}.
\]

Corollary 5.11: [58], [59] Let P ∈ K^{q×r} be a plant which admits a doubly coprime factorization:
\[
P = D^{-1}\,N = \tilde{N}\,\tilde{D}^{-1}, \qquad
\begin{pmatrix} D & -N \\ -\tilde{Y} & \tilde{X} \end{pmatrix}
\begin{pmatrix} X & \tilde{N} \\ Y & \tilde{D} \end{pmatrix} = I_p.
\]
Then, the A-module Ω of free parameters defined by (28) is the free A-module of rank r q defined by:
\[
\Omega = \tilde{D}\,A^{r\times q}\,D = \{\tilde{D}\,R\,D \mid R \in A^{r\times q}\}.
\]
Therefore, all stabilizing controllers of P have the form
\[
C(Q) = (Y + \tilde{D}\,Q)\,(X + \tilde{N}\,Q)^{-1} = (\tilde{X} - Q\,N)^{-1}\,(\tilde{Y} - Q\,D),
\]
where Q ∈ A^{r×q} is any matrix such that det(X + Ñ Q) ≠ 0 and det(X̃ − Q N) ≠ 0. We recover the well-known Youla-Kučera parametrization of all stabilizing controllers of P [17], [40], [78], [80], [81].

Example 5.5: Let us consider the transfer function p = p_0 e^{-τ s}, where p_0 ∈ RH∞ is a proper and stable rational transfer function and τ ≥ 0. Hence, we have p ∈ A = H∞(C_+), and thus, p admits the coprime factorization p = n/d with n = p_0 e^{-τ s} and d = 1. Thus, we have the following Youla-Kučera parametrization of the stabilizing controllers of p:
\[
c(q) = \frac{q}{1 + q\,p_0\,e^{-\tau s}},
\]
where q ∈ A is a free parameter. Let c_0 ∈ RH∞ be a stabilizing controller of p_0 ∈ RH∞ achieving some prescribed performances. Then, we have:
\[
\tilde{q} \triangleq \frac{c_0}{1 - p_0\,c_0} \in RH_\infty \subseteq A.
\]
Therefore, we obtain the stabilizing controller of p [50]
\[
c(\tilde{q}) = \frac{c_0}{1 + p_0\,c_0\,(e^{-\tau s} - 1)} = \frac{c_0}{1 - c_0\,(p_0 - p)},
\]
which is called the Smith predictor [49], [51]. Let us notice that the complementary sensitivity transfer function then has the form
\[
\frac{p\,c(\tilde{q})}{1 - p\,c(\tilde{q})} = \left( \frac{p_0\,c_0}{1 - p_0\,c_0} \right) e^{-\tau s},
\]
showing that the Smith predictor allows us to reject the time-delay e^{-τ s} outside the closed loop formed by p_0 and c_0. See [24] for recent results on the Smith predictor.

Exercise 5.10:
• Following Example 5.4, prove that the unstable transfer function p = e^{-s}/(s−1) is internally stabilized by the following controller:
\[
c = -\frac{2\,e}{1 + 2\left(\frac{1 - e^{-(s-1)}}{s-1}\right)} = -\frac{2\,e\,(s-1)}{s + 1 - 2\,e^{-(s-1)}}.
\]
Let us notice that (1 − e^{-(s-1)})/(s−1) ∈ A = H∞(C_+) is called a distributed delay. See [8], [49] for more details.
• Compute the Youla-Kučera parametrization of all stabilizing controllers of p.

We refer the reader to [2], [40], [41], [78] for applications of the Youla-Kučera parametrization to synthesis problems.
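The delay-rejection property of the Smith predictor in Example 5.5 can be verified symbolically, keeping p_0, c_0 and τ abstract; a SymPy sketch (p0 and c0 are plain symbols here, not transfer functions):

```python
# Example 5.5, symbolically: with q = c0/(1 - p0*c0), the complementary
# sensitivity of p = p0*exp(-tau*s) equals (p0*c0/(1 - p0*c0))*exp(-tau*s):
# the delay is pushed outside the loop formed by p0 and c0.
import sympy as sp

s, tau, p0, c0 = sp.symbols('s tau p0 c0')
p = p0*sp.exp(-tau*s)
q = c0/(1 - p0*c0)                 # the Youla-Kucera parameter q~
c = q/(1 + q*p)                    # the Smith predictor controller
T = sp.simplify(p*c/(1 - p*c))     # complementary sensitivity (= p*q)
print(sp.simplify(T - (p0*c0/(1 - p0*c0))*sp.exp(-tau*s)))   # 0
```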
Corollary 5.12: [59] Let A be a Banach algebra (e.g. Â, W_+, H∞(C_+)), K = Q(A), P ∈ K^{q×r} a stabilizable plant and W_1, W_2 ∈ A^{q×q} two weighting transfer matrices. Let us denote by Stab(P) the set of all stabilizing controllers of P. Then, we have:
\[
\Xi \triangleq \inf_{C \in \mathrm{Stab}(P)} \| W_1\,(I_q - P\,C)^{-1}\,W_2 \|_A
= \inf_{Q \in \Omega} \| W_1\,(U + P\,Q)\,W_2 \|_A, \tag{31}
\]
where (U^T : V^T)^T ∈ A^{p×q} satisfies
\[
U - P\,V = I_q, \qquad \begin{pmatrix} U\,P \\ V\,P \end{pmatrix} \in A^{p\times r},
\]
and C = V U^{-1} is a particular stabilizing controller of P.

Exercise 5.11:
1) [59] Let P ∈ K^{q×r} be a plant which admits the doubly coprime factorization:
\[
P = D^{-1}\,N = \tilde{N}\,\tilde{D}^{-1}, \qquad
\begin{pmatrix} D & -N \\ -\tilde{Y} & \tilde{X} \end{pmatrix}
\begin{pmatrix} X & \tilde{N} \\ Y & \tilde{D} \end{pmatrix} = I_p.
\]
Prove that U + P Q = (X + Ñ R) D, and thus:
\[
\Xi = \inf_{R \in A^{r\times q}} \| W_1\,(X + \tilde{N}\,R)\,D\,W_2 \|_A.
\]
2) [61] Let p ∈ K = Q(A) be a stabilizable plant and w ∈ A a weighting transfer function.
a) Using Corollary 5.9, prove that we have:
\[
\inf_{c \in \mathrm{Stab}(p)} \left\| \frac{w}{1 - p\,c} \right\|_A \tag{32}
\]
\[
= \inf_{q_1, q_2 \in A} \left\| w\,(a + a^2\,p\,q_1 + b^2\,p\,q_2) \right\|_A, \tag{33}
\]
where a, b ∈ A satisfy a − b p = 1 and a p ∈ A, and c = b/a is a stabilizing controller of p. Conclude that we have transformed the non-linear problem (32) into an affine, and thus convex, one (33).
b) If p = n/d is a coprime factorization of p, with d x − n y = 1 and x, y ∈ A, then prove that we have a = 1/(1 − p c) = d x and b = c/(1 − p c) = d y. Deduce that we have a + a² p q_1 + b² p q_2 = d (x + q n), where q = x² q_1 + y² q_2 ∈ A.
c) Using the identity
\[
\big(d^2\,(1 - 2\,n\,y)\big)\,x^2 + \big(n^2\,(1 + 2\,d\,x)\big)\,y^2 = 1,
\]
show that, for any q ∈ A,
\[
q_1 = d^2\,(1 - 2\,n\,y)\,q, \qquad q_2 = n^2\,(1 + 2\,d\,x)\,q
\]
are such that q = x² q_1 + y² q_2.
d) Finally, deduce that we have:
\[
\inf_{c \in \mathrm{Stab}(p)} \left\| \frac{w}{1 - p\,c} \right\|_A = \inf_{q \in A} \| w\,d\,(x + n\,q) \|_A.
\]

VI. STRONG AND SIMULTANEOUS STABILIZATIONS

Definition 6.1: We have the following definitions [4], [78]:
• A plant P ∈ K^{q×r} is strongly stabilizable if there exists a stable stabilizing controller C ∈ A^{r×q} of P.
• Two plants P_1, P_2 ∈ K^{q×r} are simultaneously stabilizable if there exists a controller C ∈ K^{r×q} which internally stabilizes both P_1 and P_2.

The strong and simultaneous stabilization problems have been largely investigated in the literature (see [4], [78] and the references therein). This can be explained by the fact that strongly stabilizable plants have a good ability to track reference inputs [78]. Moreover, in practice, engineers are usually reluctant to use unstable controllers, especially when the plant is stable. Finally, simultaneous stabilization plays an important role in the study of reliable stabilization, i.e. when we want to design a controller which stabilizes a finite family of plants describing a given system during normal operating conditions and various failure modes (e.g. loss of sensors or actuators, changes in operating points). We refer the reader to [4], [78] for more details and references. Let us introduce some definitions [3], [27], [75].

Definition 6.2:
• a = (a_1 : . . . : a_n) ∈ A^n is unimodular if there exists a vector b = (b_1 : . . . : b_n) ∈ A^n such that a b^T = Σ_{i=1}^n a_i b_i = 1. We denote the set of all unimodular vectors of A^n by U_n(A).
• A matrix R ∈ A^{q×p} is unimodular if there exists a matrix S ∈ A^{p×q} such that R S = I_q.
• A unimodular matrix R = col(R_1 : . . . : R_p) ∈ A^{q×p} is called k-stable (1 ≤ k ≤ r = p − q) if there exists a (p − k)-tuple (c_i)_{1≤i≤p−k} of elements of the A-module
\[
R_{p-k+1}\,A + \ldots + R_p\,A \triangleq \left\{ \sum_{i=1}^k R_{p-k+i}\,b_i \;\middle|\; b_i \in A \right\}
\]
such that the matrix col(R_1 + c_1 : R_2 + c_2 : . . . : R_{p−k} + c_{p−k}) ∈ A^{q×(p−k)} is a unimodular matrix, where col(R_1 : . . . : R_{p−k}) denotes the matrix formed by the first (p − k) columns of R.

Remark 6.1: A unimodular matrix R ∈ A^{q×p} is k-stable iff there exists a matrix T_k ∈ A^{k×(p−k)} such that
\[
R_k = \mathrm{col}(R_1 : \ldots : R_{p-k}) + \mathrm{col}(R_{p-k+1} : \ldots : R_p)\,T_k
\]
is a unimodular q × (p − k)-matrix.

Definition 6.3: [3], [27], [75] a = (a_1 : . . . : a_n) ∈ U_n(A) is called stable (or reducible) if there exists an (n − 1)-tuple b = (b_1 : . . . : b_{n−1}) ∈ A^{n−1} such that (a_1 + a_n b_1 : . . . : a_{n−1} + a_n b_{n−1}) ∈ U_{n−1}(A), i.e. if there exists (c_1 : . . . : c_{n−1}) ∈ A^{n−1} such that we have:
\[
\sum_{i=1}^{n-1} (a_i + a_n\,b_i)\,c_i = 1.
\]

Definition 6.4: [64], [74], [75] The stable range sr(A) of A is the smallest n ∈ N ∪ {+∞} such that every vector of U_{n+1}(A) is stable.

Remark 6.2: Let us notice that the stable range sr(A) is also called the stable rank of A in the algebra literature.

Theorem 6.1:
• [74] sr(H∞(C_+)) = 1.
• [60], [78] sr(RH∞) = 2.
• [36] sr(A(D)) = 1.
• [67] sr(W_+) = 1.
• [35] sr(E(k)) = 1 if k = C, and 2 if k = R.
• [32] sr(L∞(i R)) = 1.
• [75] sr(R[x_1, . . . , x_n]) = n + 1.
Remark 6.3: Let us notice that sr(H∞(C_+)) = 1 does not contradict the fact that sr(RH∞) = 2. Indeed, the functions of H∞(C_+) can have complex coefficients, whereas a function of RH∞ can only have real coefficients. It seems that the ring $\{f \in H^\infty(\mathbb{C}_+) \mid f(\bar{s}) = \overline{f(s)}\}$ has stable range 2 but, up to now, there is no proof of it.

The following proposition explains the link between strong stabilizability and k-stability.

Proposition 6.1: [60] The transfer matrix P ∈ K^{q×r} is strongly stabilizable iff P admits a doubly coprime factorization P = D^{-1} N = Ñ D̃^{-1} such that R = (D : −N) ∈ A^{q×p} and (D̃^T : Ñ^T) ∈ A^{r×p} are respectively r- and q-stable.

Remark 6.4: Let us notice that if P = D_1^{-1} N_1 = D_2^{-1} N_2 are two left-coprime factorizations of P, then we can prove that there exists a matrix U ∈ GL_q(A) such that (D_2 : −N_2) = U (D_1 : −N_1). Hence, we can easily show that R_1 is k-stable iff R_2 is k-stable. Similar results hold for right-coprime factorizations. Therefore, Proposition 6.1 does not depend on a particular choice of a doubly coprime factorization of P. Secondly, let us notice that strong stabilizability implies the existence of a doubly coprime factorization for the plant.

Theorem 6.2: [60] Let P = D^{-1} N be a left-coprime factorization of P with R = (D : −N) ∈ A^{q×p}. If R is k-stable and s ≜ r − k ≥ 0, then there exist two stable matrices T_1 ∈ A^{k×q} and T_2 ∈ A^{k×s} such that the matrix
\[
R_k = (D - \Lambda\,T_1 : -(N_s + \Lambda\,T_2)) \in A^{q\times(p-k)}
\]
admits a right-inverse with entries in A, with the notations:
\[
R = (D : -N) = (\,\underbrace{D}_{q} : \underbrace{-N_s}_{s} : \underbrace{-\Lambda}_{k}\,) \in A^{q\times p}.
\]
Let us denote by S_k = (U^T : V^T)^T ∈ A^{(p−k)×q}, U ∈ A^{q×q}, V ∈ A^{s×q}, any right-inverse of R_k such that det U ≠ 0. Then, the controller C ∈ K^{r×q}, defined by
\[
C = \begin{pmatrix} V\,U^{-1} \\ T_1 + T_2\,(V\,U^{-1}) \end{pmatrix},
\]
where the first block row has s = r − k rows and the second one k rows, internally stabilizes P. Moreover, if det(D − Λ T_1) ≠ 0, then the controller C_s = V U^{-1} ∈ K^{s×q} internally stabilizes
\[
P_s = (D - \Lambda\,T_1)^{-1}\,(N_s + \Lambda\,T_2) \in K^{q\times s}.
\]
The unstable part of C is only contained in the transfer matrix C_s = V U^{-1}, and its dimension is less than or equal to s × q. Similar results hold for a transfer matrix P admitting a right-coprime factorization.

To our knowledge, there is no general algorithm checking whether or not a matrix R is k-stable. However, we can prove that any matrix R ∈ A^{q×p} such that r ≥ sr(A) is (r − sr(A) + 1)-stable [60]. Therefore, we obtain the following corollary, which only depends on sr(A), i.e. on the integral domain A.

Corollary 6.1: [60] Let P = D^{-1} N be a left-coprime factorization of P ∈ K^{q×r} such that r ≥ sr(A). Then, there exist two stable matrices
\[
T_1 \in A^{(r - sr(A) + 1)\times q}, \qquad T_2 \in A^{(r - sr(A) + 1)\times(sr(A) - 1)},
\]
such that the following q × (q + sr(A) − 1)-matrix
\[
R_{r - sr(A) + 1} \triangleq (D - \Lambda\,T_1 : -(N_{sr(A) - 1} + \Lambda\,T_2))
\]
admits a right-inverse, with the notations:
\[
R = (D : -N) = (\,\underbrace{D}_{q} : \underbrace{-N_{sr(A)-1}}_{sr(A)-1} : \underbrace{-\Lambda}_{r - sr(A) + 1}\,).
\]
If S_{r−sr(A)+1} = (U^T : V^T)^T ∈ A^{(q+sr(A)−1)×q} is any right-inverse of R_{r−sr(A)+1} such that det U ≠ 0, then the controller C defined by
\[
C = \begin{pmatrix} V\,U^{-1} \\ T_1 + T_2\,(V\,U^{-1}) \end{pmatrix},
\]
where the first block row has sr(A) − 1 rows and the second one r − sr(A) + 1 rows, internally stabilizes the plant P = D^{-1} N. Moreover, if det(D − Λ T_1) ≠ 0, then the controller C_{sr(A)−1} = V U^{-1} internally stabilizes the plant
\[
P_{sr(A)-1} = (D - \Lambda\,T_1)^{-1}\,(N_{sr(A)-1} + \Lambda\,T_2).
\]
Finally, the unstable part of the controller C is only contained in C_{sr(A)−1} = V U^{-1}, and its dimension is less than or equal to (sr(A) − 1) × q.

Corollary 6.2: [60] If sr(A) = 1, then every transfer matrix which admits a left- or a right-coprime factorization is strongly stabilizable (i.e. is internally stabilized by a stable controller). In particular, this result holds for A = W_+ or A(D).
Moreover, every internally stabilizable plant, defined by a transfer matrix P with entries in the quotient field of H∞(C_+), is strongly stabilizable. Let us notice that this last result solves a question asked by A. Feintuch in [21] on the generalization of S. Treil's result [74] to MIMO systems defined over H∞(C_+).

Corollary 6.3: [62] If sr(A) = 1, then A is a Hermite ring. In particular, this is the case for the rings H∞(C_+), A(D), W_+, E(C) and L∞(i R). Moreover, if K = Q(A) and the transfer matrix P ∈ K^{q×r} admits a left- or a right-coprime factorization, then P admits a doubly coprime factorization.

Let us state the link between strong and simultaneous stabilizabilities.

Proposition 6.2: [78] Let P_1, P_2 ∈ K^{q×r} be two transfer matrices which admit the following doubly coprime factorizations P_i = D_i^{-1} N_i = Ñ_i D̃_i^{-1}:
\[
\begin{pmatrix} D_i & -N_i \\ -\tilde{Y}_i & \tilde{X}_i \end{pmatrix}
\begin{pmatrix} X_i & \tilde{N}_i \\ Y_i & \tilde{D}_i \end{pmatrix} = I_p, \qquad i = 1, 2.
\]
Then, P_1 and P_2 are simultaneously stabilized by a controller C iff there exists T ∈ A^{r×q} such that U + V T ∈ GL_q(A), where:
\[
U = D_1\,X_2 - N_1\,Y_2, \qquad V = -D_1\,\tilde{N}_2 + N_1\,\tilde{D}_2.
\]

Remark 6.5: Let us notice that if P_1 and P_2 are two stabilizable plants which do not admit doubly coprime factorizations, then the simultaneous stabilization problem for two plants is no longer equivalent to a strong stabilization problem. The relationships between these two problems seem to be largely open for stabilizable plants which do not admit doubly coprime factorizations.

Corollary 6.4: [60] If sr(A) = 1, then every couple of plants, defined by two transfer matrices P_0 and P_1 with entries in K = Q(A), having the same dimensions and admitting doubly coprime factorizations, is simultaneously stabilized by a controller (simultaneous stabilization). In particular, this result holds for A = W_+ or A(D). Moreover, if A = H∞(C_+) and P_0, P_1 are two internally stabilizable plants with entries in K = Q(A), then P_0 and P_1 are simultaneously stabilized by a controller C.

We refer to [70] for a promising work on the simultaneous stabilization problem for multidimensional systems, i.e. for the ring MD_n defined in Example 2.1.

Exercise 6.1: [62] Using Exercise 5.8, prove the following results:
1) Prove that p ∈ K = Q(A) is strongly (resp. bistably) stabilizable iff there exists c ∈ A (resp. c ∈ U(A)) such that J = (1 − p c). Deduce that p is strongly stabilizable iff there exists c ∈ A such that p/(1 − p c) ∈ A.
2) Using (26), prove that c ∈ K internally stabilizes the zero plant 0 iff c ∈ A.
3) Let p_1 = n_1/d_1, p_2 = n_2/d_2 ∈ K be two coprime factorizations with d_1 x_1 − n_1 y_1 = 1. Prove that p_1 and p_2 are simultaneously stabilizable iff
\[
p_3 \triangleq \frac{d_1\,n_2 - n_1\,d_2}{d_2\,x_1 - n_2\,y_1}
\]
is strongly stabilizable [4], [78].
4) Let p_1 = n_1/d_1, . . . , p_k = n_k/d_k ∈ K be k coprime factorizations with d_1 x_1 − n_1 y_1 = 1. Prove that p_1, . . . , p_k are simultaneously stabilizable iff the plants p_{k+1}, . . . , p_{2k−1}, defined by
\[
p_{k+i-1} \triangleq \frac{d_i\,n_1 - n_i\,d_1}{d_i\,x_1 - n_i\,y_1}, \qquad i = 2, \ldots, k,
\]
are simultaneously stabilized by a stable controller [4].
5) Let p_1 ∈ A and p_2 ∈ K. Using (26), prove that c simultaneously stabilizes p_1 and p_2 iff c/(1 − p_1 c) strongly stabilizes p_2 − p_1.
6) Let p_1 ∈ A and p_2, . . . , p_k ∈ K. Prove that c simultaneously stabilizes p_1, . . . , p_k iff c/(1 − p_1 c) ∈ A simultaneously stabilizes the plants p_2 − p_1, . . . , p_k − p_1.
7) Let p, c ∈ A. Using (26), prove that c internally stabilizes p iff 1/(1 − p c) ∈ A. Hence, deduce that c internally stabilizes p iff c externally stabilizes p. Let us recall that if A is a Banach algebra, then:
\[
\| 1 - a \|_A < 1 \;\Rightarrow\; a \in U(A). \tag{34}
\]
Let A be a Banach algebra and ‖c‖_A < 1/‖p‖_A. Prove that c ∈ A internally stabilizes p. This result is generally called the small gain theorem [14], [84].
8) Using (26), prove that 0 ≠ c internally stabilizes p ∈ K iff 1/c internally stabilizes 1/p.
9) Let δ ∈ A. Using (26), prove that c internally stabilizes p ∈ K iff c/(1 + δ c) internally stabilizes p + δ. Similarly, prove that c internally stabilizes p ∈ K iff c + δ internally stabilizes p/(1 + δ p).
10) Let δ ∈ A and let c be a stabilizing controller of p ∈ K. Using (26), prove that p + δ (resp. p/(1 + δ p)) is internally stabilized by c iff:
\[
1 - \frac{\delta\,c}{1 - p\,c} \in U(A) \quad \left(\text{resp. } 1 + \frac{\delta\,p}{1 - p\,c} \in U(A)\right).
\]
If A is a Banach algebra, using (34), deduce that, for every δ ∈ A such that ‖δ‖_A < ‖c/(1 − p c)‖_A^{-1} (resp. ‖δ‖_A < ‖p/(1 − p c)‖_A^{-1}), c internally stabilizes p + δ (resp. p/(1 + δ p)). Let us notice that p + δ is generally called an additive perturbation of p, whereas p/(1 + δ p) is called a multiplicative perturbation of p [20].
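A numerical glance at the small gain condition of point 7, for the illustrative stable pair p = 1/(s+1) (so ‖p‖∞ = 1) and the constant controller c = 0.9 (our choice of example): since ‖c‖∞ < 1/‖p‖∞, the sensitivity 1/(1 − p c) is bounded on the imaginary axis, in accordance with the Neumann series bound 1/(1 − ‖p c‖):

```python
# Small gain check on the boundary iR: p = 1/(s+1) has H_oo norm 1 and
# c = 0.9 < 1/||p||, so 1/(1 - p*c) must be bounded on s = i*w.
import numpy as np

w = np.linspace(-200.0, 200.0, 8001)
s = 1j*w
p = 1.0/(s + 1.0)
c = 0.9
print(np.abs(p).max())               # ~1.0, attained at w = 0
print(np.abs(1.0/(1.0 - p*c)).max()) # ~10.0 = 1/(1 - 0.9), at w = 0
```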
To finish this section, let us introduce the concept of topological stable range of a Banach algebra.

Definition 6.5: [64] If A is a Banach algebra, then the topological stable range tsr(A) of A is the smallest n ∈ N ∪ {+∞} such that U_n(A) is dense in A^n for the product topology.

Remark 6.6: As for the stable range, the topological stable range tsr(A) is also called the topological stable rank of A.

Theorem 6.3: We have the following results:
• [72] tsr(H∞(D)) = 2,
• [64] tsr(A(D)) = 2.

Proposition 6.3: [64] If A is a Banach algebra, then we have sr(A) ≤ tsr(A). Let us notice that we can have sr(A) < tsr(A), as we easily see from Theorems 6.1 and 6.3.

Proposition 6.4: [60] If A is a Banach algebra such that tsr(A) = 2, then every SISO plant, defined by a transfer function p = n/d (0 ≠ d, n ∈ A), satisfies:
\[
\forall\, \epsilon > 0, \ \exists\, (d_\epsilon : n_\epsilon) \in U_2(A) : \qquad
\| n - n_\epsilon \|_A \le \epsilon, \qquad \| d - d_\epsilon \|_A \le \epsilon.
\]
If d_ε ≠ 0 then, in the product topology, p is as close as we want to a transfer function p_ε = n_ε/d_ε which admits a coprime factorization. In particular, this result holds for A = H∞(D) or A(D).

Remark 6.7: From Proposition 6.4, we obtain that if p is not internally stabilizable, then there exists a stabilizable plant p_ε as close as we want to p in the product topology.

VII. CLASSIFICATION OF THE RINGS OF SISO STABLE PLANTS

“...The foregoing results about rational functions are so elegant that one can hardly resist the temptation to try to generalize them to non-rational functions. But to what class of functions? Much attention has been devoted in the engineering literature to the identification of a class that is wide enough to encompass all the functions of physical interest and yet enjoys the structural properties that allow analysis of the robust stabilisation problem”, N. Young [83].

To finish these notes, we shall give a few results of commutative algebra and homological algebra which allow one to start a classification of the rings of SISO stable plants with respect to certain system properties (e.g. existence of (weakly) doubly coprime factorizations, internal stabilization).

Definition 7.1: [6], [26], [66] A Prüfer domain is an integral domain A which satisfies one of the following equivalent assertions:
• Every finitely generated torsion-free A-module is projective.
• Every ideal of the form I = (d, n), 0 ≠ d, n ∈ A, is a projective A-module, i.e. there exist x, y ∈ K such that:
\[
d\,x - n\,y = 1, \qquad d\,x,\ d\,y,\ n\,x \in A.
\]
• For every p ∈ K = Q(A), the fractional ideal J = (1, p) of A is invertible (see Exercise 5.8).

Prüfer domains were named after H. Prüfer, who initiated their study in 1923.

Example 7.1: We have the following examples:
• The integral closure of Z in a finite extension of Q is a Dedekind domain, namely a noetherian Prüfer domain. For example, the integral closure of Z in Q(i√5) is the Dedekind domain Z[i√5], and thus a Prüfer domain [26], [66]. This fact allowed us in [56], [57] to explain the counter-example given in [1].
• Every non-singular affine algebraic curve defines a Dedekind coordinate ring. For instance, the ring R[t_1, t_2]/(t_1² + t_2² − 1) is a Dedekind domain, and thus a Prüfer domain [66].
• Every Bézout domain is a Prüfer domain. Thus, the rings of entire functions E(k), with k = R, C, and E = E(R) ∩ R(s)[e^{-s}] are Prüfer domains [26], [66].
• The ring of Z-valued polynomials in Q[x], namely A = {p ∈ Q[x] | p(Z) ⊂ Z}, is a Prüfer domain [26].

The next theorem gives a complete characterization of the rings A of SISO stable plants over which every plant is internally stabilizable.

Theorem 7.1: [56] We have the following equivalences:
1) Every SISO plant, defined by a transfer function with entries in K = Q(A), is internally stabilizable.
2) Every MIMO plant, defined by a transfer matrix with entries in K = Q(A), is internally stabilizable.
3) A is a Prüfer domain.

Let us notice that Theorem 7.1 has a form similar to that of Theorem 4.3.

Exercise 7.1: Using Definition 7.1, Theorem 5.1, Lemma 3.1 and Exercises 5.5 and 5.8, prove Theorem 7.1.

Remark 7.1: Let us notice that the integral domains over which
• every transfer matrix admits a weakly doubly coprime factorization, i.e. coherent Sylvester domains (see Theorem 3.5),
• every plant, defined by a transfer matrix, is internally stabilizable, i.e. Prüfer domains (see Theorem 7.1),
• every transfer matrix admits a doubly coprime factorization, i.e. Bézout domains (see Theorem 4.3),
are all coherent rings (see Definition 3.7) and integrally closed [26] (namely, every element k of K = Q(A) satisfying a monic polynomial equation Σ_{i=0}^n a_i k^i = 0, with a_n = 1 and a_i ∈ A, belongs to A). In terms of homological algebra, a coherent Sylvester domain A is a projective-free coherent integral domain (see Definition 4.3) of weak global dimension w.gl.dim(A) ≤ 2, a Prüfer domain is an integral domain of weak global dimension w.gl.dim(A) ≤ 1, and a Bézout domain is a projective-free domain of weak global dimension w.gl.dim(A) ≤ 1 (see [54], [56], [66] for more details). Roughly speaking, the concept of weak global dimension [7], [66] measures the number of different concepts of primeness: a ring A with w.gl.dim(A) ≤ 1 has only one concept of primeness (the standard one), whereas a ring A with w.gl.dim(A) ≤ 2 has two concepts of primeness (the standard one as well as the concept of weak primeness). Over a ring A with w.gl.dim(A) ≥ 3 (see e.g. Exercise 3.7), not every transfer matrix with entries in the quotient field K = Q(A) admits a weakly doubly coprime factorization, and thus, the fractional representation approach seems to fail to be interesting. Finally, let us notice that the problem of recognizing whether or not a finitely generated projective/stably free A-module is free (i.e. whether or not a stabilizable plant admits coprime factorizations) is an important issue in algebra, and a theory, called algebraic K-theory, was developed in the seventies in order to study these problems (as well as others). We refer the interested reader to [60], [57], [58] for an introduction to the basic concepts of K-theory as well as their applications to synthesis problems.
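To make the second point of Definition 7.1 concrete on the Dedekind domain Z[i√5] mentioned above (the counter-example of [1], revisited in Exercise 5.9), one can exhibit a witness for (22) with p = (1 + i√5)/2, d = 2, n = 1 + i√5. The pair x, y below was found by hand and is only an illustration: the ideal (2, 1 + i√5) is not principal, so p is internally stabilizable (by c = y/x = (2 − i√5)/3) without admitting a coprime factorization.

```python
# Witness for (22) over A = Z[i*sqrt(5)] with p = (1 + i*sqrt(5))/2:
# d*x - n*y = 1 and d*x, d*y, n*x all lie in A, although (d, n) is not
# principal (x, y found by hand; only a verification script).
import sympy as sp

r5 = sp.I*sp.sqrt(5)
d, n = sp.Integer(2), 1 + r5          # p = n/d
x, y = (-1 + r5)/2, (1 + r5)/2        # in K = Q(i*sqrt(5)), not in A
print(sp.expand(d*x - n*y))           # 1
print(sp.expand(d*x), sp.expand(d*y), sp.expand(n*x))
# -1 + i*sqrt(5),  1 + i*sqrt(5),  -3 : all in Z[i*sqrt(5)]
```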
For lack of space, in these notes, we were not able to show how to use the algebraic analysis approach developed in this paper in order to recover the operator-theoretic approach developed in [28] (see [83] for a nice introduction to this approach). Indeed, a nearly complete characterization of the functional spaces (e.g. H_2, L^p(R_+)) for which internal stabilization is equivalent to the existence of a bounded inverse of the linear operator from e to u (see Proposition 5.1) is obtained in [61]. This result can also be used in order to model rings of SISO stable plants with prescribed stabilization properties (for instance, find a ring of SISO stable plants over which internal stabilization is equivalent to the existence of a bounded inverse of the linear operator from e to u, where e and u belong to a certain functional space [61]).

VIII. CONCLUSION

We hope to have convinced the reader that algebraic analysis (commutative algebra, module theory, homological algebra, Banach algebras) develops powerful concepts and tools which allow us, on the one hand, to recover different results of the classical literature on the fractional representation approach to analysis and synthesis problems and, on the other hand, to develop new ones. For lack of space, we were not able to treat in these notes certain other results that can also be obtained using this mathematical framework. We refer to [54], [55], [56], [57], [58], [59], [60], [61], [62] for more details.

REFERENCES

[1] V. Anantharam, “On stabilization and existence of coprime factorizations”, IEEE Trans. Automatic Control, 30 (1985), 1030-1031.
[2] B. D. O. Anderson, “From Youla-Kučera to identification, adaptive and nonlinear control”, Automatica, 34 (1998), 1485-1506.
[3] H. Bass, “K-theory and stable algebra”, Publ. Inst. Hautes Études Sci., 22 (1964), 489-544.
[4] V. Blondel, Simultaneous Stabilization of Linear Systems, Lecture Notes in Control and Information Sciences 191, Springer-Verlag, 1994.
[5] N. Bourbaki, Algèbre Commutative. Chap. 1-4, Hermann, 1961.
[6] N. Bourbaki, Algèbre Commutative. Chap. 5-7, Hermann, 1965.
[7] N. Bourbaki, Algèbre Homologique. Chap. 10, Masson, 1980.
[8] C. Bonnet, J. R. Partington, “Bézout factors and L1-optimal controllers for delay systems using a two-parameter compensator scheme”, IEEE Trans. Automatic Control, 44 (1999), 1512-1521.
[9] C. Bonnet, J. R. Partington, “Coprime factorizations and stability of fractional differential systems”, Systems & Control Letters, 41 (2000), 167-174.
[10] C. I. Byrnes, M. W. Spong, T.-J. Tarn, “A several complex variables approach to feedback stabilization of linear neutral delay-differential systems”, Mathematical Systems Theory, 17 (1984), 97-133.
[11] F. M. Callier, C. A. Desoer, “An algebra of transfer functions for distributed linear time-invariant systems”, IEEE Trans. Circuits and Systems, 9 (1978), 651-662.
[12] F. Callier, C. A. Desoer, “Stabilization, tracking and disturbance rejection in multivariable convolution systems”, Ann. Soc. Sc. Bruxelles, 94 (1980), 5-51.
[13] F. Chyzak, A. Quadrat, D. Robertz, “Linear control systems over Ore algebras. Effective algorithms for the computation of parametrizations”, proceedings of the Workshop on Time-Delay Systems (TDS03), INRIA Rocquencourt (France), 08-10/09/2003, submitted for publication.
[14] R. F. Curtain, H. J. Zwart, An Introduction to Infinite-Dimensional Linear Systems Theory, TAM 21, Springer-Verlag, 1991.
[15] C. A. Desoer, F. M. Callier, “Convolution feedback systems”, SIAM J. Control & Optimization, 10 (1972), 737-746.
[16] C. A. Desoer, M. Vidyasagar, Feedback Systems: Input-Output Properties, Academic Press, 1975.
[17] C. A. Desoer, R. W. Liu, J. Murray, R. Saeks, “Feedback system design: the fractional representation approach to analysis and synthesis”, IEEE Trans. Automatic Control, 13 (1978), 243-275.
[18] W. Dicks, E. D. Sontag, “Sylvester domains”, J. Pure and Applied Algebra, 27 (1983), 15-28.
[19] W. Dicks, “Free algebras over Bézout domains are Sylvester domains”, J. Pure and Applied Algebra.
[20] J. C. Doyle, B. A. Francis, A. R. Tannenbaum, Feedback Control Theory, Macmillan Publishing Company, 1992.
[21] A. Feintuch, “The strong stabilization problem for linear time-varying systems”, Problem 21 in 2002 MTNS Problem Book “Open Problems on the Mathematical Theory of Systems”, 12-16/08/02, available at http://www.inma.ucl.ac.be/~blondel/op/.
[22] M. Fliess, H. Mounier, “Controllability and observability of linear delay systems: an algebraic approach”, ESAIM Control Optimization and Calculus of Variations, 3 (1998), 301-314.
[23] M. Fliess, H. Mounier, P. Rouchon, J. Rudolph, “Systèmes linéaires sur les opérateurs de Mikusiński et commande d'une poutre flexible”, ESAIM Proceedings, 2 (1998), 183-193.
[24] M. Fliess, R. Marquez, H. Mounier, “An extension of predictive control, PID regulators and Smith predictors to some linear delay systems”, Int. J. Control, 75 (2002), 728-743.
[25] B. A. Francis, A Course in H∞ Control Theory, Lecture Notes in Control and Information Sciences 88, Springer-Verlag, 1987.
[26] L. Fuchs, L. Salce, Modules over Non-Noetherian Domains, Mathematical Surveys and Monographs, vol. 84, American Mathematical Society, 2000.
[27] M. R. Gabel, A. V. Geramita, “Stable range for matrices”, J. Pure and Applied Algebra, 5 (1974), 97-112; “Erratum”, vol. 7 (1976), 239.
[28] T. T. Georgiou, M. C. Smith, “Graphs, causality, and stabilizability: linear, shift-invariant systems on L2[0, ∞)”, Mathematics of Control, Signals, and Systems, 6 (1993), 195-223.
[29] K. Glover, D. McFarlane, “Robust stabilization of normalized coprime factor plant description with H∞-bounded uncertainty”, IEEE Trans. Automatic Control, 8 (1989), 821-830.
[30] H. Gluesing-Luerssen, Linear Delay-Differential Systems with Commensurate Delays: An Algebraic Approach, Lecture Notes in Mathematics 1770, Springer, 2002.
[31] G.-M. Greuel, G. Pfister, A Singular Introduction to Commutative Algebra, Springer, 2002.
[32] D. Handelman, “Stable range in AW*-algebras”, Proc. Amer. Math. Soc., 76 (1979), 241-249.
[33] O. Helmer, “Divisibility properties of integral functions”, Duke Math. J., 6 (1940), 345-356.
[34] B. Jacob, Stabilizability and Causality of Discrete-Time Systems over the Signal Space l2(Z), Habilitation thesis, University of Dortmund, 2001.
[35] C. U. Jensen, “Some curiosities of rings of analytic functions”, J. Pure and Applied Algebra, 38 (1985), 277-283.
[36] P. Jones, D. Marshall, T. Wolff, “Stable range of the disc algebra”, Proc. Amer. Math. Soc., 96 (1986), 603-604.
[37] T. Kailath, Linear Systems, Prentice-Hall, 1980.
[38] R. E. Kalman, P. L. Falb, M. A. Arbib, Topics in Mathematical Systems Theory, McGraw-Hill, 1969.
[39] E. Kamen, P. P. Khargonekar, A. Tannenbaum, “Pointwise stability and feedback control of linear systems with noncommensurate time delays”, Acta Applicandæ Mathematicæ, 2 (1984), 159-184.
[40] V. Kučera, Discrete Linear Control: The Polynomial Equation Approach, Wiley, 1979.
[41] V. Kučera, “Diophantine equations in control theory − a survey”, Automatica, 29 (1993), 1361-1375.
[42] T. Y. Lam, Serre's Conjecture, Lecture Notes in Mathematics 635, Springer-Verlag, 1978.
[43] Z. Lin, “Output feedback stabilizability and stabilization of linear n-D systems”, in Multidimensional Signals, Circuits and Systems, K. Galkowski, J. Wood (eds.), Taylor and Francis, 2001.
[44] H. Logemann, “Stabilization and regulation of infinite-dimensional systems using coprime factorizations”, in Lecture Notes in Control and Information Sciences 185, Springer-Verlag, 1993.
[45] J. J. Loiseau, “Algebraic tools for the control and stabilization of time-delay systems”, IFAC Reviews, Annual Reviews in Control, 24 (2000), 135-149.
[46] W. S. McVoy, L. A. Rubel, “Coherence of some rings of functions”, Journal of Functional Analysis, 21 (1976), 76-87.
[47] K. Mori, K. Abe, “Feedback stabilization over commutative rings: further study of the coordinate-free approach”, SIAM J. Control & Optimization, 39 (2001), 1952-1973.
[48] K. Mori, “Parametrization of the stabilizing controllers over a commutative ring with applications to multidimensional systems”, IEEE Trans. Circ. Sys., 49 (2002), 743-752.
[49] S.-I. Niculescu, Delay Effects on Stability. A Robust Control Approach, Lecture Notes in Control and Information Sciences 269, Springer, 2001.
[50] H. Özbay, Introduction to Feedback Control Theory, CRC Press LLC, Boca Raton, FL, 1999.
[51] Z. J. Palmor, “Time-delay compensation − Smith predictor and its modifications”, in The Control Handbook, W. S. Levine (ed.), CRC Press, 1999, 224-230.
[52] J. W. Polderman, J. C. Willems, Introduction to Mathematical Systems Theory: A Behavioral Approach, TAM vol. 26, Springer, 1991.
[53] J. F. Pommaret, Partial Differential Control Theory, Kluwer, 2001.
[54] A. Quadrat, “Coherent H∞(D)-modules in control theory”, “Internal stabilization of coherent control systems”, First IFAC Symposium on System Structure and Control, Prague (Czech Republic), 2001, CDRom.
[55] A. Quadrat, “Une approche de la stabilisation interne par l'analyse algébrique. I: Factorisations doublement faiblement copremières. II: Stabilisation interne. III: Sur une structure générale des contrôleurs stabilisants basée sur le rang stable”, proceedings of CIFA, Nantes (France), 2002.
[56] A. Quadrat, “The fractional representation approach to synthesis problems: an algebraic analysis viewpoint. Part I: (weakly) doubly coprime factorizations. Part II: internal stabilization”, SIAM J. Control & Optimization, 42 (2003), 266-299, 300-320.
[57] A. Quadrat, “On a generalization of the Youla-Kučera parametrization. Part I: The fractional ideal approach to SISO systems”, Systems & Control Letters, 50 (2003), 135-148.
[58] A. Quadrat, “A generalization of the Youla-Kučera parametrization for MIMO stabilizable systems”, proceedings of the Workshop on Time-Delay Systems (TDS03), INRIA Rocquencourt (France), 08-10/09/2003.
[59] A. Quadrat, “On a generalization of the Youla-Kučera parametrization. Part II: the lattice approach to MIMO systems”, submitted for publication.
[60] A. Quadrat, “On a general structure of the stabilizing controllers based on stable range”, to appear in SIAM J. Control & Optimization, 2004.
[61] A. Quadrat, “An algebraic interpretation to the operator-theoretic approach to stabilization problems. Part I: SISO systems”, submitted for publication, 2003.
[62] A. Quadrat, ““Stabilizing” the stabilizing controllers”, submitted to MTNS04, Leuven (Belgium).
[63] M. von Renteln, “Hauptideale und äußere Funktionen im Ring H∞”, Archiv der Mathematik, 28 (1977), 519-524.
[64] M. A. Rieffel, “Dimension and stable rank in the K-theory of C*-algebras”, Proc. London Math. Soc., 46 (1983), 301-333.
[65] H. H. Rosenbrock, State-Space and Multivariable Theory, Nelson, 1970.
[66] J. J. Rotman, An Introduction to Homological Algebra, Academic Press, 1979.
[67] R. Rupp, “Stable rank of holomorphic function algebras”, Studia Mathematica, 97 (1990), 85-90.
[68] A. M. Sinclair, A. W. Tullo, “Noetherian Banach algebras are finite dimensional”, Math. Ann., 34 (1992), 151-153.
[69] S. Shankar, V. R. Sule, “Algebraic geometric aspects of feedback stabilization”, SIAM J. Control & Optimization, 30 (1992), 11-30.
[70] S. Shankar, “An obstruction to the simultaneous stabilization of two n-D plants”, Acta Applicandæ Mathematicæ, 36 (1994), 289-301.
[71] M. C. Smith, “On stabilization and the existence of coprime factorizations”, IEEE Trans. Automatic Control, 34 (1989), 1005-1007.
[72] D. Suárez, “Trivial Gleason parts and the topological stable rank of H∞”, American J. Math., 118 (1996), 879-904.
[73] V. R. Sule, “Feedback stabilization over commutative rings: the matrix case”, SIAM J. Control & Optimization, 32 (1994), 1675-1695.
[74] S. Treil, “The stable rank of the algebra H∞ equals 1”, J. Funct. Anal., 109 (1992), 130-154.
[75] L. N. Vasershtein, “Stable range of rings and the dimension of topological spaces”, Functional Analysis and Its Applications, 5 (1971), 102-110.
[76] M. Vidyasagar, “Input-output stability of a broad class of linear time-invariant multivariable feedback systems”, SIAM J. Control & Optimization, 10 (1972), 203-209.
[77] M. Vidyasagar, H. Schneider, B. A. Francis, “Algebraic and topological aspects of feedback stabilization”, IEEE Trans. Automatic Control, 27 (1982), 880-894.
[78] M. Vidyasagar, Control System Synthesis: A Factorization Approach, MIT Press, 1985.
[79] M. Vidyasagar, “A brief history of the graph topology”, European J. of Control, 2 (1996), 80-87.
[80] D. C. Youla, J. J. Bongiorno, H. A. Jabr, “Modern Wiener-Hopf design of optimal controllers, Part I: The single-input case”, IEEE Trans. Automatic Control, 21 (1976), 3-14.
[81] D. C. Youla, J. J. Bongiorno, H. A. Jabr, “Modern Wiener-Hopf design of optimal controllers, Part II: The multivariable case”, IEEE Trans. Autom. Contr., 21 (1976), 319-338.
[82] Y. Yamamoto, “Pseudo-rational input/output maps and their realizations: a fractional representation approach to infinite-dimensional systems”, SIAM J. Control & Optimization, 26 (1988), 1415-1430.
[83] N. Young, “Some function-theoretic issues in feedback stabilization”, in Holomorphic Spaces, MSRI Publications 33, 1998, 337-349.
[84] G. Zames, “Feedback and optimal sensitivity: model reference transformations, multiplicative seminorms, and approximate inverses”, IEEE Trans. Autom. Contr., 26 (1981), 301-320.