A common symmetrization framework for random linear algorithms

28/10/2015
Authors: Alain Sarlette
Publication: GSI2015
OAI: oai:www.see.asso.fr:11784:14295

Abstract

This paper highlights some more examples of maps that follow a recently introduced “symmetrization” structure behind the average consensus algorithm. We review among others some generalized consensus settings and coordinate descent optimization.



License

Creative Commons Attribution-ShareAlike 4.0 International

Record metadata
DOI: 10.23723/11784/14295
Creator: Alain Sarlette
Title: A common symmetrization framework for random linear algorithms
Publisher: SEE
Publication year: 2015
Resource type: Text
Dates: created Sun 8 Nov 2015, updated Wed 31 Aug 2016, submitted Tue 15 May 2018
Format: application/pdf

Slides

Operational viewpoint on consensus inspired by quantum consensus: the objective covers some more linear algorithms
Limit on accelerating consensus algorithms, with information-theoretic links (the announced talk; in press at IEEE Trans. Automatic Control)
Alain Sarlette, INRIA/QUANTIC & Ghent University/SYSTeMS
"Symmetrization": L. Mazzarella, F. Ticozzi, A. Sarlette, arXiv:1311.3364 and arXiv:1303.4077
 
 
 
Classical consensus algorithm
(figure: network of agents x1, x2, x3, x4, ..., xN)
Consensus, i.e. reaching agreement x1 = x2 = ... = xN, is the basis for many distributed computing tasks.
Pairwise update: xk(t+1) = xk(t) + α(t) ( xj(t) − xk(t) ).
Convergence is very flexible and robust: it holds as long as the network, integrated over some finite time T, forms a connected graph and α(t) ∈ [αm, αM] ⊂ (0, 1).
Convergence proof idea: shrinking convex hull, since the highest value can only decrease and the lowest can only increase.
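To make the update concrete, here is a minimal sketch (mine, not from the paper) of the random pairwise consensus iteration in Python; the ring network, the uniform pair selection and the gain bounds alpha_m, alpha_M are illustrative assumptions.

import random

def gossip_consensus(x, edges, steps=2000, alpha_m=0.2, alpha_M=0.8, seed=0):
    """Pairwise consensus: at each step a random directed edge (j, k) is
    picked and x[k] moves towards x[j] by a gain alpha(t) in [alpha_m, alpha_M]."""
    rng = random.Random(seed)
    x = list(x)
    for _ in range(steps):
        j, k = rng.choice(edges)
        alpha = rng.uniform(alpha_m, alpha_M)
        x[k] = x[k] + alpha * (x[j] - x[k])   # convex hull of the values shrinks
    return x

# ring of 5 agents: the values contract towards a common agreement point
edges = [(i, (i + 1) % 5) for i in range(5)] + [((i + 1) % 5, i) for i in range(5)]
print(gossip_consensus([1.0, 4.0, -2.0, 0.5, 3.0], edges))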
Our initial goal: bringing consensus into the quantum regime
How do we define consensus in a tensor product space? How do we define consensus w.r.t. correlations, entanglement, ...?
How do we write a consensus algorithm? Standard consensus assumes that the system states xk are directly accessible for computation: they can be linearly combined, copied, communicated. For quantum consensus, the whole quantum state / probability distribution cannot be measured ⇒ we must physically exchange "things".
Consensus viewed as partial swapping
Pairwise consensus interaction between agents (j, k): a mixture of "swap j with k" and "stay in place". Such a mixture of two unitary operations, stay in place and swap, can be easily implemented physically in quantum systems or, for that matter, in other information structures.
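As an illustration (my own sketch, not taken from the talk), the classical pairwise step can be written exactly as a convex combination of the identity and the swap permutation acting on the state vector; the value of alpha below is arbitrary.

import numpy as np

def partial_swap(x, j, k, alpha):
    """One consensus step on entries j and k of x, written as the convex
    combination (1 - alpha) * Identity + alpha * Swap(j, k)."""
    n = len(x)
    swap = np.eye(n)
    swap[[j, k]] = swap[[k, j]]              # permutation matrix exchanging j and k
    step = (1 - alpha) * np.eye(n) + alpha * swap
    return step @ x

x = np.array([1.0, 4.0, -2.0])
print(partial_swap(x, 0, 1, 0.3))            # entries 0 and 1 move towards each other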
Consensus operation as a discrete group action
Ingredients: a (finite) group G, a vector space X containing the objects "of interest", and a linear action a(g, x) of G on X.
Target (symmetrization): reach a state x ∈ X where a(g, x) = x for all g ∈ G.
Property: the projection onto the symmetrization set can be written x̄ = (1/|G|) Σ_{g∈G} a(g, x).
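A quick numerical check (mine) of the projection property for G = S_3 acting on vectors by permuting entries: averaging a(g, x) over all g gives the consensus vector, which is fixed by every g.

import itertools
import numpy as np

x = np.array([1.0, 4.0, -2.0])
G = list(itertools.permutations(range(3)))           # S_3 acting by permuting entries
x_bar = sum(x[list(g)] for g in G) / len(G)          # (1/|G|) * sum_g a(g, x)
print(x_bar)                                         # [1. 1. 1.], the average of x
assert all(np.allclose(x_bar[list(g)], x_bar) for g in G)   # a(g, x_bar) = x_bar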
Dynamics: x(t+1) = Σ_{g∈G} s_g(t) a(g, x(t)), with the weights s_g(t) ≥ 0, Σ_g s_g(t) = 1 defining a convex combination over G at each t. Usually s_g(t) ≠ 0 only for g belonging to a very restricted subset of G.
Lift from actions to the group:
* The state x(t) at any time can be written as a convex combination x(t) = Σ_{g∈G} p_g(t) a(g, x(0)).
* The dynamics can then be lifted to the weight vector p(t) and written as the group convolution p_g(t+1) = Σ_{h∈G} s_h(t) p_{h⁻¹g}(t), with p(t) independent of x(0).
Starting point: p_g(0) = δ(g, e); target: p̄_g = 1/|G| for all g. The lift thus yields consensus on the group weights.
The group may have a very large number of "nodes", e.g. |G| = N! for the permutation group. The exact values of the s_h(t), and even the selected interactions at each time step, need not be exactly controlled ⇒ strong robustness.
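A minimal numerical sketch (mine; it assumes the lifted dynamics is the group convolution written above) on the permutation group S_3: starting from p_g(0) = δ(g, e) and repeatedly applying random convex combinations of the identity and a transposition, the weight vector p(t) approaches the uniform target 1/|G|.

import itertools
import random

def compose(g, h):
    """Composition of permutations given as tuples: (g o h)(i) = g[h[i]]."""
    return tuple(g[i] for i in h)

def inverse(g):
    """Inverse permutation."""
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

G = list(itertools.permutations(range(3)))           # S_3, |G| = 3! = 6
e = tuple(range(3))
transpositions = [(1, 0, 2), (0, 2, 1), (2, 1, 0)]

p = {g: (1.0 if g == e else 0.0) for g in G}         # p_g(0) = delta(g, e)
rng = random.Random(1)
for _ in range(60):
    h = rng.choice(transpositions)
    s = {e: 0.5, h: 0.5}                             # convex weights s_g(t)
    # lifted dynamics: p_g(t+1) = sum_a s_a(t) * p_{a^{-1} g}(t)
    p = {g: sum(s[a] * p[compose(inverse(a), g)] for a in s) for g in G}
print(p)                                             # every weight is close to 1/6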
Convergence proof: possible by analogy with classical consensus; alternatively, use the entropy of p(t) as a strict Lyapunov function. Convergence to the uniform p̄ holds if the weights s_g(t) satisfy conditions analogous to the classical consensus assumptions.
Various applications:
* G = permutations: random consensus by acting on classical state values (standard consensus) or on classical or quantum probability distributions
* G = cyclic group: random Fourier transform (use?)
* G = decoupling group: links to quantum Dynamical Decoupling
* G = operational gates: uniform random gate generation
* consensus with antagonistic interactions
* consensus towards a leader value
* gradient descent and coordinate descent

Consensus with antagonistic interactions
G = permutation matrices with an arbitrary sign ±1 on each entry.
Weights s_g: Birkhoff decomposition on |a_jk| as for standard consensus, then move the weight to the non-positive permutation whenever a_jk < 0. The weight assignment is non-trivial (iterative procedure, see paper) and comes with a convergence result; it covers previously unsolved cases, distinguishing {x_k} = 0 from {x_k} = −{x_j}. One obtains operator conclusions about which components of x converge to zero (slightly more general than standard convergence to x = 0). A minimal signed-update sketch is given right after the next paragraph.

Consensus towards a leader value
G = permutation matrices with arbitrary sign ±1 on each entry; this viewpoint also covers other algorithms where (a_jk) is substochastic.
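Here is the promised minimal sketch (my own illustration, not the paper's iterative weight-assignment procedure) of a single antagonistic interaction: the update is a convex combination of the identity and a signed swap, so the two agents are pushed towards opposite values.

import numpy as np

def signed_partial_swap(x, j, k, alpha, sign=-1):
    """Convex combination of the identity and a signed permutation matrix:
    with sign = -1, agents j and k are antagonistic and move towards the
    opposite of each other's value; with sign = +1 this is standard consensus."""
    n = len(x)
    P = np.eye(n)
    P[[j, k]] = P[[k, j]]                    # swap rows j and k of the identity
    P[j, k] *= sign
    P[k, j] *= sign
    step = (1 - alpha) * np.eye(n) + alpha * P
    return step @ x

x = np.array([1.0, 4.0, -2.0])
print(signed_partial_swap(x, 0, 1, 0.3))     # x[0] moves towards -x[1], and vice versa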
Gradient & coordinate descent
G = permutation matrices with arbitrary sign ±1 on each entry.
Search for the minimum of f(x) by gradient descent, x(t+1) = x(t) − ε ∇f(x(t)); assume (sorry) f(x) = xᵀ A x. In the eigenbasis of A this becomes a (if stable) substochastic iteration. Not a big insight... Extension: cycle through the coordinates k, updating one coordinate at a time.
Weights:
* they follow from reflection matrices around non-orthogonal directions;
* they sum to 1 but may be negative ⇒ study coordinate descent convergence via a symmetric but possibly negative transition matrix, for which clear tools exist, e.g. in consensus.
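A minimal sketch (mine; it assumes exact minimization along each coordinate, one common variant of coordinate descent) on the quadratic f(x) = xᵀ A x; the positive-definite matrix A below is an illustrative example.

import numpy as np

def cyclic_coordinate_descent(A, x0, sweeps=50):
    """Cyclic coordinate descent on f(x) = x^T A x with A symmetric positive
    definite: each step exactly minimizes f along coordinate k, i.e.
    x[k] <- x[k] - (A x)[k] / A[k, k], which is the linear map
    x <- (I - e_k e_k^T A / A[k, k]) x and may have negative entries."""
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for k in range(len(x)):              # cycle through the coordinates k
            x[k] -= A[k] @ x / A[k, k]
    return x

A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.5, 0.3],
              [0.0, 0.3, 1.0]])
print(cyclic_coordinate_descent(A, [1.0, -2.0, 3.0]))   # converges to the minimizer x = 0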
Limit on accelerating consensus algorithms, with information-theoretic links (the announced talk)
Alain Sarlette, INRIA/QUANTIC & Ghent University/SYSTeMS, arXiv:1412.0402
Add one memory, no more [Muthukrishnan et al., 1998]: properly using one memory of x(t−1) − x(t) allows the iteration to converge quadratically faster. What about more memories?
Our result: if the graph eigenvalues can take any value in [a, b], with only a and b known, then adding more memories does not improve the worst consensus eigenvalue. Proof: not very information-theoretic, see arXiv:1412.0402.
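A minimal sketch (mine, in the spirit of the second-order scheme of [Muthukrishnan et al., 1998]) of consensus accelerated with one memory: the next iterate combines the standard consensus step with the previous state x(t−1); the ring weight matrix W and the gain formula below are illustrative assumptions.

import numpy as np

def accelerated_consensus(W, x0, beta, steps=100):
    """Second-order consensus iteration using one memory of the past state:
    x(t+1) = beta * W x(t) + (1 - beta) * x(t-1), started with x(1) = W x(0)."""
    x_prev = np.array(x0, dtype=float)
    x = W @ x_prev
    for _ in range(steps):
        x, x_prev = beta * (W @ x) + (1 - beta) * x_prev, x
    return x

# illustrative weight matrix: lazy averaging on a ring of 5 agents
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

lam2 = np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]    # second-largest eigenvalue magnitude
beta = 2.0 / (1.0 + np.sqrt(1.0 - lam2 ** 2))        # a standard choice of second-order gain
print(accelerated_consensus(W, [1.0, 4.0, -2.0, 0.5, 3.0], beta))   # close to the average 1.3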
Interesting links:
* Optimization: the Nesterov method, not further improvable by also using m(t−2), ... ?
* Robust control: design a plant that is stable under feedback u = −k y for any k in an interval.
* Communication theory, network interpretation: a node improves by taking the direct feedback onto itself into account, but if the network is poorly known there is no benefit in accounting for longer loops.