Sigma Point Kalman Filtering on Matrix Lie Groups Applied to the SLAM Problem

07/11/2017
Publication GSI2017
Sigma Point Kalman Filtering on Matrix Lie Groups Applied to the SLAM Problem

James Richard Forbes (1) and David Evan Zlotnik (2)
(1) Department of Mechanical Engineering, McGill University, Montréal, Québec, Canada, H3A 0C3, james.richard.forbes@mcgill.ca
(2) Department of Aerospace Engineering, University of Michigan, Ann Arbor, Michigan, USA, 48109, dzlotnik@umich.edu

Abstract. This paper considers sigma point Kalman filtering on matrix Lie groups. Sigma points that are elements of a matrix Lie group are generated using the matrix exponential. Computing the mean and covariance from the sigma points via weighted averaging and effective use of the matrix natural logarithm, respectively, is discussed. The specific details of estimating landmark locations, and the position and attitude of a vehicle relative to the estimated landmark locations, are considered.

Keywords: estimation, matrix Lie group, simultaneous localization and mapping

1 Introduction

The extended Kalman filter (EKF) [1, pp. 54-64], sigma point Kalman filter (SPKF) [2], particle filter [1, pp. 96-113], and other approximations of the Bayes filter [1, pp. 26-33] assume the system state is an element of a vector space, such as $\mathbb{R}^{n_x}$, where $n_x$ is the state dimension. When the system state is an element of a matrix Lie group, such as the special orthogonal group, denoted SO(3), the EKF, SPKF, etc., are not directly applicable.

This paper investigates SPKFing on matrix Lie groups. After reviewing the sigma point transformation (SPT), it is generalized and used to estimate means and covariances of nonlinear functions with states that are elements of a matrix Lie group. Next, the matrix Lie group SPT is used to construct an SPKF, leading to the matrix Lie group SPKF (MLG SPKF). For simplicity of exposition, the unscented transformation (UT) [3] is the specific SPT used in all derivations, although the term SPT is retained throughout the paper. Particular attention is paid to how to compute the weighted average of sigma points when the underlying matrix Lie group is SO(3). This is particularly relevant in robotics applications where vehicles can rotate in space. Two methods to compute a weighted mean on SO(3) are discussed, one that employs a singular value decomposition, and another based on [4]. The method of [4] is popular in the aerospace community and, perhaps more importantly, the weighted averaging employed in [4] is computationally simple.

The papers [4-8] are closest to the present work. The work of [4] employs a parameterization of SO(3), but in doing so, a simple way to compute the mean of a set of sigma points is derived. The papers [5, 6] also present SPKFs (more specifically, UKFs) for systems with states that are elements of nonlinear manifolds. The formulations presented in [5, 6] are similar to the formulation in the present paper, but with some differences, such as how the correction step is realized, and how the weighted mean of sigma points is computed in the special case that the matrix Lie group is SO(3). The papers [7, 8] also share similarities with the present work. In particular, although [7] is specific to the special Euclidean group, denoted SE(3), the way uncertainty is propagated and the way sigma points are generated are the same as in this paper. Additionally, [7] proposes an iterative method to fuse multiple poses in order to compute a mean pose, while alternative methods to compute the mean attitude, which is an element of SO(3), are discussed in this paper. In [8] SPKFing on matrix Lie groups is also considered. In [8] both left and right Gaussian distributions on the matrix Lie group are considered, while [7] and the present work consider only right Gaussian distributions. The most significant difference between [8] and the present work is the way the mean is computed in the prediction step of the filter. In [8] the mean of the previous step is propagated using the process model input in a noise-free manner, while herein the mean is computed using sigma points generated using the prior state covariance and the process model noise covariance. Finally, the proposed MLG SPKF is applied to the problem of simultaneous localization and mapping (SLAM), a challenging problem not considered in [4-8]. This paper essentially combines and applies the results of [4-8] to the SLAM problem, with particular attention paid to computing the mean attitude from a set of sigma points.
2 The Sigma Point Transformation

The SPKF is best understood in terms of approximating [2, pp. 81], [9, pp. 128]

$$\mathbb{E}[f(x_k)] = \int_{-\infty}^{\infty} f(x_k)\, p(x_k)\, dx_k, \quad (1)$$

$$\mathbb{E}\big[(f(x_k) - \mathbb{E}[f(x_k)])(f(x_k) - \mathbb{E}[f(x_k)])^T\big] = \int_{-\infty}^{\infty} (f(x_k) - \mathbb{E}[f(x_k)])(f(x_k) - \mathbb{E}[f(x_k)])^T p(x_k)\, dx_k, \quad (2)$$

using a sigma point transformation. The probability density function $p(x_k)$ is assumed to be Gaussian, denoted $\mathcal{N}(\hat{x}_k, P_k)$, where $\hat{x}_k \in \mathbb{R}^{n_x}$ is the mean, $P_k \in \mathbb{R}^{n_x \times n_x}$ is the covariance, and $n_x$ is the dimension of $x_k$. Using a Cholesky decomposition, $P_k = S_k S_k^T$, where $S_k$ is lower triangular and $S_k = [s_{1,k} \cdots s_{i,k} \cdots s_{L,k}]$, a set of sigma points is computed as $\mathcal{X}_{0,k} = \hat{x}_k$, $\mathcal{X}_{i,k} = \hat{x}_k + \sqrt{L+\kappa}\, s_{i,k}$, and $\mathcal{X}_{i+L,k} = \hat{x}_k - \sqrt{L+\kappa}\, s_{i,k}$, where $i = 1, \ldots, L$ and $L = n_x$. Passing the sigma points through the nonlinear function $f(\cdot)$ results in $\mathcal{X}^+_{i,k} = f(\mathcal{X}_{i,k})$, $i = 0, \ldots, 2L$. The mean, $\hat{x}^+_k$, and covariance, $P^+_k$, are then approximated as

$$\hat{x}^+_k = \sum_{i=0}^{2L} w_i \mathcal{X}^+_{i,k}, \qquad P^+_k = \sum_{i=0}^{2L} w_i \big(\mathcal{X}^+_{i,k} - \hat{x}^+_k\big)\big(\mathcal{X}^+_{i,k} - \hat{x}^+_k\big)^T, \quad (3)$$

where $w_i$ are weights of the form $w_0 = \kappa/(\kappa + L)$ and $w_i = 1/(2\kappa + 2L)$. The unscented transformation [3] is the specific sigma point transformation used in this paper, but there are other sigma point transformations; see [2, Ch. 5], [9, Ch. 5], or [10].

3 Matrix Lie Groups

A matrix Lie group, denoted $G$, is composed of full-rank, and therefore invertible, $n \times n$ matrices and is closed under matrix multiplication [11, pp. 98]. The matrix Lie algebra associated with a matrix Lie group, denoted $\mathfrak{g}$, is the tangent space of $G$ at the identity, denoted $T_1 G$. The tangent space of $G$ at a point $X \in G$ is denoted $T_X G$. The matrix Lie algebra is a vector space closed under the matrix Lie bracket defined by $[A, B] = AB - BA$, $\forall A, B \in \mathfrak{g}$. Moreover, $XAX^{-1} \in \mathfrak{g}$, $\forall X \in G$, $\forall A \in \mathfrak{g}$ [11, pp. 98]. Let $\{B_1, \ldots, B_n\}$ be a basis for the matrix Lie algebra, called the generators, so that any $A \in \mathfrak{g}$ can be written $A = S(a) = \sum_{i=1}^{n} a_i B_i$, where $a = [a_1, \ldots, a_n]^T \in \mathbb{R}^n$ is the column matrix of coefficients associated with $A$ [12]. The definition of $S : \mathbb{R}^n \to \mathfrak{g}$ naturally leads to the definition of the inverse operation $S^{-1} : \mathfrak{g} \to \mathbb{R}^n$. The matrix Lie group $G$ and its associated matrix Lie algebra are related through the matrix exponential and the matrix natural logarithm. Specifically, $\exp(\cdot) : \mathfrak{g} \to G$ and $\ln(\cdot) : G \to \mathfrak{g}$, so that $\exp(\ln(X)) = X$, $\forall X \in G$ [13, pp. 19].
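To make the maps $S$, $S^{-1}$, $\exp$, and $\ln$ concrete for the group used later in the paper, SO(3), the following minimal Python sketch (not part of the original paper) shows one way they can be realized numerically; the function names S and S_inv are illustrative choices.

```python
# Minimal sketch of the Section 3 operators for G = SO(3): S maps a in R^3 to
# the Lie algebra so(3) (skew-symmetric matrices), S_inv is its inverse, and
# scipy's expm/logm play the roles of exp : so(3) -> SO(3) and ln : SO(3) -> so(3).
import numpy as np
from scipy.linalg import expm, logm

def S(a):
    """Map a in R^3 to S(a) in so(3), i.e., the generators weighted by a."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def S_inv(A):
    """Map A in so(3) back to its coefficient column matrix in R^3."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

# exp(ln(X)) = X for X in SO(3); .real discards numerical imaginary residue.
a = np.array([0.1, -0.2, 0.3])
X = expm(S(a))                                # an element of SO(3)
assert np.allclose(expm(logm(X).real), X)
assert np.allclose(S_inv(logm(X).real), a)
```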
4 The Sigma Point Transformation on Matrix Lie Groups

Let $\mathcal{N}(\hat{X}_k, P_k)$ denote a Gaussian distribution with mean $\hat{X}_k \in G$ and covariance $P_k \in \mathbb{R}^{n_x \times n_x}$, where $n_x$ is the dimension of $\mathfrak{g}$. A realization, also referred to as a sample, from $\mathcal{N}(\hat{X}_k, P_k)$ must respect the group structure of $G$. Herein, realizations are generated using the matrix exponential [7, 14]. Specifically, using $S(\xi_k) \in \mathfrak{g}$, where $\xi_k \in \mathbb{R}^{n_x}$ and $\xi_k \sim \mathcal{N}(0, P_k)$, a realization is generated via $X_k \leftarrow \exp(S(\xi_k))\hat{X}_k$.

The task at hand is to compute the mean and covariance of a nonlinear function $f : G \to G$, where $G$ is a matrix Lie group. Starting with $\mathcal{N}(\hat{X}_k, P_k)$, a Cholesky decomposition $P_k = S_k S_k^T$, where $S_k$ is lower triangular and $S_k = [s_{1,k} \cdots s_{i,k} \cdots s_{L,k}]$, is used to generate sigma points. In particular,

$$\mathcal{X}_{0,k} = \hat{X}_k, \qquad \mathcal{X}_{i,k} = \exp\big(\sqrt{L+\kappa}\, S(s_{i,k})\big)\hat{X}_k, \qquad \mathcal{X}_{i+L,k} = \exp\big(-\sqrt{L+\kappa}\, S(s_{i,k})\big)\hat{X}_k,$$

where $i = 1, \ldots, L$ and $L = n_x$. The sigma points are passed through the nonlinear function $f(\cdot)$, resulting in $\mathcal{X}^+_{i,k} = f(\mathcal{X}_{i,k})$, $i = 0, \ldots, 2L$. The mean and covariance, $\hat{X}^+_k$ and $P^+_k$, respectively, cannot be computed using (3) because adding or subtracting elements of $G$ does not yield an element of $G$. The mean can be computed by solving

$$\hat{X}^+_k = \arg\min_{X_k \in G} \sum_{i=0}^{2L} w_i\, d^2(\mathcal{X}^+_{i,k}, X_k), \quad (4)$$

where $d(\cdot, \cdot)$ denotes a distance on $G$. Usually $d(\cdot, \cdot)$ is the geodesic distance but, as discussed in [15, 16], it is possible to employ alternative distance measures. For instance, when the matrix Lie group is SO(3), the chordal distance may be employed instead of the geodesic distance. Although [5] also suggests solving (4), a detailed discussion of how to do so is not provided. On the other hand, [6, 7, 14] propose finding a solution to (4) in a manner similar to [17, 18], that is, iteratively.

The covariance is computed by first defining $\exp(S(e_{i,k})) \in G$, where $S(e_{i,k}) \in \mathfrak{g}$ and $e_{i,k} \in \mathbb{R}^{n_x}$, such that $\mathcal{X}^+_{i,k} = \exp(S(e_{i,k}))\hat{X}^+_k$, where $\hat{X}^+_k$ is the solution to (4). The term $\exp(S(e_{i,k}))$ can be thought of as the difference, or error, between the sigma point $\mathcal{X}^+_{i,k}$ and the mean $\hat{X}^+_k$. Using the matrix logarithm and $S^{-1}$, each column matrix $e_{i,k}$, $i = 0, \ldots, 2L$, can be computed via

$$\exp(S(e_{i,k})) = \mathcal{X}^+_{i,k}(\hat{X}^+_k)^{-1}, \qquad S(e_{i,k}) = \ln\big(\mathcal{X}^+_{i,k}(\hat{X}^+_k)^{-1}\big), \qquad e_{i,k} = S^{-1}\big(\ln\big(\mathcal{X}^+_{i,k}(\hat{X}^+_k)^{-1}\big)\big).$$

It follows that the covariance can be approximated as $P^+_k = \sum_{i=0}^{2L} w_i\, e_{i,k} e_{i,k}^T$.
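The following hedged sketch illustrates this transformation for $G = \mathrm{SO}(3)$, reusing S and S_inv from the earlier sketch. The function names are illustrative, the propagated sigma points would be obtained by applying $f(\cdot)$ to each element of the returned list, and the iterative mean is only in the spirit of [17, 18], not a transcription of those algorithms.

```python
# Sketch of sigma point generation, weighted mean, and covariance on SO(3),
# assuming S and S_inv from the previous sketch are in scope.
import numpy as np
from scipy.linalg import cholesky, expm, logm

def lie_group_sigma_points(X_hat, P, kappa=1.0):
    """X_0 = X_hat, X_i = exp(+sqrt(L+k) S(s_i)) X_hat, X_{i+L} = exp(-...) X_hat."""
    L = P.shape[0]
    S_chol = cholesky(P, lower=True)
    pts = [X_hat]
    pts += [expm(np.sqrt(L + kappa) * S(S_chol[:, i])) @ X_hat for i in range(L)]
    pts += [expm(-np.sqrt(L + kappa) * S(S_chol[:, i])) @ X_hat for i in range(L)]
    w0 = kappa / (kappa + L)
    wi = 1.0 / (2.0 * (kappa + L))
    return pts, np.array([w0] + [wi] * (2 * L))

def weighted_mean_SO3(pts, weights, iters=10):
    """Iterative weighted (geodesic) mean on SO(3), in the spirit of [17, 18]."""
    X_bar = pts[0]
    for _ in range(iters):
        delta = sum(w * S_inv(logm(Xi @ X_bar.T).real) for w, Xi in zip(weights, pts))
        X_bar = expm(S(delta)) @ X_bar
    return X_bar

def lie_group_covariance(pts, weights, X_mean):
    """P = sum_i w_i e_i e_i^T with e_i = S^{-1}(ln(X_i X_mean^{-1}))."""
    errs = [S_inv(logm(Xi @ X_mean.T).real) for Xi in pts]
    return sum(w * np.outer(e, e) for w, e in zip(weights, errs))
```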
5 Specific Use of the Sigma Point Transformation for State Estimation on Matrix Lie Groups

Consider the nonlinear process and measurement models

$$X_k = f_{k-1}(X_{k-1}, u_{k-1}, w_{k-1}), \quad w_k \sim \mathcal{N}(0, Q_k), \quad (5)$$
$$y_k = g_k(X_k, v_k), \quad v_k \sim \mathcal{N}(0, R_k), \quad (6)$$

where $X_k \in G$, $u_k \in \mathbb{R}^{n_u}$, $w_k \in \mathbb{R}^{n_w}$, $y_k \in \mathbb{R}^{n_y}$, and $v_k \in \mathbb{R}^{n_y}$. The process and measurement models could alternatively have $u_k \in G$, $w_k \in G$, $y_k \in G$, and $v_k \in G$. To estimate $X_k$ using the SPKF, the properties of the group $G$ must be respected.

The state estimation procedure starts with the definitions

$$z_{k-1} = \big[\, (S^{-1}(\ln(\hat{X}_{k-1})))^T \;\; 0^T \,\big]^T, \qquad Y_{k-1} = \mathrm{diag}\{P_{k-1}, Q_{k-1}\}.$$

Using a Cholesky decomposition, let $Y_{k-1} = S_{k-1} S_{k-1}^T$, where $S_{k-1} = [s_{1,k-1} \cdots s_{L,k-1}]$ is lower triangular. Further partition each $s_{i,k-1}$ as $s_{i,k-1} = [\, s_{i,k-1,x}^T \;\; s_{i,k-1,w}^T \,]^T$, where $s_{i,k-1,x} \in \mathbb{R}^{n_x}$ and $s_{i,k-1,w} \in \mathbb{R}^{n_w}$. The sigma points are then computed as

$$\mathcal{Z}_{0,k-1} = \begin{bmatrix} S^{-1}(\ln(\hat{X}_{k-1})) \\ \mathcal{W}_{0,k-1} \end{bmatrix}, \quad
\mathcal{Z}_{i,k-1} = \begin{bmatrix} S^{-1}\big(\ln\big(\exp(\sqrt{L+\kappa}\, S(s_{i,k-1,x}))\hat{X}_{k-1}\big)\big) \\ \mathcal{W}_{0,k-1} + \sqrt{L+\kappa}\, s_{i,k-1,w} \end{bmatrix}, \quad
\mathcal{Z}_{i+L,k-1} = \begin{bmatrix} S^{-1}\big(\ln\big(\exp(-\sqrt{L+\kappa}\, S(s_{i,k-1,x}))\hat{X}_{k-1}\big)\big) \\ \mathcal{W}_{0,k-1} - \sqrt{L+\kappa}\, s_{i,k-1,w} \end{bmatrix}.$$

Partitioning the $i = 0, \ldots, 2L$ sigma points as $\mathcal{Z}_{i,k-1} = [\, (S^{-1}(\ln(\mathcal{X}_{i,k-1})))^T \;\; \mathcal{W}_{i,k-1}^T \,]^T$, where $\mathcal{X}_{i,k-1} = \exp(\sqrt{L+\kappa}\, S(s_{i,k-1,x}))\hat{X}_{k-1}$ for $i = 1, \ldots, L$ and $\mathcal{X}_{i,k-1} = \exp(-\sqrt{L+\kappa}\, S(s_{i,k-1,x}))\hat{X}_{k-1}$ for $i = L+1, \ldots, 2L$, and passing them through the nonlinear motion model in conjunction with $u_{k-1}$ gives $\mathcal{X}^-_{i,k} = f_{k-1}(\mathcal{X}_{i,k-1}, u_{k-1}, \mathcal{W}_{i,k-1})$, $i = 0, \ldots, 2L$.

The prediction step is completed by computing the predicted state and covariance in a weighted fashion that respects the group structure of $G$. As discussed in Section 4,

$$\hat{X}^-_k = \arg\min_{X_k \in G} \sum_{i=0}^{2L} w_i\, d^2(\mathcal{X}^-_{i,k}, X_k), \qquad P^-_k = \sum_{i=0}^{2L} w_i\, e^-_{i,k} e^{-T}_{i,k},$$

where $e^-_{i,k} = S^{-1}\big(\ln\big(\mathcal{X}^-_{i,k}(\hat{X}^-_k)^{-1}\big)\big).$

To execute the correction step, define

$$z^-_k = \big[\, (S^{-1}(\ln(\hat{X}^-_k)))^T \;\; 0^T \,\big]^T, \qquad Y^-_k = \mathrm{diag}\{P^-_k, R_k\},$$

where $L = n_x + n_y$. Decompose $Y^-_k$ using a Cholesky decomposition, $Y^-_k = S^-_k (S^-_k)^T$, where $S^-_k$ is lower triangular and $s^-_{j,k}$ is the $j$th column of $S^-_k$. Moreover, partition the columns of $S^-_k$ as $s^-_{j,k} = [\, (s^-_{j,k,x})^T \;\; (s^-_{j,k,y})^T \,]^T$. Compute a set of sigma points via

$$\mathcal{Z}_{0,k} = \begin{bmatrix} S^{-1}(\ln(\hat{X}^-_k)) \\ \mathcal{V}_{0,k} \end{bmatrix}, \quad
\mathcal{Z}_{j,k} = \begin{bmatrix} S^{-1}\big(\ln\big(\exp(\sqrt{L+\kappa}\, S(s^-_{j,k,x}))\hat{X}^-_k\big)\big) \\ \mathcal{V}_{0,k} + \sqrt{L+\kappa}\, s^-_{j,k,y} \end{bmatrix}, \quad
\mathcal{Z}_{j+L,k} = \begin{bmatrix} S^{-1}\big(\ln\big(\exp(-\sqrt{L+\kappa}\, S(s^-_{j,k,x}))\hat{X}^-_k\big)\big) \\ \mathcal{V}_{0,k} - \sqrt{L+\kappa}\, s^-_{j,k,y} \end{bmatrix}.$$

Partitioning the sigma points as $\mathcal{Z}_{j,k} = [\, (S^{-1}(\ln(\mathcal{X}^-_{j,k})))^T \;\; \mathcal{V}^T_{j,k} \,]^T$, where $\mathcal{X}^-_{j,k} = \exp(\sqrt{L+\kappa}\, S(s^-_{j,k,x}))\hat{X}^-_k$ for $j = 1, \ldots, L$ and $\mathcal{X}^-_{j,k} = \exp(-\sqrt{L+\kappa}\, S(s^-_{j,k,x}))\hat{X}^-_k$ for $j = L+1, \ldots, 2L$, and passing the sigma points through the nonlinear observation model yields $\mathcal{Y}_{j,k} = g_k(\mathcal{X}^-_{j,k}, \mathcal{V}_{j,k})$, $j = 0, \ldots, 2L$. The predicted measurement and its covariance are $\hat{y}_k = \sum_{j=0}^{2L} w_j \mathcal{Y}_{j,k}$ and $V_k = \sum_{j=0}^{2L} w_j (\mathcal{Y}_{j,k} - \hat{y}_k)(\mathcal{Y}_{j,k} - \hat{y}_k)^T$, where $w_j$ are weights. The matrix $U_k$ is computed as $U_k = \sum_{j=0}^{2L} w_j\, e_{j,k} (\mathcal{Y}_{j,k} - \hat{y}_k)^T$, where $e_{j,k} = S^{-1}(\ln(\mathcal{X}^-_{j,k}(\hat{X}^-_k)^{-1}))$, and the Kalman gain is $K_k = U_k V_k^{-1}$. Defining $\delta\xi_k = K_k(y_k - \hat{y}_k)$ so that $S(\delta\xi_k) \in \mathfrak{g}$, it follows that

$$\hat{X}_k = \exp(S(\delta\xi_k))\hat{X}^-_k, \qquad P_k = P^-_k - U_k V_k^{-1} U_k^T. \quad (7)$$

In [5] the correction is not computed on $G$, but rather in $T_{\hat{X}^-_k} G$, while [6] and [8] correct in the same way as given in (7).
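As an illustration of the correction step (7), the following sketch applies the gain and the group-respecting update for a state on SO(3), reusing S from the first sketch; U_k, V_k, y_k, and y_hat are assumed to have already been formed from the measurement sigma points as described above, and the function name is illustrative.

```python
# Sketch of the correction step (7): Kalman gain, update on the group via the
# exponential map, and covariance update. Assumes S (from the first sketch).
import numpy as np
from scipy.linalg import expm

def mlg_spkf_correct(X_pred, P_pred, U_k, V_k, y_k, y_hat):
    K_k = U_k @ np.linalg.inv(V_k)        # Kalman gain K_k = U_k V_k^{-1}
    delta_xi = K_k @ (y_k - y_hat)        # correction column matrix in R^{n_x}
    X_hat = expm(S(delta_xi)) @ X_pred    # correction applied on the group
    P_hat = P_pred - K_k @ V_k @ K_k.T    # equals P^- - U_k V_k^{-1} U_k^T
    return X_hat, P_hat
```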
6 Application to Simultaneous Localization and Mapping

The MLG SPKF will now be applied to a popular problem in robotics, namely the SLAM problem. Estimating the attitude and position of a vehicle relative to known landmarks is referred to as localization. When the landmarks are not known a priori, creating a map of observed landmarks and, at the same time, localizing the vehicle relative to the map created is referred to as SLAM.

Consider a rigid-body vehicle endowed with a body-fixed frame, denoted $\mathcal{F}_b$, able to rotate relative to a datum frame, denoted $\mathcal{F}_a$. The matrix $C_{ba} \in SO(3)$ describes the attitude of $\mathcal{F}_b$ relative to $\mathcal{F}_a$, where $SO(3) = \{C \in \mathbb{R}^{3\times 3} \mid C^T C = 1, \det C = +1\}$ is the special orthogonal group [19]. In the aerospace community the matrix $C_{ba} \in SO(3)$ is referred to as the direction cosine matrix (DCM) [20, pp. 8]. The matrix Lie algebra associated with $SO(3)$, denoted $\mathfrak{so}(3)$, is the vector space of $3 \times 3$ skew-symmetric matrices, $\mathfrak{so}(3) = \{S \in \mathbb{R}^{3\times 3} \mid S = -S^T\}$. In particular, define $(\cdot)^\times : \mathbb{R}^3 \to \mathfrak{so}(3)$, where $y^\times z = -z^\times y$, $\forall y, z \in \mathbb{R}^3$, defines the usual cross-product operation, and $(\cdot)^\vee : \mathfrak{so}(3) \to \mathbb{R}^3$ such that $(z^\times)^\vee = z$, $\forall z \in \mathbb{R}^3$. The DCM $C_{ba}$ can be written using axis/angle parameters as $C_{ba} = \exp(-\phi a^\times) = \cos\phi\, 1 + (1 - \cos\phi)\, a a^T - \sin\phi\, a^\times$, where $\phi \in \mathbb{R}$ and $a \in \mathbb{R}^3$ with $a^T a = 1$. The matrix exponential and matrix logarithm satisfy $\exp(-\boldsymbol{\phi}^\times) = C_{ba}$ and $\ln(C_{ba}) = -\boldsymbol{\phi}^\times$, so that $\exp(\ln(C_{ba})) = C_{ba}$, where $\boldsymbol{\phi} = \phi a$ is the rotation vector (or, strictly speaking, the rotation column matrix). The negative sign in the matrix exponential, and the "$-\sin\phi\, a^\times$" term in $C_{ba}$, follow the aerospace convention of describing the attitude of $\mathcal{F}_b$ relative to $\mathcal{F}_a$ [20, pp. 12 & 32].

Consider two arbitrary points, $z$ and $w$. Let the position of point $z$ relative to point $w$, resolved in $\mathcal{F}_a$, be denoted $r^{zw}_a$. The relationship between $r^{zw}_a \in \mathbb{R}^3$ and $r^{zw}_b \in \mathbb{R}^3$ is given by $r^{zw}_b = C_{ba} r^{zw}_a$.

6.1 The Process and Measurement Models

Consider a robotic vehicle that can rotate and translate in space. Denote the body-fixed frame associated with the vehicle by $\mathcal{F}_b$ and the datum frame by $\mathcal{F}_a$. Assume the vehicle is equipped with two interoceptive sensors, an accelerometer and a rate gyro, located at point $z$ on the vehicle. Assume also that the vehicle is equipped with an exteroceptive sensor, such as a camera, located at point $z$ on the vehicle, that is able to observe and identify landmarks $p_1, \ldots, p_\ell$, where $\ell \in \mathbb{N}$, relative to point $z$. Let point $w$ be another point. Point $z$ can be thought of as the "origin" of $\mathcal{F}_b$, while point $w$ can be thought of as the "origin" of $\mathcal{F}_a$.

Accelerometer, Rate Gyro, and the Process Model. The discrete-time relationship between the position, the velocity, and the accelerometer measurement $u^1_{b_k} \in \mathbb{R}^3$ is

$$r^{z_k w}_a = r^{z_{k-1} w}_a + T v^{z_{k-1} w / a}_a, \qquad v^{z_k w / a}_a = v^{z_{k-1} w / a}_a + T C^T_{b_{k-1} a}\big(u^1_{b_{k-1}} + w^1_{b_{k-1}}\big) - T g_a, \quad (8)$$

where $T = t_k - t_{k-1}$, $r^{z_k w}_a$ is the position of point $z$ relative to point $w$ resolved in $\mathcal{F}_a$ at time $t_k$, $v^{z_k w / a}_a \in \mathbb{R}^3$ is the velocity of point $z$ relative to point $w$ with respect to $\mathcal{F}_a$, resolved in $\mathcal{F}_a$, at time $t_k$, $g_a \in \mathbb{R}^3$ is Earth's gravitational acceleration resolved in $\mathcal{F}_a$, and $w^1_{b_k} \in \mathbb{R}^3$ is noise. The discrete-time relationship between the attitude and the rate-gyro measurement $u^2_{b_k} \in \mathbb{R}^3$ is

$$C_{b_k a} = \exp\big(-\psi^\times_{k-1}\big) C_{b_{k-1} a}, \quad (9)$$

where $\psi_{k-1} = T u^2_{b_{k-1}} + T w^2_{b_{k-1}}$ and $w^2_{b_k}$ is noise. Let $r^{p_{j_k} w}_a \in \mathbb{R}^3$ denote the position of landmark $j \in \{1, \ldots, \ell\}$ relative to point $w$, resolved in $\mathcal{F}_a$. The process model associated with $r^{p_{j_k} w}_a$ is

$$r^{p_{j_k} w}_a = r^{p_{j_{k-1}} w}_a + T w^{p_j}_k, \quad \forall j \in \{1, \ldots, \ell\}, \quad (10)$$

where $w^{p_j}_k$ is noise. This random-walk process model is used to ensure the prediction step of the filter is not overconfident. An interpretation of such a process model is that the landmarks are assumed to be moving very slowly. A similar approach is taken when estimating sensor biases [4]. Combining (8), (9), and (10) results in the discrete-time process model

$$\begin{bmatrix} r^{z_k w}_a \\ v^{z_k w / a}_a \\ -(\ln(C_{b_k a}))^\vee \\ r^{p_{1_k} w}_a \\ \vdots \\ r^{p_{\ell_k} w}_a \end{bmatrix}
=
\begin{bmatrix} r^{z_{k-1} w}_a + T v^{z_{k-1} w / a}_a \\ v^{z_{k-1} w / a}_a + T C^T_{b_{k-1} a}\big(u^1_{b_{k-1}} + w^1_{b_{k-1}}\big) - T g_a \\ -\big(\ln\big(\exp(-\psi^\times_{k-1}) C_{b_{k-1} a}\big)\big)^\vee \\ r^{p_{1_{k-1}} w}_a + T w^{p_1}_k \\ \vdots \\ r^{p_{\ell_{k-1}} w}_a + T w^{p_\ell}_k \end{bmatrix}. \quad (11)$$

Camera and the Measurement Model. Let $A = \{1, \ldots, \ell\}$ denote the set of all landmark indices and let $I_k = \{1, \ldots, q\} \subseteq A$, $q \leq \ell$, denote the set of all landmarks observed from time $t = 0$ to time $t = kT$. Further, let $O_k = \{\alpha, \ldots, \zeta\} \subseteq A$ denote the set of landmarks observed at time $t = kT$. The exteroceptive sensor measures $y^i_{b_k} = C_{b_k a}(r^{p_i w}_a - r^{z_k w}_a) + v^i_{b_k}$, where $r^{p_i w}_a \in \mathbb{R}^3$ is unknown but constant and $v^i_{b_k} \in \mathbb{R}^3$ is noise. Combining all exteroceptive measurements together yields $y_k = g_k(r^{z_k w}_a, C_{b_k a}, r^{p_\alpha w}_a, \ldots, r^{p_\zeta w}_a, v^\alpha_{b_k}, \ldots, v^\zeta_{b_k})$.
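A minimal sketch of one step of the process model (8)-(10) is given below, reusing S from the first sketch. The variable names are illustrative (not from the paper), the landmark positions are stacked in a q-by-3 array, and passing zeros for the noise terms gives a noise-free propagation such as that used for the central sigma point.

```python
# Sketch of one step of (8)-(10): position/velocity from the accelerometer,
# attitude from the rate gyro, and a random walk for the landmark positions.
import numpy as np
from scipy.linalg import expm

def propagate(r, v, C_ba, landmarks, u_accel, u_gyro, g_a, T,
              w_accel=None, w_gyro=None, w_lm=None):
    w_accel = np.zeros(3) if w_accel is None else w_accel
    w_gyro = np.zeros(3) if w_gyro is None else w_gyro
    w_lm = np.zeros_like(landmarks) if w_lm is None else w_lm
    r_new = r + T * v                                          # position, (8)
    v_new = v + T * C_ba.T @ (u_accel + w_accel) - T * g_a     # velocity, (8)
    psi = T * (u_gyro + w_gyro)
    C_new = expm(-S(psi)) @ C_ba                               # attitude, (9)
    lm_new = landmarks + T * w_lm                              # landmarks, (10)
    return r_new, v_new, C_new, lm_new
```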
6.2 MLG SPKF Applied to the SLAM Problem

To be concise, let $\hat{r}_{k-1} = \hat{r}^{z_{k-1} w}_a$, $\hat{v}_{k-1} = \hat{v}^{z_{k-1} w / a}_a$, $\hat{C}_{k-1} = \hat{C}_{b_{k-1} a}$, and $\hat{m}^j_{k-1} = \hat{r}^{p_{j,k-1} w}_a$, $\forall j \in I_{k-1}$. As discussed in Section 5, to execute the prediction step, first define $z_{k-1}$ and $Y_{k-1}$, where $L = 15 + 6q$. Decomposing $Y_{k-1}$ using a Cholesky decomposition, forming sigma points, and passing the sigma points through the nonlinear process model in conjunction with the measurements $u^1_{b_{k-1}}$ and $u^2_{b_{k-1}}$ gives

$$\mathcal{R}^-_{i,k} = \mathcal{R}_{i,k-1} + T \mathcal{V}_{i,k-1}, \qquad \mathcal{V}^-_{i,k} = \mathcal{V}_{i,k-1} + T \mathcal{C}^T_{i,k-1}\big(u^1_{b_{k-1}} + \mathcal{W}^1_{i,k-1}\big) - T g_a,$$
$$\mathcal{C}^-_{i,k} = \exp\big(-\psi^\times_{k-1}\big)\mathcal{C}_{i,k-1}, \;\; \text{where } \psi_{k-1} = T u^2_{b_{k-1}} + T \mathcal{W}^2_{i,k-1}, \qquad \mathcal{M}^{j-}_{i,k} = \mathcal{M}^j_{i,k-1} + T \mathcal{W}^{p_j}_{i,k-1}, \;\; \forall j \in I_{k-1}.$$

Using weights $w_i$, it follows that $\hat{r}^-_k = \sum_{i=0}^{2L} w_i \mathcal{R}^-_{i,k}$, $\hat{v}^-_k = \sum_{i=0}^{2L} w_i \mathcal{V}^-_{i,k}$, and $\hat{m}^{j-}_k = \sum_{i=0}^{2L} w_i \mathcal{M}^{j-}_{i,k}$, $\forall j \in I_{k-1}$. On the other hand, computing $\hat{C}^-_k$ must be done with care. One way to compute $\hat{C}^-_k$ is to solve

$$\hat{C}^-_k = \arg\min_{C_k \in SO(3)} \sum_{i=0}^{2L} w_i\, d^2(\mathcal{C}^-_{i,k}, C_k) \quad (12)$$

in an iterative manner [17, 18]. Alternative methods to compute $\hat{C}^-_k$ are discussed in Section 6.3. The covariance $P^-_k$ is

$$P^-_k = \sum_{i=0}^{2L} w_i\, \mathcal{E}^-_{i,k} \mathcal{E}^{-T}_{i,k}, \qquad \text{where} \qquad
\mathcal{E}^-_{i,k} = \begin{bmatrix} \mathcal{R}^-_{i,k} - \hat{r}^-_k \\ \mathcal{V}^-_{i,k} - \hat{v}^-_k \\ \xi^-_{i,k} \\ \mathcal{M}^{1-}_{i,k} - \hat{m}^{1-}_k \\ \vdots \\ \mathcal{M}^{q-}_{i,k} - \hat{m}^{q-}_k \end{bmatrix},$$

and $\xi^-_{i,k} = -\big(\ln\big(\mathcal{C}^-_{i,k}\hat{C}^{-T}_k\big)\big)^\vee$. When a landmark, such as $m^j_k$, is first observed, it is initialized as $\hat{m}^{j-}_k = \hat{C}^{-T}_k y^j_{b_k} + \hat{r}^-_k$, $\forall j \in O_k$, and the covariance is augmented as

$$P^-_k = \begin{bmatrix} P^-_k & P^-_{x_k m^j_k} \\ P^{-T}_{x_k m^j_k} & P^-_{m^j_k m^j_k} \end{bmatrix},$$

where $P^-_{m^j_k m^j_k} = \sum_{i=0}^{2L} w_i (\mathcal{M}^j_{i,k} - \hat{m}^{j-}_k)(\mathcal{M}^j_{i,k} - \hat{m}^{j-}_k)^T$, $P^-_{x_k m^j_k} = \sum_{i=0}^{2L} w_i\, \mathcal{E}^-_{i,k}(\mathcal{M}^j_{i,k} - \hat{m}^{j-}_k)^T$, and $\mathcal{M}^j_{i,k} = \mathcal{C}^{-T}_{i,k} y^j_{b_k} + \mathcal{R}^-_{i,k}$.

Following Section 5, to execute the correction step, define $z_k$ and $Y^-_k$, where $L = 9 + 3q + 3o$ and $o$ is the cardinality of $O_k$. Recall that the discretized exteroceptive measurement model is $y_k = g_k(r^{z_k w}_a, C_{b_k a}, r^{p_\alpha w}_a, \ldots, r^{p_\zeta w}_a, v^\alpha_{b_k}, \ldots, v^\zeta_{b_k})$. Decomposing $Y^-_k$ using a Cholesky decomposition, forming sigma points, and passing the sigma points through the exteroceptive measurement model results in $\mathcal{Y}_{j,k} = g_k(\mathcal{R}^-_{j,k}, \mathcal{C}^-_{j,k}, \mathcal{M}^{\alpha-}_{j,k}, \ldots, \mathcal{M}^{\zeta-}_{j,k}, \mathcal{N}^-_{j,k})$, $j = 0, \ldots, 2L$. The predicted measurement and associated covariance are then $\hat{y}_k = \sum_{j=0}^{2L} w_j \mathcal{Y}_{j,k}$ and $V_k = \sum_{j=0}^{2L} w_j (\mathcal{Y}_{j,k} - \hat{y}_k)(\mathcal{Y}_{j,k} - \hat{y}_k)^T$. The matrix $U_k$ is computed in a similar way to $P^-_k$, that is,

$$U_k = \sum_{j=0}^{2L} w_j\, \mathcal{E}^-_{j,k} (\mathcal{Y}_{j,k} - \hat{y}_k)^T, \qquad \text{where} \qquad
\mathcal{E}^-_{j,k} = \begin{bmatrix} \mathcal{R}^-_{j,k} - \hat{r}^-_k \\ \mathcal{V}^-_{j,k} - \hat{v}^-_k \\ \xi^-_{j,k} \\ \mathcal{M}^{1-}_{j,k} - \hat{m}^{1-}_k \\ \vdots \\ \mathcal{M}^{q-}_{j,k} - \hat{m}^{q-}_k \end{bmatrix},$$

and $\xi^-_{j,k} = -\big(\ln\big(\mathcal{C}^-_{j,k}\hat{C}^{-T}_k\big)\big)^\vee$. It follows that the Kalman gain is $K_k = U_k V_k^{-1}$. To correct the state estimate, first define $\delta\chi_k = K_k(y_k - \hat{y}_k)$, where

$$\delta\chi_k = \begin{bmatrix} \delta\chi_{k,r} \\ \delta\chi_{k,v} \\ \delta\xi_k \\ \delta\xi_{k,m^1} \\ \vdots \\ \delta\xi_{k,m^q} \end{bmatrix}
= \begin{bmatrix} \hat{r}^{z_k w}_a - \hat{r}^-_k \\ \hat{v}^{z_k w / a}_a - \hat{v}^-_k \\ \delta\xi_k \\ \hat{r}^{p_{1,k} w}_a - \hat{m}^{1-}_k \\ \vdots \\ \hat{r}^{p_{q,k} w}_a - \hat{m}^{q-}_k \end{bmatrix}.$$

Then, to compute $\hat{r}^{z_k w}_a$, $\hat{v}^{z_k w / a}_a$, $\hat{C}_{b_k a}$, and $\hat{r}^{p_{j,k} w}_a$, $j \in I_k$,

$$\hat{r}^{z_k w}_a = \hat{r}^-_k + \delta\chi_{k,r}, \qquad \hat{v}^{z_k w / a}_a = \hat{v}^-_k + \delta\chi_{k,v}, \qquad \hat{C}_{b_k a} = \exp\big(-\delta\xi^\times_k\big)\hat{C}^-_k, \qquad \hat{r}^{p_{j,k} w}_a = \hat{m}^{j-}_k + \delta\xi_{k,m^j}.$$

The estimated covariance is updated as $P_k = P^-_k - U_k V_k^{-1} U_k^T$.
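The landmark initialization and covariance augmentation described above can be sketched as follows. The function name is illustrative; E_pred (the stacked error columns) and M_sigma (the landmark sigma points, one per row) are assumed to have been computed from the prediction-step sigma points as described above.

```python
# Sketch of new-landmark initialization: mean C_hat^{-T} y + r_hat^-, plus
# augmentation of the state covariance with the new landmark block.
import numpy as np

def initialize_landmark(y_b, C_pred, r_pred, E_pred, M_sigma, weights, P_pred):
    """
    y_b     : 3-vector landmark measurement resolved in F_b
    C_pred  : predicted DCM C_hat^-_k;  r_pred : predicted position r_hat^-_k
    E_pred  : (2L+1, n_x) stacked error columns E^-_{i,k}
    M_sigma : (2L+1, 3) landmark sigma points M^j_{i,k} = C^{-T}_{i,k} y_b + R^-_{i,k}
    """
    m_hat = C_pred.T @ y_b + r_pred                       # mean of the new landmark
    dM = M_sigma - m_hat                                   # sigma-point deviations
    P_mm = sum(w * np.outer(d, d) for w, d in zip(weights, dM))
    P_xm = sum(w * np.outer(e, d) for w, e, d in zip(weights, E_pred, dM))
    P_aug = np.block([[P_pred, P_xm], [P_xm.T, P_mm]])     # augmented covariance
    return m_hat, P_aug
```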

6.3 Weighted Averaging

To solve (12), namely $\hat{C}^-_k = \arg\min_{C_k \in SO(3)} \sum_{i=0}^{2L} w_i\, d^2(\mathcal{C}^-_{i,k}, C_k)$, the iterative method of [17, 18] can be employed when $d(\cdot, \cdot)$ is the geodesic distance. A similar approach is proposed in [6], and the SE(3) case is considered in [7, 14].

Alternatively, when $d(\cdot, \cdot)$ is a chordal distance [15, 16], namely $d^2(C_1, C_0) = \lVert C_1 - C_0 \rVert^2_F$, a singular value decomposition (SVD) can be used to solve for $\hat{C}^-_k$ in the following way. Consider the objective function $J_k(C_k) = \sum_{i=0}^{2L} w_i \lVert \mathcal{C}^-_{i,k} - C_k \rVert^2_F$, where $C_k \in SO(3)$ [15]. The objective function can be written as

$$J_k(C_k) = \sum_{i=0}^{2L} w_i\, \mathrm{tr}\big((\mathcal{C}^-_{i,k} - C_k)^T(\mathcal{C}^-_{i,k} - C_k)\big) = 6\sum_{i=0}^{2L} w_i - 2\sum_{i=0}^{2L} \mathrm{tr}\big(w_i \mathcal{C}^{-T}_{i,k} C_k\big).$$

Minimizing $J_k(\cdot)$ as a function of $C_k$ is therefore equivalent to maximizing $\bar{J}_k(C_k) = \mathrm{tr}(B^T C_k)$, where $B^T = \sum_{i=0}^{2L} w_i \mathcal{C}^{-T}_{i,k}$. The maximizing solution, $\hat{C}^-_k$, is

$$\hat{C}^-_k = V_B\, \mathrm{diag}\{1, 1, \det V_B \det U_B\}\, U_B^T,$$

where the SVD $B = V_B \Sigma_B U_B^T$, with $V_B^T V_B = 1$, $U_B^T U_B = 1$, and $\Sigma_B = \mathrm{diag}\{\sigma_1, \sigma_2, \sigma_3\}$, $\sigma_1 \geq \sigma_2 \geq \sigma_3 \geq 0$, has been employed [21, 22].

An alternative way to compute $\hat{C}^-_k$, using generalized Rodrigues parameters, is explored in [4]. Herein an exposition of something equivalent to, although different from, [4] is presented that is computationally simpler than both the iterative method of [18] and the SVD method previously discussed. Consider $\mathcal{C}^-_{i,k} = E^{i0} \mathcal{C}^-_{0,k}$, $i = 1, \ldots, 2L$, where $E^{i0} \in SO(3)$ represents the attitude of $\mathcal{F}_{b_{k_i}}$ relative to $\mathcal{F}_{b_{k_0}}$. Write $E^{i0}$ as

$$E^{i0} = \mathcal{C}^-_{i,k} \mathcal{C}^{-T}_{0,k} = \cos\phi^{i0}\, 1 + (1 - \cos\phi^{i0})\, a^{i0} a^{i0 T} - \sin\phi^{i0}\, a^{i0\times},$$

where $a^{i0}$ can be interpreted as the axis of rotation resolved in $\mathcal{F}_{b_{k_0}}$. Owing to the fact that $E^{i0} a^{i0} = a^{i0}$, it is also correct to state that $a^{i0}$ can be interpreted as the axis of rotation resolved in $\mathcal{F}_{b_{k_i}}$. However, by resolving all $i = 1, \ldots, 2L$ axes of rotation in $\mathcal{F}_{b_{k_0}}$, they can be added in a weighted fashion. Similarly, by resolving all $i = 1, \ldots, 2L$ rotation vectors $\phi^{i0}_{b_{k_0}} = \phi^{i0} a^{i0}$ in $\mathcal{F}_{b_{k_0}}$, their weighted sum can be formed. Said another way, each $\phi^{i0\times}_{b_{k_0}}$ is expressed in the same tangent space, and therefore they can be added. In particular, $\phi_{b_{k_0}} = \sum_{i=1}^{2L} w_i \phi^{i0}_{b_{k_0}}$, where $w_i = \tfrac{1}{2(L+\kappa)}$. It then follows that $\hat{C}^-_k = \exp\big(-\phi^\times_{b_{k_0}}\big)\mathcal{C}^-_{0,k}$.
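Both weighted-averaging options discussed in this subsection can be sketched compactly as follows; the function names are illustrative, S and S_inv are reused from the first sketch is not required here, and the second function averages rotation vectors relative to the zeroth sigma point, in the style of the method described above.

```python
# Sketch of two weighted means on SO(3): (i) the chordal (SVD) mean, and
# (ii) averaging of rotation vectors relative to C_list[0].
import numpy as np
from scipy.linalg import expm, logm

def chordal_mean_SO3(C_list, weights):
    """Weighted chordal mean via an SVD, as in [15, 21, 22]."""
    B = sum(w * C for w, C in zip(weights, C_list))    # B^T = sum_i w_i C_i^T
    V_B, _, U_B_T = np.linalg.svd(B)                    # B = V_B Sigma_B U_B^T
    D = np.diag([1.0, 1.0, np.linalg.det(V_B) * np.linalg.det(U_B_T)])
    return V_B @ D @ U_B_T

def rotation_vector_mean_SO3(C_list, weights):
    """Weighted sum of rotation vectors relative to C_list[0], then re-mapped to SO(3)."""
    C0 = C_list[0]
    # The i = 0 term contributes zero since logm(identity) = 0; with the paper's
    # sign convention ln(E^{i0}) = -phi^{i0 x}, so the signs cancel in the update.
    A = sum(w * logm(Ci @ C0.T).real for w, Ci in zip(weights, C_list))
    return expm(A) @ C0
```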
7 Conclusion

This paper formulates a sigma point Kalman filter (SPKF) specifically for systems with states that are elements of a matrix Lie group. First, how the sigma point transformation is generalized and used to compute means and covariances of nonlinear functions with states that are elements of matrix Lie groups is discussed. The matrix Lie group SPKF (MLG SPKF) is then formulated in general. The MLG SPKF is applied to the problem of simultaneous localization and mapping (SLAM), where the position and attitude of a vehicle are estimated relative to a set of estimated landmark locations and, simultaneously, the estimates of the landmark locations are refined. Approaches to averaging elements of SO(3) are discussed, including one that uses a chordal distance measure, as well as a method adopted from [4]. Using the chordal distance measure allows for an SVD solution to the averaging of elements of SO(3).

References

1. S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge, MA: MIT Press, 2006.
2. S. Särkkä, Bayesian Filtering and Smoothing. Cambridge, UK: Cambridge University Press, 2013.
3. S. J. Julier, J. K. Uhlmann, and H. F. Durrant-Whyte, "A New Method for the Nonlinear Transformation of Means and Covariances in Filters and Estimators," IEEE Trans. Automatic Control, vol. 45, no. 3, pp. 477-482, 2000.
4. J. L. Crassidis and F. L. Markley, "Unscented Filtering for Spacecraft Attitude Estimation," AIAA J. Guidance, Control, & Dynamics, vol. 26, no. 4, pp. 536-542, 2003.
5. S. Hauberg, F. Lauze, and K. S. Pedersen, "Unscented Kalman Filtering on Riemannian Manifolds," J. Mathematical Imaging and Vision, vol. 46, no. 1, pp. 103-120, 2013.
6. C. Hertzberg, R. Wagner, U. Frese, and L. Schröder, "Integrating Generic Sensor Fusion Algorithms with Sound State Representations Through Encapsulation of Manifolds," Information Fusion, vol. 14, no. 1, pp. 57-77, 2013.
7. T. D. Barfoot and P. T. Furgale, "Associating Uncertainty with Three-Dimensional Poses for Use in Estimation Problems," IEEE Trans. Robotics, vol. 30, no. 3, pp. 679-693, 2014.
8. M. Brossard, S. Bonnabel, and J.-P. Condomines, "Unscented Kalman Filtering on Lie Groups," submitted to IROS 2017, 2017. https://hal.archives-ouvertes.fr/hal-01489204v2.
9. A. J. Haug, Bayesian Estimation and Tracking: A Practical Guide. Hoboken, NJ: John Wiley & Sons, Inc., 2012.
10. Y. Wu, D. Hu, M. Wu, and X. Hu, "A Numerical-Integration Perspective on Gaussian Filters," IEEE Trans. Signal Processing, vol. 54, pp. 2910-2921, Aug. 2006.
11. A. Bloch, J. Baillieul, P. Crouch, J. E. Marsden, P. S. Krishnaprasad, R. Murray, and D. Zenkov, Nonholonomic Mechanics and Control, vol. 24. Springer, 2003.
12. E. Eade, "Lie Groups for Computer Vision," accessed April 13, 2017. http://ethaneade.com/lie_groups.pdf.
13. G. S. Chirikjian, Stochastic Models, Information Theory, and Lie Groups, vol. 2. Boston: Birkhäuser, Springer Science + Business Media, Inc., 2011.
14. T. D. Barfoot, State Estimation for Robotics. Cambridge, UK: Cambridge University Press, 2017. In preparation (draft compiled February 14, 2017).
15. M. Moakher, "Means and Averaging in the Group of Rotations," SIAM J. Matrix Analysis and Applications, vol. 24, no. 1, pp. 1-16, 2002.
16. R. Hartley, J. Trumpf, Y. Dai, and H. Li, "Rotation Averaging," Int. J. Computer Vision, vol. 103, no. 3, pp. 267-305, 2013.
17. J. H. Manton, "A Globally Convergent Numerical Algorithm for Computing the Centre of Mass on Compact Lie Groups," in 8th Int. Conf. Control, Automation, Robotics and Vision, vol. 3, pp. 2211-2216, Dec. 2004.
18. S. Fiori and T. Tanaka, "An Algorithm to Compute Averages on Matrix Lie Groups," IEEE Trans. Signal Processing, vol. 57, pp. 4734-4743, Dec. 2009.
19. R. M. Murray, Z. Li, and S. S. Sastry, A Mathematical Introduction to Robotic Manipulation. Boca Raton, FL: CRC Press, Inc., 1993.
20. P. C. Hughes, Spacecraft Attitude Dynamics. Mineola, NY: Dover, 2nd ed., 2004.
21. F. L. Markley, "Attitude Determination Using Vector Observations and the Singular Value Decomposition," J. Astronautical Sciences, vol. 36, no. 3, pp. 245-258, 1988.
22. A. de Ruiter and J. R. Forbes, "On the Solution of Wahba's Problem on SO(n)," J. Astronautical Sciences, vol. 60, no. 1, pp. 1-31, 2015.