Quantification of Model Risk: Data Uncertainty
07/11/2017 · Free access for rights holders
Authors: Zuzana Krajcovicova, Pedro Pablo Perez-Velasco, Carlos Vázquez
DOI: 10.23723/17410/22589 · Publisher: SEE (2018)
Quantification of Model Risk: Data Uncertainty

Z. Krajčovičová*¹, P. P. Pérez Velasco², C. Vázquez¹

¹ Department of Mathematics, University of A Coruña, Spain
² Model Risk Area, Banco Santander, Spain
* z.krajcovicova@udc.es

Abstract. Worldwide regulations oblige financial institutions to manage and address model risk (MR) like any other type of risk. MR quantification is essential not only for meeting these requirements but also for an institution's basic internal operations. In (5) the authors introduce a framework for the quantification of MR based on information geometry. The framework is applicable in great generality and accounts for different sources of MR during the entire lifecycle of a model. The aim of this paper is to extend the framework in (5) by studying its relation with the uncertainty associated with the data used for building the model. We define a metric on the space of samples in order to measure the intrinsic distance between data sets, providing a novel way to probe the data for insight, allowing us to work on the sample space, gain business intuition and access tools such as perturbation methods.

Key words: model risk, calibration, sampling, Riemannian manifold, information geometry

1 Introduction

Financial models are simplifying mappings of reality that serve a specific purpose by applying mathematical and economic theories to available data. They include mathematical relations, a way to use data or judgment to compute the parameters, and indications on how to apply them to practical issues; refer to (6).

Models focus on specific aspects of reality, degrading or ignoring the rest. Understanding the limitations of the underlying assumptions and their material consequences is essential from a MR perspective. Unrealistic assumptions, poor selection of the model, wrong design, misuse or inadequate knowledge of its usage may expose a financial institution to additional risks.
MR refers to potential losses institutions may incur as a consequence of decisions based on the output of models. In (5) the authors introduce a general framework for the quantification of MR using information geometry, applicable to most modelling techniques currently in use in financial institutions. This framework copes with relevant aspects of MR management such as usage, performance, mathematical foundations, calibration or data. Differences between models are determined by the distance under a Riemannian metric.³

In this framework, models are represented by particular probability distributions that belong to the set of probability measures $M$ (the model manifold) available for modelling. We assume that the examined model $p_0$ is represented by a probability distribution that can be uniquely parametrized by the $n$-dimensional parameter vector $\theta_0 = (\theta_0^1, \dots, \theta_0^n)$, i.e. $p_0 = p(x, \theta_0) \in M = \{p(x, \theta) : \theta \in \Theta \subset \mathbb{R}^n\}$, a differentiable manifold. Selecting a particular $p_0$ is equivalent to fixing a parameter setting $\theta_0 \in \Theta$, and induces $M$.

A nonlinear weight function $K$ on $M$ places a relative relevance on every alternative model, and assigns the credibility of the underlying assumptions that would make other models partially or relatively preferable to the nominal one, $p_0$.⁴ Requiring $K$ to be a smooth density function over $M$ induces a new probability measure $\zeta$, absolutely continuous with respect to the Riemannian volume $dv$, defined by
$$\zeta(U) = \int_U d\zeta = \int_U K \, dv(p), \quad (1)$$
where $U \subseteq M$ is an open neighborhood around $p_0$ containing alternative models that are not too far away, in a sense quantified by the relevance assigned to (missing) properties and limitations of the model (i.e., the uncertainty of the model selection).

3 Distance in statistical manifolds has been applied in finance to assess the accuracy of model approximations (for derivatives) or for quantitative comparison of models, e.g. (2), (3).
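To make the geometric setting concrete, the Fisher–Rao metric $g$ can be estimated directly from its definition as the expected outer product of the score. The sketch below (an illustration, not part of the framework in (5)) does this by Monte Carlo for the manifold of univariate normal distributions $N(\mu, \sigma)$, where the metric is known in closed form to be $\mathrm{diag}(1/\sigma^2, 2/\sigma^2)$:

```python
import random

def score_normal(x, mu, sigma):
    # Gradient of log p(x; mu, sigma) for the normal family, w.r.t. (mu, sigma).
    d_mu = (x - mu) / sigma**2
    d_sigma = ((x - mu)**2 - sigma**2) / sigma**3
    return (d_mu, d_sigma)

def fisher_metric_mc(mu, sigma, n=200_000, seed=0):
    # Monte Carlo estimate of g(theta) = E[score score^T] under p(., theta).
    rng = random.Random(seed)
    g = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(n):
        s = score_normal(rng.gauss(mu, sigma), mu, sigma)
        for i in range(2):
            for j in range(2):
                g[i][j] += s[i] * s[j] / n
    return g

g = fisher_metric_mc(0.0, 2.0)
# Closed form at sigma = 2: diag(1/sigma^2, 2/sigma^2) = diag(0.25, 0.5).
```

The same estimator applies to any parametric family for which the score can be evaluated, which is what makes the information-geometric setup operational in practice.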
The MR measure considers the usage of the model, represented by some predefined mapping $f\colon M \to \mathbb{R}$, $p \mapsto f(p)$, the output function. MR is then measured as a norm of an appropriate function of the output differences over a weighted Riemannian manifold ($K$ above) endowed with the Fisher–Rao metric and the Levi–Civita connection:

Definition 1. Let $(\mathcal{F}, \|\cdot\|)$ be a Banach space of measurable functions $f \in \mathcal{F}$ with respect to $\zeta$, with $f$ as above. The model risk $Z$ of $f$ and $p_0$ is given by
$$Z(f, p_0) = \|f - f(p_0)\|. \quad (2)$$

This approach is used in (5) to quantify MR by embedding $M \hookrightarrow M'$, where $M' = \{p(x, \theta) : \theta \in \Theta,\ \dim(\theta) \ge \dim(\theta_0)\}$ is a Riemannian manifold containing a collection of probability distributions created by varying $\theta_0$, adding properties, or considering data and calibration uncertainty (see (5) for more details and examples).

The main objective of this paper is to deepen the analysis of the influence of data uncertainty on MR by relating the data uncertainties to the model structure. Pulling back the model manifold metric introduces a consistent Riemannian structure on the sample space that allows us to quantify MR while working with samples. In practice it offers a computational alternative, eases the application of business intuition and the assignment of data uncertainty, and provides insight on MR from both the data and the model perspectives.

The rest of the paper is organized as follows: Section 2 introduces the concept of input data and describes the MR that arises from data deficiencies; it also covers the sampling and fitting processes. Section 3 introduces a metric on the sample space and its Riemannian structure. Quantification of MR in the sample space is proposed in Sec. 4. Section 5 concludes, followed by the technical proofs in the Appendix.

4 The particular $K$ depends on the model sensitivity, scenario analysis, relevance of outcomes in decision making, business, intended purpose, or uncertainty of the model foundations.
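Definition 1 can be approximated numerically by sampling alternative parameters from the relevance weight $K$ and taking an empirical norm of the output differences. In the sketch below, every concrete choice is an illustrative assumption: $K$ is taken as an independent Gaussian in each parameter, and the output function $f$ simply reads off the location parameter.

```python
import random

def model_risk_Lq(f, theta0, rel_scale, q=1, n=50_000, seed=1):
    # Monte Carlo sketch of Z(f, p0) = ||f - f(p0)||_q over a neighbourhood
    # of theta0.  The relevance weight K is modelled as an independent
    # Gaussian per parameter with scales rel_scale -- an assumption made
    # purely for illustration, not prescribed by the framework.
    rng = random.Random(seed)
    f0 = f(theta0)
    acc, worst = 0.0, 0.0
    for _ in range(n):
        theta = [rng.gauss(t, s) for t, s in zip(theta0, rel_scale)]
        d = abs(f(theta) - f0)
        acc += d ** q / n
        worst = max(worst, d)
    return acc ** (1.0 / q), worst   # (L^q estimate, empirical worst case)

# Hypothetical output function: f returns the location parameter of the model.
z1, worst = model_risk_Lq(lambda th: th[0], (0.0, 2.0), (0.1, 0.05))
```

The pair returned mirrors the two norms discussed later in the paper: the $L^1$-type average change of the output, and the empirical worst-case deviation in the sampled neighbourhood.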
2 Sample Space and Fitting Process

Important components of MR associated with the model are flaws in the input data.⁵ Issues in financial data that affect consistency and availability include backfilled data, availability for only a subset of the cross-section of the subjects, or limited time spans. To assess the uncertainty in the data we move the setting for the quantification of MR into a sample space that represents perturbed inputs, and thus alternative models in $M$, with its own metric. Distance between samples should be defined taking into account the information stored in the samples and their impact on the model usage.

Let $\bar\sigma: M \to \bar S$ be a sampling process, where $\bar S$ is the sample space.⁶ Note that many samples, with different elements and varying sizes, are associated to the same distribution via $\bar\sigma$. Conversely, some distributions may be connected with the same sample via the fitting process (estimation, calibration). Thus, we impose further conditions on $\bar\sigma$ (see eq. 3 below) linked to the fitting process: the sampled data needs to fit back to models in $M$, and models need to resemble their image in $\bar S$.

In general, the fitting process refers to setting the model parameter values so that the behaviour of the model matches features of the measured data in as many dimensions as there are unknown parameters. Therefore, $\bar\pi: \bar S \to M$ associates a probability distribution $p(x, \theta) = \bar\pi(x) \in M$ to the sample $x \in \bar S$. We will assume that this map is a smooth immersion, i.e. a local embedding.⁷ Besides, we want $\bar\pi$ to be smooth in the samples: if a small change is applied to $x$, the change in the parameters should be relatively small.

Based on $\bar\pi$, we define a quotient space out of $\bar S$ whose equivalence classes contain samples that are equally valid with respect to the model manifold $M$. Different samples, regardless of their sample size, share the same relevant information about the models, for they give rise to the same probability distribution through $\bar\pi$.
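The interplay between sampling and fitting can be illustrated for a normal model with maximum-likelihood fitting, where the compatibility condition introduced below (eq. 3) holds up to sampling error for large samples. All concrete choices here (the normal family, the sample size, MLE as the fitting process) are illustrative assumptions:

```python
import math, random

def sigma_sampling(theta, n=100_000, seed=2):
    # Sampling process sigma: draw a sample of size n from N(mu, sd).
    rng = random.Random(seed)
    mu, sd = theta
    return [rng.gauss(mu, sd) for _ in range(n)]

def pi_fitting(sample):
    # Fitting process pi: maximum-likelihood estimate of (mu, sd).
    n = len(sample)
    mu = sum(sample) / n
    var = sum((x - mu) ** 2 for x in sample) / n
    return (mu, math.sqrt(var))

theta = pi_fitting(sigma_sampling((1.0, 2.0)))        # pi . sigma
theta2 = pi_fitting(sigma_sampling(theta, seed=3))    # pi . sigma applied again
# For large samples pi(sigma(theta)) is close to theta, so the compatibility
# condition pi . sigma . pi = pi holds up to sampling error.
```

Samples that are too small break this approximate idempotence, which is exactly the restriction the paper imposes on acceptable samples.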
Definition 2. Define the equivalence relation $\sim$ on the sample space $\bar S$ by declaring $x \sim x'$ if $p(x, \theta(x)) = p(x', \theta'(x'))$, where $\theta(\cdot)$ is an estimator, i.e. a mapping from the space of random variables to $M$. We denote by $S = \bar S/\!\sim\; = \{[x] : x \in \bar S\}$ the corresponding quotient space, and the projection by $\delta: \bar S \to S$.

To guarantee consistency between samples and models, and to ensure uniqueness and compatibility with the sampling process, we require $\bar\pi$ and $\bar\sigma$ with $\mathrm{Dom}(\bar\sigma) = M$ to satisfy
$$\bar\pi \bar\sigma \bar\pi = \bar\pi. \quad (3)$$
This in particular implies that not all samples are acceptable. For example, samples with too few elements would not allow the fitting process to work as an inverse of $\bar\sigma$.⁸

5 Main categories of financial data: time series, cross-sectional and panel data. Quality and availability deficiencies include errors in definition, lack of historical depth, lack of critical variables, insufficient samples, migration, proxies, sensitivity to expert judgment or wrong interpretation.
6 Each sample, i.e. a collection of independent sample points of the probability distribution, is a point in $\bar S$. For example, each point in $\bar S$ is a particular instance of the portfolio of loans.
7 For any point $x \in S$ there exists a neighbourhood $U \subset S$ of $x$ such that $\bar\pi: U \to M$ is an embedding.
8 Many algorithms (maximum likelihood, method of moments, least squares or Bayesian estimation) and calibration processes satisfy eq. 3 when appropriate restrictions are applied (if needed).

Proposition 1. Given the fitting process $\bar\pi$ as in Def. 2, the equivalence relation $x \sim x'$ if $p(x, \theta(x)) = p(x', \theta'(x'))$ is well defined.

Proof. We assume $\bar\pi: \bar S \to M$ to be a regular embedding that satisfies eq. 3. Let $\sim$ be the relation induced by $\bar\pi$: $x \sim y \Leftrightarrow \bar\pi(x) = p(x, \theta) = p(y, \theta) = \bar\pi(y)$. It is easy to check that $\sim$ is reflexive, symmetric and transitive, i.e. an equivalence relation on $S$. By contradiction we prove that the equivalence relation is well defined: assume that $x \in [y]$ and $x \in [z]$ but $[y] \neq [z]$. Then for all $y' \in [y]$ and $z' \in [z]$ we have $x \sim y'$ and $x \sim z'$.
This implies that $\bar\pi(x) = \bar\pi(y') = p(x, \theta)$ and $\bar\pi(x) = \bar\pi(z') = p(x, \theta)$, so $\bar\pi(y') = \bar\pi(z')$. Since $\bar\pi\bar\sigma\bar\pi = \bar\pi$, we have $[y] = [z]$. □

The fitting process $\bar\pi$ is invariant under $\sim$, since $\bar\pi(x) = \bar\pi(x')$ whenever $x \sim x'$, and it induces a unique function $\pi$ on $S$ such that $\bar\pi = \pi \circ \delta$; similarly for $\sigma$.

3 Choice of Metric on the Sample Space

To introduce a measure and a distance intrinsic to the data, we define a metric on $S$. Given Prop. 1, we endow the sample space with the rich structure of $M$ by pulling back to $S$ the geometric structure on $M$. Let $g$ be the Riemannian metric on $M$ and $\pi: S \to (M, g)$ a smooth immersion (see footnote 7). Then the pullback of the metric, $\pi^* g = h$, acting on two tangent vectors $u, v \in T_x S$ is
$$h(u, v) = (\pi^* g)_x(u, v) = g_{\pi(x)}(\pi_*(u), \pi_*(v)),$$
where $\pi_*: T_x S \to T_{\pi(x)} M$ is the tangent (derivative) map.

Similarly, we can define a connection $\nabla^*$ on the manifold $S$ as the pullback of the Levi–Civita connection $\nabla$ on $M$. For vector fields $X, Y \in \Gamma(TS)$ in the tangent bundle, the pullback connection is given by $\nabla^*_X Y = (\pi^* \nabla)_X Y$. The pullback connection exists and is unique (4); therefore $\pi^* \nabla = \nabla^*$ for the pullback metric $h$.

Theorem 1. As above, let $\pi$ be the fitting process and $\sigma$ the sampling process such that
$$\pi \sigma \pi = \pi. \quad (4)$$
Then $S$ becomes a weighted Riemannian manifold through the pullback of $\pi$.

Proof: See Appendix.

Since $(S, h)$ and $(M, g)$ are both Riemannian manifolds and $\pi: S \to M$ is a smooth immersion, for any $x \in S$ there is an open neighbourhood $U$ of $x$ in $S$ such that $\pi(U)$ is a submanifold of $M$ and $\pi|_U: U \to \pi(U)$ is a diffeomorphism. Every point $x \in S$ has a neighbourhood $U$ such that $\pi|_U$ is an isometry of $U$ onto an open subset of $M$, and $\pi$ is a local diffeomorphism. Hence $(S, h)$ is locally isometric to $(M, g)$. In the neighbourhood on which $\pi$ is an isometry, the probability measure $\zeta$ defined on $M$ by eq. 1 can be pulled back to $S$.
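The pullback construction above can be sketched numerically: for the normal family with maximum-likelihood fitting, the Jacobian $J$ of the fitting map gives $h = J^\top g\, J$, where $g$ is the Fisher–Rao metric at the fitted parameters. The sketch works on raw sample coordinates in $\bar S$ rather than on equivalence classes in $S$, purely for simplicity; the normal family and the finite-difference Jacobian are illustrative assumptions.

```python
import math

def mle(sample):
    # Fitting map pi: MLE of (mu, sd) for the normal family.
    n = len(sample)
    mu = sum(sample) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in sample) / n)
    return (mu, sd)

def pullback_metric(sample, eps=1e-6):
    # Numerical pullback h = J^T g J of the Fisher-Rao metric of N(mu, sd)
    # along the fitting map, with J computed by central finite differences.
    n = len(sample)
    mu, sd = mle(sample)
    g = [[1 / sd**2, 0.0], [0.0, 2 / sd**2]]   # Fisher-Rao metric at pi(x)
    J = [[0.0] * n for _ in range(2)]          # Jacobian of pi, shape 2 x n
    for k in range(n):
        xp = list(sample); xp[k] += eps
        xm = list(sample); xm[k] -= eps
        tp, tm = mle(xp), mle(xm)
        J[0][k] = (tp[0] - tm[0]) / (2 * eps)
        J[1][k] = (tp[1] - tm[1]) / (2 * eps)
    h = [[sum(g[a][b] * J[a][i] * J[b][j]
              for a in range(2) for b in range(2))
          for j in range(n)] for i in range(n)]
    return h, J

sample = [0.3, -1.2, 2.0, 0.7, -0.5]
h, J = pullback_metric(sample)
# The mean component of the MLE depends linearly on each point: J[0][k] = 1/n.
```

The resulting $h$ is symmetric and positive semi-definite of rank at most 2, reflecting that many sample perturbations leave the fitted model, and hence the equivalence class in $S$, unchanged.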
The pullback measure of $\zeta$ is a Riemannian measure $\pi^* \zeta$ on $S$ with respect to the metric $h$, given by
$$\pi^* \zeta(f) := \zeta(f \circ \sigma), \quad f \in C^\infty(M).$$
Assuming $M$ is oriented, we can endow $S$ with the pullback orientation via the bundle isomorphism $TS \cong \pi^*(TM)$ over $S$, and therefore $\int_M \zeta = \int_S \pi^* \zeta$.

Proposition 2. Let $M$ be an oriented manifold and $\pi|_U: U \to \pi(U)$ an isometry. For any integrable function $f$ on $\pi(U) \subset M$ we have
$$\int_{\pi(U)} f \, d\zeta = \int_U (\pi^* f) \, d\pi^* \zeta. \quad (5)$$

Proof: See Appendix.

Theorem 1 and Prop. 2 show that, in spite of the apparent differences between $S$ and $M$ (one being a space of observations, the other a statistical manifold), they can both be endowed with the same mathematical structure and become locally equivalent from a geometric point of view.

4 Quantification of Model Risk on the Sample Space

After pulling back the Riemannian structure from $M$ to $S$, we can quantify MR directly on $S$ and introduce sensitivity analysis with respect to different perturbations of the inputs. The model in the previous sections was assumed to be some probability distribution $p \in M$, or the corresponding class of samples $x \in S$ after the pullback. More likely, a practitioner would define the model as some mapping $f: M \to \mathbb{R}$, $p \mapsto f(p)$, i.e. a model outputs some quantity. The associated pullback output function is then $F = \pi^* f$, $F: S \to \mathbb{R}$, with $F$ belonging to a Banach space $(\mathcal{F}, \|\cdot\|)$ with respect to $\pi^* \zeta$. As an example, for the weighted $L^q$ norms $\|F\|_q = (\int_S |F|^q \, d(\pi^* \zeta))^{1/q}$, $1 \le q < \infty$, the weighted Banach space would be $L^q(S, d(\pi^* \zeta)) = \{F : \|F\|_q < \infty\}$.

Theorem 2. Let $M$ be a model manifold with all alternative models relevant for the quantification of MR. Through $\sigma$ satisfying eq. 4 and the projection $\delta: \bar S \to S$, $S$ can be endowed with the Riemannian structure via $\pi^*$. Letting $(\mathcal{F}, \|\cdot\|)$ be a Sobolev space of measurable functions with measure $\pi^* \zeta$, we get an equivalent MR measure $Z$ as in Def. 1 on $S$, i.e. for $F \in \mathcal{F}$, in the neighbourhood $U$ of $x_0$ where $\pi$ is a diffeomorphism.
The measure is given by $Z(F, x_0) = \|F - F(x_0)\|$.

Proof: See Appendix.

The choice of a specific norm depends, among other factors, on the purpose of the quantification. Two interesting examples are the $L^q$ and Sobolev norms (1):

1. $Z^q(F, x_0)$ for $F \in L^q(S, d(\pi^* \zeta))$ is the $L^q$ norm $Z^q(F, x_0) = \|F - F(x_0)\|_q$ defined just before Th. 2. Every choice of norm provides different information about the MR the model induces. For instance, the $L^1$ norm represents the total relative change in the outputs across all relevant sample classes. The $L^\infty$ norm finds the relative worst-case error with respect to $p_0$, pointing to the sources with the largest deviances (using the inverse of the exponential map).

2. $Z^{s,q}(F, x_0)$, for $F \in W^{s,q}(S, d(\pi^* \zeta))$, is of interest when the rate of change is relevant:⁹ $\|F - F(x_0)\|_{s,q} = \big(\sum_{|k| \le s} \int_S |\nabla^k (F - F(x_0))|^q \, d(\pi^* \zeta)\big)^{1/q}$, where $\nabla$ denotes the associated connections on the vector bundles.

9 An example can be a derivatives model used not only for pricing but also for hedging.

5 Further Research

There are many directions for further research, apart from the quantification of MR, both of theoretical and of practical interest. The framework can be applied to sensitivity analysis by using directional and total derivatives. It is suitable for stress testing (regulatory and planning exercises), for validation of approximations throughout the modeling process, for testing model stability, or for application in MR management. The general methodology can be tailored and made more efficient for specific risks and algorithms. We may also enlarge the neighborhood around the model, or adjoin new dimensions to $M$¹⁰ that would consider missing properties, additional information about model limitations, or wrong underlying assumptions, for verification of model robustness and stability. The framework may be extended using data sub-manifolds in the case of hidden variables and/or incomplete data (suggested by one of the reviewers).
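The extension by adjoining new dimensions to $M$ (footnote 10 gives the skew-normal example) can be made concrete: the Azzalini skew-normal density $\frac{2}{\sigma}\varphi(z)\Phi(sz)$, $z = (x-\mu)/\sigma$, reduces exactly to the normal density at $s = 0$, so the original manifold sits inside the enlarged one. A minimal stdlib-only sketch:

```python
import math

def norm_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2).
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def skew_normal_pdf(x, mu, sigma, s):
    # Azzalini skew-normal: (2/sigma) * phi(z) * Phi(s z); s = 0 gives N(mu, sigma).
    z = (x - mu) / sigma
    return (2.0 / sigma) * norm_pdf(z, 0.0, 1.0) * norm_cdf(s * z)

# Embedding M -> M_bar: p(x; mu, sigma) = p(x; mu, sigma, s=0).
```

Moving $s$ away from 0 then measures the MR contribution of the symmetry assumption within the enlarged manifold $\bar M$.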
Acknowledgements: The authors express their gratitude to the reviewers for valuable comments. This research has been funded by the EU in the HORIZON 2020 programme, EID "Wakeupcall" (Grant agreement number 643945, inside MSCA-ITN-2014-EID).

6 Appendix

Proof of Th. 1: $\pi$ is a smooth immersion that satisfies $\pi\sigma\pi = \pi$. From the definition of the pullback, for all $x \in S$ and $p = \pi(x) \in M$, and for all tangent vectors $v_1, v_2 \in T_x S$, $\pi^* g(x)(v_1, v_2) = g(\pi(x))(D_x\pi \cdot v_1, D_x\pi \cdot v_2)$, so that $\pi^* g$ is symmetric and positive semi-definite for any map $\pi$. Thus, for $v \in T_x S$, $v \neq 0$, $\pi^* g$ is a Riemannian metric iff $\pi^* g(x)(v, v) > 0$ iff $D_x\pi \cdot v \neq 0$ (since $g$ is Riemannian) iff $\ker T_x\pi = 0$. In this case, $\pi: (S, \pi^* g) \to (M, g)$ is an isometric immersion. Using $\pi$ we can pull back all the extra structure defined on $M$ required for the quantification of MR, including the weight function and the Banach space of the output functions. □

Proof of Prop. 2: It suffices to prove eq. (5) for functions with compact support. As $\pi(U)$ is endowed with a canonical parametrization, we only consider $f$ with $\mathrm{supp}\, f$ contained in a chart (apply a partition of unity otherwise). Let $\psi$ be a chart on $\pi(U)$ with coordinates $(\theta^1, \dots, \theta^n)$. We can assume that $\phi = \pi^{-1}(\psi)$ is a chart on $S$ with coordinates $(y^1, \dots, y^n)$. By pushing forward $(y^1, \dots, y^n)$ to $\pi(U)$, we can consider $(y^1, \dots, y^n)$ as new coordinates in $\psi$. With this identification of $\phi$ and $\psi$, $\pi^*$ becomes the identity. Hence, eq. 5 amounts to proving that $\pi^* \zeta$ and $\zeta$ coincide in $\psi$. Let $g^\theta_{ij}$ be the components of the metric $g$ in $\psi$ in coordinates $(\theta^1, \dots, \theta^n)$, and $g^y_{kl}$ the components of the metric $g$ in $\psi$ in coordinates $(y^1, \dots, y^n)$. Then $g^y_{kl} = g^\theta_{ij} \, \partial\theta^i/\partial y^k \, \partial\theta^j/\partial y^l$. Let $\tilde h_{kl}$ be the components of the metric $h$ in $\phi$ in the coordinates $(y^1, \dots, y^n)$. Since $h = \pi^* g$, we have $\tilde h_{kl} = (\pi^* g)_{kl} = g^\theta_{ij} \, \partial\theta^i/\partial y^k \, \partial\theta^j/\partial y^l$, hence $\tilde h_{kl} = g^y_{kl}$.
Since the measures $\pi^* \zeta$ and $\zeta$ have the same density function, say $P$, we get $d\zeta = P \sqrt{\det g^y}\, dy^1 \cdots dy^n = P \sqrt{\det \tilde h}\, dy^1 \cdots dy^n = d\pi^* \zeta$, which proves the identity of the measures $\zeta$ and $\pi^* \zeta$. □

10 As an example, consider P&L modeled by a normal distribution $M = N(\mu, \sigma)$. To evaluate the impact of relaxing the assumption of symmetry we may introduce skew, and so embed $M$ into a larger manifold of skew-normal distributions, $\bar M = \{p(x, \mu, \sigma, s) : \mu \in \mathbb{R}, \sigma > 0, s \in \mathbb{R}\}$, where $s$ is the shape parameter. For $s = 0$ we recover the initial normal distribution.

Proof of Th. 2: Recall that $Z$ on $M$ for $p_0$ is $Z(f, p_0) = \|f - f(p_0)\|$, where $f \in (\mathcal{F}, \|\cdot\|)$ is a measurable function belonging to a Banach space with respect to $\zeta$. We want to show that $Z$ is equivalent to $Z(F, x_0) = \|F - F(x_0)\|$ defined on $S$ endowed with the pullback structure and measurable functions $F = \pi^* f$ belonging to a Sobolev space with respect to $\pi^* \zeta$.

The fitting process $\pi$ is a smooth map $\pi: S \to M$, and so provides a pullback of differential forms from $M$ to $S$. Namely, let $D_x\pi$ denote the tangent map of $\pi$ at $x \in S$. The pullback of any tensor $\omega \in \mathcal{T}^k(M)$, where $\mathcal{T}^k(M)$ denotes the set of all $C^\infty$ covariant tensor fields of order $k$ on $M$, is defined at $x \in S$ by $(\pi^* \omega)(x) = (D_x\pi)^*(\omega(x))$. The pullback $\pi^*$ is a map from $\mathcal{T}^k(M)$ to $\mathcal{T}^k(S)$ that respects exterior products and differentiation: $\pi^*(\omega \wedge \eta) = \pi^* \omega \wedge \pi^* \eta$ and $\pi^*(d\omega) = d(\pi^* \omega)$ for $\omega, \eta \in \mathcal{T}(M)$.

Besides being a smooth map, $\pi$ is an immersion, i.e. for each $x \in S$ there is a neighborhood $U$ of $x$ such that $\pi|_U: U \to \pi(U)$ is a diffeomorphism. Since $S$ and $M$ are Riemannian manifolds with respective volume forms $\zeta_S$ and $\zeta$, the tangent map $D_x\pi: T_x S \to T_{\pi(x)} M$ can be represented by an $n \times n$ matrix $\Phi$, independent of the choice of the orthonormal bases. Following (8), the matrix $\Phi$ of a local diffeomorphism $\pi$ has $n$ positive singular values $\alpha_1(x) \ge \cdots \ge \alpha_n(x) > 0$.
Similarly, the inverse map $T_{\pi(x)}(\pi^{-1}): T_{\pi(x)} M \to T_x S$ is represented by the inverse matrix $\Phi^{-1}$, whose singular values are the reciprocals of those of $\Phi$, i.e. $\beta_1(\pi(x)) \ge \cdots \ge \beta_n(\pi(x)) > 0$ with $\beta_i(\pi(x)) = \alpha_{n-i+1}(x)^{-1}$, that is $\beta_i = \alpha_{n-i+1}^{-1} \circ \pi^{-1}$, for $i = 1, \dots, n$. Then the pullback of the volume form on $M$ is given by $\pi^* \zeta = (\det \Phi)\, \zeta_S = (\alpha_1 \cdots \alpha_n)\, \zeta_S$.

Since $\pi$ is a local isometry on $U$, the linear map $D_x\pi: T_x S \to T_{\pi(x)} M$ at each point $x \in U \subset S$ is an orthogonal linear isomorphism, and so $D_x\pi$ is invertible (7). Then the matrix $\Phi$ is orthogonal at every $x \in S$, which implies that the singular values are $\alpha_1 = \cdots = \alpha_n = 1$. So $\pi$ preserves the volume, i.e. $\zeta_S = \pi^* \zeta$, and the orientation on a neighbourhood $U$ around each point, through the bundle isomorphism $TS \cong \pi^*(TM)$.

In (8), the author provides a general inequality for the $L^q$ norm of the pullback of an arbitrary $k$-form on Riemannian manifolds. Given $q, r \in [1, \infty]$ such that $1/q + 1/r = 1$, and some $k = 0, \dots, n$, suppose that the product $(\alpha_1 \cdots \alpha_{n-k})^{1/r} (\alpha_{n-k+1} \cdots \alpha_n)^{-1/q}$ is uniformly bounded on $S$. Then, for any smooth $k$-form $\omega \in L^q \Lambda^k(S)$,
$$\big\| (\alpha_1 \cdots \alpha_k)^{1/r} (\alpha_{k+1} \cdots \alpha_n)^{-1/q} \big\|_\infty^{-1} \|\omega\|_q \le \|\pi_* \omega\|_q \le \big\| (\alpha_1 \cdots \alpha_{n-k})^{1/r} (\alpha_{n-k+1} \cdots \alpha_n)^{-1/q} \big\|_\infty \|\omega\|_q.$$
Similarly, for any $\eta \in L^q \Lambda^k(M)$,
$$\big\| (\beta_1 \cdots \beta_k)^{1/r} (\beta_{k+1} \cdots \beta_n)^{-1/q} \big\|_\infty^{-1} \|\eta\|_q \le \|\pi^* \eta\|_q \le \big\| (\beta_1 \cdots \beta_{n-k})^{1/r} (\beta_{n-k+1} \cdots \beta_n)^{-1/q} \big\|_\infty \|\eta\|_q.$$
For an isometry the singular values are $\alpha_1 = \cdots = \alpha_n = 1$, so the inequalities above reduce to $\|\omega\|_q \le \|\pi_* \omega\|_q \le \|\omega\|_q$ and $\|\eta\|_q \le \|\pi^* \eta\|_q \le \|\eta\|_q$ for any $\omega \in L^q \Lambda^k(S)$ and any $\eta \in L^q \Lambda^k(M)$, respectively. This means that $\pi$ preserves the $L^q$ norm for all $q$, and consequently the Sobolev norm, since this norm is a finite sum of $L^q$ norms. □

Bibliography

[1] Adams, R. A.: Sobolev Spaces. Academic Press (1975)
[2] Brigo, D., Liinev, J.: On the distributional distance between the Libor and the swap market models. Preprint (2002)
[3] Csiszár, I., Breuer, T.: An information geometry problem in mathematical finance. In: International Conference on Networked Geometric Science of Information. Springer International Publishing (2015)
[4] Eells, J., Lemaire, L.: Selected Topics in Harmonic Maps. Conference Board of the Mathematical Sciences, American Mathematical Society, Providence, Rhode Island (1983)
[5] Krajčovičová, Z., Pérez Velasco, P.P., Vázquez Cendón, C.: A novel approach to quantification of model risk for practitioners. SSRN: https://ssrn.com/abstract=2939983 (2017)
[6] Morini, M.: Understanding and Managing Model Risk: A Practical Guide for Quants, Traders and Validators. John Wiley & Sons (2011)
[7] Robbin, J.W.: Extrinsic Differential Geometry. Lecture notes, http://www.math.wisc.edu/~robbin/ (2008)
[8] Stern, A.: Lp change of variables inequalities on manifolds. arXiv preprint arXiv:1004.0401 (2010)