On coarse graining of information and its application to pattern recognition

Authors: Ali Ghaderi
Publication: MaxEnt 2014
OAI: oai:www.see.asso.fr:9603:11340




Creative Commons: None (All rights reserved)



<resource  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4/metadata.xsd">
        <identifier identifierType="DOI">10.23723/9603/11340</identifier><creators><creator><creatorName>Ali Ghaderi</creatorName></creator></creators><titles>
            <title>On coarse graining of information and its application to pattern recognition</title></titles>
        <resourceType resourceTypeGeneral="Text">Text</resourceType>
        <dates>
            <date dateType="Created">Sun 31 Aug 2014</date>
            <date dateType="Updated">Mon 2 Oct 2017</date>
            <date dateType="Submitted">Mon 18 Feb 2019</date>
        </dates>
        <alternateIdentifiers>
            <alternateIdentifier alternateIdentifierType="bitstream">684b462d6c02652364cead120ff2fa453e268c0d</alternateIdentifier>
        </alternateIdentifiers>
        <descriptions>
            <description descriptionType="Abstract"></description>
        </descriptions>
</resource>

ON COARSE GRAINING OF INFORMATION AND ITS APPLICATION TO PATTERN RECOGNITION

Ali Ghaderi
Telemark University College
S.Ali.Ghaderi@gmail.com
19.05.2014

Abstract

In pattern recognition one is concerned with finding regularities in data and classifying them into different categories [1]. Objects in the same category are more similar to each other than to those in other categories. However, the notion of category often cannot be precisely defined. In such cases, categories are therefore defined as collections of objects which are likely to share the same properties. One common approach to such problems is based on the so-called finite mixture models [2]. More precisely, suppose that X is a random variable which takes values in a sample space X, and that its distribution is represented by the probability density function p(x|θ). Then

    p(x|θ) = Σ_{j=1}^{k} π_j f_j(x|θ_j),   x ∈ X

where

    Σ_{j=1}^{k} π_j = 1,   π_j ≥ 0

and

    ∫_X f_j(x|θ_j) dx = 1,   f_j(x|θ_j) ≥ 0.

In such a case, one says that X has a finite mixture distribution and that p(x|θ) is a finite mixture density function. The parameters π_j are called the mixing weights and the f_j the component densities of the mixture. In the context of pattern recognition, k is the number of categories and f_j is the density function describing the distribution of the members of category j. Technically, once the f_j are specified, determining (π_j, θ_j) becomes a standard problem in statistical inference. We argue that in order to specify f_j one has to be able to relate the properties of each member of a category to the properties of the category as a whole. We show how in some cases this can be achieved through coarse graining of information within each category, and how it can be used to derive the functional form of f_j. The arguments will be elucidated with examples.

Key Words: Mixture probability, Coarse graining, Maximum Entropy, Pattern Recognition.

References:
[1] C. M. Bishop, Pattern Recognition and Machine Learning, Springer Science+Business Media, 2006.
[2] D. M. Titterington, A. F. M. Smith, U. E. Makov, Statistical Analysis of Finite Mixture Distributions, John Wiley & Sons Ltd., 1985.
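The finite mixture density defined in the abstract can be made concrete with a small numerical sketch. The abstract does not fix a functional form for the component densities f_j; the Gaussian components, the weights, and the parameter values below are illustrative assumptions only:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """A hypothetical component density f_j(x | theta_j), with theta_j = (mu, sigma)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(x, weights, params):
    """p(x | theta) = sum_{j=1}^{k} pi_j * f_j(x | theta_j).

    weights: mixing weights pi_j (must be nonnegative and sum to 1)
    params:  component parameters theta_j = (mu_j, sigma_j)
    """
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    return sum(w * gaussian_pdf(x, mu, sigma)
               for w, (mu, sigma) in zip(weights, params))

# Two illustrative categories (k = 2).
weights = [0.3, 0.7]
params = [(-1.0, 0.5), (2.0, 1.0)]

# Evaluate the mixture density at a point.
p = mixture_pdf(0.0, weights, params)

# Assign x to the category with the largest posterior weight pi_j * f_j(x | theta_j).
x = 1.5
category = max(range(len(weights)),
               key=lambda j: weights[j] * gaussian_pdf(x, *params[j]))
```

Since each f_j integrates to 1 and the π_j sum to 1, the mixture p(x|θ) is itself a valid density; a crude Riemann sum over a wide interval recovers total mass close to 1.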