Graphical Models

Zoubin Ghahramani
Department of Engineering
University of Cambridge, UK
zoubin@eng.cam.ac.uk
http://learning.eng.cam.ac.uk/zoubin/

MLSS 2012, La Palma
Representing knowledge through graphical models

[Figure: an example graph with nodes A, B, C, D, E connected by edges]
   • Nodes correspond to random variables
   • Edges represent statistical dependencies between the variables
                       Why do we need graphical models?
     • Graphs are an intuitive way of representing and visualising the relationships
       between many variables.     (Examples: family trees, electric circuit diagrams,
       neural networks)
     • A graph allows us to abstract out the conditional independence relationships
       between the variables from the details of their parametric forms. Thus we can
       answer questions like: “Is A dependent on B given that we know the value of
       C?” just by looking at the graph.
     • Graphical models allow us to define general message-passing algorithms that
       implement probabilistic inference efficiently.   Thus we can answer queries like
       “What is p(A|C = c)?” without enumerating all settings of all variables in the
       model.
             Graphical models = statistics × graph theory × computer science.
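The kind of query in the last bullet can be sketched with a toy model. Everything below is hypothetical (a two-node DAG A → C with made-up binary tables): it answers p(A | C = c) by brute-force enumeration, which is exactly the exponential-cost approach that message-passing algorithms let us avoid in larger graphs.

```python
# Hypothetical two-node DAG A -> C with binary variables and made-up tables.
pA = {0: 0.6, 1: 0.4}                    # p(A)
pC_given_A = {0: {0: 0.9, 1: 0.1},       # p(C | A=0)
              1: {0: 0.3, 1: 0.7}}       # p(C | A=1)

def posterior_A_given_C(c):
    # Brute force: score every setting of A against the observation C = c.
    # unnormalised weight: p(A = a, C = c) = p(A = a) p(C = c | A = a)
    unnorm = {a: pA[a] * pC_given_A[a][c] for a in (0, 1)}
    Z = sum(unnorm.values())             # Z = p(C = c)
    return {a: w / Z for a, w in unnorm.items()}

post = posterior_A_given_C(1)            # p(A | C = 1)
```

With n variables this enumeration visits every joint setting; message passing exploits the graph structure so that the same posterior is computed from local computations instead.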
Directed Acyclic Graphical Models (Bayesian Networks)

[Figure: the same example DAG with nodes A, B, C, D, E]

A DAG model / Bayesian network¹ corresponds to a factorization of the joint
probability distribution:

    p(A, B, C, D, E) = p(A) p(B) p(C|A, B) p(D|B, C) p(E|C, D)
In general:

    p(X_1, ..., X_n) = ∏_{i=1}^{n} p(X_i | X_{pa(i)})

where pa(i) denotes the set of parents of node i.
¹ “Bayesian networks” can be, and often are, learned using non-Bayesian (i.e. frequentist) methods; Bayesian networks (i.e. DAGs) do not require parameter or structure learning by Bayesian methods. They are also called “belief networks”.