Introduction

A graphoid is a set of statements of the form "X is irrelevant to Y given that we know Z," where X, Y and Z are sets of variables. The notions of "irrelevance" and "given that we know" may take on different interpretations, including probabilistic, relational and correlational ones, depending on the application. These interpretations share common properties that can be captured by paths in graphs (hence the name "graphoid"). The theory of graphoids characterizes these properties in a finite set of axioms that are common to informational irrelevance and its graphical representations.

History

Pearl and Paz<ref name=pearl:paz85>{{cite web|last1=Pearl|first1=Judea|last2=Paz|first2=Azaria|title=Graphoids: A Graph-Based Logic for Reasoning About Relevance Relations|year=1985}}</ref> coined the term "graphoid" after discovering that a set of axioms that governs conditional independence in probability theory is shared by undirected graphs. Variables are represented as nodes in a graph in such a way that variable sets X and Y are independent conditioned on Z in the distribution whenever node set Z separates X from Y in the graph. Axioms for conditional independence in probability were derived earlier by Dawid<ref>{{cite journal|last1=Dawid|first1=A. Philip|title=Conditional independence in statistical theory|journal=Journal of the Royal Statistical Society, Series B (Methodological)|date=1979|pages=1–31}}</ref> and Spohn<ref>{{cite journal|last1=Spohn|first1=Wolfgang|title=Stochastic independence, causal independence, and shieldability|journal=Journal of Philosophical Logic|date=1980|volume=9|pages=73–99}}</ref>.

The correspondence between dependence and graphs was later extended to directed acyclic graphs<ref>{{cite journal|last1=Pearl|first1=Judea|title=Fusion, propagation and structuring in belief networks|journal=Artificial Intelligence|date=1986|volume=29|issue=3|pages=241–288}}</ref><ref>{{cite journal|last1=Verma|first1=Thomas|last2=Pearl|first2=Judea|title=Causal networks: Semantics and expressiveness|journal=Proceedings of the 4th Workshop on Uncertainty in Artificial Intelligence|date=1988|pages=352–359}}</ref> and to other models of dependency<ref name=geiger:90>{{cite thesis|last1=Geiger|first1=Dan|title=Graphoids: A Qualitative Framework for Probabilistic Inference|date=1990|type=PhD dissertation, Technical Report R-142|publisher=Computer Science Department, University of California, Los Angeles}}</ref>.

Definition

A dependency model M is a set of triplets (X,Z,Y) for which the predicate I(X,Z,Y), read "X is independent of Y given Z," is true. A graphoid is defined as a dependency model that is closed under the following five axioms:

  1. Symmetry: I(X,Z,Y) \Leftrightarrow I(Y,Z,X)
  2. Decomposition: I(X,Z,Y\cup W) \Rightarrow I(X,Z,Y) \& I(X,Z,W)
  3. Weak Union: I(X,Z,Y\cup W) \Rightarrow I(X,Z\cup W,Y)
  4. Contraction: I(X,Z,Y) \& I(X,Z\cup Y,W) \Rightarrow I(X,Z,Y\cup W)
  5. Intersection: I(X,Z\cup W,Y) \& I(X,Z\cup Y,W) \Rightarrow I(X,Z,Y\cup W)

A semi-graphoid is a dependency model closed under axioms (1)–(4). These five axioms together are known as the graphoid axioms<ref>{{cite book|last1=Pearl|first1=Judea|title=Probabilistic reasoning in intelligent systems: networks of plausible inference|date=1988|publisher=Morgan Kaufmann}}</ref>. Intuitively, the weak union and contraction properties mean that irrelevant information should not alter the relevance of other propositions in the system: what was relevant remains relevant, and what was irrelevant remains irrelevant.
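As a minimal sketch (assuming variables are named by strings, and a dependency model is stored as a set of triples whose components are frozensets), closure under the semi-graphoid axioms (1)–(4) can be verified mechanically:

```python
from itertools import combinations

def subsets(s):
    """All subsets of frozenset s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_semigraphoid(model):
    """Check that a dependency model -- a set of (X, Z, Y) triples with
    frozenset components -- is closed under axioms (1)-(4)."""
    model = set(model)
    for (x, z, y) in list(model):
        if (y, z, x) not in model:                  # 1. symmetry
            return False
        for w in subsets(y):                        # split Y into Y' and W
            yp = y - w
            if yp and w:
                if (x, z, yp) not in model:         # 2. decomposition
                    return False
                if (x, z | w, yp) not in model:     # 3. weak union
                    return False
    for (x, z, y) in model:                         # 4. contraction
        for (x2, z2, w) in model:
            if x2 == x and z2 == z | y:
                if (x, z, y | w) not in model:
                    return False
    return True
```

For example, the two-variable model asserting only that a and b are marginally independent (in both directions, as symmetry requires) passes, while dropping the symmetric triple makes it fail.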

Types of Graphoids

==Probabilistic Graphoid<ref name=pearl:paz85/><ref name=geiger:90/>==

Conditional independence, defined as

I(X,Z,Y)~\mbox{iff}~P(X|Y,Z) = P(X|Z)~\mbox{whenever}~P(Y,Z)>0

is a semi-graphoid which becomes a full graphoid when P is strictly positive.
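The definition above can be tested numerically for a discrete distribution. A small sketch (assuming the joint is given as a dict from full assignment tuples to probabilities, and using the equivalent product form P(x,y,z)·P(z) = P(x,z)·P(y,z), which avoids dividing by conditioning probabilities):

```python
from collections import defaultdict

def conditionally_independent(joint, names, X, Y, Z, tol=1e-9):
    """Check I(X, Z, Y) for a discrete distribution. `joint` maps full
    assignments (tuples ordered as `names`) to probabilities; X, Y, Z
    are lists of variable names."""
    idx = {v: i for i, v in enumerate(names)}

    def marginal(vs):
        m = defaultdict(float)
        for a, p in joint.items():
            m[tuple(a[idx[v]] for v in vs)] += p
        return m

    pxyz = marginal(X + Y + Z)
    pxz, pyz, pz = marginal(X + Z), marginal(Y + Z), marginal(Z)
    for a, p in pxyz.items():
        x = a[:len(X)]
        y = a[len(X):len(X) + len(Y)]
        z = a[len(X) + len(Y):]
        # Product form of P(X,Y|Z) = P(X|Z) P(Y|Z):
        if abs(p * pz[z] - pxz[x + z] * pyz[y + z]) > tol:
            return False
    return True
```

The classic XOR distribution (A, B fair and independent, C = A xor B) illustrates why conditional independence is not monotone: A and B are marginally independent but become dependent given C.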

==Correlational Graphoid<ref name=pearl:paz85/><ref name=geiger:90/>==

A dependency model is a correlational graphoid if for some probability function we have

I_c(X,Z,Y) \Leftrightarrow \rho_{xy.Z}=0~\mbox{for every}~x \in X~\mbox{and}~y \in Y

where \rho_{xy.Z} is the partial correlation between x and y given the set Z.

In other words, the linear estimation error of the variables in X using measurements on Z would not be reduced by adding measurements of the variables in Y, thus making Y irrelevant to the estimation of X. Correlational and probabilistic dependency models coincide for normal distributions.
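For a single conditioning variable z, the partial correlation has a closed form in the pairwise correlations, which makes the vanishing condition easy to check. A sketch (the chain values in the usage note are illustrative, not from the source):

```python
import math

def partial_correlation(r_xy, r_xz, r_yz):
    """First-order partial correlation rho_{xy.z}: the correlation left
    between x and y after linearly removing the effect of a single
    conditioning variable z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1.0 - r_xz**2) * (1.0 - r_yz**2))
```

For instance, in a Gaussian chain x → z → y one has r_xy = r_xz · r_yz, so the numerator vanishes and I_c(X,Z,Y) holds; if r_xy exceeds that product, the partial correlation stays positive.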

==Relational Graphoid<ref name=pearl:paz85/><ref name=geiger:90/>==

A dependency model is a relational graphoid if it satisfies

P(X,Z)>0 \& P(Y,Z)>0 \implies P(X,Y,Z)>0.

In words, the range of values permitted for X is not restricted by the choice of Y, once Z is fixed. Independence statements belonging to this model are similar to embedded multi-valued dependencies (EMVDs) in databases.
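The "range not restricted" reading suggests a direct test on a finite relation: for each fixed Z-value, the X-values and Y-values seen with it must combine freely. A sketch (assuming rows are dicts from column names to values):

```python
def relationally_independent(rows, X, Y, Z):
    """Check I(X, Z, Y) in a relation: for each fixed Z-value, every
    X-value that appears with it must co-occur with every Y-value that
    appears with it. `rows` is a list of dicts; X, Y, Z are column lists."""
    proj = lambda r, cols: tuple(r[c] for c in cols)
    seen = {proj(r, X + Y + Z) for r in rows}
    xs, ys = {}, {}
    for r in rows:
        z = proj(r, Z)
        xs.setdefault(z, set()).add(proj(r, X))
        ys.setdefault(z, set()).add(proj(r, Y))
    # Independence holds iff the relation contains the full X x Y
    # product for every Z-value.
    return all(x + y + z in seen
               for z in xs for x in xs[z] for y in ys[z])
```

Deleting any single row from a full product relation breaks the condition, since some permitted X-value then excludes some Y-value.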

==Graph-Induced Graphoid==

If there exists an undirected graph G such that

I(X,Z,Y) \Leftrightarrow \langle X,Z,Y\rangle_G,

where \langle X,Z,Y\rangle_G means that node set Z separates X from Y in G, then the graphoid is called graph-induced.

In other words, there exists an undirected graph G such that every independence statement in M is reflected as a vertex separation in G and vice versa. A necessary and sufficient condition for a dependency model to be a graph-induced graphoid is that it satisfies the following axioms: symmetry, decomposition, intersection, strong union and transitivity.

Strong union states that,

I(X,Z,Y)\implies I(X,Z\cup W,Y)

Transitivity states that

I(X,Z,Y) \implies I(X,Z,\gamma) ~\mbox{or}~I(\gamma, Z,Y)~~\forall~~\gamma \notin X \cup Y \cup Z.
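The separation relation \langle X,Z,Y\rangle_G itself is just graph reachability after deleting Z, so it can be sketched as a breadth-first search (assuming the graph is an adjacency dict from nodes to neighbor lists):

```python
from collections import deque

def separates(adj, X, Z, Y):
    """True iff removing node set Z leaves no path from any node in X
    to any node in Y in the undirected graph `adj`."""
    blocked = set(Z)
    start = set(X) - blocked
    seen, queue = set(start), deque(start)
    while queue:
        u = queue.popleft()
        if u in Y:
            return False                # some X-Y path survives
        for v in adj.get(u, ()):
            if v not in blocked and v not in seen:
                seen.add(v)
                queue.append(v)
    return True
```

Note that enlarging the separating set can only remove paths, never create them, which is exactly the strong union axiom in graphical form.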

A graphoid is termed DAG-induced if there exists a directed acyclic graph D such that I(X,Z,Y) \Leftrightarrow \langle X,Z,Y\rangle_D, where \langle X,Z,Y\rangle_D stands for d-separation in D. d-separation (the "d" connotes "directional") extends the notion of vertex separation from undirected graphs to directed acyclic graphs. It permits the reading of conditional independencies from the structure of Bayesian networks.
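One standard way to test d-separation reduces it to ordinary vertex separation: X and Y are d-separated by Z in a DAG iff Z separates X from Y in the moralized graph of the ancestral subgraph of X ∪ Y ∪ Z. A sketch of that reduction (assuming the DAG is given as a dict from each node to its parent list):

```python
from collections import deque

def d_separated(parents, X, Z, Y):
    """d-separation via the ancestral-moral-graph reduction.
    `parents` maps node -> list of parents; X, Z, Y are node sets."""
    # 1. Ancestors of X, Y and Z (including those sets themselves).
    anc, stack = set(), list(X | Y | Z)
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(parents.get(n, ()))
    # 2. Moralize: link each node to its parents and "marry" co-parents.
    adj = {n: set() for n in anc}
    for n in anc:
        ps = list(parents.get(n, ()))
        for p in ps:
            adj[n].add(p)
            adj[p].add(n)
        for i, p in enumerate(ps):
            for q in ps[i + 1:]:
                adj[p].add(q)
                adj[q].add(p)
    # 3. Ordinary vertex separation in the moral graph.
    blocked = set(Z)
    start = set(X) - blocked
    seen, queue = set(start), deque(start)
    while queue:
        u = queue.popleft()
        if u in Y:
            return False
        for v in adj[u]:
            if v not in blocked and v not in seen:
                seen.add(v)
                queue.append(v)
    return True
```

The collider x → c ← y shows the directional behavior: x and y are d-separated by the empty set, but conditioning on the common child c connects them, because moralization then marries the two parents.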

References