---
title: Graphical Interaction Models
output:
  bookdown::html_document2:
    base_format: rmarkdown::html_vignette
    toc: true
    number_sections: true
vignette: >
  %\VignetteIndexEntry{Graphical Interaction Models}
  %\VignetteKeyword{Graphical Models}
  %\VignetteKeyword{Hierarchical log-linear models}
  %\VignetteKeyword{Graphical Gaussian models} 
  %\VignettePackage{gRim}
  %\VignetteEngine{knitr::knitr}
  %\VignetteEncoding{UTF-8}
---



# Graphical Interaction Models and the `gRim` package

## Introduction {#intro}

```{r echo=F}
options("width"=85)
library(gRim)
ps.options(family="serif")
``` 


The `gRim` package is an R package for **gRaphical interaction
  models** (hence the name).  `gRim` implements 1) graphical
log--linear models for discrete data, that is, for contingency tables,
2) Gaussian graphical models for continuous data (multivariate
normal data) and 3) mixed homogeneous interaction models for mixed
data (data consisting of both discrete and continuous variables).

## Introductory examples {#sec:introex}

The main functions for creating models of the various types are:

-  Discrete data: The `dmod()` function creates a hierarchical
   log--linear model.
-  Continuous data: The `cmod()` function creates a Gaussian
   graphical model.
-  Mixed data: The `mmod()` function creates a mixed
  interaction model.

The arguments to the model functions are:

```{r }
args(dmod)
args(cmod)
args(mmod)
``` 


The model objects created by these functions are of the respective
classes `dModel`, `cModel` and `mModel` and they are also of the class
`iModel`.  We focus the presentation on models for discrete data, but
most of the topics we discuss apply to all types of models.

### A Discrete Model 


The `reinis` data from the `gRbase` package is a $2^6$ contingency table.

```{r }
data(reinis)
str(reinis)
``` 

Models are specified as generating classes. A generating class can be
a list or a right--hand--sided formula. In addition, various model
specification shortcuts are available.  
<!-- Some of these are described in -->
<!-- Section~\@ref(sec:shortcut). -->
The following two
specifications of a log--linear model are equivalent:

```{r print=F}
data(reinis)
dm1 <- dmod(list(c("smoke", "systol"), c("smoke", "mental", "phys")), data=reinis)
dm1 <- dmod(~smoke:systol + smoke:mental:phys, data=reinis)
dm1 
``` 
 
The output reads as follows: `-2logL` is minus twice the maximized
log--likelihood and `mdim` is the number of parameters in the model
(no adjustments have been made for sparsity of data).
The `ideviance` and `idf` give the deviance and degrees of
freedom between the model and the independence model for the same
variables, while `deviance` and `df` give the deviance and degrees of
freedom between the model and the saturated model for the same
variables.
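
As a small check on these quantities, the deviance can be recovered as
twice the log--likelihood difference to the saturated model on the same
variables. A minimal sketch, using the `fitinfo` slot of the model
objects (described under *The Model Object* below):

```{r}
## Sketch: the deviance of dm1 equals twice the log-likelihood gap
## to the saturated model on the same four variables.
dm.sat4 <- dmod(~.^., marginal=c("smoke", "systol", "mental", "phys"), data=reinis)
2 * (dm.sat4$fitinfo$logL - dm1$fitinfo$logL)
```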

<!-- Section~\@ref(sec:intomodel) describes model objects in more -->
<!-- detail. Here we just notice that the generating class of the model is -->
<!-- contained in the slot `glist`: -->

Notice that the generating class does not appear directly but can be
retrieved using `formula()` and `terms()`:

```{r }
formula(dm1)
terms(dm1)
``` 

<!-- A summary of a model is provided by the `summary()` function: -->


<!-- ```{r } -->
<!-- summary(dm1) ## To be implemented -->
<!-- ```  -->



### Model specification shortcuts {#sec:shortcut}

Below we illustrate various other ways of specifying log--linear
models.

-  A saturated model can be specified using `~.^.` whereas
   `~.^2` specifies the model with all two--factor
   interactions. Using `~.^1` specifies the independence model.
-  If we want, say, at most two--factor interactions in the
   model we can use the `interactions` argument.
-  Attention can be restricted to a subset of the variables
   using the `marginal` argument.
-  Variable names can be abbreviated.

The following models illustrate these shortcuts:


```{r print=F}
dm2 <- dmod(~.^2, marginal=c("smo","men","phy","sys"),
            data=reinis)
formula(dm2)
``` 


```{r print=F}
dm3 <- dmod(list(c("smoke", "systol"), c("smoke", "mental", "phys")),
            data=reinis, interactions=2)
formula(dm3)
``` 
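
The remaining shortcuts work similarly; for instance, the independence
model restricted to three variables, again with abbreviated names:

```{r print=F}
dm4 <- dmod(~.^1, marginal=c("smo", "men", "phy"), data=reinis)
formula(dm4)
```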

### Plotting models 

<!-- There are two methods for plotting the dependence graph of a model: -->
<!-- Using `iplot()` and `plot()`. The convention for both -->
<!-- methods is that discrete variables are drawn as grey dots and -->
<!-- continuous variables as white dots.  1) `iplot()` creates an -->
<!-- `igraph} object and plots this. 2) 2) \comi{plot()` creates a -->
<!-- `graphNEL` object and plots this. -->


```{r fig=T}
plot(dm1)
``` 





### A Continuous Model 

For Gaussian models there are at most second order interactions. Hence
we may specify the saturated model in different ways:


```{r }
data(carcass)
cm1 <- cmod(~Fat11:Fat12:Fat13, data=carcass)
cm1 <- cmod(~Fat11:Fat12 + Fat12:Fat13 + Fat11:Fat13, data=carcass)
cm1
``` 

<!-- FIXME: Harmonize cmod() output with that of dmod(). -->


```{r fig=T}
plot(cm1)
``` 


### A Mixed Model


```{r }
data(milkcomp1)
mm1 <- mmod(~.^., data=milkcomp1)
mm1
``` 


```{r fig=T}
plot(mm1) ## FIXME: should use different colours for disc and cont variables.
``` 




## Model editing - `update()`

The `update()` function enables `dModel` objects to be modified by the addition
or deletion of interaction terms or edges, using the arguments `aterm`, `dterm`,
`aedge` or `dedge`. Some examples follow:


```{r }
###  Set a marginal saturated model:
ms <- dmod(~.^., marginal=c("phys","mental","systol","family"), data=reinis)
formula(ms)
###   Delete one edge:
ms1 <- update(ms, list(dedge=~phys:mental))
formula(ms1)
###   Delete two edges:
ms2<- update(ms, list(dedge=~phys:mental+systol:family))
formula(ms2)
###   Delete all edges in a set:
ms3 <- update(ms, list(dedge=~phys:mental:systol))
formula(ms3)
### Delete an interaction term
ms4 <- update(ms, list(dterm=~phys:mental:systol) )
formula(ms4)
``` 





```{r }
###  Set a marginal independence model:
m0 <- dmod(~.^1, marginal=c("phys","mental","systol","family"), data=reinis)
formula(m0)

###  Add three interaction terms:
ms5 <- update(m0, list(aterm=~phys:mental+phys:systol+mental:systol) )
formula(ms5)
###  Add two edges:
ms6 <- update(m0, list(aedge=~phys:mental+systol:family))
formula(ms6)
``` 



A brief explanation of these operations may be helpful. To obtain a hierarchical model when
we delete a term from a model, we must also delete any higher-order relatives of the term.
Similarly, when we add an interaction term we must also add all lower-order relatives that
were not already present. Deletion of an edge is equivalent to deleting the corresponding
two-factor term. Let $m-e$ be the result of deleting edge $e$ from a model $m$. Then the
result of adding $e$ is defined as the maximal model $m^*$ for which $m^*-e=m$.
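
To see this in action, delete the edge `phys:mental` from the saturated
model `ms` and add it back: by the definition above, the result is again
the saturated model.

```{r}
## Adding the deleted edge back to ms1 = ms - e recovers the maximal
## model m* with m* - e = ms1, i.e. the saturated model ms.
ms.add <- update(ms1, list(aedge=~phys:mental))
formula(ms.add)
```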



## Testing for conditional independence - `ciTest()`

Tests of general conditional independence hypotheses of the form $u
\perp v | W$ can be performed using the `ciTest()` function.


```{r print=T}
cit <- ciTest(reinis, set=c("systol", "smoke", "family", "phys"))
cit 
``` 

The general syntax of the `set` argument is of the form $(u,v,W)$
where $u$ and $v$ are variables and $W$ is a set of variables.
The `set` argument can also be given as a right--hand sided formula.
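
For example, the test above can equivalently be specified as:

```{r}
## Equivalent specification of the set as a right-hand sided formula:
ciTest(reinis, set=~systol + smoke + family + phys)
```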

<!-- Notice that in this case the results are identical to those of -->
<!-- \comics{testdelete()}{\grim}, since we have specified the correct -->
<!-- conditioning set. If we had conditioned on more variables -->

<!-- ```{r print=T} -->
<!-- cit2 <- ciTest(mildew, set=c("locc","a367","mp58","c365","p53a","la10")) -->
<!-- ```  -->

In model terms, the test performed by `ciTest()` corresponds to
the test for removing the edge $\{ u, v \}$ from the saturated model
with variables $\{u, v\} \cup W$.  If we (conceptually) form a factor
$S$ by crossing the factors in $W$, we see that the test can be
formulated as a test of the conditional independence $u \perp v | S$
in a three--way table. The deviance decomposes into independent
contributions from each stratum:

\begin{eqnarray*}
\nonumber
 D & =& 2 \sum_{ijs} n_{ijs}\log \frac{n_{ijs}}{\hat m_{ijs}} \\
   &= & \sum_s 2 \sum_{ij} n_{ijs}\log \frac{n_{ijs}}{\hat m_{ijs}}= \sum_s D_s
\end{eqnarray*}

where the contribution $D_s$ from the $s$th slice is the deviance for
the independence model of $u$ and $v$ in that slice. For example,


```{r }
cit$slice
``` 
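
The total deviance should thus equal the sum of the slice contributions;
a quick check, assuming the slice table has a `statistic` column as
printed above:

```{r}
## D = sum_s D_s: compare the summed slice statistics with the
## overall test statistic.
sum(cit$slice$statistic)
cit$statistic
```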


The $s$th slice is a $|u|\times|v|$ table $\{n_{ijs}\}_{i=1\dots |u|,
  j=1 \dots |v|}$. The number of degrees of freedom corresponding to
the test for independence in this slice is
\begin{displaymath}
df_s=(\#\{i: n_{i\cdot  s}>0\}-1)(\#\{j: n_{\cdot js}>0\}-1)
\end{displaymath}
where $n_{i\cdot s}$ and
$n_{\cdot js}$ are the marginal totals.
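
This adjustment can be mimicked directly for a single slice, here using
`tableSlice()` and `tableMargin()` from `gRbase` (a sketch for
illustration; the choice of slice is an arbitrary assumption):

```{r}
## Sketch: df for one slice, computed from its margins as in the
## formula above (u = systol, v = smoke; W = (family, phys) fixed
## at their first levels).
n.s <- tableSlice(reinis, margin=c("family", "phys"), level=c(1, 1))
n.s <- tableMargin(n.s, c("systol", "smoke"))
(sum(rowSums(n.s) > 0) - 1) * (sum(colSums(n.s) > 0) - 1)
```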

So the correct number of degrees of freedom for the test in the
present example is $3$, as calculated by `ciTest()` and
`testdelete()`.

An alternative to the asymptotic $\chi^2$ test is to determine the
reference distribution using Monte Carlo methods. The marginal totals
are sufficient statistics under the null hypothesis, and in a
conditional test the test statistic is evaluated in the conditional
distribution given the sufficient statistics. Hence one can generate
all possible tables with those given margins, calculate the desired
test statistic for each of these tables and then see how extreme the
observed test statistic is relative to those of the calculated
tables. A Monte Carlo approximation to this procedure is to randomly
generate a large number of tables with the given margins, evaluate the
statistic for each simulated table and then see how extreme the
observed test statistic is in this distribution.  This is called a
*Monte Carlo exact test* and it provides a *Monte Carlo
$p$--value*:


```{r }
ciTest(reinis, set=c("systol","smoke","family","phys"), method='MC')
``` 

## Fundamental methods for inference {#sec:fundamental}

This section describes some fundamental methods for inference in
`gRim`. As a basis for the description, consider the following model shown
in Fig. \@ref(fig:fundamentalfig1):

```{r print=T}
dm5 <- dmod(~ment:phys:systol + ment:systol:family + phys:systol:smoke,
            data=reinis)
```

```{r fundamentalfig1,fig.cap="Model for reinis data.", echo=F}
plot(dm5)
``` 

<!-- \begin{figure}[h] -->
<!--   \centering -->
<!--   \includegraphics[]{figures/GRIM-fundamentalfig1} -->
<!--   \caption{A decomposable graphical model for the \reinis\ data.} -->
<!--   {#fig:fundamentalfig1} -->
<!-- \end{figure} -->


### Testing for addition and deletion of edges 

Let $\cal M_0$ be a model and let $e=\{u,v\}$ be an edge in $\cal M_0$.
The candidate model  formed by deleting $e$ from $\cal M_0$ is $\cal M_1$.
The `testdelete()` function can be used to test for deletion of
an edge from a model:


```{r }
testdelete(dm5, ~smoke:systol)
testdelete(dm5, ~family:systol)
``` 


In the first case the $p$--value suggests that the edge cannot be
deleted. In the second case the $p$--value suggests that the edge can
be deleted. The reported AIC
value is the difference in AIC between the candidate model and the
original model. A negative value of AIC suggests that the candidate
model is to be preferred.


Next, let $\cal M_0$ be a model and let $e=\{u,v\}$ be an edge not in
$\cal M_0$. The candidate model  formed by adding $e$ to $\cal M_0$ is
denoted $\cal M_1$.
The `testadd()` function can be used to test for addition of
an edge to a model:


```{r }
testadd(dm5, ~smoke:mental)
``` 

The $p$--value suggests that no significant improvement of the model
is obtained by adding the edge. The reported AIC value is the
difference in AIC between the candidate model and the original
model. A negative value of AIC would have suggested that the candidate
model is to be preferred.

<!-- FIXME: A function for testing addition / deletion of more general terms is needed. -->



### Finding edges 

The `getInEdges()` function will return a list of all the edges
in the dependency graph $\cal G$ defined by the model. If we set
`type='decomposable'` then the edges returned are as follows: An
edge $e=\{u,v\}$ is returned if $\cal G$ minus the edge $e$ is
decomposable. In connection with model selection this is convenient
because it is thereby possible to restrict the search to decomposable
models.


```{r print=T}
ed.in <- getInEdges(ugList(terms(dm5)), type="decomposable")
``` 

The `getOutEdges()` function will return a list of all the edges
which are not in the dependency graph $\cal G$ defined by the model. If we set
`type='decomposable'` then the edges returned are as follows: An
edge $e=\{u,v\}$ is returned if $\cal G$ plus the edge $e$ is
decomposable. In connection with model selection this is convenient
because it is thereby possible to restrict the search to decomposable
models.


```{r print=T}
ed.out <- getOutEdges(ugList(terms(dm5)), type="decomposable")
``` 


### Testing several edges {#sec:labeledges}


```{r }
args(testInEdges)
args(testOutEdges)
``` 



The functions `testInEdges()` and `testOutEdges()` will
test for deletion of edges and addition of edges, respectively. The default is to
use AIC for evaluating each edge. It is possible to set the penalty parameter for AIC to
a value other than 2, and it is possible to base the evaluation on
significance tests instead of AIC. Setting `headlong=TRUE` causes
the function to exit once an improvement is found.
For example:


```{r }
testInEdges(dm5, getInEdges(ugList(terms(dm5)), type="decomposable"),
             k=log(sum(reinis)))
``` 






## Stepwise model selection {#sec:stepwise}

Two functions are currently available for model selection:
`backward()` and `forward()`. These functions employ the
functions described in Section \@ref(sec:labeledges).


### Backward search 

For example, we start with the saturated model and do a backward search.

```{r fig=T}
dm.sat <- dmod(~.^., data=reinis)
dm.back <- backward(dm.sat)
plot(dm.back)
``` 

```{r}
data(carcassall)
cm.sat <- cmod(~.^., data=carcassall[, 1:15])
cm.back <- backward(cm.sat, k=log(nrow(carcassall)), type="unrestricted")
plot(cm.back)
```

The default is to search among decomposable models if the initial model is
decomposable. The default is also to label all edges (with AIC values);
however, setting `search='headlong'` will cause the labelling to
stop once an improvement has been found.
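
For example, a headlong backward search can be sketched as follows
(using the `search` argument just described):

```{r}
## Headlong search: labelling stops as soon as an improving edge
## is found.
dm.back2 <- backward(dm.sat, search="headlong")
```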

### Forward search 

Forward search works similarly; for example we start from the
independence model:


```{r fig=T}
dm.i   <- dmod(~.^1, data=reinis)
dm.forw <- forward(dm.i)
plot(dm.forw)
``` 

### Backward and forward search 

The `stepwise()` function will perform a stepwise model
selection. Start from the saturated model:


```{r }
dm.s2<-stepwise(dm.sat, details=1)
``` 

The default selection criterion is AIC (as opposed to significance
tests); the default penalty parameter in AIC is $2$ (which gives
genuine AIC). The default search direction is backward (as opposed to
forward). The default is to restrict the search to decomposable models if
the starting model is decomposable (as opposed to an unrestricted
search). The default is not to do a headlong search, which means that all
edges are tested and the best edge is chosen for deletion. A headlong
search, on the other hand, means that once a deletable edge is
encountered, that edge is deleted.
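
Spelled out, the defaults correspond to a call like the following
(a sketch with assumed argument names; not evaluated here):

```{r eval=F}
## Equivalent to stepwise(dm.sat): AIC criterion with penalty k=2,
## backward direction (argument names are assumptions).
dm.s2b <- stepwise(dm.sat, criterion="aic", direction="backward", k=2, details=1)
```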


Likewise, we may do a forward search starting from the independence model:


```{r }
dm.i2<-stepwise(dm.i, direction="forward", details=1)
``` 



```{r stepwise01, fig.cap="Models for the reinis data obtained by backward (left) and forward (right) stepwise model selection."}
par(mfrow=c(1,2))
dm.s2
dm.i2
plot(dm.s2)
plot(dm.i2)
```


Stepwise model selection is in practice only feasible for moderately sized
problems.




### Fixing edges/terms in model as part of model selection


The stepwise model selection can be controlled by fixing specific
edges. For example, we can specify edges which are not to be considered
in a backward selection:


```{r }
fix <- list(c("smoke","phys","systol"), c("systol","protein"))
fix <- do.call(rbind, unlist(lapply(fix, names2pairs),recursive=FALSE))
fix
dm.s3 <- backward(dm.sat, fixin=fix, details=1)
``` 

There is an important detail here: The matrix `fix` specifies a
set of edges. Submitting these in a call to `backward()` does
not mean that these edges are forced to be in the model. It means that
those edges in `fixin` which are in the model will not be removed.

Likewise in forward selection:


```{r }
dm.i3 <- forward(dm.i, fixout=fix, details=1)
``` 

Edges in `fix` will not be added to the model but if they are in
the starting model already, they will remain in the final model.


```{r stepwise02, fig.cap="Models for the reinis data obtained by backward (left) and forward (right) stepwise model selection when certain edges are restricted in the selection procedure."}
par(mfrow=c(1,2))
dm.s3
dm.i3
plot(dm.s3)
plot(dm.i3)
```






## Further topics on models for contingency tables 

### Sparse Contingency Tables


```{r fig=T}
data(mildew)
dm1 <- dmod(~.^., data=mildew)
dm1
dm2 <- stepwise(dm1)
dm2
plot(dm2)
```



### Dimension of a log--linear model {#sec:dimloglin}

The `dim_loglin()` function computes the dimension
of a log--linear model. It works on the generating class of a model
represented as a list. For a decomposable model
it is possible to calculate
an adjusted dimension which accounts for sparsity of data with `dim_loglin_decomp()`:
 

```{r}
ff <- ~la10:locc:mp58:c365+mp58:c365:p53a:a367
mm <- dmod(ff, data=mildew)
plot(mm)
```




```{r}
dim_loglin(terms(mm), mildew)
dim_loglin_decomp(terms(mm), mildew)
```


### A space--efficient implementation of IPS for contingency tables


<!-- \footnote{effloglin() is not the best name. Perhaps loglineff() is a bit -->
<!--   better. Should start with loglin...} -->

The IPS algorithm for hierarchical log--linear models is *inefficient* in the sense that it requires
the entire table to be fitted. For example, if there are $81$ variables
each with $10$ levels, then a table with $10^{81}$ cells will need to be created.
(Incidentally, $10^{81}$ is one of the figures reported as the number of
atoms in the universe. It is a large number!)

Consider a hierarchical log--linear model with generating class $\cal A
= \{a_1, \dots, a_M\}$ over a set of variables $\Delta$.  The
Iterative Proportional Scaling (IPS) algorithm (as described e.g.\ in
@lauritzen:96, p.\ 83) is a commonly used method for fitting
such models. The updating steps are of the form

\begin{equation}
  p(i) \leftarrow p(i)\frac{n(i_{a_k})/n}{p(i_{a_k})} \mbox{ for } k=1,\dots,M.
\end{equation}

The IPS algorithm is implemented in the `loglin()` function.
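
A single updating step can be mimicked with the table operations from
`gRbase` used later in this vignette. A minimal sketch, starting from a
uniform table and updating for the generator $a=\{\mbox{smoke},
\mbox{systol}\}$:

```{r}
## One IPS update: p(i) <- p(i) * (n(i_a)/n) / p(i_a).
p   <- reinis; p[] <- 1 / length(reinis)   # uniform starting table
n.a <- tableMargin(reinis, c("smoke", "systol")) / sum(reinis)
p.a <- tableMargin(p, c("smoke", "systol"))
p   <- tableOp(p, n.a / p.a, op="*")
tableMargin(p, c("smoke", "systol"))       # now matches the data margin
```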

A more *efficient* IPS algorithm is described by
@jirousek:preucil:95, and this is implemented in the
`effloglin()` function. The implementation of
`effloglin()` is made entirely in `R` and therefore the word
*efficient* should be understood in terms of space
requirement (for small problems, `loglin()` is much faster than
`effloglin()`).

The algorithm goes as follows: It is assumed that $\cal A$ is minimally
specified, i.e.\ that no element in $\cal A$ is contained in another
element in $\cal A$.  Form the dependency graph $\cal G(\cal A)$ induced by
$\cal A$. Let $\cal G'$ denote a triangulation of $\cal G(\cal A)$ and let
$\cal C=\{C_1,\dots,C_N\}$ denote the cliques of $\cal G'$.
Each $a\in \cal A$ is then contained in exactly one clique $C\in
\cal C$. Let $\cal A_C=\{a\in \cal A:a\subset C\}$ so that $\cal A_{C_1},
\dots, \cal A_{C_N}$ is a disjoint partitioning of $\cal A$.

Any probability $p$ satisfying the constraints of $\cal A$ will also factorize
according to $\cal G'$ so that

\begin{align} 
  p(i) = \prod_{C\in \cal C} \psi_C(i_C)
  (\#eq:effloglin1)
\end{align}


Using e.g.\ the computation architecture of
@lauritzen:spiegelhalter:88 the clique marginals

\begin{align}
  p_{C}(i_{C}), \quad C \in \cal C
  (\#eq:effloglin2)
\end{align}

can be obtained from (\@ref(eq:effloglin1)). In practice, the calculation of
(\@ref(eq:effloglin2)) is done using the `gRain` package.
For $C\in \cal C$ and an $a \in \cal A_C$ update $\psi_C$ in
(\@ref(eq:effloglin1)) as

\begin{align}
  \psi_C(i_C) \leftarrow \psi_C(i_C) \frac{n(i_a)/n}{p_a(i_a)}
\end{align}

where $p_a$ is obtained by summing over variables in $C\setminus
a$ in $p_C$ from (\@ref(eq:effloglin2)). Then find the new clique
marginals in (\@ref(eq:effloglin2)), move on to the next $a$ in
$\cal A_{C}$ and so on.




As an example, consider the 4--cycle model for the reinis data:


```{r }
data(reinis)
ff    <- ~smoke:mental+mental:phys+phys:systol+systol:smoke
dmod(ff, data=reinis)
``` 


This model can be fitted with `loglin()` as


```{r }
glist <- rhsFormula2list(ff)
glist
fv1 <- loglin(reinis, glist, print=FALSE)
fv1[1:3]
``` 

An alternative is `effloglin()` which uses the algorithm above on a 
triangulated graph:


```{r }
fv2 <- effloglin(reinis, glist, print=FALSE)
fv2[c('logL','nparm','df')]
``` 

The real virtue of `effloglin()` lies in the possibility of
submitting data as a list of sufficient marginals:


```{r }
stab <- lapply(glist, function(gg) tableMargin(reinis, gg))
fv3 <- effloglin(stab, glist, print=FALSE)
``` 

A sanity check:


```{r }
m1 <- loglin(reinis, glist, print=F, fit=T)
f1 <- m1$fit
m3 <- effloglin(stab, glist, print=F, fit=T)
f3 <- m3$fit
max(abs(f1 %a-% f3))
``` 




## Testing for addition and deletion of edges

Consider the saturated and the independence models for the
`carcass` data:

```{r print=T}
data(carcass)
cm1 <- cmod(~.^., carcass)
cm2 <- cmod(~.^1, data=carcass)
```



### `testdelete()` 

Let $\cal M_0$ be a model and let $e=\{u,v\}$ be an edge in $\cal M_0$.
The candidate model  formed by deleting $e$ from $\cal M_0$ is $\cal M_1$.
The `testdelete()` function can be used to test for deletion of
an edge from a model:


```{r }
testdelete(cm1, ~Meat11:Fat11)
testdelete(cm1, ~Meat12:Fat13)
```

In the first case the $p$--value suggests that the edge cannot be
deleted. In the second case the $p$--value suggests that the edge can
be deleted. The reported AIC
value is the difference in AIC between the candidate model and the
original model. A negative value of AIC suggests that the candidate
model is to be preferred.


### `testadd()` 

Next, let $\cal M_0$ be a model and let $e=\{u,v\}$ be an edge not in
$\cal M_0$. The candidate model  formed by adding $e$ to $\cal M_0$ is
denoted $\cal M_1$.
The `testadd()` function can be used to test for addition of
an edge to a model:


```{r }
testadd(cm2, ~Meat11:Fat11)
testadd(cm2, ~Meat12:Fat13)
```


In the first case the $p$--value suggests that no significant
improvement of the model is obtained by adding the edge. In the
second case a significant improvement is obtained by adding the edge.
The reported AIC
value is the difference in AIC between the candidate model and the
original model. A negative value of AIC suggests that the candidate
model is to be preferred.


## Finding edges 


### `getInEdges()`

Consider the following model for the `carcass` data:


```{r fig=T}
data(carcass)
cm1 <- cmod(~LeanMeat:Meat12:Fat12+LeanMeat:Fat11:Fat12+Fat11:Fat12:Fat13, data=carcass)
plot(cm1)
```


The edges in the model are

```{r }
getInEdges(cm1)
``` 

In connection with model selection it is sometimes convenient to get
only the edges which are contained in only one clique:


```{r }
getInEdges(cm1, type="decomposable")
``` 

<!-- FIXME: getInEdges/getOutEdges: type="decomposable" is a silly value for the argument. -->

<!-- FIXME: getInEdges/getOutEdges: Should be possible to have edges as a matrix instead. Perhaps even as default. -->

### `getOutEdges()`


The edges not in the model are


```{r }
getOutEdges(cm1)
```

In connection with model selection it is sometimes convenient to get
only the edges which when added will be in only one clique of the new model:


```{r }
getOutEdges(cm1, type="decomposable")
```


## Evaluating edges in the model 


```{r fig=T}
data(carcass)
cm1 <- cmod(~LeanMeat:Meat12:Fat12+LeanMeat:Fat11:Fat12+Fat11:Fat12:Fat13+Fat12:Meat11:Meat13, data=carcass[1:20,])
plot(cm1)
```

### `testInEdges()`


```{r, eval=T}
in.ed <- getInEdges(cm1)
z <- testInEdges(cm1, edgeList=in.ed)
z
``` 

Hence there are four edges which lead to a decrease in AIC. If we set
`headlong=T` then the function exits as soon as one decrease in
AIC is found:

```{r, eval=T}
z <-testInEdges(cm1, edgeList=in.ed, headlong=T)
z
``` 


### `testOutEdges()`



```{r, eval=T}
out.ed <- getOutEdges(cm1)
z <- testOutEdges(cm1, edgeList=out.ed)
z
```

Hence there are four edges which lead to a decrease in AIC. If we set
`headlong=T` then the function exits as soon as one decrease in
AIC is found:


```{r, eval=T}
z <- testOutEdges(cm1, edgeList=out.ed, headlong=T)
z
``` 



## Miscellaneous


### The Model Object

It is worth looking at the information in the model object:


```{r }
dm3 <- dmod(list(c("smoke", "systol"), c("smoke", "mental", "phys")),
            data=reinis)
names(dm3)
``` 

-  The model, represented as a list of generators, is


```{r }
str(terms(dm3))
``` 


```{r }
str(dm3$glistNUM)
``` 

The numeric representation of the generators refers back to


```{r }
dm3$varNames
``` 

Notice that the model object does not contain a graph object. Graph objects
are generated on the fly when needed (see the sketch after this list).

-  Information about the variables etc. is

```{r }
str(dm3[c("varNames","conNames","conLevels")])
``` 

-  Finally, `isFitted` is a logical for whether the model is fitted;
   `data` is the data (as a table) and `fitinfo` consists of
   fitted values, logL, df, etc.
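
A short sketch tying these pieces together (assuming the slot names
listed above, and `ugList()` from `gRbase` as used earlier):

```{r}
## Dependence graph generated on the fly from the generators:
ugList(terms(dm3))
## Fit information stored in the model object:
dm3$isFitted
str(dm3$fitinfo, max.level=1)
```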


### Methods for model objects



A `summary()` of a model:


```{r }
summary(dm1) ## FIXME
```




```{r }
str(fitted(dm1))
str(dm1$data)
``` 

Hence we can make a simple diagnostic plot of Pearson residuals:


```{r pearson-1, eval=F}
X2 <- (fitted(dm1) - dm1$data) / sqrt(fitted(dm1))
qqnorm(as.numeric(X2))
```