The raw data consist of the binary judgments of 101 first-year psychology students who indicated whether or not they would display each of 8 anger-related behaviors when being angry at someone in each of 6 situations.
Each situation is presented as one level of a factor, without specifying a level for the other factor.
library(plfm)
## Loading required package: sfsmisc
## Loading required package: abind
data(anger)
D = anger$data
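Before extracting subsets, it can help to check the dimensions and dimension names of the three-way binary array (subjects in the first dimension, situations and behaviors in the other two); a minimal sketch using standard R functions:
dim(D)        # sizes of the three dimensions of the binary array
dimnames(D)   # names attached to each dimension
str(anger)    # overview of the full anger object from the plfm package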
Plot a graph based on the Ising model for the subset of the data corresponding to the situation “like”.
library(IsingFit)
r.nb = 1 # situation "like"
# Extract, for each subject, the responses observed in situation r.nb
D.subset = matrix(0, nrow(D), ncol(D[1,,]))
for (ii in 1:nrow(D)){
  D.subset[ii,] = c(D[ii,r.nb,])
}
D.subset = data.frame(D.subset)
colnames(D.subset) <- colnames(D[1,,])
mod.ising = IsingFit(D.subset, gamma = .25)
title("Like")
Question: What conditional independencies are described by this graph?
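To help answer this, one can inspect the estimated parameters numerically; a minimal sketch, assuming the usual components of an IsingFit object (weiadj, the weighted adjacency matrix, and thresholds):
round(mod.ising$weiadj, 2)      # zero entries correspond to absent edges, i.e. pairs of
                                # variables that are conditionally independent given the others
round(mod.ising$thresholds, 2)  # node-specific thresholds (intercepts)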
This result can be compared with some plots from an MCA (multiple correspondence analysis).
library(FactoMineR)
mca = MCA(apply(D.subset,2,as.factor),graph=FALSE)
plot(mca,choix="var")
Question: What information is displayed on the MCA plot? Can you retrieve information that helps to understand the graph plot?
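The MCA can also be inspected numerically; a minimal sketch using standard FactoMineR outputs:
head(mca$eig)            # variance explained by the leading MCA dimensions
round(mca$var$eta2, 2)   # link (squared correlation ratio) between each variable and each dimension;
                         # variables tied to the same dimensions tend to be connected in the graph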
Now plot a graph based on the Ising model for a subset of the data that combines the “Hart” and “Story” variables over the first three situations.
c.nb = c(5,6) # variables "Hart" and "Story"
# Extract, for each subject, the "Hart" and "Story" responses in the first three situations
D.subset = matrix(0, nrow(D), 6)
for (ii in 1:nrow(D)){
  D.subset[ii,] = c(D[ii,1:3,c.nb])
}
D.subset = data.frame(D.subset)
colnames(D.subset)[1:3] <- paste("Hart","/",rownames(D[1,1:3,]),sep="")
colnames(D.subset)[4:6] <- paste("Story","/",rownames(D[1,1:3,]),sep="")
mod.ising = IsingFit(D.subset, gamma = .01)
title("Hart+Story")
Again, this result can be compared with some plots from an MCA (multiple correspondence analysis).
library(FactoMineR)
mca = MCA(apply(D.subset,2,as.factor),graph=FALSE)
plot(mca,choix="var")
Restricted Boltzmann Machines (RBMs) are usually used to pretrain deep neural networks. However, they can also be used for dimension reduction and/or unsupervised learning.
Here, we will use the RBM implementation from http://alandgraf.blogspot.fr/2013/01/restricted-boltzmann-machines-in-r.html?view=snapshot. The visible and hidden units are Bernoulli variables.
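For a Bernoulli RBM, the probability that a hidden unit is active given the visible units is a logistic (sigmoid) function of a weighted sum of the visible units; this is exactly what is computed by hand further below. A toy sketch (the weights and the visible configuration here are made up for illustration only):
sigmoid <- function(x) 1 / (1 + exp(-x))   # logistic function
W <- matrix(rnorm(2 * 6), nrow = 2)        # toy weights: 2 hidden units, 6 visible units
v <- rbinom(6, 1, 0.5)                     # one binary visible configuration
sigmoid(W %*% v)                           # P(hidden unit j = 1 | v) for j = 1, 2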
Let us try it on the anger “like” situation.
source("https://perso.univ-rennes1.fr/valerie.monbet/GM/rbm.R")
set.seed(313)
rbm.anger = rbm(num_hidden=2,t(D[1:100,,1]),.01,1000,mini_batch_size=10,quiet=TRUE)
rbm.anger # weights
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 0.1980488 -1.090432 -1.1784443 -1.962488 1.4401220 -0.8475423
## [2,] -0.8635005 -1.172795 -0.1382069 -1.976021 -0.1528759 -1.1599088
Then the probabilities of the hidden states can be computed, and the hidden states predicted.
proba = 1/(1+exp(-rbm.anger %*% t(D[1:20,,1]))) # visible_state_to_hidden_probabilities
proba
## [,1] [,2] [,3] [,4] [,5] [,6] [,7]
## [1,] 0.5 0.6129493 0.34735061 0.8372859 0.8372859 0.12587058 0.4196239
## [2,] 0.5 0.2396529 0.08888074 0.2657338 0.2657338 0.08845042 0.0477705
## [,8] [,9] [,10] [,11] [,12] [,13] [,14]
## [1,] 0.5 0.5493510 0.2353320 0.6439568 0.2999487 0.14932241 0.5865424
## [2,] 0.5 0.2966085 0.4655032 0.2120212 0.2386839 0.03930881 0.2098764
## [,15] [,16] [,17] [,18] [,19] [,20]
## [1,] 0.031046337 0.1384841 0.3780455 0.5 0.4256055 0.6879663
## [2,] 0.004221608 0.1032507 0.0768753 0.5 0.0339253 0.1019004
hidden_state = apply(proba,2,which.max)
print("Hidden states (or class)")
## [1] "Hidden states (or class)"
hidden_state
## [1] 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1
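Since the two hidden-unit probabilities summarize each subject with two numbers, they can also serve as a low-dimensional representation of the data, in line with the dimension-reduction use of RBMs mentioned above; a minimal sketch reusing the weights estimated above:
proba.all = 1/(1+exp(-rbm.anger %*% t(D[,,1])))   # hidden probabilities for all subjects
plot(t(proba.all), xlab = "hidden unit 1", ylab = "hidden unit 2",
     main = "RBM-based representation of the subjects")
table(hidden_state)   # class sizes among the first 20 subjects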