bnlearn error in structural.em - bayesian

I got an error when trying to use structural.em() from the "bnlearn" package.
This is the code:
cut.learn <- structural.em(cut.df, maximize = "hc",
                           maximize.args = "restart",
                           fit = "mle", fit.args = list(),
                           impute = "parents", impute.args = list(),
                           return.all = FALSE, max.iter = 5, debug = FALSE)
Error in check.data(x, allow.levels = TRUE, allow.missing = TRUE, warn.if.no.missing = TRUE, :
  at least one variable has no observed values.
Has anyone had the same problem? If so, please tell me how to fix it.
Thank you.

I got structural.em working. I am currently working on a Python interface to bnlearn that I call pybnl, and I also ran into the problem you describe above.
Here is a Jupyter notebook that shows how to use structural.em from Python on the marks data set.
The gist of it is described in slides-bnshort.pdf on page 135, "The MARKS Example, Revisited".
You have to create an initial fit with an initially imputed data frame by hand, and then provide the arguments to structural.em like so (ldmarks is the latent-discrete marks data frame, where the LAT column contains only missing/NA values):
library(bnlearn)
data('marks')
dmarks = discretize(marks, breaks = 2, method = "interval")
ldmarks = data.frame(dmarks, LAT = factor(rep(NA, nrow(dmarks)), levels = c("A", "B")))
imputed = ldmarks
# Randomly set values of the unobserved variable in the imputed data.frame
imputed$LAT = sample(factor(c("A", "B")), nrow(dmarks), replace = TRUE)
# Fit the parameters over an empty graph
dag = empty.graph(nodes = names(ldmarks))
fitted = bn.fit(dag, imputed)
# Although we've set imputed values randomly, nonetheless override them with a uniform distribution
fitted$LAT = array(c(0.5, 0.5), dim = 2, dimnames = list(c("A", "B")))
# Use whitelist to enforce arcs from the latent node to all others
r = structural.em(ldmarks, fit = "bayes", impute = "bayes-lw", start = fitted,
                  maximize.args = list(whitelist = data.frame(from = "LAT", to = names(dmarks))),
                  return.all = TRUE)
You have to use bnlearn 4.4-20180620 or later, because it fixes a bug in the underlying impute function.
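With return.all = TRUE, structural.em should return a list rather than just the network; a quick sketch of how you might inspect the pieces (element names per the bnlearn documentation):
r$dag      # the learned network structure
r$imputed  # the data set with the missing values completed
r$fitted   # the fitted parameters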

Is non-identical not enough to be considered 'distinct' for kmeans centroids?

I have an issue with kmeans clustering when providing centroids. I saw the same problem already asked (K-means: Initial centers are not distinct), but the solution in that post is not working in my case.
I selected the centroids using ClusterR::KMeans_arma. I confirmed that my centroids are not identical using mgcv::uniquecombs, but I still got the "initial centers are not distinct" error.
> dim(t(dat))
[1] 13540 11553
> centroids = ClusterR::KMeans_arma(data = t(dat), centers = 561,
                                    n_iter = 50, seed_mode = "random_subset",
                                    verbose = FALSE, CENTROIDS = NULL)
> dim(centroids)
[1] 561 11553
> x = mgcv::uniquecombs(centroids)
> dim(x)
[1] 561 11553
> res = kmeans(t(dat), centers = centroids, iter.max = 200)
Error in kmeans(t(dat), centers = centroids, iter.max = 200) :
initial centers are not distinct
Any suggestion to resolve this? Thanks!
I replicated the issue you've mentioned with the following data:
cols = 13540
rows = 11553
set.seed(1)
vec_dat = runif(rows * cols)
dat = matrix(vec_dat, nrow = rows, ncol = cols)
dim(dat)   # 11553 13540
dat = t(dat)
dim(dat)   # 13540 11553
There is no 'centers' parameter in the ClusterR::KMeans_arma() function, so I've assumed you actually meant 'clusters':
centroids = ClusterR::KMeans_arma(data = dat,
                                  clusters = 561,
                                  n_iter = 50,
                                  seed_mode = "random_subset",
                                  verbose = TRUE,
                                  CENTROIDS = NULL)
str(centroids)
dim(centroids)
The 'centroids' object is a matrix of class "k-means clustering". If your intention is to obtain the cluster assignments, then you can use:
clust = ClusterR::predict_KMeans(data = dat,
                                 CENTROIDS = centroids,
                                 threads = 6)
length(unique(clust)) # 561
class(centroids) # "k-means clustering"
If you want to pass the 'centroids' to the base R kmeans() function, you have to set the class of the 'centroids' object to NULL. That is because kmeans() internally uses the base R duplicated() function (you can see this by typing print(kmeans) in the R console), which does not recognize the 'centroids' object as a matrix or data.frame (it is an object of class "k-means clustering") and performs the checking column-wise rather than row-wise. Therefore, the following should work for your case:
class(centroids) = NULL
dups = duplicated(centroids)
sum(dups) # this should actually give 0
res = kmeans(dat, centers = centroids, iter.max = 200)
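The effect of the class attribute on duplicated() can be reproduced with a toy matrix (a minimal sketch, not part of the original answer):
# With an unrecognized class, dispatch falls through to duplicated.default(),
# which treats the object as a plain (column-major) vector
m = matrix(c(1, 2, 1, 2, 1, 3), nrow = 2)   # two distinct rows
class(m) = "k-means clustering"
sum(duplicated(m))   # 3 -- element-wise comparison reports spurious duplicates
class(m) = NULL
sum(duplicated(m))   # 0 -- row-wise comparison, as kmeans() expects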
I've made a few adjustments to ClusterR::predict_KMeans(); in particular, I've added the 'threads' parameter and a check for duplicates. Therefore, if you want to compute the clusters using multiple cores, you have to install the package from GitHub using:
remotes::install_github('mlampros/ClusterR',
                        upgrade = 'always',
                        dependencies = TRUE,
                        repos = 'https://cloud.r-project.org/')
The changes will take effect in the next version of the CRAN package, which will be "1.2.2".
UPDATE regarding output and performance (based on your comment):
data(dietary_survey_IBS, package = 'ClusterR')
kmeans_arma = function(data) {
  km_cl = ClusterR::KMeans_arma(data,
                                clusters = 2,
                                n_iter = 10,
                                seed_mode = "random_subset",
                                seed = 1)
  pred_cl = ClusterR::predict_KMeans(data = data,
                                     CENTROIDS = km_cl,
                                     threads = 1)
  return(pred_cl)
}
km_arma = kmeans_arma(data = dietary_survey_IBS)
km_algos = c("Hartigan-Wong", "Lloyd", "Forgy", "MacQueen")
for (algo in km_algos) {
  cat('base-kmeans-algo:', algo, '\n')
  km_base = kmeans(dietary_survey_IBS,
                   centers = 2,
                   iter.max = 10,
                   nstart = 1,        # can be set to 5 or 10 etc.
                   algorithm = algo)
  km_cl = as.vector(km_base$cluster)
  print(table(km_arma, km_cl))
  cat('--------------------------\n')
}
# 'algo' below is the last value left over from the loop above
microbenchmark::microbenchmark(kmeans(dietary_survey_IBS,
                                      centers = 2,
                                      iter.max = 10,
                                      nstart = 1,
                                      algorithm = algo),
                               kmeans_arma(data = dietary_survey_IBS),
                               times = 100)
I don't see any significant difference in the output clusters between the base R kmeans and the kmeans_arma function for any of the available base R kmeans algorithms (you can test this on your own data sets as well). I am not sure which algorithm the armadillo library uses internally, and note that the base R kmeans also offers the 'nstart' parameter (you can consult the documentation for more info). Regarding performance, you won't see any substantial difference for small to medium data sets. But because the armadillo library uses OpenMP internally, if your computer has more than one core I think ClusterR::KMeans_arma will return the 'centroids' faster on big data sets.
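As a small illustration of the 'nstart' parameter mentioned above (a sketch, not part of the original answer): kmeans() reruns the algorithm from nstart random starts and keeps the run with the lowest total within-cluster sum of squares.
km1  = kmeans(dietary_survey_IBS, centers = 2, iter.max = 10, nstart = 1)
km10 = kmeans(dietary_survey_IBS, centers = 2, iter.max = 10, nstart = 10)
c(km1$tot.withinss, km10$tot.withinss)  # the best of the 10 starts is kept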

add networkx layout to holoview graph

I applied the Facebook network sample from this documentation to my own work, resulting in this code:
import pandas as pd
import holoviews as hv
from bokeh.io import show

edges_df = pd.read_csv('rel.csv', delimiter=";")
nodes_df = pd.read_csv('monfichier.csv', delimiter=";")
padding = dict(x=(-1.1, 1.1), y=(-1.1, 1.1))
fb_nodes = hv.Dataset(nodes_df, 'index')
fb_graph = hv.Graph((edges_df, fb_nodes)).redim.range(**padding)
colors = ['#000000'] + hv.Cycle('Category20').values
fb_graph.opts(color_index='age', show_frame=False,
              xaxis=True, yaxis=True, node_size=10, edge_line_width=1, cmap=colors)
renderer = hv.renderer('bokeh')
plot = renderer.get_plot(fb_graph).state
show(plot)
It works fine, but the resulting network is a graph without a specific layout (as shown in the attached figure). I want to specify the network layout as in networkx. How can I do that?
I found this instruction:
hv.Graph.from_networkx(G, nx.layout.spring_layout).opts(tools=['hover'])
But I did not find how to use it with my code, since my G is already a HoloViews graph and not a networkx graph.
Do you have any suggestions?
There is a function called layout_nodes in HoloViews which can apply networkx (and other) layouts to an existing graph:
import numpy as np
import networkx as nx
import holoviews as hv

N = 8
node_indices = np.arange(N)
source = np.zeros(N)
target = node_indices
padding = dict(x=(-1.2, 1.2), y=(-1.2, 1.2))
simple_graph = hv.Graph(((source, target),)).redim.range(**padding)
hv.element.graphs.layout_nodes(simple_graph, layout=nx.spring_layout)
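The same function should work on the graph from the question (a sketch, assuming the fb_graph object built above):
fb_graph_spring = hv.element.graphs.layout_nodes(fb_graph, layout=nx.spring_layout)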

how to display x and y error bars using openpyxl?

I am working on a project that requires the generation of xy plots in Excel. The x and y values have standard deviations associated with them. I would like to display both the x and y standard deviations as error bars. I can get each of the error bars to display individually, but not both at the same time.
Below is an example of the code. Any suggestions would be greatly appreciated.
from openpyxl.chart import Reference, ScatterChart
from openpyxl.chart.data_source import NumDataSource, NumRef
from openpyxl.chart.error_bar import ErrorBars
from openpyxl.chart.series_factory import SeriesFactory
from openpyxl.chart.trendline import Trendline

D201_avgX = Reference(resultsws, min_col=D201_avg_col, min_row=2, max_col=D201_avg_col, max_row=smprow_n)
D201_stdevX = NumDataSource(NumRef(Reference(resultsws, min_col=D201_stdev_col, min_row=2, max_col=D201_stdev_col, max_row=smprow_n)))
D201_stdev = ErrorBars(plus=D201_stdevX, minus=D201_stdevX, errBarType='both', errDir='x', errValType='cust')
D199_avgY = Reference(resultsws, min_col=D199_avg_col, min_row=1, max_col=D199_avg_col, max_row=smprow_n)
D199_stdevY = NumDataSource(NumRef(Reference(resultsws, min_col=D199_stdev_col, min_row=1, max_col=D199_stdev_col, max_row=smprow_n)))
D199_stdev = ErrorBars(plus=D199_stdevY, minus=D199_stdevY, errBarType='both', errDir='y', errValType='cust')
D201_D199_ser = SeriesFactory(D199_avgY, D201_avgX, title_from_data=True)
D201_D199_ser.marker.symbol = 'circle'
D201_D199_ser.graphicalProperties.line.noFill = True
D201_D199_ser.errBars = D201_stdev
D201_D199_ser.errBars = D199_stdev  # note: this assignment replaces the x-direction error bars set on the previous line
D201_D199_chart = ScatterChart()
D201_D199_chart.series.append(D201_D199_ser)
D201_D199_ser.trendline = Trendline(dispRSqr=True)
D201_D199_chart.x_axis.title = 'D201'
D201_D199_chart.y_axis.title = 'D199'
D201_D199_chart.height = 10
D201_D199_chart.width = 15
It's possible that the implementation isn't perfect: it's based on the specification, which isn't clear on some details. I suggest you create a relevant chart in Excel or similar and look at the generated XML. You should be able to create the same structure using openpyxl, which attempts to implement the spec faithfully. If there are any problems then please submit a bug report or pull request.
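Since an xlsx file is just a zip archive, one way to inspect the XML of a chart built by hand in Excel is a sketch like the following (the file name is an assumption; the first chart in a workbook is normally stored at xl/charts/chart1.xml):
import zipfile

# 'manual_chart.xlsx' is assumed to be a workbook containing a hand-made chart
with zipfile.ZipFile('manual_chart.xlsx') as archive:
    print(archive.read('xl/charts/chart1.xml').decode('utf-8'))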

R EVMIX convert pdf to uniform marginals

I'm trying to convert a distribution into a pseudo-uniform distribution. Using the spd R package, it is easy and it works as expected.
library(spd)
x <- c(rnorm(100,-1,0.7),rnorm(100,3,1))
fit<-spdfit(x,upper=0.9,lower=0.1,tailfit="GPD", kernelfit="epanech")
uniformX = pspd(x,fit)
I want to generalize the extreme value modeling to include threshold uncertainty, so I used the evmix package.
library(evmix)
x <- c(rnorm(100,-1,0.7),rnorm(100,3,1))
fit = fgkg(x, phiul = FALSE, phiur = FALSE, std.err = FALSE)
pgkg(x, fit$lambda, fit$ul, fit$sigmaul, fit$xil, fit$phiul, fit$ur,
     fit$sigmaur, fit$xir, fit$phiur)
I'm messing up somewhere.
Please check out the help for pgkg function:
help(pgkg)
which gives the syntax:
pgkg(q, kerncentres, lambda = NULL,
     ul = as.vector(quantile(kerncentres, 0.1)),
     sigmaul = sqrt(6 * var(kerncentres))/pi, xil = 0, phiul = TRUE,
     ur = as.vector(quantile(kerncentres, 0.9)),
     sigmaur = sqrt(6 * var(kerncentres))/pi, xir = 0, phiur = TRUE,
     bw = NULL, kernel = "gaussian", lower.tail = TRUE)
You have missed the kernel centres (the data), which are always needed for kernel density estimators. Here is the corrected code:
library(evmix)
x <- c(rnorm(100,-1,0.7),rnorm(100,3,1))
fit = fgkg(x, phiul = FALSE, phiur = FALSE, std.err = FALSE)
prob = pgkg(x, x, fit$lambda, fit$ul, fit$sigmaul, fit$xil, fit$phiul,
            fit$ur, fit$sigmaur, fit$xir, fit$phiur)
hist(prob)  # now uniform as expected
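As a quick sanity check of the uniformity (a sketch, not part of the original answer):
ks.test(prob, "punif")  # a large p-value is consistent with uniform margins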

Storing plot objects in a list

I asked this question yesterday about storing a plot within an object. I tried implementing the first approach (I am aware that I did not specify in my original question that I was using qplot()) and noticed that it did not work as expected.
library(ggplot2)                # add ggplot2
string = "C:/example.pdf"       # Setup pdf
pdf(string, height = 6, width = 9)
x_range <- range(1, 50)         # Specify Range
# Create a list to hold the plot objects.
pltList <- list()
for (i in 1:16) {
  # Organise data
  y = (1:50) * i * 1000         # Get y col
  x = (1:50)                    # Get x col
  y = log(y)                    # Use natural log
  # Regression
  lm.0 = lm(formula = y ~ x)    # Make linear model
  inter = summary(lm.0)$coefficients[1, 1]  # Get intercept
  slop = summary(lm.0)$coefficients[2, 1]   # Get slope
  # Make plot name
  pltName <- paste('a', i, sep = '')
  # Make plot object
  p <- qplot(
    x, y,
    xlab = "Radius [km]",
    ylab = "Services [log]",
    xlim = x_range,
    main = paste("Sample", i)
  ) + geom_abline(intercept = inter, slope = slop, colour = "red", size = 1)
  print(p)
  pltList[[pltName]] = p
}
# Close the PDF file
dev.off()
I have used sample numbers in this case so the code runs if it is just copied. I spent a few hours puzzling over this, but I cannot figure out what is going wrong. It writes the first PDF without problem, so I have 16 pages with the correct plots.
Then when I use this piece of code:
string = "C:/test_tabloid.pdf"
pdf(string, height = 11, width = 17)
grid.newpage()
pushViewport( viewport( layout = grid.layout(3, 3) ) )
vplayout <- function(x, y){viewport(layout.pos.row = x, layout.pos.col = y)}
counter = 1
# Page 1
for (i in 1:3){
for (j in 1:3){
pltName <- paste( 'a', counter, sep = '' )
print( pltList[[pltName]], vp = vplayout(i,j) )
counter = counter + 1
}
}
dev.off()
the result I get is the last linear model line (abline) on every graph, but the data does not change. When I check my list of plots, it seems that all of them become overwritten by the most recent plot (with the exception of the abline object).
A less important secondary question was how to generate a multi-page PDF with several plots on each page, but the main goal of my code was to store the plots in a list that I could access at a later date.
Ok, so if your plot command is changed to
p <- qplot(data = data.frame(x = x, y = y),
           x, y,
           xlab = "Radius [km]",
           ylab = "Services [log]",
           xlim = x_range,
           ylim = c(0, 10),
           main = paste("Sample", i)
) + geom_abline(intercept = inter, slope = slop, colour = "red", size = 1)
then everything works as expected. Here's what I suspect is happening (although Hadley could probably clarify things). When ggplot2 "saves" the data, what it actually does is save a data frame, and the names of the parameters. So for the command as I have given it, you get
> summary(pltList[["a1"]])
data: x, y [50x2]
mapping: x = x, y = y
scales: x, y
faceting: facet_grid(. ~ ., FALSE)
-----------------------------------
geom_point:
stat_identity:
position_identity: (width = NULL, height = NULL)
mapping: group = 1
geom_abline: colour = red, size = 1
stat_abline: intercept = 2.55595281266726, slope = 0.05543539319091
position_identity: (width = NULL, height = NULL)
However, if you don't specify a data parameter in qplot, all the variables get evaluated in the current scope, because there is no attached (read: saved) data frame.
data: [0x0]
mapping: x = x, y = y
scales: x, y
faceting: facet_grid(. ~ ., FALSE)
-----------------------------------
geom_point:
stat_identity:
position_identity: (width = NULL, height = NULL)
mapping: group = 1
geom_abline: colour = red, size = 1
stat_abline: intercept = 2.55595281266726, slope = 0.05543539319091
position_identity: (width = NULL, height = NULL)
So when the plot is generated the second time around, rather than using the original values, it uses the current values of x and y.
I think you should use the data argument in qplot, i.e., store your vectors in a data frame.
See Hadley's book, Section 4.4:
The restriction on the data is simple: it must be a data frame. This is restrictive, and unlike other graphics packages in R. Lattice functions can take an optional data frame or use vectors directly from the global environment. ...
The data is stored in the plot object as a copy, not a reference. This has two
important consequences: if your data changes, the plot will not; and ggplot2 objects are entirely self-contained so that they can be save()d to disk and later load()ed and plotted without needing anything else from that session.
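To see that self-contained behaviour in action, something like this should work once the data argument is used (a sketch; the file path is an assumption):
a1 <- pltList[["a1"]]
save(a1, file = "a1.RData")   # the plot object carries its own copy of the data
rm(x, y, a1)                  # remove the workspace variables it was built from
load("a1.RData")
print(a1)                     # renders from the embedded data frame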
There is a bug in your code concerning list subscripting. It should be
pltList[[pltName]]
not
pltList[pltName]
Note:
class(pltList[1])
[1] "list"
pltList[1] is a list containing the first element of pltList.
class(pltList[[1]])
[1] "ggplot"
pltList[[1]] is the first element of pltList.
For your second question: Multi-page pdfs are easy -- see help(pdf):
onefile: logical: if true (the default) allow multiple figures in one
file. If false, generate a file with name containing the
page number for each page. Defaults to ‘TRUE’.
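For example, a minimal sketch (pdf() accepts a C-style integer format in the file name when onefile = FALSE):
pdf("plot%03d.pdf", onefile = FALSE, height = 6, width = 9)
for (p in pltList) print(p)   # writes plot001.pdf, plot002.pdf, ...
dev.off()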
For your main question, I don't understand if you want to store the plot inputs in a list for later processing, or the plot outputs. If it is the latter, I am not sure that plot() returns an object you can store and retrieve.
Another suggestion regarding your second question would be to use either Sweave or Brew as they will give you complete control over how you display your multi-page pdf.
Have a look at this related question.