After fitting a model with stan_glmer() or stan_glm() using mgcv::betar as the family, I get an error when I call posterior_predict() on it. R says:
Error in exp(eta) : non-numeric argument to mathematical function
A minimal example:
library(rstanarm)
library(loo)
library(mgcv)
a <- rnorm(100, 0.5, 0.1)
b <- a+rnorm(100, 0.6, 0.01)
d <- data.frame(a=a, b=b)
fit <- stan_glm(a ~ b,
                data = d,
                family = betar,
                chains = 10,
                seed = 1)
posterior_predict(fit)
I found the answer here: https://discourse.mc-stan.org/t/rstanarm-mgcv-betar-family/2947/4. It is a bug in rstanarm.
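Until the bug is fixed, predictions can be sketched by hand from the posterior draws. This is a minimal sketch, assuming a logit link and that the precision parameter appears in the draws under a column named "(phi)" (both are assumptions; check colnames(as.matrix(fit)) first):
draws <- as.matrix(fit)                # posterior draws, one row per draw
X <- model.matrix(fit)                 # design matrix of the fitted model
beta <- draws[, colnames(X)]           # regression coefficients
phi <- draws[, "(phi)"]                # assumed name of the precision parameter
mu <- plogis(beta %*% t(X))            # inverse logit of the linear predictor
yrep <- matrix(rbeta(length(mu), mu * phi, (1 - mu) * phi), nrow = nrow(mu))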
I have a dataset similar to the one in the code below. The response variable is binary and the two predictor variables are categorical (one is binary and the other has four categories). I have created a candidate set of models, and I want to find the model with the lowest AIC in the candidate set. However, I get two error messages when running the models.
I think the problem is that a spline cannot be built, either because of the small amount of data or because there are too few unique combinations of categories across the two predictor variables.
Is there a way of analysing my data using GAMs (i.e. overcoming the errors below)?
library(mgcv)
set.seed(123)
# Dummy data
dat <- data.frame(resp = sample(c(0, 1), 280, replace = TRUE, prob = c(0.8, 0.2)),
                  pre1 = sample(c(0, 1), 280, replace = TRUE, prob = c(0.6, 0.4)),
                  pre2 = factor(sample(c("none", "little", "some", "plenty"), 280,
                                       replace = TRUE, prob = c(0.25, 0.25, 0.15, 0.35))))
# Define candidate set of models
m1 <- gam(resp ~ 1, method = "REML", data = dat)
m2 <- gam(resp ~ s(pre1, k = 2), method = "REML", data = dat)
Error in smooth.construct.tp.smooth.spec(object, dk$data, dk$knots) :
A term has fewer unique covariate combinations than specified maximum degrees of freedom
In addition: Warning message:
In smooth.construct.tp.smooth.spec(object, dk$data, dk$knots) : basis dimension, k, increased to minimum possible
m3 <- gam(resp ~ s(pre2, k = 2), method = "REML", data = dat)
Error in smooth.construct.tp.smooth.spec(object, dk$data, dk$knots) :
NA/NaN/Inf in foreign function call (arg 1)
In addition: Warning messages:
1: In mean.default(xx) : argument is not numeric or logical: returning NA
2: In Ops.factor(xx, shift[i]) : ‘-’ not meaningful for factors
3: In smooth.construct.tp.smooth.spec(object, dk$data, dk$knots) : basis dimension, k, increased to minimum possible
m4 <- gam(resp ~ s(pre1, k = 2) + s(pre2, k = 2), method = "REML", data = dat)
m5 <- gam(resp ~ s(pre1, k = 2) * s(pre2, k = 2), method = "REML", data = dat)
# Calculate AICs
AIC(m1, m2, m3, m4, m5)
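Since both predictors are categorical, no spline basis is actually needed; here is a sketch of the same candidate set with the covariates entered as parametric terms and a binomial family for the binary response (the family is an assumption about the analysis goal):
dat$pre1 <- factor(dat$pre1)
m1 <- gam(resp ~ 1, family = binomial, method = "REML", data = dat)
m2 <- gam(resp ~ pre1, family = binomial, method = "REML", data = dat)
m3 <- gam(resp ~ pre2, family = binomial, method = "REML", data = dat)
m4 <- gam(resp ~ pre1 + pre2, family = binomial, method = "REML", data = dat)
m5 <- gam(resp ~ pre1 * pre2, family = binomial, method = "REML", data = dat)
AIC(m1, m2, m3, m4, m5)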
I cannot get a plot of the effects from a fixed-effects model in plm. I tried effect(), predict(), and all kinds of packages like sjPlot, etc.
Is there a way of plotting them, especially also with interactions?
I always get error messages like:
Error in mod.matrix %*% scoef : non-conformable arguments
Try fixef()? For instance, see below:
library(plm)
# ds_panel is the asker's data; "rho" is its index variable
plm_2 <- plm(wealth ~ Volatility, data = ds_panel, index = c("rho"), model = "within")
y1 <- fixef(plm_2)             # estimated fixed effects, named by index level
x1 <- as.numeric(names(y1))
plot(y1 ~ x1, pch = 20, ylab = "FE", xlab = expression(rho))
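A self-contained version of the same idea, using plm's bundled Grunfeld data as a stand-in for ds_panel:
library(plm)
data("Grunfeld", package = "plm")
fe_mod <- plm(inv ~ value, data = Grunfeld, index = c("firm", "year"), model = "within")
fe <- fixef(fe_mod)            # one fixed effect per firm
plot(as.numeric(names(fe)), fe, pch = 20, xlab = "firm", ylab = "FE")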
I am trying to reproduce the nice stargazer output of a linear model (lm) for a model that is not supported by stargazer.
Can the linear-model stargazer output be produced by hand? We can create a data frame from any model and then pass that data frame to stargazer:
library(stargazer)
library(dplyr)
library(spdep)
library(spatialreg)  # lagsarlm lives here in current releases (older spdep exported it)
data(afcon, package = "spData")
afcon$Y <- rnorm(42, 50, 20)
cns <- knearneigh(cbind(afcon$x, afcon$y), k = 7, longlat = TRUE)
scnsn <- knn2nb(cns, row.names = NULL, sym = TRUE)
W <- nb2listw(scnsn, zero.policy = TRUE)
ols <- lm(totcon ~ Y, data = afcon)
spatial.lag <- lagsarlm(totcon ~ Y, data = afcon, W)
stargazer(ols, type = "text")
summary(spatial.lag)
data.frame(
  spatial.lag$coefficients,
  spatial.lag$rest.se
) %>%
  rename(coeffs = spatial.lag.coefficients,
         se = spatial.lag.rest.se) %>%
  stargazer(type = "text", summary = FALSE)
When we do stargazer(ols), the output is very nice. I would like to reproduce the same output by hand for spatial.lag. Is there a way to do so, including the superscripts etc.?
You mean ^{*}? If so, it's not possible in stargazer! I've already tried it, so I recommend you check the xtable package, as I did here.
I will show one approach that can be used. stargazer is really nice, and you CAN create a table like the one above even with model objects that are not yet supported. For example, let's pretend that the quantile regression model is not supported by stargazer (even though it is).
The trick is that you need to be able to obtain the coefficients and standard errors, e.g. as vectors. Then supply stargazer with a supported model object (e.g. lm) as a template, and mechanically specify which coefficients and standard errors should be used:
library(stargazer)
library(quantreg)
df <- mtcars
model1 <- lm(hp ~ factor(gear) + qsec + disp, data = df)
quantreg <- rq(hp ~ factor(gear) + qsec + disp, data = df)
set.seed(1)  # bootstrap standard errors are random without a seed
summary_qr <- summary(quantreg, se = "boot")
# Extract the quantile-regression coefficients and standard errors as vectors
coef_qr <- summary_qr$coefficients[, "Value"]
se_qr <- summary_qr$coefficients[, "Std. Error"]
stargazer(model1, model1,
          coef = list(NULL, coef_qr),
          se = list(NULL, se_qr),
          type = "text")
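The same template trick then answers the original question for the spatial lag model (a sketch; it assumes spatial.lag$coefficients and spatial.lag$rest.se line up with the template's terms, as they do in the example above):
stargazer(ols, ols,
          coef = list(NULL, spatial.lag$coefficients),
          se = list(NULL, spatial.lag$rest.se),
          type = "text")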
I am doing maximum likelihood estimation for the log-likelihood function of a Poisson distribution. After the estimation I want to compute the standard errors of the coefficients. For that I need the Hessian matrix. Now I don't know which function I should use to get the Hessian matrix: optim() or hessian() from the numDeriv package.
Both functions give me different solutions. And if I try to compute standard errors from the Hessian that I get from optim, I get one NaN entry in the result.
What is the difference between these two functions for the computation of the Hessian matrix?
logLikePois <- function(parameter, y, z) {
  betaKoef <- parameter
  lambda <- exp(betaKoef %*% t(z))
  # negative log-likelihood of the Poisson regression
  logLikeliHood <- -(sum(-lambda + y * log(lambda) - log(factorial(y))))
  return(logLikeliHood)
}
grad <- function(parameter, y, z) {
  betaKoef <- parameter
  # lambda of the Poisson regression
  lambda <- exp(betaKoef %*% t(z))
  # gradient of the negative log-likelihood
  gradient <- -((y - lambda) %*% z)
  return(gradient)
}
library(numDeriv)
data(discoveries)
disc <- data.frame(count = as.numeric(discoveries),
                   year = seq(0, length(discoveries) - 1, 1))
yearSqr <- disc$year^2
formula <- count ~ year + yearSqr
model <- model.frame(formula, data = disc)
z <- model.matrix(formula, data = disc)
y <- model.response(model)
parFullModell <- rep(0, ncol(z))
optimierung <- optim(par = parFullModell, fn = logLikePois, gr = grad,
                     z = z, y = y, method = "BFGS", hessian = TRUE)
optimHessian <- optimierung$hessian
numderivHessian <- hessian(func = logLikePois, x = optimierung$par, y = y, z = z)
sqrt(diag(solve(optimHessian)))       # standard errors from optim's Hessian
sqrt(diag(solve(numderivHessian)))    # standard errors from numDeriv's Hessian
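As background for the comparison: optim(..., hessian = TRUE) returns a finite-difference approximation evaluated at the solution, while numDeriv::hessian() uses Richardson extrapolation and is typically more accurate. For a Poisson model with log link the Hessian is also available in closed form, which makes a useful cross-check; a sketch based on the objects above (the Hessian of the negative log-likelihood is t(z) %*% diag(lambda) %*% z):
lambdaHat <- as.vector(exp(z %*% optimierung$par))
analyticHessian <- t(z) %*% (lambdaHat * z)   # t(z) %*% diag(lambdaHat) %*% z, computed cheaply
sqrt(diag(solve(analyticHessian)))            # analytic standard errors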
I am trying to compare the results of some numpy.array calculations with expected results, and I constantly get a failing comparison, even though the printed arrays look the same, e.g.:
import numpy as np
import numpy.testing as npt

def test_gen_sine():
    A, f, phi, fs, t = 1.0, 10.0, 1.0, 50.0, 0.1
    expected = np.array([0.54030231, -0.63332387, -0.93171798, 0.05749049, 0.96724906])
    result = gen_sine(A, f, phi, fs, t)
    npt.assert_array_equal(expected, result)
prints back:
> raise AssertionError(msg)
E AssertionError:
E Arrays are not equal
E
E (mismatch 100.0%)
E x: array([ 0.540302, -0.633324, -0.931718, 0.05749 , 0.967249])
E y: array([ 0.540302, -0.633324, -0.931718, 0.05749 , 0.967249])
My gen_sine function is:
def gen_sine(A, f, phi, fs, t):
    sampling_period = 1 / fs
    num_samples = fs * t
    samples_range = (np.arange(0, num_samples) * 2 * f * np.pi * sampling_period) + phi
    return A * np.cos(samples_range)
Why is that? How should I compare the two arrays?
(I'm using numpy 1.9.3 and pytest 2.8.1)
The problem is that numpy.testing.assert_array_equal returns None and does the assert internally. It is incorrect to preface it with a separate assert, as you do:
assert np.assert_array_equal(x,y)
Instead in your test you would just do something like:
import numpy as np
from numpy.testing import assert_array_equal
def test_equal():
    assert_array_equal(np.arange(0, 3), np.array([0, 1, 2]))  # no assertion raised
    assert_array_equal(np.arange(0, 3), np.array([2, 0, 1]))  # raises AssertionError
Update:
A few comments:
Don't rewrite your entire original question, because then it becomes unclear what an answer was actually addressing.
As for your updated question, the issue is that assert_array_equal is not appropriate for comparing floating-point arrays, as explained in the documentation. Instead, use assert_allclose and set the desired relative and absolute tolerances.
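For example, a sketch using the expected array from the question (the result value and the tolerances here are illustrative):
import numpy as np
from numpy.testing import assert_allclose

expected = np.array([0.54030231, -0.63332387, -0.93171798, 0.05749049, 0.96724906])
result = expected + 1e-9   # stand-in for gen_sine output that differs by float noise
assert_allclose(result, expected, rtol=1e-7, atol=1e-9)   # passes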