res_prim = output$res
df_age = res_prim %>%
  dplyr::select(taxon, ends_with("age"))
df_fig_age = df_age %>%
  filter(df_age == 1) %>%
  arrange(desc(lfc_age)) %>%
  mutate(direct = ifelse(lfc_age > 0, "Positive LFC", "Negative LFC"))
Error in arrange():
! Problem with the implicit transmute() step.
✖ Problem while computing ..1 = lfc_age.
Caused by error in mask$eval_all_mutate():
! object 'lfc_age' not found
Run rlang::last_error() to see where the error occurred.
rlang::last_error()
Error: object 'rlang::last_error()' not found
rlang::last_error()
<error/rlang_error>
Error in arrange():
! Problem with the implicit transmute() step.
✖ Problem while computing ..1 = lfc_age.
Caused by error in mask$eval_all_mutate():
! object 'lfc_age' not found
Backtrace:
... %>% ...
base::.handleSimpleError(...)
dplyr (local) h(simpleError(msg, call))
Run rlang::last_trace() to see the full context.
I get the above error messages and am not sure how to fix them. Please let me know what to do to get rid of the error.
Thank you very much, PAR
I looked on the internet but could not find an answer.
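Before chasing the arrange() error, it may help to confirm which columns res_prim actually contains; a minimal diagnostic sketch using the object names from the question, assuming res_prim is a plain data frame and dplyr is loaded:
colnames(res_prim)                      # check which columns are actually present
df_age = res_prim %>%
  dplyr::select(taxon, dplyr::ends_with("age"))
colnames(df_age)                        # confirm that lfc_age really exists here
# Note: in filter(df_age == 1) there is no column called df_age, so the whole
# data frame from the calling environment is compared to 1; the condition
# probably needs to reference a column of df_age instead.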
I am attempting some data analysis via topic modeling. I have followed the guide posted here: https://bookdown.org/joone/ComputationalMethods/topicmodeling.html
The code works fine until the last step. Here is my code snippet for your reference:
LePen_top <- tibble(topic = terms$topicnums, prob = apply(terms$prob, 1, paste, collapse = ", "),
frex = apply(terms$frex, 1, paste, collapse = ", "))
When I run it, the following message pops up: Error in terms$topicnums : object of type 'closure' is not subsettable
I have looked around the forums, but I have not been able to solve it due to my inexperience. I would greatly appreciate your help, and it would be great if you could explain what I have done wrong.
Best regards,
MRizak
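In R, terms is also the name of a base function, so when no object called terms has been created, terms$topicnums tries to subset that function, which gives exactly the "object of type 'closure' is not subsettable" error. A minimal sketch under that assumption, following the stm workflow used in the linked guide; the model name stm_model and n = 10 are hypothetical placeholders:
library(stm)
library(tibble)
# terms must be created first, e.g. from the fitted topic model
terms <- labelTopics(stm_model, n = 10)
LePen_top <- tibble(topic = terms$topicnums,
                    prob = apply(terms$prob, 1, paste, collapse = ", "),
                    frex = apply(terms$frex, 1, paste, collapse = ", "))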
I am trying to convert a SparkDataFrame to an R data frame.
%python
temp_df.createOrReplaceTempView("temp_df_r")
%r
temp_sql = sql("select * from temp_df_r")
temp_r = as.data.frame(temp_sql)
Error in as.data.frame.default(temp_sql) :
cannot coerce class ‘structure("SparkDataFrame", package = "SparkR")’ to a data.frame
Sometimes I get this error and sometimes I don't; it is still unclear why.
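For what it is worth, the error line as.data.frame.default(temp_sql) suggests that SparkR's own as.data.frame method is not the one being dispatched, for example because another attached package masks it. Here is a sketch that sidesteps the ambiguity by calling the SparkR functions with an explicit namespace; this is an assumption, not a confirmed fix for the intermittent failure:
%r
# Use SparkR's functions explicitly so other packages cannot mask them
temp_sql <- SparkR::sql("select * from temp_df_r")
temp_r <- SparkR::collect(temp_sql)   # collect() returns a local R data.frame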
I need more details. What environment do you use?
I'm trying to run Amelia to impute some missing data on several variables with the following code:
set.seed(1)
zz[,c("id", "sex", "team", "minsSocial", "satisTravail", "performance")] <-
Amelia::amelia(zz[,c("id", "sex", "team", "minsSocial", "satisTravail", "performance")],
m=1, idvars="id", noms=c("sex","team"))$imputations$imp1
Unfortunately, I get this error message:
Error: Subscript AMr1.orig is a matrix, the data x.imp[, -possibleFactors][AMr1.orig] must have size 1.
Any thoughts on where the problem is and how I could fix it? Is it because my data contains values < 1?
Thank you!
I think this might be due to recent changes to how tibbles check subsetting. If you cast your data to a data.frame instead (assuming that zz is a tibble), the error should go away (this worked for me).
zz <- as.data.frame(zz)
I'm not sure about the exact reason behind the error message, though. I get a similar error message from rlang::last_error(), and the code worked with earlier versions of the packages.
<error/tibble_error_subset_matrix_must_be_scalar>
Subscript `AMr1.orig` is a matrix, the data `x.imp[AMr1.orig]` must have size 1.
Backtrace:
1. Amelia::amelia(...)
2. Amelia::amelia.default(...)
3. base::lapply(seq_len(m), do.amelia)
4. Amelia:::FUN(X[[i]], ...)
5. Amelia:::impfill(...)
7. tibble:::`[<-.tbl_df`(...)
8. tibble:::tbl_subassign_matrix(x, j, value, j_arg, substitute(value))
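Put together, a sketch of the suggested workaround, using only the column names and arguments from the original call (m = 1, idvars = "id", noms = c("sex", "team")):
set.seed(1)
# amelia() assigns back into the data with a matrix subscript, which tibbles
# reject (see tbl_subassign_matrix in the backtrace), so work on a data.frame
zz <- as.data.frame(zz)
vars <- c("id", "sex", "team", "minsSocial", "satisTravail", "performance")
zz[, vars] <- Amelia::amelia(zz[, vars], m = 1, idvars = "id",
                             noms = c("sex", "team"))$imputations$imp1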
I have been having issues with indexing in Python 3.7 and would greatly appreciate your insights and clarification on this.
I have tried to research and fix this issue myself, but I am not able to understand what I am doing wrong.
enroll = pd.read_csv('enrollment_forecast.csv')
enroll.columns = ['year','roll','unem','hgrad','inc']
# the correlation between variables
enroll.corr()
enroll_data = enroll.ix[:(2,3)].values
print(enroll_data)
enroll_target = enroll.ix[:,1].values
print(enroll_target)
enroll_data_names = ['unem','hgrad']
Exception has occurred: AssertionError
End slice bound is non-scalar
Just a heads up, the pandas .ix index accessor is deprecated.
It's throwing the error because the whole tuple is read as the end bound of a single slice; the comma separating the row slice from the column selection is missing:
enroll_data = enroll.ix[:(2,3)].values
Try adding the comma and passing the columns as a list:
enroll_data = enroll.ix[:, [2, 3]].values
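Since .ix was deprecated and then removed in pandas 1.0, a positional selection with .iloc is the safer long-term form. A sketch assuming the intent is to use unem and hgrad (positions 2 and 3) as features and roll (position 1) as the target, as the rest of the snippet suggests:
# .iloc selects rows/columns by integer position and replaces the old .ix accessor
enroll_data = enroll.iloc[:, [2, 3]].values    # unem, hgrad
enroll_target = enroll.iloc[:, 1].values       # roll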
I have Revolution R Enterprise 8.0 with RRO 3.2.2 installed and am trying to run a simple example: making a histogram from the Titanic dataset:
library("RevoScaleR")
dataCsv <- read.csv("http://s3.amazonaws.com/assets.datacamp.com/course/Kaggle/train.csv")
dataXdf <- file.path("titanic.xdf")
rxImport(inData = dataCsv, outFile = dataXdf, overwrite = TRUE)
rxHistogram( ~ Age, data = dataXdf, xAxisMinMax = c(0, 520), numBreaks = 100, xNumTicks = 10)
and rxHistogram returns a cryptic error:
Error in doTryCatch(return(expr), name, parentenv, handler) :
The element bIsPrediction does not exist in the list.
Does anybody know how to fix it, and what the actual problem is? Googling didn't give any results.
PS: I hit the same error when running rxDataStep.
The problem was solved by uninstalling Revolution R and installing it again.