How to fix "Error: Subscript `AMr1.orig` is a matrix, the data `x.imp[, -possibleFactors][AMr1.orig]` must have size 1"

I'm trying to run Amelia to impute some missing data on several variables with the following code:
set.seed(1)
zz[, c("id", "sex", "team", "minsSocial", "satisTravail", "performance")] <-
  Amelia::amelia(zz[, c("id", "sex", "team", "minsSocial", "satisTravail", "performance")],
                 m = 1, idvars = "id", noms = c("sex", "team"))$imputations$imp1
Unfortunately, I get this error message:
Error: Subscript AMr1.orig is a matrix, the data x.imp[, -possibleFactors][AMr1.orig] must have size 1.
Any thoughts on where the problem is and how I could fix it? Is it because my data contains values < 1?
Thank you!

I think this might be due to some recent changes to error handling in tibbles. If you cast your data as a data.frame instead (assuming that zz is a tibble), the error should go away (this worked for me).
zz <- as.data.frame(zz)
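Put together, the workaround might look like this; a minimal sketch that simply reuses the column names and amelia() call from the question:
zz <- as.data.frame(zz)  # drop the tibble class before calling amelia()
set.seed(1)
vars <- c("id", "sex", "team", "minsSocial", "satisTravail", "performance")
am <- Amelia::amelia(zz[, vars], m = 1, idvars = "id", noms = c("sex", "team"))
zz[, vars] <- am$imputations$imp1  # copy the single imputed data set back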
Not sure about the reason behind the error message, though. I get a similar error message from rlang::last_error(), and the code worked with earlier versions of the packages.
<error/tibble_error_subset_matrix_must_be_scalar>
Subscript `AMr1.orig` is a matrix, the data `x.imp[AMr1.orig]` must have size 1.
Backtrace:
1. Amelia::amelia(...)
2. Amelia::amelia.default(...)
3. base::lapply(seq_len(m), do.amelia)
4. Amelia:::FUN(X[[i]], ...)
5. Amelia:::impfill(...)
7. tibble:::`[<-.tbl_df`(...)
8. tibble:::tbl_subassign_matrix(x, j, value, j_arg, substitute(value))

Related

Sparklyr : sql temporary error : argument is not interpretable as logical

Hi, I'm new to sparklyr, and I'm essentially running a query to create a temporary object in Spark.
The code is something like
ts_data <- tbl(sc, "db.table") %>% filter(condition) %>% compute("ts_data")
sc is my Spark connection.
I have run the same code before and it worked, but now I get the following error:
Error in if (temporary) sql("TEMPORARY ") : argument is not interpretable as logical
I have tried changing filters and tried it with new tables, R versions, and snapshots, yet it still gives the exact same error. I am positive there are no syntax errors.
Can someone help me understand how to fix this?
I ran into the same problem. Changing compute("x") to compute(name = "x") fixed it for me.
This was a bug in sparklyr and is fixed in version 1.7.0, so either pass the name by argument (name = "x") or update your sparklyr version.
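For reference, here is a minimal sketch of the corrected call, assuming the same connection sc, table, and filter condition as in the question:
library(sparklyr)
library(dplyr)
# passing the table name via the named argument avoids the broken positional match
ts_data <- tbl(sc, "db.table") %>%
  filter(condition) %>%
  compute(name = "ts_data")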

WARNING: Ipopt finished with status Invalid_Number_Detected

I am trying to solve a large NLP. The code is approximately as follows:
using JuMP
using Ipopt
m = Model(solver=IpoptSolver())
@variable(m, k, start=1.2)
....
@NLparameter(m, α == 0.28)
.....
@NLconstraint(m, cons1, 0 <= ((6.376151933328191*θ_1k^2*θ_3c - .... <= 0)
......
@NLobjective(m, Max, 1.0)
solve(m)
With the first set of start values (some negative and others positive) for the variables, I receive the following error message:
WARNING: Ipopt finished with status Invalid_Number_Detected
When I alter the initial values (all positive), I receive the following message instead:
Warning: Cutting back alpha due to evaluation error.
Please, what could be the meaning of such behavior? Is there an Ipopt option that could help solve the problem? Thanks for helping.
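Invalid_Number_Detected generally means that an objective, constraint, or derivative evaluated to NaN or Inf at some iterate (for example, a negative base raised to a fractional power), and "Cutting back alpha" means the line search ran into such an evaluation error and shortened the step. One way to track down the offending expression is Ipopt's standard check_derivatives_for_naninf option; a hedged sketch in the question's (older) JuMP solver syntax:
using JuMP
using Ipopt

# check_derivatives_for_naninf is a standard Ipopt option that makes the solver
# report where a NaN/Inf first shows up instead of just aborting
m = Model(solver=IpoptSolver(check_derivatives_for_naninf="yes"))
Starting the variables strictly inside the region where every nonlinear expression is well defined (which matches the observation that all-positive starts change the behavior) is usually the actual fix.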

Dymola Results of checkModel()

checkModel([Some Model]) opens the GUI "Dymola Messages", tab "Translation", and displays Errors, Warnings, and Messages.
Does anyone know how to write this information to a logfile or get it as some kind of return value of checkModel()? All I've found in the documentation is that checkModel() only returns a success Boolean. Is this information saved temporarily somewhere?
Note that I only want to apply checkModel() without actually translating the code.
I finally found a solution, at least for Dymola 2016 and newer, so if someone is interested, here it is (it is not very user-friendly, but it works):
The key command is getLastError(), which not only returns the last error (as one might think...) but also all errors detected by checkModel(), as well as the overall statistics.
All of this information is collected in one string, in which the last lines look like:
"[...]
Local classes checked, checking <[Some Path]>
ERROR: 2 errors were found
WARNING: 13 warnings were issued
= false
"
The following operations return the number of actual errors (for warnings it works more or less the same):
b = checkModel([Some Model])
s = getLastError()
ind1 = Modelica.Utilities.Strings.findLast(s,"ERROR:")
ind2 = Modelica.Utilities.Strings.findLast(s," errors were found")
nErrors = Modelica.Utilities.Strings.substring(s,ind1+6,ind2) //6 = len(ERROR:)
nErrors = Modelica.Utilities.Strings.replace(nErrors," ","")
nErrors
= "2"
Note:
I used findLast since I know that the lines of interest are at the very end of the string, so this is significantly faster than using find.
This only works if the line "ERROR: ..." actually exists; otherwise, the substring call will throw an error.
Of course this could be done in fewer lines, but maybe this version is easier to read.
NOTE: This only works with Dymola 2016 and newer. The return string of getLastError() has a different structure in Dymola 2015 and older.
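The recipe above can be wrapped into a small utility; the following is an untested sketch (the function name is mine) that uses only the string functions from the answer and guards against the missing "ERROR:" line mentioned in the notes:
function countCheckErrors "Extract the error count from a getLastError() string"
  input String s "String returned by getLastError()";
  output String nErrors "Number of errors as a string, \"0\" if none reported";
protected
  Integer i1, i2;
algorithm
  i1 := Modelica.Utilities.Strings.findLast(s, "ERROR:");
  i2 := Modelica.Utilities.Strings.findLast(s, " errors were found");
  if i1 > 0 and i2 > 0 then
    // cut out the text between "ERROR:" and " errors were found", strip blanks
    nErrors := Modelica.Utilities.Strings.replace(
      Modelica.Utilities.Strings.substring(s, i1 + 6, i2), " ", "");
  else
    nErrors := "0"; // the line is absent, i.e. no errors were reported
  end if;
end countCheckErrors;
After b = checkModel([Some Model]), calling countCheckErrors(getLastError()) should return "2" for the example output above.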
The following should handle it:
clearlog(); // To start fresh
Advanced.TranslationInCommandLog=true;
checkModel(...);
savelog(...);
This is mentioned in the Dymola User Manual Volume 1, section "Parameter studies by running Dymola a number of times in “batch mode”", on page 630 or so.
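As a usage sketch, the full sequence in a Dymola script might look like this, where "MyLib.MyModel" and "check.log" are placeholder names of my own:
clearlog();                               // start with an empty command log
Advanced.TranslationInCommandLog = true;  // route check results into the log
checkModel("MyLib.MyModel");              // check only, without translating
savelog("check.log");                     // write the collected log to disk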

Could not infer the matching function for org.apache.pig.builtin.SUM as multiple or none of them fit. Please use an explicit cast

I wanted to compute the sum of a column which contains long-type numbers.
I have tried many possible ways, but the cast error is still not resolved.
My Pig code:
raw_ds = LOAD '/tmp/bimallik/data/part-r-00098' using PigStorage(',') AS (
d1:chararray, d2:chararray, d3:chararray, d4:chararray, d5:chararray,
d6:chararray, d7:chararray, d8:chararray, d9:chararray );
parsed_ds = FOREACH raw_ds GENERATE d8 as inBytes:long, d9 as outBytes:long;
X = FOREACH parsed_ds GENERATE (long)SUM(parsed_ds.inBytes) AS inBytes;
dump X;
Error snapshot:
2015-11-20 02:16:26,631 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1045:
Could not infer the matching function for org.apache.pig.builtin.SUM as multiple or none of them fit. Please use an explicit cast.
Details at logfile: /users/bimallik/pig_1448014584395.log
2015-11-20 02:17:03,629 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
@ManjunathBallur Thanks for the input.
I have now changed my code as below:
<..same as before ...>
A = GROUP parsed_ds by inBytes;
X = FOREACH A GENERATE SUM(parsed_ds.inBytes) as h;
DUMP X;
Now A generates a bag of rows sharing the same inBytes value, and X gives one sum per bag, which again consists of multiple rows, whereas I need one single summation value.
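If one overall total is what is needed, grouping the entire relation with Pig's standard GROUP ... ALL collapses everything into a single bag; here is a hedged sketch using the question's field names, with explicit (long) casts so SUM can resolve its signature on the chararray input:
parsed_ds = FOREACH raw_ds GENERATE (long)d8 AS inBytes, (long)d9 AS outBytes;
all_rows = GROUP parsed_ds ALL;  -- a single group holding every row
X = FOREACH all_rows GENERATE SUM(parsed_ds.inBytes) AS totalInBytes;
DUMP X;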
In local mode (pig -x local) I was getting the same issue.
I had tried all the solutions available on the internet, but nothing seemed to work for me.
I then switched Pig from local mode to MapReduce mode and tried the solution again, and it worked.
In MapReduce mode all the solutions seem to work.

CSV file input not working together with set field value step in Pentaho Kettle

I have a very simple Pentaho Kettle transformation that causes a strange error. It consists of reading a field X from a CSV, adding a field Y, setting Y = X, and finally writing it back to another CSV.
Here you can see the steps and the configuration for them:
You can also download the ktr file from here. The input data is just this:
1
2
3
When I run this transformation, I get this error message:
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Unexpected error
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : org.pentaho.di.core.exception.KettleStepException:
Error writing line
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:273)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.processRow(TextFileOutput.java:195)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
at java.lang.Thread.run(Unknown Source)
Caused by: org.pentaho.di.core.exception.KettleStepException:
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:435)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:249)
... 3 more
Caused by: org.pentaho.di.core.exception.KettleValueException:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.core.row.value.ValueMetaBase.getBinaryString(ValueMetaBase.java:2185)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.formatField(TextFileOutput.java:290)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:392)
... 4 more
All of the above lines start with 2015/09/23 12:51:18 - Text file output.0 -, but I edited that out for brevity. I think the relevant, and confusing, part of the error message is this:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
Some further notes:
If I bypass the Set field value step by using the lower hop instead, the transformation finishes without errors. This leads me to believe that it is the Set field value step that causes the problem.
If I replace the CSV file input with a Data Grid containing the same data (1, 2, 3), everything works just fine.
If I replace the file output step with a dummy step, the transformation finishes without errors. However, if I preview the dummy, it causes a similar error, and the field Y has the value <null> on all three rows.
Before I created this MCVE, I got the error on all sorts of seemingly random steps, even when there was no file output present. So I do not think this is related to the file output.
If I change the format from Number to Integer, nothing changes. But if I change it to String, the transformation finishes without errors, and I get this output:
X;Y
1;[B@49e96951
2;[B@7b016abf
3;[B@1a0760b0
Is this a bug? Am I doing something wrong? How can I make this work?
It's because of lazy conversion. Turn it off. This is behaving exactly as designed, although admittedly the error and the user experience could be improved.
Lazy conversion must not be used when you need to access the field values in your transformation, and accessing the value is exactly what the Set field value step does. The default should probably be off rather than on.
If your field is going directly to a database, then use it and it will be faster.
You can even have "partially lazy" streams, where you use lazy conversion for speed but then use a Select values step to "un-lazify" the fields you want to access, while the remainder stay lazy.
Cunning, huh?