This is my first try at using R to collect data from PubChem. However, I am getting the following error every time, for every CID that I have used.
library("rpubchem")
get.cid(46926545)
Error in do.call(cbind, unlist(Filter(function(x) !is.null(x), cvals), :
second argument must be a list
Can anyone point out what I am doing wrong? The link for the CID is https://pubchem.ncbi.nlm.nih.gov/compound/46926545#section=Top
I'm trying to subtract one time from another in SSRS to work out total working hours. However, with the solutions available here and elsewhere on the internet, I keep getting the error "An unexpected error occurred while compiling expressions. Native compiler return value: '[BC30025] Property missing 'End Property'.'."
I also tried using Lookup: =Lookup(Fields!Date.Value & Fields!Employee_Name.Value, Fields!Date.Value & Fields!Name_or_title.Value, Fields!Check_Out_Time.Value, "DataSet2") - (Fields!Check_In_Time.Value), which produces an error as well.
Can someone guide me on how to get the time difference from two different datasets, please? I'm new to SSRS and SQL.
While using the code below, I'm getting the error message shown underneath it, but I still get the output table.
input(put(serv_to_DT_KEY,8.),yymmdd8.)
between datepart(D.throughdate)
and datepart(intnx('day',d.throughdate,31))
Error: INPUT function reported 'ERROR: Invalid date value' while processing WHERE clause.
ERROR: Limit set by ERRORS= option reached. Further errors for this INPUT function will not be printed.
Could you please help?
It's not a true error, exactly; it's a data error, which SAS will let pass. What it's saying is that the value in serv_to_DT_KEY is not a valid yymmdd8. date for some records. (The rest of the message, about "limit set by ...", is just SAS saying it will only report about 20 or so individual data errors instead of showing you every single one, so your log isn't hopeless.)
To fix this you have several options:
If there is a data issue [ie, all rows should have a valid date], then fix that.
If it's okay that some rows don't have a valid date (for example, they might have a missing value), you can use the ?? modifier on the informat in input to tell it to ignore these.
Like this:
input(put(serv_to_DT_KEY,8.),??yymmdd8.)
between datepart(D.throughdate)
and datepart(intnx('day',d.throughdate,31))
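If you want to see which values are actually failing, a quick check like the following may help. This is only a sketch: the table name your_table and alias s are placeholders, since the question doesn't show the full query.
proc sql;
  /* list the key values that do not parse as yymmdd8., ignoring blanks */
  select distinct s.serv_to_DT_KEY
  from your_table as s                                   /* placeholder table name */
  where input(put(s.serv_to_DT_KEY, 8.), ?? yymmdd8.) is missing
    and not missing(s.serv_to_DT_KEY);
quit;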
I am trying to load a JSON file from the staging area (S3) into a stage table using the COPY INTO command.
Table:
create or replace TABLE stage_tableA (
RAW_JSON VARIANT NOT NULL
);
Copy Command:
copy into stage_tableA from @stgS3/filename_45.gz file_format = (format_name = 'file_json')
I got the error below when executing the above (sample provided):
SQL Error [100069] [22P02]: Error parsing JSON: document is too large, max size 16777216 bytes If you would like to continue loading
when an error is encountered, use other values such as 'SKIP_FILE' or
'CONTINUE' for the ON_ERROR option. For more information on loading
options, please run 'info loading_data' in a SQL client.
When I set "ON_ERROR=CONTINUE", records were partially loaded, i.e. up to the record that exceeds the max size, but no records after the error record were loaded.
Was "ON_ERROR=CONTINUE" supposed to skip only the record that exceeds the max size and load the records before and after it?
Yes, the ON_ERROR=CONTINUE skips the offending line and continues to load the rest of the file.
To help us provide more insight, can you answer the following:
How many records are in your file?
How many got loaded?
At what line was the error first encountered?
You can find this information using the COPY_HISTORY() table function
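For example (a sketch only; the table name matches the DDL above and the one-hour look-back window is arbitrary):
select *
  from table(information_schema.copy_history(
         table_name => 'STAGE_TABLEA',
         start_time => dateadd(hours, -1, current_timestamp())));
The first-error columns in the output should show where the oversized record was hit.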
Try setting the option strip_outer_array = true on the file format and attempt the load again.
The considerations for loading large semi-structured data are documented in the article below:
https://docs.snowflake.com/en/user-guide/semistructured-considerations.html
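As a sketch, here is the same copy command with an inline JSON file format instead of the named one (the stage and file name are carried over from the question):
-- strip_outer_array removes the outer [ ... ] and loads each element as its own row,
-- so each row only has to fit within the 16 MB VARIANT limit
copy into stage_tableA
  from @stgS3/filename_45.gz
  file_format = (type = 'json' strip_outer_array = true);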
I partially agree with Chris. The ON_ERROR=CONTINUE option only helps if there is in fact more than one JSON object in the file. If it's one massive object, then with ON_ERROR=CONTINUE you simply won't get an error, but you won't get the record loaded either.
If you know each JSON record is smaller than 16 MB, then definitely try strip_outer_array = true. Also, if your JSON has a lot of nulls ("NULL") as values, use STRIP_NULL_VALUES = TRUE, as this will slim your payload as well. Hope that helps.
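If you'd rather keep the named format from the question, both options can be set there instead. This is only a sketch; the actual definition of file_json isn't shown in the question, so it just illustrates the two options being discussed:
create or replace file format file_json
  type = json
  strip_outer_array = true   -- load each element of the outer array as its own row
  strip_null_values = true;  -- drop fields whose value is null, to slim each record
The original copy into ... format_name = 'file_json' command can then be re-run unchanged.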
I am trying to perform an operation in SQL Server using LOG() and a few fields from my table. When I try to run the query, I get the error
'An invalid floating point operation occurred.'
I tried the solution provided in "An invalid floating point operation occurred" but was not able to get it to work. This is the original code that I used:
SELECT
MIN(ROUND((ROUND(Log(amt1 / ABS(rate * amt2 - amt1)),5) / (round(LOG(1 + rate),10))),0))
FROM table
WHERE amt2 - amt1 > 0
I expected the output to show a value. I can run this for one specific field, but as I expand it to the whole data set, the error occurs.
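For what it's worth, LOG() raises this error when its argument is zero or negative, so one way to narrow it down is to list the rows that would feed it such an argument. This is a sketch only, reusing the column names and the placeholder table name from the query above:
select *
from [table]                       -- placeholder name from the question (bracketed because TABLE is reserved)
where amt2 - amt1 > 0
  and (amt1 <= 0                   -- makes the first LOG argument non-positive
       or rate * amt2 - amt1 = 0   -- ABS(...) = 0 would also cause a divide by zero
       or 1 + rate <= 0);          -- makes LOG(1 + rate) invalid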
I have a very simple Pentaho Kettle transformation that causes a strange error. It consists of reading a field X from a CSV, adding a field Y, setting Y=X, and finally writing it back to another CSV.
Here you can see the steps and the configuration for them:
You can also download the ktr file from here. The input data is just this:
1
2
3
When I run this transformation, I get this error message:
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Unexpected error
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : org.pentaho.di.core.exception.KettleStepException:
Error writing line
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:273)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.processRow(TextFileOutput.java:195)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
at java.lang.Thread.run(Unknown Source)
Caused by: org.pentaho.di.core.exception.KettleStepException:
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:435)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:249)
... 3 more
Caused by: org.pentaho.di.core.exception.KettleValueException:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.core.row.value.ValueMetaBase.getBinaryString(ValueMetaBase.java:2185)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.formatField(TextFileOutput.java:290)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:392)
... 4 more
All of the above lines start with 2015/09/23 12:51:18 - Text file output.0 -, but I edited it out for brevity. I think the relevant, and confusing, part of the error message is this:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
Some further notes:
If I bypass the set field value step by using the lower hop instead, the transformation finishes without errors. This leads me to believe that it is the set field value step that causes the problem.
If I replace the CSV file input with a data grid containing the same data (1, 2, 3), everything works just fine.
If I replace the file output step with a dummy, the transformation finishes without errors. However, if I preview the dummy, it causes a similar error and the field Y has the value <null> on all three rows.
Before I created this MCVE I got the error on all sorts of seemingly random steps, even when there was no file output present. So I do not think this is related to the file output.
If I change the format from Number to Integer, nothing changes. But if I change it to String, the transformation finishes without errors, and I get this output:
X;Y
1;[B#49e96951
2;[B#7b016abf
3;[B#1a0760b0
Is this a bug? Am I doing something wrong? How can I make this work?
It's because of lazy conversion. Turn it off. This is behaving exactly as designed - although admittedly the error and user experience could be improved.
Lazy conversion must not be used when you need to access the field value in your transformation, and that's exactly what your Set field value step does. The default should probably be off rather than on.
If your field is going directly to a database, then use it and it will be faster.
You can even have "partially lazy" streams, where you use lazy conversion for speed but then use a Select values step to "un-lazify" the fields you need to access, while the remainder stay lazy.
Cunning huh?