Importing dataset from Stata - variables

I am very new to RStudio.
I have imported a dataset from Stata.
Even though the variables appear in RStudio, R does not recognise any of them.
The following has happened to me:
time = finaltime
Error: object 'finaltime' not found
event= GSTATUS_DTHCNS_KI
Error: object 'GSTATUS_DTHCNS_KI' not found
X=cbind (sex BMI_TCR COLD_ISCH_KI SERUM_CREAT finalpra AGE STEROIDS_MAINT induction DIAB dgf timeondialysis DGN_TCR ETHCAT KDPI finalcmv)
Error: unexpected symbol in "X=cbind (sex BMI_TCR"
Y=cbind ( time, event)
Error in cbind(time, event) : object 'event' not found
Coxph= coxph(Surv (time, event)~X, method “Breslow”)
Error: unexpected input in "Coxph= coxph(Surv (time, event)~X, method “"
When I ran ls(), it gave me only the name of the dataset:
ls()
"data_for_analysis"
When I clicked on the name of the data file in the Environment window, it finally gave me a breakdown of the variables in the dataset. However, there was a dollar sign before the name of each variable.
Would the dollar sign affect all my syntax and the variable names?
How can I use the variable names to write my syntax?
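For what it's worth, here is a minimal, untested sketch of how those calls might look once the data frame name is prefixed. It assumes the dataset is called data_for_analysis, as ls() showed, and that coxph comes from the survival package:
library(survival)

# The variables live inside the data frame, so refer to them as dataset$column.
time  <- data_for_analysis$finaltime
event <- data_for_analysis$GSTATUS_DTHCNS_KI

# cbind() needs commas between its arguments.
X <- cbind(data_for_analysis$sex, data_for_analysis$BMI_TCR,
           data_for_analysis$COLD_ISCH_KI, data_for_analysis$SERUM_CREAT)
# ...and so on for the remaining covariates.

# method must be a quoted string written with straight quotes.
fit <- coxph(Surv(time, event) ~ X, method = "breslow")
summary(fit)
Alternatively, most modelling functions take a data argument, e.g. coxph(Surv(finaltime, GSTATUS_DTHCNS_KI) ~ sex + BMI_TCR, data = data_for_analysis, method = "breslow"), which lets you use the bare column names without the dollar-sign prefix.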

Related

invalid datetime format

I have a question about PowerCenter message code RR-4035. I have a mapping in which I am using a SQL override query; this error occurs in the SQL override. The mapping is failing with the error:
'[IBM][CLI DRIVER] CLI0113E SQLSTATE 22007: An invalid datetime format
was detected, that is an invalid string representation or value was
specified'.
Database driver error:
Function name: Fetch
SQL STMNT:
select s.employee_record_id, s.employee_id, s.record_origin,
       cnt.employee_contract_id, cnt.employee_contract_efctv_dt,
       cnt.employee_contract_term_dt, club.employee_club
from employee_main_info s
inner join
    (select employee_id, record_origin, employee_contract_term_dt, employee_contract_efctv_dt
     from employee_perm
     union
     select employee_id, record_origin, employee_contract_term_dt, employee_contract_efctv_dt
     from employee_temp
    ) cnt on s.employee_id = cnt.employee_id,
    employee_club_data club
where club.employee_id = s.employee_id
  and (cnt.employee_contract_efctv_dt <= sysdate or cnt.employee_contract_efctv_dt < '2016-01-01')
  and s.employee_record_term_dt > sysdate;
native error code= -99999
I have tried everything; my previous mappings have run fine with the same datetime formats, but this one is failing. One thing I have noticed is that if I remove all the transformations between the source qualifier and the target, the mapping succeeds and the data gets loaded to the target, but as soon as I put any lookups or expressions between the source qualifier and the target (except a pass-through expression), the mapping fails again.
Any suggestion or help regarding this is appreciated.
We've seen this error occur when SELECTing from a table with a timestamp column via the IBM Data Server ODBC/CLI driver. It only happened on one Windows machine, and we were able to make the error disappear by changing the main regional setting from Israel to USA.
While not tested yet, it may be that the IBM DB2 ODBC configuration option DateTimeStringFormat or the attributes SQL_ATTR_DATE_FMT and SQL_ATTR_TIME_FMT can be used to force a specific format (such as JIS). See https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.apdv.cli.doc/doc/r0011525.html
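Untested sketch of a db2cli.ini entry using that keyword; SAMPLE is a placeholder for the actual database alias, and JIS forces yyyy-mm-dd date and hh:mm:ss time strings:
[SAMPLE]
DateTimeStringFormat=JIS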

CSV file input not working together with set field value step in Pentaho Kettle

I have a very simple Pentaho Kettle transformation that causes a strange error. It consists of reading a field X from a CSV file, adding a field Y, setting Y=X, and finally writing it back to another CSV file.
The input data is just this:
1
2
3
When I run this transformation, I get this error message:
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Unexpected error
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : org.pentaho.di.core.exception.KettleStepException:
Error writing line
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:273)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.processRow(TextFileOutput.java:195)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
at java.lang.Thread.run(Unknown Source)
Caused by: org.pentaho.di.core.exception.KettleStepException:
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:435)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:249)
... 3 more
Caused by: org.pentaho.di.core.exception.KettleValueException:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.core.row.value.ValueMetaBase.getBinaryString(ValueMetaBase.java:2185)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.formatField(TextFileOutput.java:290)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:392)
... 4 more
All of the above lines start with 2015/09/23 12:51:18 - Text file output.0 -, but I edited that out for brevity. I think the relevant, and confusing, part of the error message is this:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
Some further notes:
If I bypass the Set field value step by using the lower hop instead, the transformation finishes without errors. This leads me to believe that it is the Set field value step that causes the problem.
If I replace the CSV file input with a Data Grid step containing the same data (1, 2, 3), everything works just fine.
If I replace the file output step with a Dummy step, the transformation finishes without errors. However, if I preview the dummy, it causes a similar error, and the field Y has the value <null> on all three rows.
Before I created this MCVE, I got the error on all sorts of seemingly random steps, even when there was no file output present. So I do not think this is related to the file output.
If I change the format from Number to Integer, nothing changes. But if I change it to String, the transformation finishes without errors, and I get this output:
X;Y
1;[B@49e96951
2;[B@7b016abf
3;[B@1a0760b0
Is this a bug? Am I doing something wrong? How can I make this work?
It's because of lazy conversion. Turn it off. This is behaving exactly as designed, although admittedly the error and the user experience could be improved.
Lazy conversion must not be used when you need to access the field value in your transformation, and accessing the value is exactly what the Set field value step does. The default should probably be off rather than on.
If your field is going directly to a database, then use it and it will be faster.
You can even have "partially lazy" streams, where you use lazy conversion for speed but then use a Select values step to "un-lazify" the fields you want to access, whilst the remainder stay lazy.
Cunning, huh?

NuPIC OPF Runtime error getOutputData unknown output categoriesOut

I'm trying to run a TemporalClassification model using the OPF to recognize patterns in a stream. I've adjusted the model params so it has two sensor inputs: a ScalarEncoder and an SDRCategoryEncoder. The latter is marked as classifierOnly, and it is also set as the predictedField in the inferences.
When trying to feed the model with input data, I get
RuntimeError: getOutputData unknown output 'categoriesOut' on region Classifier.
A NontemporalClassification model (with only the inferenceType changed) runs without such an error.
I've found 6 occurrences of categoriesOut in the nupic code: https://github.com/numenta/nupic/search?utf8=%E2%9C%93&q=categoriesOut
The error arises in nupic/frameworks/opf/clamodel.py at line 558:
classificationDist = classifier.getOutputData('categoriesOut')
It seems that the classifier region in the network is not prepared properly to output data.
Can anyone explain why the classifier region doesn't have 'categoriesOut'? I guess there's a misconfiguration in my model params, but there were no errors or warnings during the initialization of the model. Are there any mandatory parameters and assignments (beyond those noted in the NuPIC documentation) necessary for a TemporalClassification model to run?
There are several types of classifier regions in NuPIC. You can find them in the nupic/regions folder. I've checked the sources and found that 'categoriesOut' is in the outputs dict of KNNClassifierRegion:
https://github.com/numenta/nupic/blob/469f6372082e95dd5d2a96181b745ba36d2e7a8a/nupic/regions/KNNClassifierRegion.py
outputs=dict(
    categoriesOut=dict(
        description='A vector representing, for each category '
                    'index, the likelihood that the input to the node belongs '
                    'to that category based on the number of neighbors of '
                    'that category that are among the nearest K.',
        dataType='Real32',
        count=0,
        regionLevel=True,
        isDefaultOutput=True),
Ensure you use KNNClassifierRegion when configuring your TemporalClassification model. The samples for NontemporalClassification use CLAClassifier, but CLAClassifierRegion has no categoriesOut in its outputs, and the error described in your question will arise if you keep
'regionName' : 'CLAClassifierRegion'
in a TemporalClassification model.
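A minimal, hypothetical excerpt of the classifier section of the model params; only regionName is the point here, keep the rest of your classifier settings as they are:
'clParams': {
    # Use the KNN region so the network exposes the 'categoriesOut' output.
    'regionName': 'KNNClassifierRegion',
    # ... remaining classifier parameters unchanged
},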

Error: Test object id not in the object map: dmTablePopupMenu2 in an RFT script

Exception occurred during playback of script [Firewall.ASDMDcerpcInspectMap] [CRFCN0019E: RationalTestScriptException on line 150 of script Firewall.ASDMDcerpcInspectMap - com.rational.test.ft.ObjectNotInMapException: CRFCN0763E: Test object id not in the object map: dmTablePopupMenu2.].
I am using IBM Rational Functional Tester version 8.3.0.1, and I found the above exception in a few of my scripts. I cannot see any error in the script for objects that are present in the script but missing from the object map. Can anybody tell me why I am facing this problem and how I can fix it?
Thanks in advance.
This error gets thrown when the actual object no longer exists in (most likely, has been deleted from) the object map while the script still has a reference to that object.
As per the error message above, could you locate the code on line 150 of the script ASDMDcerpcInspectMap and then try to track that object in the object map?
So if line 150 says button123().click();, then in the script explorer you should have an object by the name button123, which upon double-click should bring up the object map with button123() selected.
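As a purely hypothetical illustration using the object named in your exception, line 150 probably contains a call like this:
// RFT scripts reference mapped objects through generated helper methods;
// playback throws ObjectNotInMapException when the map entry for the id is gone.
dmTablePopupMenu2().click();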
I suspect button123 is missing from the object map (deleted, most likely).
Try to re-add that object to the object map (by using Test Object -> Insert Test Object from the object map), then right-click on that object in the object map and select "Add to script"; that should take care of it.

SQLERRM-like function for other error types

This is a continuation of this question.
My question: I'm looking for a function like SQLERRM which will give me a description for all Oracle error codes.
From this website, I found this list of Oracle error types:
AMD, AUD, CLS, DBV, DGM, DRG, EXP, IMG, IMP, KUP, LCD, LFI, LPX, LRM,
LSX, NCR, NID, NMP, NNC, NNF, NNL, NNO, NPL, NZE, O2F, O2I, O2U, OCI,
ORA-CODE, PCB, PCC, PLS, PLW, PRO, QSM, RMA, SQL, TNS, UDE, UDI, VID
Am I misunderstanding something or is this possible?
Something like
SQL> !oerr ora 04043
04043, 00000, "object %s does not exist"
// *Cause: An object name was specified that was not recognized by the system.
// There are several possible causes:
// - An invalid name for a table, view, sequence, procedure, function,
// package, or package body was entered. Since the system could not
// recognize the invalid name, it responded with the message that the
// named object does not exist.
// - An attempt was made to rename an index or a cluster, or some
// other object that cannot be renamed.
// *Action: Check the spelling of the named object and rerun the code. (Valid
// names of tables, views, functions, etc. can be listed by querying
// the data dictionary.)
oerr is the error lookup utility in Oracle.
Usage: oerr facility error
facility is any of the error types like ora, amd, etc., and error is the code. But you need to make sure that you have read access to the installed Oracle directories, since oerr looks the message text up in the files shipped under the Oracle home.
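As a hypothetical usage example (output abridged, and the exact text depends on your Oracle version), looking up a TNS error from the shell could look like this:
$ oerr tns 12154
12154, 00000, "TNS:could not resolve the connect identifier specified"
// *Cause: A connection to a database or other service was requested using
//         a connect identifier, and the connect identifier specified could not
//         be resolved ...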