I've created a model in R, published it to a SQL Server table, and validated the model by calling it from SQL Server. However, I am failing to execute the model's stored procedure in SQL Server.
I get this error message:
Msg 39004, Level 16, State 20, Line 2
A 'R' script error occurred during execution of 'sp_execute_external_script' with HRESULT 0x80004004.
Msg 39019, Level 16, State 2, Line 2
An external script error occurred:
Error in unserialize(rx_model) : read error
Calls: source -> withVisible -> eval -> eval -> unserialize
Error in execution. Check the output for more information.
Error in eval(expr, envir, enclos) :
Error in execution. Check the output for more information.
Calls: source -> withVisible -> eval -> eval -> .Call
Execution halted
I tried this:
model <- unserialize(rx_model);
or
model <- unserialize(as.raw(rx_model));
and also tried:
model = unserialize(rx_model);
I still get the same error.
I'm new to R/ML in SQL Server; any help would be appreciated.
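A "read error" from unserialize() usually means the raw bytes did not round-trip intact; a common culprit in this setup is storing the model in a varbinary(n) column that silently truncates it, rather than varbinary(max). That failure mode can be sketched in Python, with pickle standing in for R's serialize()/unserialize() (the model object here is a made-up stand-in):

```python
# Sketch only: pickle plays the role of R's serialize()/unserialize().
import pickle

model = {"coef": [0.5, 1.5], "intercept": 0.1}  # stand-in for a trained model
blob = pickle.dumps(model)     # serialize to raw bytes, like serialize(model, NULL)

restored = pickle.loads(blob)  # intact bytes round-trip cleanly
assert restored == model

# A truncated blob reproduces the "read error" failure mode:
try:
    pickle.loads(blob[: len(blob) // 2])
except Exception as exc:
    print("truncated blob ->", type(exc).__name__)
```

If this is the cause, checking that the model column is varbinary(max) and that the query feeding sp_execute_external_script returns exactly one row would be the first things to verify.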
I am executing dbt run -s model_name on the CLI and the task completes successfully. However, when I run the exact same command on dbt Cloud, I get this error:
Syntax or semantic analysis error thrown in server while executing query.
Error message from server: org.apache.hive.service.cli.HiveSQLException:
Error running query: org.apache.spark.sql.AnalysisException: cannot
resolve '`pv.meta.uuid`' given input columns: []; line 6 pos 4;
\n'Project ['pv.meta.uuid AS page_view_uuid#249595,
'pv.user.visitorCookieId AS (80) (SQLExecDirectW)")
It looks like it fails to recognize the 'pv.meta.uuid' syntax, which extracts data from a JSON format. It is not clear to me what is going on. Any thoughts? Thank you!
I am a newbie to NS3. I want to understand the execution status of handover in the RandomWalk2d module and visualize it. The default is two UEs and two eNBs, but errors always occur during execution. Can anyone help me solve the problem?
This is my code link:https://drive.google.com/file/d/163NQOyvs0bTh2J4P9_vpS4Y7iqocB3HJ/view?usp=sharing
When I execute the command ./waf --run scratch/lte_handover --visualize, the following error appears:
../scratch/lte_handover.cc: In function 'int main(int, char**)':
../scratch/lte_handover.cc:296:78: error: expected ')' before ';' token
"Bounds",RectangleValue (Rectangle (0,2000,0,2000)));
^
Build failed
->task in 'lte_handover' failed with exit status 1 (run with -v to display more information)
Following the instructions, I entered the command ./waf --run scratch/lte_handover -v, and the following information appeared:
Several tasks use the same identifier. Please check the information on
https://waf.io/apidocs/Task.html?highlight=uid#waflib.Task.Task.uid
object 'SuidBuild_task'(
{task 139759060979784: SuidBuild_task -> }) defined in 'tap-creator'
object 'SuidBuild_task'(
{task 139759060980008: SuidBuild_task -> }) defined in 'tap-creator'
object 'SuidBuild_task'(
{task 139759065638504: SuidBuild_task -> }) defined in 'tap-creator'
It seems you have an extra ) on that line. You never close the SetPositionAllocator call, because you commented out all of its remaining lines:
ueMobility.SetPositionAllocator ("ns3::RandomRectanglePositionAllocator", // <-- this call still needs its closing );
ueMobility.SetMobilityModel ("ns3::RandomWalk2dMobilityModel", "Bounds", RectangleValue (Rectangle (0,2000,0,2000)));
There's a situation where my database (with memory-optimized tables) has gone into the 'Recovery Pending' state. I tried to put it into
emergency mode --> single-user mode --> DBCC CHECKDB(<DBName>) --> set it online --> multi-user mode.
But I am facing the below error message when setting it ONLINE.
Msg 5181, Level 16, State 5, Line 9
Could not restart database "DBName". Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 9
ALTER DATABASE statement failed.
Msg 41316, Level 23, State 3, Line 3395
Restore operation failed for database 'DBName' with internal error code '0x0000000a'
I checked the SQL Server error log file, and it contains the message below.
[ERROR] Database ID: [6] ''. Failed to load XTP checkpoint. Error
code: 0x88000001.
(d:\b\s2\sources\sql\ntdbms\hekaton\sqlhost\sqlmin\hkhostdb.cpp : 5288
- 'HkHostRecoverDatabaseHelper::ReportAndRaiseFailure')
Also, rebuilding the log file is not supported for memory-optimized databases. Is anyone familiar with this error?
I have 4 files A, B, C, D under the directory /user/bizlog/cpc on HDFS, and the records look like this:
87465422^C376832^C27786^C21161214^Ckey
Here is my pig script:
cpc_all = load '/user/bizlog/cpc' using PigStorage('\u0003') as (cpcid, accountid, cpcplanid, cpcgrpid, key);
cpc = foreach cpc_all generate accountid, key;
account_group = group cpc by accountid;
account_sort = order account_group by group;
account_key = foreach account_sort generate group, BagToTuple(cpc.key);
store account_key into 'last' using PigStorage('\u0003');
It should produce results such as:
376832^Ckey1^Ckey2
The above script is supposed to process all 4 files, but I get this error:
Backend error message
---------------------
org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: account_key: New For Each(false,false)[bag] - scope-18 Operator Key: scope-18): org.apache.pig.backend.executionengine.ExecException: ERROR 0: Error while executing ForEach at []
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:289)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:242)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:464)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:432)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:412)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.
Pig Stack Trace
---------------
ERROR 0: Exception while executing (Name: account_key: New For Each(false,false)[bag] - scope-18 Operator Key: scope-18): org.apache.pig.backend.executionengine.ExecException: ERROR 0: Error while executing ForEach at []
org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: account_key: New For Each(false,false)[bag] - scope-18 Operator Key: scope-18): org.apache.pig.backend.executionengine.ExecException: ERROR 0: Error while executing ForEach at []
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:289)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:242)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:464)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:432)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:412)
================================================================================
Oddly, if I load one single file, such as load '/user/bizlog/cpc/A', the script succeeds.
If I load each file first and then union them, it works fine too.
If I put the sort step last, the error goes away as well.
The Hadoop version is 0.20.2 and the Pig version is 0.12.1; any help will be appreciated.
As mentioned in the comments:
I put the sort step at the last and the error goes away
Though I did not find much on the topic, it appears that Pig does not like to rearrange the grouped relation itself.
As such, the 'solution' is to rearrange the output generated from the group, instead of ordering the grouped relation itself.
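As a rough illustration of that reordering (plain Python standing in for the Pig pipeline, with made-up sample rows), grouping first and sorting the generated output last looks like this:

```python
# Illustrative only: Python stand-in for the Pig script, with made-up rows.
# Group by accountid first, then sort the *generated output* at the end,
# rather than ordering the grouped relation before the FOREACH.
from collections import defaultdict

rows = [  # (cpcid, accountid, cpcplanid, cpcgrpid, key) -- sample data
    (87465422, 376832, 27786, 21161214, "key1"),
    (87465423, 376832, 27786, 21161214, "key2"),
    (87465424, 100001, 27786, 21161214, "key3"),
]

by_account = defaultdict(list)          # GROUP cpc BY accountid
for _cpcid, accountid, _plan, _grp, key in rows:
    by_account[accountid].append(key)

# ORDER ... BY group applied last, to the generated records:
account_key = sorted((acct, keys) for acct, keys in by_account.items())
# -> [(100001, ['key3']), (376832, ['key1', 'key2'])]
```

In Pig terms, this corresponds to running the ORDER on the relation produced by the final FOREACH instead of on account_group.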
I have one .sql file that I execute using Ant. When I run it with the sql tag I receive a different output than when I call "sqlcmd" via the exec tag.
sql tag output:
[sql] Executing resource: C:\SqlTesting\TestScriptDependencies\Executor.sql
[sql] Failed to execute: Use Library Exec CallSelectSP
[sql] com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name'Libraty.dbo.libraryDocumentType'.
[sql] 0 of 1 SQL statements executed successfully
exec tag output:
[exec] First SP
[exec] Msg 208, Level 16, State 1, Server MyPC-PC, Procedure getFirstDocumentType, Line 3
[exec] Invalid object name 'Libraty.dbo.libraryDocumentType'.
[exec] Second SP
[exec] Msg 208, Level 16, State 1, Server MyPC-PC, Procedure badSP, Line 3
[exec] Invalid object name 'Libraty.dbo.libraryDocumentType'.
And this is the .sql file.
Print 'First SP'
Exec getFirstDocumentType
Print 'Second SP'
Exec badSP
Go
I wonder if there is a way for the sql tag to reproduce the same output as the exec tag.
Thanks.
Looks like the first one is submitting the whole script as a single batch via JDBC, whereas the second appears to be sending each SQL statement separately via sqlcmd. Hence the print statements succeed (and produce synchronized output, which is not always guaranteed with print; raiserror(str, 10, 1) with nowait; is the only guarantee of timely messaging) and both stored procedure calls are attempted, each producing its own SQL error.
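If the goal is to make the sql task behave more like sqlcmd (one batch per GO, and keep going after an error), Ant's sql task has attributes for this. A sketch, assuming the connection details below are placeholders for your environment:

```xml
<!-- Sketch only: driver/url/credentials are placeholders. -->
<sql driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
     url="jdbc:sqlserver://localhost;databaseName=Library"
     userid="user" password="secret"
     src="C:\SqlTesting\TestScriptDependencies\Executor.sql"
     delimiter="GO" delimitertype="row"
     onerror="continue"
     print="true"/>
```

delimitertype="row" splits the script on lines consisting of GO (like sqlcmd's batch separator), onerror="continue" lets the remaining batches run after one fails, and print="true" echoes result sets.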