How can I insert the result of a stored procedure into an existing table in SQL Server? - sql-server-2016

I need to save the result of my stored procedure into a predefined table in SQL Server. I have tested different code for that, but none of it worked.
Could you please help me?
This is my code:
CREATE PROCEDURE sp_RDA_Prd_naiveBayes
AS
BEGIN
    EXEC sp_execute_external_script
        @language = N'R',
        @script = N'
            library(e1071);
            naivemodel <- naiveBayes(datafile[,3:12], datafile[,13]);
            trained_model <- data.frame(payload = as.raw(serialize(naivemodel, connection = NULL)));
        ',
        @input_data_1 = N'SELECT * FROM R_CT',
        @input_data_1_name = N'datafile',
        @output_data_1_name = N'trained_model'
    WITH RESULT SETS ((model VARBINARY(MAX)));
END;

INSERT INTO my_models (model) EXEC sp_RDA_Prd_naiveBayes;
UPDATE my_models SET model_name = 'e1071 - Naive Bayes' WHERE model_name = 'default model';
SELECT * FROM my_models;
But I got this error:
Column name or number of supplied values does not match table definition.
This is the defined table:
create table my_models (
model_name varchar(30) not null default('default model') primary key,
model varbinary(max) not null
);

I would suggest
INSERT INTO my_models (model) VALUES (model)
The first 'model' is your column name; the second 'model' is your actual value.
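To make that "actual value" concrete, here is a minimal sketch (assuming the procedure really does return exactly one VARBINARY(MAX) column, as its WITH RESULT SETS clause declares): route the output through a table variable, then insert only the model column so model_name falls back to its default. @captured is a hypothetical name.
-- Sketch: capture the procedure's single-column result set first.
DECLARE @captured TABLE (model VARBINARY(MAX));
INSERT INTO @captured (model)
EXEC sp_RDA_Prd_naiveBayes;
-- model_name takes its default 'default model' here, which the
-- later UPDATE can then rename.
INSERT INTO my_models (model)
SELECT model FROM @captured;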

Related

Is there a way to populate a column based on conditions stored as rows in a table

I am working on a project with a C# front end that will be used to select a file for importing into a SQL Server database. The table will have an additional column called 'RecommendedAction' (tinyint, values 0-5 only).
I would like SQL to fill in the 'RecommendedAction' column based on criteria in a different table.
Is there a way that when SQL is importing (SSIS or pure T-SQL) it could read the values of a table and fill in the action based on the criteria? Or is this something that should be done in the C# front end?
EDIT
SQL table structure for imported data (with additional column)
Create Table ImportedData (
Column1 INT Identity,
Column2 VARCHAR(10) NOT NULL,
Column3 CHAR(6) NOT NULL,
RecommendedAction TINYINT NOT NULL
)
Table structure of recommended action criteria
Create Table RecommendedActions(
ID INT Identity,
ActionID TINYINT NOT NULL, --value to put in the RecommendedAction column if criteria is a match
CriteriaColumn VARCHAR(255) NOT NULL --Criteria to match against the records
)
Example records for RecommendedActions:
ID  ActionID  CriteriaColumn
1   2         'Column2 LIKE ''6%'''
2   3         'Column2 LIKE ''4%'''
Now when a new set of data is imported, if Column2 has a value of '6032' it would fill in a RecommendedAction of 2
Many ways exist. For example, you can insert into the tb table a value selected from the ta table according to your criteria.
Example setup
create table ta(
Id int,
val int);
insert into ta(ID, val) values
(1, 30)
,(2, 29)
,(3, 28)
,(4, 27)
,(5, 26);
create table tb
(Id int,
ref int);
Example insert
-- parameters
declare @p1 int = 1,
@p2 int = 27;
-- parameterized INSERT
insert tb(Id, ref)
values(@p1, (select ta.id from ta where ta.val = @p2));
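The same insert can also be written as an INSERT ... SELECT, which is arguably more idiomatic; note the subtle difference when nothing matches.
-- Equivalent sketch: if no ta row matches @p2, nothing is inserted at all,
-- whereas the VALUES form above inserts a row with ref = NULL.
insert tb(Id, ref)
select @p1, ta.id
from ta
where ta.val = @p2;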
The stored procedure below will do the job. It derives the action value from the Column2 parameter and inserts the row into the ImportedData table. You can execute this stored procedure from your C# code with the required parameters. I added sample EXEC statements to test the query.
Sample data inserted to the RecommendedActions Table:
INSERT INTO RecommendedActions
VALUES
(2, 'Column2 LIKE ''6%''')
,(3, 'Column2 LIKE ''4%''')
Stored Procedure Implementation :
CREATE PROCEDURE Insert_ImportedData(
@Column2 AS VARCHAR(10)
,@Column3 AS CHAR(6)
)
AS
BEGIN
DECLARE @RecommendedAction AS TINYINT
SELECT @RecommendedAction = ActionID
FROM RecommendedActions
WHERE SUBSTRING(CriteriaColumn, 15, 1) = LEFT(@Column2, 1)
INSERT INTO ImportedData (Column2, Column3, RecommendedAction)
VALUES (@Column2, @Column3, @RecommendedAction)
END
GO
GO
These are the EXEC statements for the above stored procedure:
EXEC Insert_ImportedData '43258' , 'ATT'
EXEC Insert_ImportedData '63258' , 'AOT'
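If the data is already loaded in bulk (for example via SSIS or BULK INSERT), you could instead fill the column in one set-based pass. This is only a sketch, and it leans on the same assumption the procedure above makes: every criterion has the fixed form Column2 LIKE 'X%', with X at character 15 of CriteriaColumn.
-- Hedged sketch: stamp RecommendedAction on all imported rows at once.
-- Rows with no matching criterion are left untouched by this UPDATE,
-- so they would need a default value or a follow-up pass.
UPDATE i
SET i.RecommendedAction = ra.ActionID
FROM ImportedData AS i
INNER JOIN RecommendedActions AS ra
    ON SUBSTRING(ra.CriteriaColumn, 15, 1) = LEFT(i.Column2, 1);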
You can use SQLAlchemy in Python to load your data into a dataframe, then append the dataframe to the SQL table. You can set the dtype for each field in read_csv using a dictionary. Loading data with Python is powerful because the bulk load is fast. Use your C# code to build the CSV file using stream IO, use LINQ for your data-field conditions, then use Python to load the CSV.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(connectionstring)  # connectionstring defined elsewhere
df = pd.read_csv("your_data.csv", header=None)
df.columns = ['field1', 'field2', 'field3']
df.to_sql(name="my_sql_table", con=engine, if_exists='append', index=False)

Generating a unique batch id (SQL Server)

This is possibly two questions in one. Sorry about that, but here goes:
PROBLEM
I am creating a unique batch id every time a user uploads some data to SQL Server. Currently, I do this by looking at the last value of the identity column and adding 1 to it.
The problem arises, as you might have guessed, when multiple users input data at the same time: they would both get the same batch id...
Possible Solution
In order to mitigate this issue, I have come up with this method to generate 3 random letters plus a random number, together with the (last id value + 1):
DECLARE @tmp CHAR(3) = CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65);
SELECT @tmp;
select cast(RAND()*9999 as int)
(1) I am not sure how to concatenate these into one string.
(2) The other question: is there a way to 100% guarantee every user is given a unique batch id every time they submit a request, regardless of how many are doing it simultaneously?
I would really appreciate your input on this.
1 - The concatenation part is very simple; you can do the following:
DECLARE @tmp VARCHAR(10);
SET @tmp = CHAR(CAST(RAND()*26 AS int)+65)
+ CHAR(CAST(RAND()*26 AS int)+65)
+ CHAR(CAST(RAND()*26 AS int)+65)
+ CAST(cast(RAND()*9999 as int) AS VARCHAR(4));
SELECT @tmp;
2 - I would suggest populating a table with the random values you would like to issue to users and then selecting from it, to avoid the race condition.
Create a table called BatchNumbers with two columns, BatchNumber and Used.
Populate the BatchNumber column, with 0 as the default value for the Used column.
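A minimal sketch of that setup (the numbering scheme below is just an illustration; use whatever format your batch ids need):
CREATE TABLE BatchNumbers (
    BatchNumber VARCHAR(10) NOT NULL PRIMARY KEY,
    Used        BIT         NOT NULL DEFAULT (0)
);
-- Pre-populate with, say, 100000 sequential ids: B000001, B000002, ...
INSERT INTO BatchNumbers (BatchNumber)
SELECT 'B' + RIGHT('000000' + CAST(n AS VARCHAR(6)), 6)
FROM (SELECT TOP (100000)
             ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects AS a
      CROSS JOIN sys.all_objects AS b) AS nums;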
Then every time you need a batch number, do the following:
CREATE PROC dbo.usp_Get_BatchNumber
@BatchNumber VARCHAR(10) OUTPUT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @t TABLE (BN VARCHAR(10));
UPDATE TOP (1) BatchNumbers
SET Used = 1
OUTPUT inserted.BatchNumber INTO @t (BN)
WHERE Used = 0;
SELECT @BatchNumber = BN FROM @t;
END
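Calling it then looks like this (once the table runs out of unused numbers, the output parameter simply comes back NULL, so you may want to check for that):
DECLARE @bn VARCHAR(10);
EXEC dbo.usp_Get_BatchNumber @BatchNumber = @bn OUTPUT;
SELECT @bn AS IssuedBatchNumber;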
You need an "Upload" table with a Bigint Identity column for the BatchID, then add a new row for every user upload.
The server will maintain the correct values and prevent collisions.
I would use the built-in function for this:
select newid()
> 240CA878-135E-4176-AE57-0FA83FF74037
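If a GUID is acceptable as the batch id, NEWID() can also be baked into the upload table itself: generate one value per upload and stamp it on every row. A sketch with made-up table and column names:
-- Hypothetical upload table; BatchId defaults to a fresh GUID per row,
-- but for a shared per-batch id, generate it once and pass it in.
CREATE TABLE dbo.Upload (
    UploadId INT IDENTITY(1, 1) PRIMARY KEY,
    BatchId  UNIQUEIDENTIFIER NOT NULL DEFAULT (NEWID()),
    Payload  VARCHAR(MAX) NULL
);
DECLARE @batch UNIQUEIDENTIFIER = NEWID();  -- one id for the whole batch
INSERT INTO dbo.Upload (BatchId, Payload)
VALUES (@batch, 'row 1'),
       (@batch, 'row 2');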
For the first problem, you can either create a variable for your random number as a CHAR(4) and simply concatenate the two, or create it as an INT and then CAST it to VARCHAR while concatenating. Everything concatenated into a string must be a string.
DECLARE @tmp CHAR(3) = CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65);
SELECT @tmp;
DECLARE @randNum VARCHAR(4) = CAST(RAND()*9999 AS INT)
-- OR: DECLARE @randNum INT = CAST(RAND()*9999 AS INT)
SELECT @randNum
DECLARE @batchID VARCHAR(MAX) = @tmp + @randNum
-- OR: DECLARE @batchID VARCHAR(MAX) = @tmp + CAST(@randNum AS VARCHAR)
SELECT @batchID
Try the following:
1)
DECLARE @tmp CHAR(7) = CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65) + cast(cast(RAND()*9999 as int) as varchar(4));
SELECT @tmp;
2) Yes, I think so.
I upvoted Terry Carmen's answer, but from his comments it sounds like he's suggesting something different from what I first thought, so here's a complete example. I think you want a table that has a key defined with the IDENTITY property, which will tell SQL Server that you want unique, sequential values in that column and you want the database to worry about the details of guaranteeing that this is so.
create table dbo.Import
(
-- identity(1, 1) means that SQL Server will automatically assign values for
-- this column when you insert a record, with 1 being the first value
-- assigned and each subsequent value incrementing by 1.
Identifier bigint not null identity(1, 1),
-- This column for illustration only; replace it with whatever data you need
-- to store.
YourStuffHere varchar(max)
);
-- Now simply use any INSERT or MERGE command against dbo.Import, and omit the
-- Identifier column from the list of columns whose values the command supplies.
-- Then you can use the SCOPE_IDENTITY() function or an OUTPUT clause to capture
-- the Identifier value that SQL Server has inserted.
-- Example 1: INSERT with explicit values and OUTPUT.
insert dbo.Import
(YourStuffHere)
output
inserted.Identifier
values
('Example 1');
-- Example 2: INSERT/SELECT with OUTPUT.
insert dbo.Import
(YourStuffHere)
output
inserted.Identifier
select
'Example 2';
-- Example 3: INSERT with SCOPE_IDENTITY().
insert dbo.Import
(YourStuffHere)
values
('Example 3');
select Identifier = convert(bigint, scope_identity());
-- Show table contents.
select * from dbo.Import;
The first INSERT statement above produces the following result:
Identifier
1
The second:
Identifier
2
The SELECT following the third INSERT gives:
Identifier
3
And the final SELECT shows you the contents of the table:
Identifier YourStuffHere
1 Example 1
2 Example 2
3 Example 3
This is the easiest way to go about this as it allows SQL Server to do all the real work for you. Please let me know if I've misunderstood your requirements.

SQL Server truncates variables but throws an error on insert into a table

Consider a table, dummy:
DROP TABLE dummy;
CREATE TABLE dummy( name nvarchar( 200 ));
Try to insert data longer than 200 nvarchar characters (that is, more than 400 bytes). It throws an error (good):
String or binary data would be truncated. The statement has been terminated.
Now try something similar with a variable.
declare @txt nvarchar(200) = 'period:daily?h=2&m=5|period:daily?h=1&m=14|period:daily?h=1&m=16|period:daily?h=1&m=23|period:daily?h=1&m=37|period:daily?h=1&m=17|period:daily?h=1&m=9|period:daily?h=1&m=25|period:daily?h=1&m=28|period:daily?h=1&m=0|period:daily?h=1&m=2|period:daily?h=1&m=52';
select @txt;
Now the select output is (observe that the output is truncated to fit into the variable @txt):
period:daily?h=2&m=5|period:daily?h=1&m=14|period:daily?h=1&m=16|period:daily?h=1&m=23|period:daily?h=1&m=37|period:daily?h=1&m=17|period:daily?h=1&m=9|period:daily?h=1&m=25|period:daily?h=1&m=28|peri
So my question is: why are variable values truncated silently, while table inserts throw an error?
This is potentially an issue, as any stored procedure that uses a variable to hold a value from T1.Column1 and inserts it into T2.Column1 after some checks will silently lose part of the data from T1.Column1.
Shouldn't the logic for handling over-length data be the same in both cases, to keep the behavior consistent?
Example:
DROP TABLE dummy;
CREATE TABLE dummy( name nvarchar( 200 ));
DROP TABLE dummy_realdata;
CREATE TABLE dummy_realdata( name nvarchar( 2048 ));
INSERT INTO dummy_realdata
VALUES( 'period:daily?h=2&m=5|period:daily?h=1&m=14|period:daily?h=1&m=16|period:daily?h=1&m=23|period:daily?h=1&m=37|period:daily?h=1&m=17|period:daily?h=1&m=9|period:daily?h=1&m=25|period:daily?h=1&m=28|period:daily?h=1&m=0|period:daily?h=1&m=2|period:daily?h=1&m=52' );
declare @txt nvarchar(200);
select @txt = name from dummy_realdata ;
insert into dummy values( @txt );
select * from dummy ; -- truncated, hence loss of data!
select * from dummy_realdata ; -- real data
The issue is simple if you look at the table definitions: dummy_realdata.name is nvarchar(2048). Try it vice versa, i.e. insert into dummy and then into dummy_realdata, and you will face the same issue, because the length of the value being inserted is approximately 258 characters but you asked to store only 200.
Try this to get rid of it:
DROP TABLE dummy;
CREATE TABLE dummy( name nvarchar( 2048 ));
DROP TABLE dummy_realdata;
CREATE TABLE dummy_realdata( name nvarchar( 2048 ));
INSERT INTO dummy_realdata
VALUES( 'period:daily?h=2&m=5|period:daily?h=1&m=14|period:daily?h=1&m=16|period:daily?h=1&m=23|period:daily?h=1&m=37|period:daily?h=1&m=17|period:daily?h=1&m=9|period:daily?h=1&m=25|period:daily?h=1&m=28|period:daily?h=1&m=0|period:daily?h=1&m=2|period:daily?h=1&m=52' );
declare @txt nvarchar(2048);
select @txt = name from dummy_realdata ;
insert into dummy values( @txt );
select * from dummy ; -- no longer truncated
select * from dummy_realdata ; -- real data
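If the two tables genuinely need different sizes, one defensive pattern (a sketch, with a hypothetical error number) is to size the variable so it can never truncate, and fail loudly before the insert would:
DECLARE @txt NVARCHAR(MAX);          -- wide enough that the variable never truncates
SELECT @txt = name FROM dummy_realdata;
IF LEN(@txt) > 200                   -- destination is dummy.name nvarchar(200)
BEGIN
    THROW 50001, 'Value would not fit dummy.name (nvarchar(200)).', 1;
END;
INSERT INTO dummy VALUES (@txt);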

DB2 stored procedure w/ parameters

I'm having a hard time producing the correct results from my stored procedure. I'm using a DB2 database, and I have 3 input parameters: division, department, and project. My call statement looks like this:
CALL schema.stored_procedure ('IT', 'MARKETING', 'ONLINE FULFILLMENT')
I need the procedure to return the matching row when the third parameter specifies a project name (as in the example above, 'ONLINE FULFILLMENT'), and to return all rows when the third parameter is 'ALL' (as in the example below):
CALL schema.stored_procedure ('IT', 'MARKETING', 'ALL')
My query below currently returns just the column headers with no rows, and I'm having trouble debugging it. Here is my current stored procedure.
CREATE PROCEDURE schema.stored_procedure
(IN in_DIVISION_NAME VARCHAR(200)
,IN in_DEPARTMENT_NAME VARCHAR(20)
,IN in_PROJECT_NAME VARCHAR(400)
)
DYNAMIC RESULT SETS 1
BEGIN
IF (in_PROJECT_NAME = 'ALL') THEN
BEGIN
DECLARE GLOBAL TEMPORARY TABLE TEMP_DW_1
(DIM_PROJECT_ID INT
,PROJECT_NAME VARCHAR (400)
,DIM_DEPARTMENT_ID INT
,DEPARTMENT_NAME VARCHAR(100)
,DIVISION_NAME VARCHAR(100)
) ON COMMIT DELETE ROWS NOT LOGGED WITH REPLACE;
END;
INSERT INTO SESSION.TEMP_DW_1 (DIM_PROJECT_ID, PROJECT_NAME, DIM_DEPARTMENT_ID,
DEPARTMENT_NAME,DIVISION_NAME)
SELECT DISTINCT DJ.DIM_PROJECT_ID
,PROJECT_NAME
,DIM_DEPARTMENT_ID
,DEPARTMENT_NAME
,DIVISION_NAME
FROM SCHEMA.FACT_TABLE DJ
WHERE DEPARTMENT_NAME = in_DEPARTMENT_NAME
AND DIVISION_NAME = in_DIVISION_NAME;
BEGIN
DECLARE exitCursor CURSOR WITH RETURN FOR
SELECT *
FROM SESSION.TEMP_DW_1;
OPEN exitCursor;
END;
END
EXPECTED RESULTS:
CALL schema.stored_procedure ('IT', 'MARKETING', 'ONLINE FULFILLMENT')
EXPECTED RESULTS:
CALL schema.stored_procedure ('IT', 'MARKETING', 'ALL')
I believe I have solved this by adding an additional IF statement for in_PROJECT_NAME <> 'ALL' and adding a filter to the second query on PROJECT_NAME = in_PROJECT_NAME. There may be an easier way to solve this, but it works:
IF (in_PROJECT_NAME <> 'ALL') THEN
BEGIN
DECLARE GLOBAL TEMPORARY TABLE TEMP_DW_1
(DIM_PROJECT_ID INT
,PROJECT_NAME VARCHAR (400)
,DIM_DEPARTMENT_ID INT
,DEPARTMENT_NAME VARCHAR(100)
,DIVISION_NAME VARCHAR(100)
) ON COMMIT DELETE ROWS NOT LOGGED WITH REPLACE;
END;
INSERT INTO SESSION.TEMP_DW_1 (DIM_PROJECT_ID, PROJECT_NAME, DIM_DEPARTMENT_ID,
DEPARTMENT_NAME ,DIVISION_NAME)
SELECT DISTINCT DJ.DIM_PROJECT_ID
,PROJECT_NAME
,DIM_DEPARTMENT_ID
,DEPARTMENT_NAME
,DIVISION_NAME
FROM SCHEMA.FACT_TABLE DJ
WHERE DEPARTMENT_NAME = in_DEPARTMENT_NAME
AND DIVISION_NAME = in_DIVISION_NAME
AND PROJECT_NAME = in_PROJECT_NAME;
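For what it's worth, a single filter with an OR on the parameter would avoid maintaining two nearly identical branches. A sketch of the idea, reusing the same temp table and parameters as above:
INSERT INTO SESSION.TEMP_DW_1 (DIM_PROJECT_ID, PROJECT_NAME, DIM_DEPARTMENT_ID,
                               DEPARTMENT_NAME, DIVISION_NAME)
SELECT DISTINCT DJ.DIM_PROJECT_ID
      ,DJ.PROJECT_NAME
      ,DJ.DIM_DEPARTMENT_ID
      ,DJ.DEPARTMENT_NAME
      ,DJ.DIVISION_NAME
FROM SCHEMA.FACT_TABLE DJ
WHERE DJ.DEPARTMENT_NAME = in_DEPARTMENT_NAME
  AND DJ.DIVISION_NAME = in_DIVISION_NAME
  AND (in_PROJECT_NAME = 'ALL' OR DJ.PROJECT_NAME = in_PROJECT_NAME);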

Sybase BCP - include Column header

Sybase BCP exports nicely but only includes the data. Is there a way to include column names in the output?
AFAIK it's very difficult to include column names in the bcp output.
Try sqsh, a free isql replacement (http://www.sqsh.org/), which has pipe and redirect features. For example:
1> select * from sysobjects
2> go 2>/dev/null >/tmp/objects.txt
I suppose you can achieve the necessary result that way.
With bcp you can't get the table columns.
You can get them with a query like this:
select c.name from sysobjects o
inner join syscolumns c on o.id = c.id and o.name = 'tablename'
I solved this problem not too long ago with a proc that loops through the table's columns and concatenates them. I removed all the error checking and the procedure wrapper from this example, but it should give you the idea. I then BCP'd the table below out to headers.txt, BCP'd the results to detail.txt, and used DOS copy /b header.txt+detail.txt file.txt to combine the header and detail records. This was all done in a batch script.
The table you will BCP
create table dbo.header_record
(
headers_delimited varchar(5000)
)
Then massage the commands below into a stored proc. Use isql to call this proc before your BCP extracts.
declare
@last_col int,
@curr_col int,
@header_conc varchar(5000),
@table_name varchar(35),
@delim varchar(5),
@delim_size int
select
@header_conc = '',
@table_name = 'dbo.detail_table',
@delim = '~'
set @delim_size = len(@delim)
--
-- create a column list table with identity() ordering so we can work through it
--
create local temporary table col_list
(
col_head int identity
,column_name varchar(50)
) on commit preserve rows
--
-- Delete existing rows in case columns have changed
--
delete from header_record
--
-- insert our column values in the order that they were created
--
insert into col_list (column_name)
select
trim(column_name)
from SYS.SYSCOLUMN --sybase IQ specific, you will need to adjust.
where table_id+100000 = object_id(@table_name) --Sybase IQ 12.7 specific, 15.x will need to be changed.
order by column_id asc
--
-- select the biggest identity in the col_list table
--
select @last_col = max(col_head)
from col_list
--
-- Start at column 1
--
set @curr_col = 1
--
-- while the current column is less than or equal to the last column,
-- keep processing
--
while (@curr_col <= @last_col)
BEGIN
select
@header_conc =
@header_conc + @delim + column_name
from col_list where col_head = @curr_col
set @curr_col = @curr_col + 1
END
--
-- insert our final concatenated value into 1 field, skipping the leading delimiter
--
insert into dbo.header_record
select substring(@header_conc, @delim_size + 1, len(@header_conc))
--
-- Drop temp table
--
drop table col_list
I created a view whose first row holds the column names, unioned with the actual table:
create view bcp_view
as
select 'name' col1, 'age' col2, ....
union
select name, convert(varchar, age), .... from people
Just remember to convert any non-varchar columns.
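One caveat with this approach: UNION removes duplicates and does not guarantee that the header row comes out first. A hedged variant adds a sort key and uses UNION ALL, so a SELECT ... ORDER BY through isql/sqsh (bcp itself does not accept an ORDER BY) can pin the header to the top:
create view bcp_view_sorted
as
select 0 as sort_key, 'name' as col1, 'age' as col2
union all
select 1, name, convert(varchar, age) from people
Selecting col1, col2 from bcp_view_sorted order by sort_key then reliably emits the header row first.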