create table using select statement in place of table_name - sql

I want to create a table with the name B100 where '100' is the maximum value of id in table 'A'
example:
table A:
id name
100 harsh
78 Vishal
23 Ivan
34 Hardik
I need to create a table with the name 'B{max_value_of_id_in_A}'.
The fields in table B are the same (id, name).
What I tried:
create table CONCAT('B', (Select max(id) from A))
(
id int,
name varchar(50)
)

To do this, you need to use dynamic sql. A quick and dirty example is:
create table test(id smallint, name varchar(15));
insert test (id, name) values
(98, 'harsh'), (78, 'Vishal'), (23, 'Ivan'), (34, 'Hardik');
declare @sql nvarchar(200);
set @sql = N'create table B' + format((select max(id) from test), 'D3')
+ N'(
id int,
name varchar(50)
);'
select @sql;
exec(@sql);
exec('select * from B098');
Notice that I had to resort to dynamic sql to actually use that table within the same batch. As the others have suggested, you should reconsider the path you have chosen for many reasons. Perhaps foremost is that this requires a rather advanced level of skill - you will likely need much help to make use of your table. You should consult with your DBA to get their opinion (and permission).
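For what it's worth, a slightly safer sketch of the same idea (same test table assumed) builds the name first and runs it through QUOTENAME before executing:
DECLARE @name sysname;
SELECT @name = N'B' + FORMAT(MAX(id), 'D3') FROM test;   -- e.g. B098
DECLARE @sql2 nvarchar(400) =
    N'CREATE TABLE ' + QUOTENAME(@name) + N' (id int, name varchar(50));';
EXEC sp_executesql @sql2;
QUOTENAME simply guards against odd characters ending up in the generated statement.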

Related

How do I create a variable/parameter that is a string of values in SQL SSMS that I can use as a substitute in my where clause?

This may be a very basic question, but I have been struggling with this.
I have an SSMS query that I'll be using multiple times for a large set of client IDs. It's quite cumbersome to have to amend the parameters in all the where clauses every time I want to run it.
For simplicity, I want to convert a query like the one below:
SELECT
ID,
Description
From TestDb
Where ID in ('1-234908','1-345678','1-12345')
to a query of the format below so that I only need to change my variable field once and it can be applied across my query:
USE TestDb
DECLARE @ixns NVARCHAR(100)
SET @ixns = '''1-234908'',''1-345678'',''1-12345'''
SELECT
ID,
Description
From TestDb
Where ID IN @ixns
However, the above format doesn't work. Can anyone help me on how I can use a varchar/string variable in my "where" clause for my query so that I can query multiple IDs at the same time and only have to adjust/set my variable once?
Thanks in advance :D
The most appropriate solution would be to use a table variable:
DECLARE @ixns TABLE (id NVARCHAR(100));
INSERT INTO @ixns(id) VALUES
('1-234908'),
('1-345678'),
('1-12345');
SELECT ID, Description
FROM TestDb
WHERE ID IN (SELECT id FROM @ixns);
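On SQL Server 2016 or later, another option along the same lines is to keep a single comma-separated string and split it with STRING_SPLIT; a sketch, assuming the same TestDb table and no embedded quotes in the list:
DECLARE @ixns NVARCHAR(100) = N'1-234908,1-345678,1-12345';
SELECT ID, Description
FROM TestDb
WHERE ID IN (SELECT value FROM STRING_SPLIT(@ixns, ','));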
You can load the IDs into a temp table and use that in the where condition:
USE TestDb
DECLARE @tmpIDs TABLE
(
id VARCHAR(50)
)
insert into @tmpIDs values ('1-234908')
insert into @tmpIDs values ('1-345678')
insert into @tmpIDs values ('1-12345')
SELECT
ID,
Description
From TestDb
Where ID IN (select id from @tmpIDs)
The most appropriate way is to create a table type, because it is possible to pass this type as a parameter.
1) Creating the table type with the ID column.
create type MyListID as table
(
Id int not null
)
go
2) Creating the procedure that receives this type as a parameter.
create procedure MyProcedure
(
@MyListID as MyListID readonly
)
as
select
column1,
column2
...
from
MyTable
where
Id in (select Id from @MyListID)
3) This example shows how to fill this type from your application: https://stackoverflow.com/a/25871046/8286724
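If you are filling the type from T-SQL rather than from the application, a minimal sketch (using the type and procedure defined above) looks like this:
DECLARE @ids MyListID;
INSERT INTO @ids (Id) VALUES (1), (2), (3);
EXEC MyProcedure @MyListID = @ids;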

Generating a unique batch id (SQL Server)

This is possibly two questions in one. Sorry about that, but here goes:
PROBLEM
I am creating a unique batch id every time a user uploads some data to SQL Server. Currently, I do this by looking at the last value of the 'Identity Specification' and adding +1 to that.
The problem arises, as you might have guessed, if multiple users input data at the same time: they would both get the same batch id...
Possible Solution
In order to mitigate this issue, I have come up with this method to generate 3 letters + a random number, and the (last id value + 1):
DECLARE @tmp CHAR(3) = CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65);
SELECT @tmp;
select cast(RAND()*9999 as int)
(1) I am not sure how to concatenate this into one string.
(2) The other question, is there a way to 100% guarantee every user is given a unique batch id every time they submit a request, regardless of how many are doing it simultaneously?
I would really appreciate your input in this.
1 - The concatenation part is very simple; you can do the following:
DECLARE @tmp VARCHAR(10);
SET @tmp = CHAR(CAST(RAND()*26 AS int)+65)
+ CHAR(CAST(RAND()*26 AS int)+65)
+ CHAR(CAST(RAND()*26 AS int)+65)
+ CAST(cast(RAND()*9999 as int) AS VARCHAR(4));
SELECT @tmp;
2 - I would suggest populating a table with the random values you would like to issue to users and then selecting from it, to avoid the race condition.
Create a table called BatchNumbers with two columns, BatchNumber and Used.
Populate the batch number table, with 0 as the default value for the Used column.
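A minimal sketch of that table (the column sizes and sample values are assumptions):
CREATE TABLE dbo.BatchNumbers
(
    BatchNumber VARCHAR(10) NOT NULL PRIMARY KEY,
    Used        BIT         NOT NULL DEFAULT 0
);
-- pre-populate with as many batch numbers as you expect to hand out
INSERT INTO dbo.BatchNumbers (BatchNumber) VALUES ('ABC0001'), ('XYZ0042');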
Then every time you need a batch number, do the following.
CREATE PROC dbo.usp_Get_BatchNumber
@BatchNumber VARCHAR(10) OUTPUT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @t TABLE (BN VARCHAR(10));
UPDATE TOP (1) BatchNumbers
SET Used = 1
OUTPUT inserted.BatchNumber INTO @t (BN)
WHERE Used = 0;
SELECT @BatchNumber = BN FROM @t;
END
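Calling it would then look something like this (a sketch; the variable name is arbitrary):
DECLARE @BatchNumber VARCHAR(10);
EXEC dbo.usp_Get_BatchNumber @BatchNumber = @BatchNumber OUTPUT;
SELECT @BatchNumber AS IssuedBatchNumber;  -- NULL once no unused rows remain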
You need an "Upload" table with a Bigint Identity column for the BatchID, then add a new row for every user upload.
The server will maintain the correct values and prevent collisions.
I would use the built-in function for this:
select newid()
> 240CA878-135E-4176-AE57-0FA83FF74037
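If you go the NEWID() route, a minimal sketch (table and column names are just placeholders) is to let the table default the value and return it from the insert:
CREATE TABLE dbo.Upload
(
    BatchId    uniqueidentifier NOT NULL DEFAULT NEWID(),
    UploadedAt datetime2        NOT NULL DEFAULT SYSUTCDATETIME(),
    Payload    nvarchar(max)    NULL
);
INSERT INTO dbo.Upload (Payload)
OUTPUT inserted.BatchId          -- hand this back to the caller as the batch id
VALUES (N'uploaded data goes here');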
For the first problem, you can either create a variable for your random number as a char(4) and just simply concatenate the 2, or create it as an int and then CAST it as a VARCHAR while concatenating. Everything that is concatenated into a string must be a string.
DECLARE @tmp CHAR(3) = CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65);
SELECT @tmp;
DECLARE @randNum VARCHAR(4) = CAST(RAND()*9999 AS INT)
-- OR DECLARE @randNum INT = CAST(RAND()*9999 AS INT)
SELECT @randNum
DECLARE @batchID VARCHAR(MAX) = @tmp + @randNum
-- OR DECLARE @batchID VARCHAR(MAX) = @tmp + CAST(@randNum AS VARCHAR)
SELECT @batchID
try the following:
1)
DECLARE @tmp CHAR(7) = CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65) + CHAR(CAST(RAND()*26 AS int)+65) + cast(cast(RAND()*9999 as int) as varchar(4));
SELECT @tmp;
2) Yes, I think so.
I upvoted Terry Carmen's answer, but from his comments it sounds like he's suggesting something different from what I first thought, so here's a complete example. I think you want a table that has a key defined with the IDENTITY property, which will tell SQL Server that you want unique, sequential values in that column and you want the database to worry about the details of guaranteeing that this is so.
create table dbo.Import
(
-- identity(1, 1) means that SQL Server will automatically assign values for
-- this column when you insert a record, with 1 being the first value
-- assigned and each subsequent value incrementing by 1.
Identifier bigint not null identity(1, 1),
-- This column for illustration only; replace it with whatever data you need
-- to store.
YourStuffHere varchar(max)
);
-- Now simply use any INSERT or MERGE command against dbo.Import, and omit the
-- Identifier column from the list of columns whose values the command supplies.
-- Then you can use the SCOPE_IDENTITY() function or an OUTPUT clause to capture
-- the Identifier value that SQL Server has inserted.
-- Example 1: INSERT with explicit values and OUTPUT.
insert dbo.Import
(YourStuffHere)
output
inserted.Identifier
values
('Example 1');
-- Example 2: INSERT/SELECT with OUTPUT.
insert dbo.Import
(YourStuffHere)
output
inserted.Identifier
select
'Example 2';
-- Example 3: INSERT with SCOPE_IDENTITY().
insert dbo.Import
(YourStuffHere)
values
('Example 3');
select Identifier = convert(bigint, scope_identity());
-- Show table contents.
select * from dbo.Import;
The first INSERT statement above produces the following result:
Identifier
1
The second:
Identifier
2
The SELECT following the third INSERT gives:
Identifier
3
And the final SELECT shows you the contents of the table:
Identifier YourStuffHere
1 Example 1
2 Example 2
3 Example 3
This is the easiest way to go about this as it allows SQL Server to do all the real work for you. Please let me know if I've misunderstood your requirements.

Insert query to insert name into db

I want to store my user information in the database with the first letter of the first name and the last name in upper case. It doesn't matter how the user enters the data; it should be stored in the db as:
Input: sachin tendulkar
Database: Sachin Tendulkar
I need an insert query for this. I know we can use an update/select query, but I am specifically looking for an insert query.
Appreciate your answer.
Thanks,
In SQL Server, if first and last names are stored in separate columns, you can do something like:
CREATE TABLE #Tbl
(
FirstName NVARCHAR(MAX)
)
DECLARE @FirstName NVARCHAR(MAX) = 'tom'
INSERT INTO #Tbl (FirstName) VALUES (UPPER(LEFT(@FirstName,1))+LOWER(SUBSTRING(@FirstName,2,LEN(@FirstName))))
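If both name parts are stored, the same expression can be applied to each column; a sketch with assumed column names:
CREATE TABLE #Users (FirstName NVARCHAR(MAX), LastName NVARCHAR(MAX))
DECLARE @Fname NVARCHAR(MAX) = 'sachin', @Lname NVARCHAR(MAX) = 'tendulkar'
INSERT INTO #Users (FirstName, LastName)
VALUES (UPPER(LEFT(@Fname,1)) + LOWER(SUBSTRING(@Fname,2,LEN(@Fname))),
        UPPER(LEFT(@Lname,1)) + LOWER(SUBSTRING(@Lname,2,LEN(@Lname))))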

Dynamic table design (common lookup table), need a nice query to get the values

sql2005
This is my simplified example:
(in reality there are 40+ tables in here, I only showed 2)
I got a table called tb_modules, with 3 columns (id, description, tablename as varchar):
1, UserType, tb_usertype
2, Religion, tb_religion
(Last column is actually the name of a different table)
I've got another table that looks like this:
tb_value (columns:id, tb_modules_ID, usertype_OR_religion_ID)
values:
1111, 1, 45
1112, 1, 55
1113, 2, 123
1114, 2, 234
So 45, 55, 123, 234 are usertype OR religion IDs
(45, 55 are usertype IDs; 123, 234 are religion IDs).
Don't judge, I didn't design the database.
Question
How can I make a select, showing * from tb_value, plus one column
That one column would be TITLE from the tb_usertype or RELIGIONNAME from the tb_religion table
I would like to make a general thing.
Was thinking initially about maybe a SQL function that returns a string, but I think I would need dynamic SQL, which is not ok in a function.
Anyone a better idea ?
At the beginning we have this -- which is quite messy.
To clean up a bit I add two views and a synonym:
create view v_Value as
select
ID as ValueID
, tb_modules_ID as ModuleID
, usertype_OR_religion_ID as RemoteID
from tb_value ;
go
create view v_Religion as
select
ID
, ReligionName as Title
from tb_religion ;
go
create synonym v_UserType for tb_UserType ;
go
And now the model is cleaner.
It is easier now to write the query
;
with
q_mod as (
select
m.ID as ModuleID
, coalesce(x1.ID , x2.ID) as RemoteID
, coalesce(x1.Title , x2.Title) as Title
, m.Description as ModuleType
from tb_Modules as m
left join v_UserType as x1 on m.TableName = 'tb_UserType'
left join v_Religion as x2 on m.TableName = 'tb_Religion'
)
select
a.ModuleID
, v.ValueID
, a.RemoteID
, a.ModuleType
, a.Title
from q_mod as a
join v_Value as v on (v.ModuleID = a.ModuleID and v.RemoteID = a.RemoteID) ;
There is an obvious pattern in this query, so it can be created as dynamic sql if you have to add another module-type table. When adding another table, use ID and Title to avoid having to use a view.
EDIT
To build the dynamic SQL (or the query at the application level):
Modify the two coalesce lines (lines 6 and 7 of the query above), where the x-index is tb_modules.id:
coalesce(x1. , x2. , x3. ..)
Add lines to the left joins (below line 11):
left join v_SomeName as x3 on m.TableName = 'tb_SomeName'
The SomeName is tb_modules.description and the x-index matches tb_modules.id.
EDIT 2
The simplest would probably be to package the above query into a view and then, each time the schema changes, dynamically create and run an ALTER VIEW. This way the query would not change from the application's point of view.
Since we're all agreed the design is flaky, I'll skip any comments on that. The pattern of the query is this:
-- Query 1
select tb_value.*,tb_religion.religion_name as ANY_DESCRIPTION
from tb_value
JOIN tb_religion on tb_value.ANY_KIND_OF_ID = tb_religion.id
WHERE tb_value.module_id = 2
-- combine it with...
UNION ALL
-- ...Query 2
select tb_value.*,tb_userType.title as ANY_DESCRIPTION
from tb_value
JOIN tb_userType on tb_value.ANY_KIND_OF_ID = tb_userType.id
WHERE tb_value.module_id = 1
-- combine it with...
UNION ALL
-- ...Query 3
select lather, rinse, repeat for 40 tables!
You can actually define a view that hardcodes all 40 cases, and then put filters onto queries for the particular modules you want.
To do this dynamically you need to be able to create a sql statement that looks like this
select tb_value.*, tb_usertype.title as Descr
from tb_value
inner join tb_usertype
on tb_value.extid = tb_usertype.id
where tb_value.tb_module_id = 1
union all
select tb_value.*, tb_religion.religionname as Descr
from tb_value
inner join tb_religion
on tb_value.extid = tb_religion.id
where tb_value.tb_module_id = 2
-- union 40 other tables
Currently you cannot do that, because you do not have any information in the db telling you which column to use from tb_religion, tb_usertype etc. You can add that as a new field in tb_modules.
If you have the fieldname to use in tb_modules, you can build a view that does what you want.
And you could add a trigger to table tb_modules that alters the view whenever tb_modules is modified. That way you do not need to use dynamic sql from the client when doing queries. The only thing you need to worry about is that the table needs to be created in the db before you add a new row to tb_modules
Edit 1
Of course the code in the trigger needs to dynamically build the alter view statement.
Edit 2 You also need to have a field with information about what column in tb_usertype and tb_religion etc. to join against tb_value.extid (usertype_OR_religion_ID). Or you can assume that the field will always be called id
Edit 3 Here is how you could build the trigger on tb_module that alters the view v_values. I have added fieldname as a column in tb_modules and I assume that the id field in the related tables is called id.
create trigger tb_modules_change on tb_modules after insert, delete, update
as
declare @sql nvarchar(max)
declare @moduleid int
declare @tablename varchar(50)
declare @fieldname varchar(50)
set @sql = 'alter view v_value as '
declare mcur cursor for
select id, tablename, fieldname
from tb_modules
open mcur
fetch next from mcur into @moduleid, @tablename, @fieldname
while @@FETCH_STATUS = 0
begin
set @sql = @sql + 'select tb_value.*, '+@tablename+'.'+@fieldname+' '+
'from tb_value '+
'inner join '+@tablename+' '+
'on tb_value.extid = '+@tablename+'.id '+
'where tb_value.tb_module_id = '+cast(@moduleid as varchar(10))
fetch next from mcur into @moduleid, @tablename, @fieldname
if @@FETCH_STATUS = 0
begin
set @sql = @sql + ' union all '
end
end
close mcur
deallocate mcur
exec sp_executesql @sql
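With that trigger in place, adding a new module is then just an insert; a sketch with a hypothetical tb_country table, assuming tb_modules has the extra fieldname column described above:
-- the target table must exist before the tb_modules row is added
CREATE TABLE dbo.tb_country (id int, CountryName varchar(30));
INSERT INTO tb_modules (id, description, tablename, fieldname)
VALUES (3, 'Country', 'tb_country', 'CountryName');
SELECT * FROM v_value;  -- the trigger has already re-created the view to include tb_country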
Hm..there are probably better solutions available but here's my five cents:
SELECT
id,tb_modules_ID,usertype_OR_religion_ID,
COALESCE(
(SELECT TITLE FROM tb_usertype WHERE Id = usertype_OR_religion_ID),
(SELECT RELIGIONNAME FROM tb_religion WHERE Id = usertype_OR_religion_ID),
'N/A'
) AS SourceTable
FROM tb_value
Note that I don't have the possibility to check the statement right now so I'm reserving myself for any syntax errors...
First, using your current design the only reasonable solution is dynamic SQL. You should write a module in your middle-tier that queries for the appropriate table names and builds the queries on the fly. Trying to accomplish that in T-SQL will be a nightmare. T-SQL was not designed for string construction.
The right solution is to build a new database designed properly, migrate the data and scrap the existing design. The problems you will encounter with your current design will simply grow. It will be harder for new developers to learn the new system. It will be prone to errors. There will be no data integrity (e.g. forcing the attribute "Start Date" to be parsable as a date). Custom queries will be a chore to write and so on. Eventually, you will hit the day when the types of information desired from the system are simply too difficult to extract given the current design.
First take the undesigner out the back and put them out of their misery. They are hurting people.
Due to their incompetence, every time you add a row to Module, you have to modify every query that uses it. Good for www.dailywtf.com.
You do not have Referential Integrity either, because you cannot define an FK on the this_or_that column. Your data is exposed, probably to "code" written by the same undesigner. No doubt you are aware that this is where the deadlocks are created.
Yes, it is a "judgement"; that is so that you understand the gravity of the undesign, and so you can justify replacing it to your managers.
SQL was designed for Relational Databases, that means Normalised. It is not good for mangled files. Sure, some queries may be better than others (just look at the answers), but there is no way to get around the undesign, any SQL query will be hamstrung, and need change whenever a Module row is added.
"Dynamic" is reserved for Databases, not possible for flat flies.
Two answers. One to stop the continuing idiocy of changing the existing queries every time a Module row is added (you're welcome); the second to answer your question.
Safe Future Queries
CREATE VIEW UserType_vw AS
SELECT [XxxxId] = id, -- replace Xxxx
[UserTypeId] = usertype_OR_religion_ID
FROM tb_value
WHERE tb_modules_ID = 1
CREATE VIEW UserReligion_vw AS
SELECT [XxxxId] = id,
[ReligionId] = usertype_OR_religion_ID
FROM tb_value
WHERE tb_modules_ID = 2
From now on, make sure all queries currently using the undesign are modified to use the correct View instead. Do not use the Views for Update/Delete/Insert.
Answer
Ok, now for the main question. I can think of other approaches, but this one is the best. You have stated that you want the third column to also be an unnormalised piece of chicken excreta and to supply the Title for [EITHER_Religion_OR_UserType_OR_This_OR_That]. Right, so you are teaching the user to be confused as well; when the number of modules grows, they will have great fun figuring out what the column contains. Yes, a problem always compounds itself.
SELECT [XxxxId] = id,
[Whatever] = CASE tb_modules_ID
WHEN 1 THEN ( SELECT name -- title, whatever
FROM tb_usertype
WHERE id = V.usertype_OR_religion_ID
)
WHEN 2 THEN ( SELECT name -- title, whatever
FROM tb_religion
WHERE id = V.usertype_OR_religion_ID
)
ELSE "(UnknownModule)" -- do not remove the brackets
END
FROM tb_value V
WHERE conditions... -- you need something here
This is called a Correlated Scalar Subquery.
It works on any version of Sybase since 4.9.2 with no limitations. And SQL 2005 (last time I looked, anyway, Aug 2009). But on MS you will get a StackTrace if the volume of tb_value is large, so make sure the WHERE clause has some conditions on it.
But MS have broken the server with their "new" 2008 codeline, so it does not work in all circumstances (the worse your mangled files, the less likely it will work; the better your database design, the more likely it will work). That is why some MS people pray every day for the next Service pack, and others never attend church.
I guess you want something like this:
Adding tables and one row per table into tb_modules is straightforward.
SET NOCOUNT ON
if OBJECT_ID('tb_modules') > 0 drop table tb_modules;
if OBJECT_ID('tb_value') > 0 drop table tb_value;
if OBJECT_ID('tb_usertype') > 0 drop table tb_usertype;
if OBJECT_ID('tb_religion') > 0 drop table tb_religion;
go
create table dbo.tb_modules (
id int,
description varchar(20),
tablename varchar(255)
);
insert into tb_modules values ( 1, 'UserType', 'tb_usertype');
insert into tb_modules values ( 2, 'Religion', 'tb_religion');
create table dbo.tb_value(
id int,
tb_modules_ID int,
usertype_OR_religion_ID int
);
insert into tb_value values ( 1111, 1, 45);
insert into tb_value values ( 1112, 1, 55);
insert into tb_value values ( 1113, 2, 123);
insert into tb_value values ( 1114, 2, 234);
create table dbo.tb_usertype(
id int,
UserType varchar(30)
);
insert into tb_usertype values ( 45, 'User_type_45');
insert into tb_usertype values ( 55, 'User_type_55');
create table dbo.tb_religion(
id int,
Religion varchar(30)
);
insert into tb_religion values ( 123, 'Religion_123');
insert into tb_religion values ( 234, 'Religion_234');
-- start of query
declare @sql varchar(max) = null
Select @sql = case when @sql is null then ' ' else @sql + char(10) + 'union all ' end
+ 'Select ' + str(id) + ' type, id, ' + description + ' description from ' + tablename from tb_modules
set @sql = 'select v.id, tb_modules_ID , usertype_OR_religion_ID , t.description
from tb_value v
join ( ' + @sql + ') as t
on v.tb_modules_ID = t.type and v.usertype_OR_religion_ID = t.id
'
Print @sql
exec( @sql)
I think it's intended to be used with dynamic sql.
Maybe break out each tb_value.tb_modules_ID row into its own temp table, named with the tb_modules.tablename.
Then have an sp iterate through the temp tables matching your naming convention (by prefix or suffix) building the sql and doing your join.

Column name or number of supplied values does not match table definition

In SQL Server, I am trying to insert values from one table into another using the query below:
delete from tblTable1
insert into tblTable1 select * from tblTable1_Link
I am getting the following error:
Column name or number of supplied values does not match table definition.
I am sure that both the tables have the same structure, same column names and same data types.
They don't have the same structure... I can guarantee they are different
I know you've already created it... There is already an object named 'tbltable1' in the database
What you may want is this (which also fixes your other issue):
Drop table tblTable1
select * into tblTable1 from tblTable1_Link
I want to also mention that if you have something like
insert into blah
select * from blah2
and blah and blah2 are identical, keep in mind that a computed column will throw this same error...
I only realized this when the above failed and I tried
insert into blah (cola, colb, colc)
select cola, colb, colc from blah2
In my example it was the fullname field (computed from first and last, etc.).
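A hypothetical sketch of that situation (the names are made up; the point is the computed column):
CREATE TABLE dbo.blah2 (cola varchar(50), colb varchar(50), colc varchar(50),
                        fullname AS (cola + ' ' + colb));
CREATE TABLE dbo.blah  (cola varchar(50), colb varchar(50), colc varchar(50),
                        fullname AS (cola + ' ' + colb));
-- fails: SELECT * returns 4 columns, but only 3 columns of blah can accept values
INSERT INTO dbo.blah SELECT * FROM dbo.blah2;
-- works: name the writable columns on both sides
INSERT INTO dbo.blah (cola, colb, colc)
SELECT cola, colb, colc FROM dbo.blah2;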
For inserts it is always better to specify the column names; see the following:
DECLARE @Table TABLE(
Val1 VARCHAR(MAX)
)
INSERT INTO @Table SELECT '1'
works fine; changing the table definition to the following causes the error:
DECLARE @Table TABLE(
Val1 VARCHAR(MAX),
Val2 VARCHAR(MAX)
)
INSERT INTO @Table SELECT '1'
Msg 213, Level 16, State 1, Line 6
Insert Error: Column name or number of supplied values does not match table definition.
But changing the above to
DECLARE @Table TABLE(
Val1 VARCHAR(MAX),
Val2 VARCHAR(MAX)
)
INSERT INTO @Table (Val1) SELECT '1'
works. You need to be more specific with the columns you specify.
supply the structures and we can have a look
The problem is that you are trying to insert data into the table without specifying the columns, and SQL Server gives you that error message.
Error: insert into users values('1', '2','3') - this only works as long as the table has exactly 3 columns
If you have 4 columns but only want to insert into 3 of them
Correct: insert into users (firstName,lastName,city) values ('Tom', 'Jones', 'Miami')
Beware of triggers. Maybe the issue is with some operation in the trigger for inserted rows.
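A sketch of how that can happen (hypothetical table and trigger names):
-- an audit trigger that copies rows with SELECT *
CREATE TABLE dbo.Orders       (Id int, Amount money);
CREATE TABLE dbo.Orders_Audit (Id int, Amount money);
GO
CREATE TRIGGER trg_Orders_Insert ON dbo.Orders AFTER INSERT
AS
    INSERT INTO dbo.Orders_Audit   -- no column list
    SELECT * FROM inserted;
GO
-- later, someone widens the audit table...
ALTER TABLE dbo.Orders_Audit ADD AuditedAt datetime2 NULL;
GO
-- ...and this perfectly ordinary insert starts failing with the error above,
-- even though dbo.Orders itself did not change
INSERT INTO dbo.Orders (Id, Amount) VALUES (1, 9.99);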
Dropping the table was not an option for me, since I'm keeping a running log. If every time I needed to insert I had to drop, the table would be meaningless.
My error was because I had a couple of columns in the create table statement that were computed from other columns; changing these fixed my problem. E.g.
create table foo (
field1 int
,field2 int
,field12 as field1 + field2 )
create table copyOfFoo (
field1 int
,field2 int
,field12 as field1 + field2) --this is the problem, it should just be 'field12 int'
insert into copyOfFoo
SELECT * FROM foo
The computed columns cause the problem.
Do not use SELECT *; you must specify each field after SELECT, except the computed fields.
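Applied to the example above, the column-listed form works even if copyOfFoo keeps its computed column:
INSERT INTO copyOfFoo (field1, field2)
SELECT field1, field2
FROM foo;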
Some sources for this issue are as below:
1 - Identity column
2 - Calculated column
3 - Different structure
So check those 3; I found my issue was the second one.
For me the culprit was the int value assigned to Salary:
Insert into Employees(ID,FirstName,LastName,Gender,Salary) values(3,'Canada', 'pa', 'm',15,000)
When we write 15,000 in the Salary position, the parser reads it as two values, 15 and 000.
This correction works fine for me:
Insert into Employees(ID,FirstName,LastName,Gender,Salary) values(4,'US', 'sam', 'm',15000)
Update to SQL server 2016/2017/…
We have some stored procedures in place to import and export databases.
In the SP we use (amongst other things) RESTORE FILELISTONLY FROM DISK, where we create a table "#restoretemp" for the restore from file.
With SQL Server 2016, MS added a field SnapshotURL nvarchar(360) (Azure restore URL), which caused the error message.
After I added the additional field, the restore worked again.
Code snippet (see last field):
SET @query = 'RESTORE FILELISTONLY FROM DISK = ' + QUOTENAME(@BackupFile , '''')
CREATE TABLE #restoretemp
(
LogicalName nvarchar(128)
,PhysicalName nvarchar(128)
,[Type] char(1)
,FileGroupName nvarchar(128)
,[Size] numeric(20,0)
,[MaxSize] numeric(20,0)
,FileID bigint
,CreateLSN numeric(25,0)
,DropLSN numeric(25,0) NULL
,UniqueID uniqueidentifier
,ReadOnlyLSN numeric(25,0)
,ReadWriteLSN numeric(25,0)
,BackupSizeInByte bigint
,SourceBlockSize int
,FilegroupID int
,LogGroupGUID uniqueidentifier NULL
,DifferentialBaseLSN numeric(25,0)
,DifferentialbaseGUID uniqueidentifier
,IsReadOnly bit
,IsPresent bit
,TDEThumbprint varbinary(32)
-- Added field 01.10.2018 needed from SQL Server 2016 (Azure URL)
,SnapshotURL nvarchar(360)
)
INSERT #restoretemp EXEC (@query)
SET @errorstat = @@ERROR
if @errorstat <> 0
Begin
if @Rueckgabe = 0 SET @Rueckgabe = 6
End
Print @Rueckgabe
Check your id column. Is it an identity? If it is, then make sure it is declared as ID NOT NULL IDENTITY(1,1).
And before creating your table, drop the table and then create it.
The problem I had that caused this error was that I was trying to insert null values into a NOT NULL column.
I had the same problem, and the way I worked around it is probably not the best but it is working now.
It involves creating a linked server and using dynamic sql - not the best, but if anyone can suggest something better, please comment/answer.
declare @sql nvarchar(max)
DECLARE @DB_SPACE TABLE (
[DatabaseName] NVARCHAR(128) NOT NULL,
[FILEID] [smallint] NOT NULL,
[FILE_SIZE_MB] INT NOT NULL DEFAULT (0),
[SPACE_USED_MB] INT NULL DEFAULT (0),
[FREE_SPACE_MB] INT NULL DEFAULT (0),
[LOGICALNAME] SYSNAME NOT NULL,
[DRIVE] NCHAR(1) NOT NULL,
[FILENAME] NVARCHAR(260) NOT NULL,
[FILE_TYPE] NVARCHAR(260) NOT NULL,
[THE_AUTOGROWTH_IN_KB] INT NOT NULL DEFAULT(0)
,filegroup VARCHAR(128)
,maxsize VARCHAR(25)
PRIMARY KEY CLUSTERED ([DatabaseName] ,[FILEID] )
)
SELECT @SQL ='SELECT [DatabaseName],
[FILEID],
[FILE_SIZE_MB],
[SPACE_USED_MB],
[FREE_SPACE_MB],
[LOGICALNAME],
[DRIVE],
[FILENAME],
[FILE_TYPE],
[THE_AUTOGROWTH_IN_KB]
,filegroup
,maxsize FROM OPENQUERY('+ QUOTENAME('THE_MONITOR') + ','''+ ' EXEC MASTER.DBO.monitoring_database_details ' +''')'
exec sp_executesql @sql
INSERT INTO @DB_SPACE(
[DatabaseName],
[FILEID],
[FILE_SIZE_MB],
[SPACE_USED_MB],
[FREE_SPACE_MB],
[LOGICALNAME],
[DRIVE],
[FILENAME],
[FILE_TYPE],
THE_AUTOGROWTH_IN_KB,
[filegroup],
maxsize
)
EXEC SP_EXECUTESQL @SQL
This is working for me now.
I can guarantee the number of columns and type of columns returned by the stored procedure are the same as in this table, simply because I return the same table from the stored procedure.
In my case, I had:
insert into table1 one
select * from same_schema_as_table1 same_schema
left join...
and I had to change select * to select same_schema.*.
You're missing the column names after TableName in the insert query:
INSERT INTO TableName (Col_1, Col_2, Col_3) VALUES (val_1, val_2, val_3)
In my case the problem was that the SP I was executing returned two result sets, and only the second result set was matching the table definition.