I am having trouble with a SQL script that seems to be related to the execution order of the statements, or possibly just error checking that SQL Server does before starting.
This is a simplification of the code, but for background: the table finalTable is cleared and repopulated by this script. It already exists in the DB but does not yet have the new columns that are being added. For reference in the example, pretend Col1 is an existing column and Col2 is new.
If I run the code all together, I get the message "Invalid column name 'Col2'". If I run each block individually, everything works fine.
Block A:
SQL to create temporary tables
Block B:
drop table dbo.finalTable;
create table dbo.finalTable (col1 int, col2 int);
Block C:
insert into dbo.finalTable(col1, col2) select col1, col2 from #tempTable;
The script snippets you posted are not detailed enough to pinpoint the exact cause of the error, but the symptoms suggest deferred name resolution. The issue is not statement execution order but compilation order.
SQL Server checks a batch of statements for syntax errors when it is submitted for execution. If the batch is syntactically correct, statements referencing existing objects are validated against the existing schema. Compilation of statements referencing non-existent objects is deferred until execution time. Deferred name resolution is what allows one to create a table and use it in the same batch:
CREATE TABLE dbo.finalTable (col1 int);
--Compilation of this statement is not done until execution time due to deferred name resolution
SELECT col1 FROM dbo.finaltable;
GO
Compilation of the entire batch fails when a statement references a non-existent column of an existing table. No statements in the batch below, including the first SELECT and the ALTER TABLE, are executed, because col2 does not exist when the batch is compiled:
SELECT col1 FROM dbo.finaltable;
ALTER TABLE dbo.finaltable
ADD col2 int NULL;
SELECT col1, col2 FROM dbo.finaltable;
GO
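Since compilation happens per batch, a common workaround is to separate the DDL from the statements that reference the new column with a batch separator. Here is a minimal sketch based on the blocks above (GO is the batch separator recognized by SSMS and sqlcmd; where GO is unavailable, wrapping the INSERT in EXEC with dynamic SQL defers its compilation the same way):
DROP TABLE dbo.finalTable;
CREATE TABLE dbo.finalTable (col1 int, col2 int);
GO
--This batch is compiled only after the previous batch has executed,
--so col2 exists by the time name resolution runs
INSERT INTO dbo.finalTable (col1, col2)
SELECT col1, col2 FROM #tempTable;
GO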
Are you running against the same database, and did finalTable exist when you executed the script as a whole? You may try:
IF OBJECT_ID('databasename.dbo.finalTable', 'U') IS NOT NULL
DROP TABLE databasename.dbo.finalTable;
I've just gotten a new error from a query that I haven't changed at all. Any advice on what to do? Thanks
SQL compilation error:
View definition for '**********' declared 115 column(s), but view query produces 117 column(s).
This is speculation, but it sounds like your view is using select x.*, where * means "get all the columns from some table".
Then the underlying table changes . . . and voilà, you might have a problem.
I've just gotten a new error from a query that I haven't changed at all. Any advice on what to do?
If the query started to produce errors, it means that the definition of the view is no longer valid/up-to-date. Most likely the underlying table has been altered.
CREATE VIEW
View definitions are not dynamic. A view is not automatically updated if the underlying sources are modified such that they no longer match the view definition, particularly when columns are dropped. For example:
A view is created referencing a specific column in a source table and the column is subsequently dropped from the table.
A view is created using SELECT * from a table and any column is subsequently dropped from the table.
In either of these scenarios, querying the view returns a column mismatch error.
Steps to reproduce the scenario:
CREATE OR REPLACE TABLE t(col1 INT, col2 INT);
INSERT INTO t(col1, col2) VALUES (1,1);
CREATE OR REPLACE VIEW v_t AS SELECT * FROM t;
SELECT * FROM v_t;
--COL1 COL2
--1 1
So far so good. Now alter the underlying table by adding a new column:
ALTER TABLE t ADD COLUMN col3 INT DEFAULT 3;
SELECT * FROM v_t;
SQL compilation error: View definition for 'V_T' declared 2 column(s), but view query produces 3 column(s).
Recreating the view and keeping its definition in sync with the underlying tables should resolve it:
CREATE OR REPLACE VIEW v_t
COPY GRANTS
AS
SELECT * FROM t;
-- Using * will work to refresh it, but I would not recommend it;
-- explicitly list the columns instead:
CREATE OR REPLACE VIEW v_t
COPY GRANTS -- to preserve already granted permissions
AS
SELECT col1, col2, col3 FROM t;
SELECT * FROM v_t;
-- COL1 COL2 COL3
-- 1 1 3
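If you want to confirm that a view's stored definition has drifted from the underlying table before recreating it, you can compare the two. A quick sketch, using the example names from above:
SELECT GET_DDL('VIEW', 'v_t');  -- the definition Snowflake stored for the view
DESCRIBE TABLE t;               -- the table's current column list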
I found myself with a similar issue this morning. I had copied my query from a txt file, pasted it into a worksheet, and tried to run it, and got the same error. The table with the definition issue was only used for a join, and only for one column, so I didn't see why it was giving me such an error.
After a lot of checking tables, I noted in the worksheet the error I was seeing. But then I decided to run the query again, and it worked.
This tells me that Snowflake was using what it had cached; after I edited the query, it treated it as a new query and re-ran it, instead of erroring out because what was in the cache no longer matched the definition.
I have read-only access to a DB2 database and I want to create an "in flight"/"on the fly" or temporary table which only exists within the SQL, then populate it with values and compare the results against an existing table.
So far I am trying to validate the premise, and I have the following query compiling, but the SELECT statement fails to pick anything up.
Can anyone tell me what I am doing wrong, or advise whether what I am attempting is possible? (Or perhaps suggest a better way of doing things?)
Thanks
Justin
--Create a table that only exists within the query
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPEVENT (EVENT_TYPE INTEGER);
--Insert a value into the temporary table
INSERT INTO SESSION.TEMPEVENT (EVENT_TYPE) VALUES (1);
--Select all values from the temporary table
SELECT * FROM SESSION.TEMPEVENT;
--Drop the table so the query can be run again
DROP TABLE SESSION.TEMPEVENT;
If you look at the syntax diagram of the DECLARE GLOBAL TEMPORARY TABLE statement, you may note the following block:
.-ON COMMIT DELETE ROWS---.
--●--+-------------------------+--●----------------------------
'-ON COMMIT PRESERVE ROWS-'
This means that ON COMMIT DELETE ROWS is the default behavior. If you issue your statements with autocommit turned on, a commit is implicitly issued after each statement, which deletes all the rows in your DGTT.
If you want DB2 not to delete the rows in the DGTT upon commit, you have to explicitly specify the ON COMMIT PRESERVE ROWS clause in the DGTT declaration.
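Applied to the declaration above, that looks like the following (a minimal sketch; depending on platform and version, a user temporary tablespace must also be available for DGTTs):
--Keep rows across the implicit commits issued in autocommit mode
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPEVENT
    (EVENT_TYPE INTEGER)
    ON COMMIT PRESERVE ROWS;
INSERT INTO SESSION.TEMPEVENT (EVENT_TYPE) VALUES (1);
--Now returns the inserted row
SELECT * FROM SESSION.TEMPEVENT;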
I'm trying to execute a stored procedure, but I get an error about an existing temporary table, even though I create it only once and use it in another part of the code:
SELECT ...
INTO #tmpUnidadesPresupuestadas
FROM proce.table1
--Insert into table src..
INSERT INTO table (
....)
SELECT
....
FROM
#tmpUnidadesPresupuestadas
I get this message:
There is already an object named
'#tmpUnidadesPresupuestadas' in the database.
How can I solve it? Regards
A temp table lives for the entirety of the current session. If you run this statement more than once, the table will already be there. Either detect that and truncate it, or drop it if it exists before selecting into it:
DROP TABLE IF EXISTS #tmpUnidadesPresupuestadas
On versions prior to SQL Server 2016, you drop it like this:
IF OBJECT_ID('tempdb.dbo.#tmpUnidadesPresupuestadas', 'U') IS NOT NULL
DROP TABLE #tmpUnidadesPresupuestadas;
Without seeing more of the code, it's not possible to know if the following situation is your problem, but it could be.
When you have mutually exclusive branches of code that both do a SELECT...INTO to the same temp table, a flaw causes this error. SELECT...INTO to a temp table creates the table with the structure of the query used to fill it. The parser assumes if that occurs twice, it is a mistake, since you can't recreate the structure of the table once it already has data.
if @Debug = 1
    select * into #MyTemp from MyTable;
else
    select * into #MyTemp from MyTable;
While obviously not terribly meaningful, this alone shows the problem. The two paths are mutually exclusive, but the parser thinks they may both be executed, and issues the fatal error. Even if you wrap each branch in BEGIN...END and add a DROP TABLE (conditional or not), the parser will still give the error.
To be fair, both paths COULD in fact be executed if there were a loop or GOTO, so that one time around @Debug = 1 and another time it does not; so it may be asking too much of a parser. Unfortunately, I don't know of a workaround, and using INSERT INTO instead of SELECT INTO is the only way I know to avoid the problem, even though naming all the columns can be terribly onerous in a particularly column-heavy query.
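For illustration, the INSERT INTO workaround applied to the sketch above would look something like this (the column list for #MyTemp is hypothetical; in real code it would mirror MyTable):
-- Define the structure once, up front, so neither branch creates the table
CREATE TABLE #MyTemp (col1 int, col2 int);
IF @Debug = 1
    INSERT INTO #MyTemp (col1, col2)
    SELECT col1, col2 FROM MyTable;
ELSE
    INSERT INTO #MyTemp (col1, col2)
    SELECT col1, col2 FROM MyTable;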
I am a bit unclear as to what you are attempting. I assume you don't want to drop the table at this point. I believe the syntax you may be looking for is
Insert Into
Insert into #tmpUnidadesPresupuestadas (Col1, col2, ... colN)
Select firstcol, secondcol... nthCol
From Data
If you do indeed wish to drop the table, the previous answers have that covered.
This might be useful for someone else: keep in mind that if more than one temporary table is created inside a single stored procedure or batch, they must have different names. If you use the same name, you won't be able to ALTER the PROCEDURE.
https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2012/ms174979(v=sql.110)#temporary-tables
Make sure the stored procedure and the table don't have the same name.
Add logic to drop the table if it exists. Most likely you ran the procedure previously, and the table remains from that run. Logging out and back in before running it would likely clear it, but the cleanest way is to check whether it exists and drop it if it does. I assume this is MS SQL.
First you should check whether the temp table already exists and drop it if so; then create an empty table and use INSERT statements. Refer to the example below.
IF OBJECT_ID('tempdb..#TmpTBL') IS NOT NULL
DROP TABLE #TmpTBL;
SELECT TOP(0) Name , Address,PhoneNumber
INTO #TmpTBL
FROM EmpDetail
IF @Condition = 1
INSERT INTO #TmpTBL (Name , Address,PhoneNumber)
SELECT Name , Address,PhoneNumber FROM EmpDetail;
ELSE
INSERT INTO #TmpTBL (Name , Address,PhoneNumber)
SELECT Name , Address,PhoneNumber FROM EmpDetail;
I use the following SQL statement to batch-insert a large amount of data into another table, for example:
INSERT INTO table2 (col1, col2)
SELECT col1, col2
FROM table1
WHERE condition and some logics ...;
Normally, roughly 5000 rows are inserted into table2.
However, suppose 2 rows in a batch are invalid and cause an error during the insert.
SQL Server raises an error and stops (or rolls back) the statement.
Therefore, no rows are inserted into table2 because of the error.
My questions are:
How do I insert as much of the valid data as possible into the target table if an error occurs during a batch insert?
In addition, how do I identify the rows which caused the errors after SQL Server raises them?
I searched the Internet for a possible solution, but I couldn't find anything about this.
In a transactional database management system, constraints exist to maintain data integrity, and they can vary enormously depending on the table schema and how you wrote them.
INSERT is a standard SQL operation that lets you put information into your database according to your schema. Hence, to answer your questions:
How do I insert as much of the valid data as possible into the target table if an error occurs during a batch insert?
With INSERT, there is no way to achieve this directly, as the constraints can be broadly scoped. But the BULK INSERT command has a MAXERRORS argument that lets valid rows through and only stops the operation once the error threshold is hit.
The only way to achieve it with plain INSERT is to add a WHERE clause to your INSERT ... SELECT ... statement that matches the destination table's constraints.
In addition, how do I identify the rows which caused the errors after SQL Server raises them?
Unfortunately, SQL Server does not specifically point out the offending rows, since in most (maybe all) RDBMSs, INSERT is a plain record-insertion command. However, BULK INSERT has another argument, named ERRORFILE, that you can use to inspect the failed rows and the reasons behind them.
Read a complete BULK INSERT reference here.
To sum it up, INSERT is not a sophisticated, know-it-all command for loading data into a SQL database; this kind of validation is better handled on the application side.
However, the BULK INSERT command can take you there from a different approach.
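For illustration, here is a minimal sketch of both arguments together (the file path, delimiters, and error threshold are hypothetical placeholders):
BULK INSERT dbo.table2
FROM 'C:\data\batch.csv'  -- hypothetical source file
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    MAXERRORS = 10,  -- tolerate up to 10 bad rows before aborting the load
    ERRORFILE = 'C:\data\batch_errors.log'  -- rejected rows and reasons land here
);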
If you want to insert only the valid data, you need to add a WHERE condition that filters out the invalid data, like below:
INSERT INTO table2 (col1, col2)
SELECT col1, col2 FROM table1 WHERE col1 IS NOT NULL AND col2 IS NOT NULL
You can add any other conditions that would cause a problem in the INSERT. In the same way, you can also identify the invalid data by reversing the WHERE condition (note that the complement of the AND above is an OR), as below:
SELECT col1, col2 FROM table1 WHERE col1 IS NULL OR col2 IS NULL
I have resolved this problem: I had overlooked something that was already part of my code, so this handling was not needed after all.
In SQL Server 2008, I have two IF statements:
If @value = ''
begin
    select * into #temptable from table1
end
Else If @value <> ''
begin
    select * into #temptable from table2
end
but when I try to execute it, I get the following error because of the second #temptable:
There is already an object named '#temptable' in the database.
I don't want to use another temp table name, as I would have to change a lot of the code that follows. Is there a way to bypass this?
I would recommend making some changes so that your code is a little more maintainable. One problem with the way you have it set up here is with the SELECT * syntax you're using. If you later decide to make a change to the schema of table1 or table2, you could have non-obvious consequences. In production code, it's better to spell these things out so that it's clear exactly which columns you're using and where.
Also, are you really using all of the columns from table 1 and table 2 in the code that follows? You might be taking a performance hit loading more data than you need. I'd go through the code that uses #temptable and figure out which columns it's actually using. Then start by creating your temp table:
CREATE TABLE #temptable(col1 int, col2 int, col3 int, col4 int)
Include all of the possible columns that could be used, even if some of them might be null in certain cases. Presumably, the code that follows already understands that. Then you can set up your IF statements:
IF @value = ''
BEGIN
    INSERT INTO #temptable(col1, col2, col3)
    SELECT x, y, z
    FROM table1
END
ELSE
BEGIN
    INSERT INTO #temptable(col1, col4)
    SELECT alpha, beta
    FROM table2
END
Your SELECT statement, as written, creates the temp table and inserts into it all in one statement. Create the temp table separately with a CREATE TABLE statement, then use INSERT INTO in your two IF statements.
Using SELECT INTO creates the table on the fly, as you know. Even if your query only referenced #temptable once, running it more than once (without dropping the table after the first run) would produce the same error (although inside a stored procedure, the table would probably only exist in the scope of the procedure).
However, you can't even compile this query. Using the Parse command (Ctrl+F5) on the following query, for example, fails even though the same table is used as the source both times:
select * into #temptable from SourceTable
select * into #temptable from SourceTable
If the structure of tables 1 and 2 were the same, you could do something like the following.
select * into #temptable from
    (select * from Table1 where @value = ''
     union all
     select * from Table2 where @value <> '') as T
If, however, the tables have different structures, then I'm not sure what you can do, other than what agt and D. Lambert recommended.