I am trying to run an INSERT INTO ... SELECT statement via cur.execute. The code executes without any error, but no records are inserted into the table.
I execute the following query:
cursor = conn.cursor()
query = 'INSERT INTO destination_test_hist SELECT * FROM destination_test'
cursor.execute(query)
conn.commit()
Where am I going wrong? I need some assistance.
Syntax-wise, I don't see any errors.
Just a few simple things to check to be sure:
Check that your connection is correct and that you are looking at the right server/database, and that the source table (in this case destination_test) and the target table (in this case destination_test_hist) are the ones you expect.
Check which database environment you are connected to and the relevant tables in it. You might be looking at a QA/test environment by mistake; if the problem persists, post the query and its empty output along with the database name and the tables.
Also, verify at the application level that the statement actually executed against the database: debug what the query returns there too, for instance by checking cursor.rowcount after the execute.
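If the statement really did run, a quick sanity check from the same connection/database the application uses will confirm whether the source has rows at all and whether the history table grew. A minimal sketch (table names taken from the question):

-- Does the source even have rows to copy?
SELECT COUNT(*) AS source_rows FROM destination_test;

-- Did the target grow after the INSERT ... SELECT?
SELECT COUNT(*) AS hist_rows FROM destination_test_hist;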
I'm getting an error while trying to create 2 tables in Green Screen STRSQL.
CREATE TABLE QTEMP/CUSTOMER AS (SELECT * FROM CBHHUBFP/SSCUSTP)
CREATE TABLE QTEMP/ADDRESS AS (SELECT * FROM QTEMP/CUSTOMER)
ERROR: Keyword CREATE not expected.
Valid tokens: End-of-statement.
Am I missing something here?
Using STRSQL you can only execute one SQL statement at a time.
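So enter the two requests separately, pressing Enter after each. Note that on DB2 for i the CREATE TABLE ... AS form also needs the WITH DATA clause to be valid and actually copy the rows (see the next answer). A minimal sketch:

CREATE TABLE QTEMP/CUSTOMER AS (SELECT * FROM CBHHUBFP/SSCUSTP) WITH DATA

CREATE TABLE QTEMP/ADDRESS AS (SELECT * FROM QTEMP/CUSTOMER) WITH DATA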
Re my comment to the accepted answer by @dcieslak, the following is an example of a Dynamic Compound Statement (DCS) with syntax that should be valid for use with the *SYS (slash) naming option, on any system [level of DB2 for IBM i] since the availability of that DCS feature. Notice the addition of the WITH DATA clause, which makes each statement syntactically correct, and the enclosure of the two semicolon-separated CREATE TABLE requests inside the BEGIN and END:
BEGIN
  CREATE TABLE QTEMP/CUSTOMER AS (SELECT * FROM QIWS/QCUSTCDT) WITH DATA;
  CREATE TABLE QTEMP/ADDRESS AS (SELECT * FROM QTEMP/CUSTOMER) WITH DATA;
END
-- Table ADDRESS created in QTEMP.  /* <-- feedback of final request */
While that can be entered as a single request, there is likely no point in coding it that way, given the extra overhead; it would have value only if run under isolation, doing more work, and coding exception handling. In other words, the Start Interactive SQL Session (STRSQL) scripting environment already gives the user isolation and the chance to react to exceptions when the statements are entered individually, one after another, pressing Enter after each.
So unless the idea is to test what might later be written in a routine [as a compound statement, i.e. statements between a BEGIN-END pair] without actually coding the CREATE PROCEDURE [or CREATE FUNCTION, or perhaps CREATE TRIGGER] with a routine body, the implicitly created routine [as a procedure] that is then run and deleted to implement the DCS is probably just extra, unnecessary work.
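For comparison, a sketch of roughly what that implicit routine would look like coded explicitly; the library and procedure name here are hypothetical, and the sketch assumes *SYS naming:

CREATE PROCEDURE MYLIB/COPY_CUSTOMER_TABLES ()
LANGUAGE SQL
BEGIN
  CREATE TABLE QTEMP/CUSTOMER AS (SELECT * FROM QIWS/QCUSTCDT) WITH DATA;
  CREATE TABLE QTEMP/ADDRESS AS (SELECT * FROM QTEMP/CUSTOMER) WITH DATA;
END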
Firstly, in my experience a lot of people are unfamiliar with the OUTPUT clause. If so, this link is very handy: Hidden Features of SQL Server
I have the following update statement:
UPDATE lease_deal.lease_budget
SET change_type = NULL
OUTPUT inserted.*
WHERE ISNULL(change_type, '') = ''
Although I thought this would return the updated records for me, I'm receiving the following error:
Msg 334, Level 16, State 1, Line 9
The target table 'lease_deal.lease_budget' of the DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause.
I know I can successfully create a temporary table and redirect the updated records there using the OUTPUT clause, but why can't I return them straight to the IDE? I'm certain I've been able to do this before, but I can't seem to find a suitable example anywhere to help me understand what I'm doing wrong. Is this simply not possible when you have triggers on the table you're updating?
The MSDN documentation on the OUTPUT clause (http://msdn.microsoft.com/en-au/library/ms177564.aspx) says this:
If the OUTPUT clause is specified without also specifying the INTO keyword, the target of the DML operation cannot have any enabled trigger defined on it for the given DML action. For example, if the OUTPUT clause is defined in an UPDATE statement, the target table cannot have any enabled UPDATE triggers.
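So with enabled triggers on the target table, the rows have to be captured with OUTPUT ... INTO first and then selected back. A minimal sketch of that workaround; the table-variable columns here are assumed, not taken from the real schema:

DECLARE @updated TABLE (lease_budget_id INT, change_type VARCHAR(50));

UPDATE lease_deal.lease_budget
SET change_type = NULL
OUTPUT inserted.lease_budget_id, inserted.change_type INTO @updated
WHERE ISNULL(change_type, '') = '';

-- The captured rows can now be returned to the client as a normal result set.
SELECT * FROM @updated;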
I'm using SQL Server 2012, and I'm debugging a stored procedure that does an INSERT INTO a #temporal (temporary) table from a SELECT.
Is there any way to view the data selected by the command (the SELECT subquery of the INSERT INTO)?
Is there any way to view the data inserted and/or the temporal table where the insert made the changes?
It doesn't matter if it's the total set of rows at once rather than one by one.
UPDATE:
Requirements from AT Compliance and Company Policy dictate that no modification may be made to the process under test, and it's probable this will be managed by another team. Is there any way to avoid any change to the script?
The main idea is that the AT user checks the outputs on their own desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected
by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be
returned to the processing application for use in such things as
confirmation messages, archiving, and other such application
requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
This will give you all the rows that the INSERT statement inserted into the temporal table, i.e. the rows selected from the other table.
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer, is that you get a bit more control here. And, if you're in a situation where you have multiple inserts to your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way though (now I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that allows you to put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change your current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF @debug = 1
BEGIN
    SELECT * FROM #temp
END
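Putting it together, a minimal sketch of the pattern; the procedure name, parameters, and table layout here are hypothetical:

CREATE PROC dbo.load_budget_snapshot
    @source_filter VARCHAR(50),
    @debug BIT = 0   -- last parameter, defaults to 0 so existing callers need no change
AS
BEGIN
    CREATE TABLE #temp (id INT, change_type VARCHAR(50));

    INSERT INTO #temp (id, change_type)
    SELECT lease_budget_id, change_type
    FROM lease_deal.lease_budget
    WHERE change_type = @source_filter;

    IF @debug = 1
        SELECT * FROM #temp;  -- intermediate results, shown only in debug runs
END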
Every time I insert new data into my table using the command line, the insert succeeds (I verify it by typing select * from table_name;), but when I close the command line, open it again, and run the same SQL statement (select * from table_name;), it says there are no rows selected. In other words, the inserted row is not saved in the database.
Please can someone help me understand the problem?
Use COMMIT; to make your changes permanent. With autocommit off, an insert is visible only within your current session; if you close the session without committing, the change is rolled back, which is exactly the behaviour you describe.
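A minimal sketch (the table and values are placeholders):

INSERT INTO table_name (id, name) VALUES (1, 'test');
COMMIT;  -- without this, the row disappears when the session ends (unless autocommit is on)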
Executing the following statement with SQL Server 2005 (My tests are through SSMS) results in success upon first execution and failure upon subsequent executions.
IF OBJECT_ID('tempdb..#test') IS NULL
    CREATE TABLE #test ( GoodColumn INT )

IF 1 = 0
    SELECT BadColumn
    FROM #test
What this means is that something is comparing the columns I am accessing in my select statement against the columns that exist on the table when the script is "compiled". For my purposes this is undesirable behaviour. My question is whether there is anything that can be done so that this code executes successfully on every run, or, if that is not possible, perhaps someone could explain why the demonstrated behaviour is desirable. The only solutions I currently have are to wrap the select in EXEC or to use SELECT *, but I don't like either of those solutions.
Thanks
If you put:
IF OBJECT_ID('tempdb..#test') IS NOT NULL
DROP TABLE #test
GO
At the start, then the problem will go away, as the batch will get parsed before the #test table exists.
What you're asking is for the system to recognise that "1=0" will always evaluate to false. If it were ever true (which could potentially be the case for most real-life conditions), then you'd probably want to know that you were about to run something that would cause failure.
If you drop the temporary table and then create a stored procedure that does the same:
CREATE PROC dbo.test
AS
BEGIN
    IF OBJECT_ID('tempdb..#test') IS NULL
        CREATE TABLE #test ( GoodColumn INT )

    IF 1 = 0
        SELECT BadColumn
        FROM #test
END
Then this will happily be created, and you can run it as many times as you like.
Rob
Whether or not this behaviour is "desirable" from a programmer's point of view is debatable of course -- it basically comes down to the difference between statically typed and dynamically typed languages. From a performance point of view, it's desirable because SQL Server needs complete information in order to compile and optimize the execution plan (and also cache execution plans).
In a word, T-SQL is not an interpreted or dynamically typed language, and so you cannot write code like this. Your options are either to use EXEC, or to use another language and embed the SQL queries within it.
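For completeness, a minimal sketch of the EXEC option the question mentions; the dynamic string is only parsed if and when it is executed, so the batch compiles cleanly on every run:

IF OBJECT_ID('tempdb..#test') IS NULL
    CREATE TABLE #test ( GoodColumn INT )

IF 1 = 0
    EXEC ('SELECT BadColumn FROM #test')  -- never parsed here, because 1 = 0 never holds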
This problem is also visible in these situations:
IF 1 = 1
    SELECT dummy = GETDATE() INTO #tmp
ELSE
    SELECT dummy = GETDATE() INTO #tmp
Although the second statement is never executed, the same error occurs.
It seems the query engine's first-level validation ignores all conditional statements.
You say you have problems with subsequent requests; that is because the object already exists. It is recommended that you drop your temporary tables as soon as possible when you are done with them.
Read more about temporary table performance at:
SQL Server performance.com