Can you please help me identify what's wrong. I have a table into which I perform a really big INSERT ... SELECT from another table; the first table is accessed over a dblink. After a few unsuccessful tries and tests (my query was wrong then) it started throwing the said ORA error and doesn't allow me to insert even a single row. I can still select from it, though, and no data is lost. How can I start inserting there again?
I have a trigger doing an AFTER INSERT into a different table. The destination table is a staging table for data to be sent to another company. In the database on our side, the date is a smalldatetime. The other company needs this split into a Date field and a Time field. Neither makes it to the staging table. I have tried several different things, including CAST and CONVERT, with no success.
The pertinent SQL is below:
```sql
CAST(inserted.CallInDate AS DATE) AS ClientCallinDate,
CAST(inserted.CallInDate AS TIME) AS ClientCallinTime,
--CONVERT(DATE, inserted.CallInDate) AS ClientCallinDate,
--CONVERT(TIME, inserted.CallInDate) AS ClientCallinTime,
```
This trigger follows another AFTER INSERT trigger doing the same thing to other tables. The first trigger is set to fire "First" and the trigger with the problem is set to fire "Last":

`EXEC sp_settriggerorder @triggername=N'[dbo].[TR_FirstTable_to_Client_I]', @order=N'First', @stmttype=N'INSERT'`

The second trigger also has another failing field, one created by the first trigger as a confirmation of receipt from the other company. I do not think these are related, but seeing as I have not figured either out, I cannot be sure.

`EXEC sp_settriggerorder @triggername=N'[dbo].[TR_FirstTable_to_Client_II]', @order=N'Last', @stmttype=N'INSERT'`
What I need to know is what could be failing, or what I need to change in the SQL. I have dropped the trigger and recreated it, but that was of no help, and it actually gave me an error I have yet to solve.
If your code above is part of an INSERT statement, it might just be failing because of the aliasing (AS ClientCallinDate). As long as your statement is constructed correctly, you do not need to alias the columns in the SELECT of an INSERT ... SELECT.
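For example, here is a minimal sketch of what the trigger body might look like without the aliases. The staging table name, source table name, and column list are assumptions, not your actual schema; the INSERT column list does the mapping, so the SELECT side needs no aliases:

```sql
-- Sketch only: dbo.FirstTable, dbo.StagingTable, and the column names
-- are assumed, not taken from the actual schema.
CREATE TRIGGER [dbo].[TR_FirstTable_to_Client_II]
ON dbo.FirstTable
AFTER INSERT
AS
BEGIN
    -- The INSERT column list maps the values; no AS aliases needed.
    INSERT INTO dbo.StagingTable (ClientCallinDate, ClientCallinTime)
    SELECT CAST(inserted.CallInDate AS DATE),
           CAST(inserted.CallInDate AS TIME)
    FROM inserted;
END;
```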
We use a DB2 database. Some data warehouse tables are TRUNCATEd and reloaded every day. We run into deadlock issues when another process is running an INSERT statement against that same table.
Scenario
- TRUNCATE is executed on a table.
- At the same time, another process INSERTs some data into the same table. (The process is based on a trigger and can start at any time.)
Is there a workaround?

What we have thought of so far is to prioritize the TRUNCATE and then go through with the INSERT. Is there any way to implement this? Any help would be appreciated.
You should request a table lock before you execute the truncate.
If you do this you can't get a deadlock -- the table lock won't be granted until the in-flight insert finishes, and once you hold the lock, another insert can't occur.
Update from comment:
You can use the LOCK TABLE command. The details depend on your situation, but you should be able to get away with SHARE mode, which will allow reads but not inserts (the issue you are having, I believe).
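A minimal DB2 sketch of the reload job (the table names DW_TABLE and DW_TABLE_STAGING are hypothetical):

```sql
-- SHARE mode lets concurrent readers through but blocks the inserting
-- process until this unit of work commits.
LOCK TABLE DW_TABLE IN SHARE MODE;

-- Note: on DB2 LUW, TRUNCATE ... IMMEDIATE must be the first statement
-- in its unit of work, so after an explicit LOCK TABLE a DELETE is used
-- here instead; if your platform allows TRUNCATE after LOCK TABLE, use it.
DELETE FROM DW_TABLE;
INSERT INTO DW_TABLE SELECT * FROM DW_TABLE_STAGING;

COMMIT;  -- releases the table lock
```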
It is possible this won't fix your problem. That probably means your INSERT statement is too complicated -- maybe it is reading from a bunch of other tables or from a federated table. If that is the case, re-architect your solution to include a staging table (first insert into the staging table, slowly, then insert into the target table from the staging table).
I have a script that runs a SELECT INTO to populate a table. To my knowledge, there are no other procedures that might be concurrently referencing or modifying this table. Once in a while, however, I get the following error:
Schema changed after the target table was created. Rerun the Select Into query.
What can cause this error and how do I avoid it?
I did some googling, and this link suggests that SELECT INTO cannot be used safely without some crazy try-catch-retry logic. Is this really the case?
I'm using SQL Server 2012.
Unless you really don't know the fields and data types in advance, I'd recommend first creating the table, then adding the data with an INSERT statement. In your link, David Moutray suggests the same thing; here's his example code verbatim:
```sql
CREATE TABLE #TempTableY (ParticipantID INT NOT NULL);

INSERT #TempTableY (ParticipantID)
SELECT ParticipantID
FROM TableX;
```
I ran a large query (~30 MB) which inserts data into ~20 tables. Accidentally, I selected the wrong database. There are only two tables with the same name but with different columns. Now I want to make sure that no data was inserted into this database; I just don't know how.
If your table has a timestamp column, you can test for that.
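For example (the table and column names below are assumptions; substitute a datetime column that your tables populate on insert):

```sql
-- Look for rows stamped at or after the time the big script was run.
SELECT *
FROM dbo.SuspectTable
WHERE CreatedAt >= '2013-06-01 10:30';  -- when the script ran
```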
Also, SQL Server keeps a log of all transactions.
See: https://web.archive.org/web/20080215075500/http://sqlserver2000.databases.aspfaq.com/how-do-i-recover-data-from-sql-server-s-log-files.html
This will show you how to examine the log to see if any inserts happened.
Best option: go for a trigger. Use a trigger to capture the database name, the table name, and the history of the records manipulated.
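A minimal audit-trigger sketch (the log table, monitored table, and column names are all assumptions; note this only captures inserts from the moment it is created onward, so it cannot recover what already happened):

```sql
-- Hypothetical audit table: one row per insert event.
CREATE TABLE dbo.AuditLog (
    LoggedAt     DATETIME NOT NULL DEFAULT GETDATE(),
    DbName       SYSNAME  NOT NULL,
    TableName    SYSNAME  NOT NULL,
    RowsInserted INT      NOT NULL
);
GO
CREATE TRIGGER TR_SomeTable_Audit
ON dbo.SomeTable
AFTER INSERT
AS
BEGIN
    -- Record which database and table were hit, and how many rows.
    INSERT INTO dbo.AuditLog (DbName, TableName, RowsInserted)
    SELECT DB_NAME(), 'dbo.SomeTable', COUNT(*)
    FROM inserted;
END;
```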
I am trying to do a bulk upload into a SQL server DB. The source file has duplicates which I want to remove, so I was hoping that the operation would automatically upload the first one, then discard the rest. (I've set a unique key constraint). Problem is, the moment a duplicate upload is attempted the whole thing fails and gets rolled back. Is there any way I can just tell SQL to keep going?
Try bulk inserting the data into a temporary table and then using SELECT DISTINCT as @madcolor suggested, or:
```sql
INSERT INTO yourTable
SELECT * FROM #tempTable tt
WHERE NOT EXISTS (SELECT 1 FROM yourTable yt WHERE yt.id = tt.id)
```
or match on another field in the WHERE clause.
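Putting the whole staging approach together, a sketch might look like this (the file path, table, and column names are hypothetical):

```sql
-- No unique constraint on the temp table, so duplicate rows load cleanly.
CREATE TABLE #tempTable (id INT, name VARCHAR(100));

BULK INSERT #tempTable
FROM 'C:\data\source.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Keep one (arbitrary) row per id, and skip ids already in the target.
INSERT INTO yourTable (id, name)
SELECT d.id, d.name
FROM (SELECT id, name,
             ROW_NUMBER() OVER (PARTITION BY id ORDER BY (SELECT NULL)) AS rn
      FROM #tempTable) AS d
WHERE d.rn = 1
  AND NOT EXISTS (SELECT 1 FROM yourTable yt WHERE yt.id = d.id);
```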
If you're doing this through some SQL tool like SQL Plus or DBVis or Toad, then I suspect not. If you're doing this programmatically in a language, then you need to divide and conquer. Presumably executing each insert line by line and catching each exception would be too lengthy a process, so instead you could run the whole SQL block as one batch operation first, and if it fails, run it on the first half, and if that fails, on the first half of the first half. Iterate this way until you have a block that succeeds, discard that block, and apply the same procedure to the rest of the SQL. Anything that violates a constraint will eventually end up as a single SQL statement, which you know to log and discard. This should import with as much bulk processing as possible while still throwing out the invalid lines.
Use SSIS for this. You can tell it to skip the duplicates. But first make sure they are true duplicates: if the data in some of the columns differs, how do you know which is the better record to keep?