How to recover the old data from table - sql

I ran an UPDATE statement on a table in SQL Server 2008 which updated the table with some wrong data.
I didn't have a backup of the DB.
Some important dates got updated.
Is there any way I can recover the old data from the table?
Thanks
SNA

Basically no, unless you want to use a commercial log reader and try to go through it with a fine-tooth comb. No backup of the database can be an 'update resume, leave town' scenario - harsh, but it just should not happen.

Andrew basically has called it. I just want to add a few ideas you can consider if you are desperate:
Are there any reports or printouts lying around? Perhaps you can reconstruct the data from there.
Was this data entered via a web application? If so, there is a remote chance you can find the original data in the web server logs, depending upon how the app was constructed, etc.
Does this app interface with (pass data to) any other applications? They may have a buffered copy of the data...
Can the data be derived from any other existing data? Is there an audit log table, or another date in your schema based on this one, from which you can reconstruct the original date?
Edit:
Some commenters are mentioning that it is a good idea to test your update/delete statements before running them. For this to become a habit, it helps if you have an easy method. I usually create my DELETE statements like this:
--delete --select *
from [User]
where UserID=27
To test your query, highlight everything from select onwards and run it. Once you are satisfied with the filter criteria, highlight everything from delete onwards and run that instead. The two dashes in front of delete are there so that if the whole query accidentally gets run, it just fails with a syntax error.
You can use a similar construct for UPDATE statements, although it is not quite as clean.
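For example, here is a rough sketch of the same trick applied to an UPDATE (the [User] table comes from the example above; the LastLogin column is made up for illustration):
--update [User] set LastLogin = GETDATE() --select *
from [User]
where UserID=27
Highlight from select onwards to preview the affected rows, or from update onwards to run the change; the FROM clause is redundant for the UPDATE itself, which is why this variant is not quite as clean.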

SQL Server keeps a log of every transaction, so you can recover your modified data from the log even without a backup.
-- Pull the modified-row log records for the target table from the active transaction log
SELECT [PAGE ID], [Slot ID], [AllocUnitId], [Transaction ID],
       [RowLog Contents 0], [RowLog Contents 1], [RowLog Contents 3], [RowLog Contents 4],
       [Log Record]
FROM sys.fn_dblog(NULL, NULL)
WHERE AllocUnitId IN
      (
        -- Map the table name to its allocation units (in-row, LOB and row-overflow data)
        SELECT allocunits.allocation_unit_id
        FROM sys.allocation_units allocunits
        INNER JOIN sys.partitions partitions
                ON (allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
                OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
        WHERE partitions.object_id = OBJECT_ID('dbo.student')
      )
  AND Operation IN ('LOP_MODIFY_ROW', 'LOP_MODIFY_COLUMNS')
  AND [Context] IN ('LCX_HEAP', 'LCX_CLUSTERED')
Here is the article that explains, step by step, how to do it:
http://raresql.com/2012/02/01/how-to-recover-modified-records-from-sql-server-part-1/
Imran

Thanks for all the responses.
The problem was actually an accident - I forgot to include the WHERE condition in the UPDATE statement.
It was a quick five-minute task (just changing a date to test one customer's data), so we didn't think of taking a backup.
Yes, of course you are right. This is a lesson.
From now on I will be careful to write my update statements in a transaction, or to test my update statements first.
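For the record, a minimal sketch of the transaction approach (the table and column names here are made up):
BEGIN TRANSACTION;

UPDATE dbo.Customer
SET ImportantDate = '2010-01-01'   -- the fix
WHERE CustomerID = 42;             -- double-check this WHERE clause

-- Inspect the result before making it permanent
SELECT CustomerID, ImportantDate
FROM dbo.Customer
WHERE CustomerID = 42;

COMMIT TRANSACTION;   -- or ROLLBACK TRANSACTION if the data looks wrong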
Thanks once again for spending your time to give some insight rather than ignoring the question, since the only answer is "no".
Thanks
SNA

Always take a backup before major UPDATE statements; even if it's never used, there's the peace of mind.
Especially with Red Gate's Object Level Restore, you can now restore an individual table or row from a backup file.
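Taking that backup can be a one-liner, for what it's worth (the database name and path are placeholders):
BACKUP DATABASE MyDb
TO DISK = 'C:\Backups\MyDb_BeforeUpdate.bak';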
Good luck, I'd suggest finding an old copy elsewhere (DEV/QA) etc...

Isn't it possible to do a rollback on an UPDATE statement?

Late one but hopefully useful…
If the database is in full recovery mode, then all transactions are logged in the transaction log and can be retrieved. The problem is that this is not natively supported, because it is not the main purpose of the transaction log.
Options are:
Commercial tools such as Apex Log (more expensive, more options) or Quest Toad (less expensive, fewer options for this purpose; its main focus is on SQL Server management)
Trying to do this yourself, like user1059637 pointed out. The problem with this approach is that it can't read transaction log backups and is more tedious.
It comes down to how much your data is worth to you in terms of time and $.

Related

Renaming a column without breaking the scripts and stored procedures

I want to rename a column in a table,
but the problem is that I would then have to manually modify the column name wherever it appears in triggers or SPs.
Is there any better way of doing it?
To rename a column I am using this:
sp_RENAME 'Tablename.old_Column', 'new_column', 'COLUMN';
Similarly, how can I do it for triggers or SPs, without opening each script?
Well, there are a bunch of 3rd party tools that promise this type of "safe rename", some free and some not:
ApexSQL has a free tool for that, as MWillemse wrote in his answer,
RedGate has a commercial tool called SQLPrompt that also has a safe renaming feature; however, it is far from free.
Microsoft has a Visual Studio add-in called SQL Server Data Tools (SSDT for short), as Dan Guzman wrote in his comment.
I have to say I've never tried any of these specific tools for that specific task, but I do have some experience with SSDT and some of RedGate's products and I consider them to be very good tools. I know nothing about ApexSQL.
Another option is to try and write the SQL script yourself; however, there are a couple of things to take into consideration before you start:
Can your table be accessed directly from outside the SQL server? I mean, is it possible that some software is executing SQL statements directly against that table? If so, you might break it when you rename that column, and no SQL tool will help in this situation.
Are your SQL scripting skills really that good? I consider myself to be fairly experienced with SQL Server, but I think writing a script like that is beyond my skills. Not that it's impossible for me, but it would probably take too much time and effort for something I can get for free.
Should you decide to write it yourself, there are a few articles that might help you in that task:
First, Microsoft's official documentation of sys.sql_expression_dependencies.
Second, an article called Different Ways to Find SQL Server Object Dependencies, written by a DBA with 13 years of experience,
and last but not least, a related question on StackExchange's Database Administrator's website.
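For instance, a first pass at finding the referencing objects could look something like this (a sketch only, not a complete rename script; the table and column names are taken from the question):
SELECT OBJECT_NAME(d.referencing_id) AS referencing_object
FROM sys.sql_expression_dependencies d
WHERE d.referenced_entity_name = 'Tablename'
  AND d.referenced_minor_name = 'old_Column';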
You could, of course, go with the safe way Gordon Linoff suggested in his comment, or use synonyms like destination-data suggested in his answer, but then you will have to modify all of the column's dependencies by hand, and from what I understand, that is what you want to avoid.
Renaming the table column
Deleting the table column
Altering table keys
The best way is to use Database Projects in Visual Studio.
Refer to these links:
link 1
link 2
You can do what @GorDon suggested.
Apart from this, you can also play with this query:
SELECT o.name, sc.*
FROM sys.syscomments sc
INNER JOIN sys.objects o ON sc.id = o.object_id
WHERE sc.text LIKE '%oldcolumnname%'
This will return a list of all the procedures and triggers that mention the old column name. You can also adjust the filter to get an exact list; it will then be very easy for you to modify them manually.
But whatever you decide, don't simply drop the old column.
To be safe, keep a backup as well.
This suggestion relates to Oracle DB, however there may be equivalent solutions in other DBMS's.
A temporary solution to your issue is to create a pseudocolumn. This solution looks a little hacky because the syntax for a pseudocolumn requires an expression. The simplest expression I can think of is the CASE statement below. Let me know if you can make it simpler.
ALTER TABLE <<tablename>> ADD (
  <<new_column_name>> AS (
    CASE
      WHEN 1 = 1 THEN <<tablename>>.<<old_column_name>>
    END)
);
This strategy basically creates a new column on the fly by evaluating the CASE statement and copying the value of <<old_column_name>> into <<new_column_name>>. Because this column is interpolated dynamically, there is a performance penalty versus just selecting the original column.
The one gotcha is that this will only work if you are duplicating a column once. Multiple pseudocolumns cannot contain duplicate expressions in Oracle.
The other strategy you can consider is to create a view, where you can name the columns whatever you want. You can even INSERT/UPDATE/DELETE (execute DML) against views, but this gives you a whole new table name, not just a new column. You could, however, rename the old table and name your view the same as the old table. This also carries a performance penalty versus accessing the underlying table directly.
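A rough sketch of that view-based approach in Oracle (all names here are placeholders):
ALTER TABLE mytable RENAME TO mytable_base;

CREATE VIEW mytable AS
SELECT old_column_name AS new_column_name,
       other_column
FROM mytable_base;
Since the view is a simple projection of a single table, DML against it passes through to mytable_base.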
You might want to replace that text in the object's definition. However, you will need a dedicated administrator connection in SQL Server, and how you set one up varies between versions: add the startup parameter ;-T7806 under Advanced, and put ADMIN: in front of the server name when logging in. Then you may be able to modify the value of the definition.

How to get the query displayed when a change is made to a table or a field in a table in Postgresql?

I have used MySQL for some projects and recently I moved to PostgreSQL. In MySQL, when I alter a table or a field, the corresponding query is displayed on the page. But I could not find such a feature in PostgreSQL (kindly excuse me if I'm wrong). Since the query was readily available, it was very helpful for me to test something in the local database (without explicitly typing the query), copy the printed query and run it on the server. Now it seems like I have to do all of that by hand. Even though I'm familiar with the query operations, at times it can be a pretty time-consuming process. Can anybody help me? How can I get the corresponding query displayed in PostgreSQL (like in MySQL) whenever a change is made to a table?
If you use SELECT * FROM ..., there should not be any reason for your output not to include newly added columns, no matter how you get your results - whether that is psql on the command line, PgAdmin3 or any other IDE.
After you add new columns, it is possible that these changes are still in an open transaction in another window or SQL command - be sure to COMMIT such a transaction. Note that your changes to data or schema will not be visible to any other database client until the transaction commits.
If your IDE still does not show the changes, maybe you need to refresh the list of tables or, if that option is not available, restart your IDE. If that still does not work, maybe you should use a better IDE.
If you have used SELECT field1, field2, ... FROM ..., then you must add the new fields to your SELECT statement(s) - but this would be true for any other SQL implementation, MySQL included.
You could use the LISTEN / NOTIFY mechanism in PostgreSQL to notify your client whenever the database schema is altered.
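A minimal sketch of that idea, assuming a PostgreSQL version with event triggers (9.3 or later); the channel and function names are made up:
-- Fire a notification at the end of every DDL command
CREATE OR REPLACE FUNCTION notify_ddl() RETURNS event_trigger AS $$
BEGIN
  PERFORM pg_notify('ddl_events', tg_tag);  -- tg_tag holds the command tag, e.g. 'ALTER TABLE'
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER ddl_notify
ON ddl_command_end
EXECUTE PROCEDURE notify_ddl();

-- In the client session that wants to watch for schema changes:
LISTEN ddl_events;
Note that this tells you that a change happened (and which kind), not the full query text, so it is only a partial substitute for what MySQL shows.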

What problems may occur while querying SQL databases with big amount of data over internet

I have a big database on one MSSQL server that contains data indexed by a web crawler.
Every day I want to update a SOLR SearchEngine index using the DataImportHandler, which is situated on another server in another network.
Solr's DataImportHandler uses a query to get data from SQL. For example, this query:
SELECT * FROM DB.Table WHERE DateModified > Config.LastUpdateDate
The ImportHandler does 8 selects of this type. Each select will get around 1000 rows from the database.
To connect to SQL Server I am using com.microsoft.sqlserver.jdbc.SQLServerDriver
The parameters I can add for connection are:
responseBuffering="adaptive/full"
batchSize="integer"
So my question is:
What can go wrong while running these queries every day? (except network errors)
I want to know how SQL Server behaves in this context.
Furthermore, I have to make a decision regarding the way I will implement this import and how to handle errors, but first I need to know what errors can arise.
Thanks!
Later edit
My problem is that I don't know how these SQL queries can fail. When I call this importer every day it does 10 queries to the database. If the 5th query fails I have two options:
roll back the entire transaction and do it again, or commit the data I got from the first 4 queries and somehow redo queries 5 to 10. But if these queries always fail because of some other problem, I need to think of another way to import this data.
Can these SQL queries over the internet fail because of timeouts or something like that?
The only problem I identified after working with this type of import is:
Network problems - if the network connection fails, SOLR rolls back any changes and the commit doesn't take place. In my program I identify this as an error and don't log the changes in the database.
Thanks @GuidEmpty for providing his comment and clarifying this for me.
There could be issues with permissions (not sure if you control these).
It might be a good idea to catch the exceptions you can think of and include a catch-all (Exception exp).
Then take the overall one as a worst case, roll back (where you can) and log the exception to examine later on.
You don't say what types you are selecting either; keep in mind that text/blob can take a lot more space and could cause issues internally if you buffer any data, etc.
Though on a quick re-read, you don't need to roll back if you are only selecting.
I think you would be better off having a think about what you are hoping to achieve and whether knowing all possible problems will actually help.
HTH

Empty XML Columns during SQL Server replication

We have a merge replication setup on SQL Server that goes like this: one SQL server at the office, another SQL server traveling around the world. The publisher is the SQL server at the office.
In about 1% of cases, two of our tables with a column of XML data type (not bound to a schema) are replicated with rows containing empty XML columns. (This only happens when data is sent from the "traveling server" back home, but then again, data seems to be changed more often there.) We only see this in the prod environment (WAN replication).
Things I have verified:
The row is replicated, as the last modification date on the row is refreshed, but the XML column is empty. Of course it is not empty on the other SQL Server.
No conflicts are displayed in the replication conflicts UI.
It is not caused by the size of the data inside the XML column, as some are very small.
Usually, the problem occurs in batches (the XML column of 8-9 consecutive rows will be empty).
The problem occurs whether a row was inserted or updated. No pattern there.
The problem seems to occur - though this is pure speculation on my part - when the connection is weaker. (We've seen this problem happen more often when the server was far away as compared to when it was close by.)
Sorry if I have confused some things; I am not really a DBA, more of a DEV with knowledge of SQL, but since the application using the database keeps getting blamed for the problems (the XML column must not be empty!!), I have taken it to heart to try and find the problem instead of just manually patching the data each time. (What's the use of replication if you have to do that?)
If anyone could help out with this problem, or at least suggest some ways of being able to debug / investigate this it would be greatly appreciated.
I did search a lot on Google and I did find this: Hot Fix. But we do have the latest service pack and the problem seems a bit different.
FYI: we have a replication setup locally here, but the problem never occurs. We will be trying a WAN simulator on it as well to see if that helps.
Thanks
Edit: hot fix is now available for my issue: http://support.microsoft.com/kb/2591902
After logging this issue with Microsoft, we were able to reproduce the problem without a slow link ( Big thanks to the competent escalation engineer at Microsoft ). The repro is a bit different from our scenario, but highlights the timing issue we were getting perfectly.
Create 2 tables – one parent, one child, with a PK-FK relationship (a sketch of these tables follows the steps below)
Insert 2 rows in the parent table
Set up replication – configure merge agent to run ON DEMAND
Sync
Once all is replicated:
On the PUBLISHER: delete one row from the parent table
On the SUBSCRIBER: insert 2 rows of data that reference the parentid you deleted above
Insert 5 rows of data that reference the parentid that will stay in the table
Sync, Merge agent will fail, Sync again, Merge agent will succeed
Missing XML data on the publisher on the 5 rows.
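A minimal sketch of the two tables from the first step (the names and columns are my own assumptions, not from the original repro):
CREATE TABLE dbo.Parent (
    ParentID int PRIMARY KEY,
    Payload xml NULL
);

CREATE TABLE dbo.Child (
    ChildID int PRIMARY KEY,
    ParentID int NOT NULL
        CONSTRAINT FK_Child_Parent REFERENCES dbo.Parent (ParentID),
    Payload xml NULL
);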
It seems to be a bug in SQL Server 2005/2008 and 2008 R2.
It will be addressed in a hotfix for 2008 and up (as SQL Server 2005 is no longer being altered).
Cheers.
You may want to start out by slapping a bandaid on this perplexing situation to buy some time to fully investigate and fix (or more likely get MS to fix it). SQL Data Compare is an excellent tool that might help.
Figured I'd put an update here, as this issue gave me a few gray hairs and I am somewhat closer to a solution now.
I finally had some time to work on this and managed to reproduce the issue in our test environment, using a WAN simulator, slowing down the link and injecting some random packet loss (to best simulate the production environment, where the server is overseas on a really bad line).
After doing some SQL tracing, and some verbose logging here are my conclusions:
When replicating a row with an XML column, the process is done in 2 steps. First, an insert of the full row is done, but with an empty string for the XML column. Right after, an update is done, this time with the XML column containing the data. Since the link is slow, in some situations a foreign key violation occurred.
In this scenario, Table2 depends on Table1. After finishing replicating Table1 and starting to replicate Table2 (an enumeration of inserts/updates, which takes time on a slow link), some entries were added to Table1 and Table2. Therefore some inserts on Table2 failed because the Table1 entries were not in the database yet and were only going to be replicated in the next batch. The next time the replication occurred there were no more foreign key violations; however, when it tried to insert the row that had previously failed in Table2 (the XML column row), the update part of it was missing (I could see that in the SQL Profiler), and that is why the row ended up with an empty XML column after all was done.
Setting "Enforce for replication" to false on the foreign keys seems to address the problem; however, I still think that this whole process should work with the option set to true.
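For reference, this is roughly the T-SQL equivalent of turning that option off when recreating the constraint (the table and constraint names match the sketch above and are made up):
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Parent
FOREIGN KEY (ParentID) REFERENCES dbo.Parent (ParentID)
NOT FOR REPLICATION;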
I logged a support call with Microsoft for this. I have sent the traces and logs to Microsoft and will see what they have to say.
I've read this article: http://msdn.microsoft.com/en-us/library/ms152529(v=SQL.90).aspx. But to me, setting this option to false is kind of a workaround, no?
What do you guys think?
PS: I hope this is clear; I tried to explain it the best I could. English is not my first language.

Problem with a MS Access query after a "Compact and repair" operation

I have an Access application that uses the classic front-end/back-end approach. Yesterday, the back end got corrupted for a reason I don't know. So I opened the back end with Access 2003 and Access asked me if I wanted to repair the file; I said yes and it seemed to work.
I can open the database, see the tables' contents and run most of the queries.
However, there is an Access query that doesn't work with a specific WHERE clause.
Example:
// This works in the original DB, but not in the compacted one:
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3 AND tbl2.f = 1;
// This works in both the original and the compacted one:
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3;
When I try to run the queries, nothing happens. The Access process starts to use most of the CPU and the GUI stops responding. If I run the query from the query editor, I can use Ctrl+Break to stop the execution. I tried giving the query a lot of time and it didn't help.
I've checked the execution plan in showplan.out and it seems correct (at least, it should not take forever to execute).
I tried to compact the DB again. I tried to import the tables into a new DB. I even tried to import the tables and their data into an MDB file that was in a known good state (from a backup).
Anyone have an idea?
Sounds like an index was corrupted, and when that happens it is dropped during the compact. Check for a system table called MSysCompactErrors - you'll have to show hidden objects and/or system objects in Tools | Options | View.
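For example, once system objects are visible you can check it with a quick query (the table only exists if the compact actually logged errors):
SELECT * FROM MSysCompactErrors;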
Never compact a Jet MDB without making a backup beforehand. Because of that rule, the COMPACT ON CLOSE function is completely useless, as it is not cancellable, so always make sure it is turned off in all MDBs.
I don't know what kind of metadata Access brings along when it imports a table from one database into another. If the metadata is corrupted, importing the table into another database wouldn't necessarily resolve the problem. If practical, you might try creating the tables from scratch in a brand new database and then just exporting and importing (or copying and paste-appending) the data into the new database.
I've never seen a table get corrupted like this in such a small database, although with Access anything is possible. Could there be something wrong with the data?
I'd try recreating the query fresh (new name, etc.), and see what happens.
You could even try copying it (even within the same DB or to a brand new one). If that works, the worst case scenario is you have to copy all the objects across to a new DB.
Is there an index on the field tbl2.f?
Also try going into that table in datasheet view, sort tbl2.f in ascending sequence and see if there is anything really strange in the first or last records.
Do you have access to a SQL Server installation? You could use the Upsizing Wizard (under the Tools -> Database Utilities menu) to copy the data to SQL Server and see if you get the same problem there.