SQL Table Dropped Weeks Ago Only Erroring Now

I ran into an issue last night that I cannot quite figure out. I'm no SQL expert, and after countless Google searches I'm still nowhere close to figuring out why this is happening.
About six weeks ago a table was supposedly dropped from my database, not by me. After this table was dropped, all the views that depended on it still functioned properly. This table was unused but still had a handful of dependencies, as I can now see in the Object Dependency viewer in SQL Server Management Studio. We implemented a few updates to our SQL Server 2012 instance last night and the server was restarted.
At about the time these updates were going on we started to receive a bunch of errors all revolving around this missing table that was deleted 6 weeks ago. After recreating the table everything was fine.
We're currently going through the updates to see if they could have affected it in any way. Does anyone know if there is any sort of caching that could have been going on that I'm not aware of? I'm really stumped as to why this worked for those six weeks.

Was the table a temp table (a table name preceded by #)? If so, SQL Server sometimes caches temp tables. That could explain why the views worked. Then, when you updated your server and restarted it, the temp table cache was cleared.
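If it was a regular table, it may also be worth listing what still references the dropped name. A rough sketch of the kind of query behind the Object Dependency viewer, using sys.sql_expression_dependencies (the table name is a placeholder):

-- Objects (views, procedures, functions) that still reference the missing table.
-- Replace 'MissingTable' with the name of the dropped table.
SELECT
    OBJECT_NAME(d.referencing_id) AS referencing_object,
    o.type_desc                   AS referencing_type
FROM sys.sql_expression_dependencies AS d
JOIN sys.objects AS o
    ON o.object_id = d.referencing_id
WHERE d.referenced_entity_name = N'MissingTable';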

Related

How to restore an existing backup from an existing database to a new one without affecting the original?

I've searched extensively for the answer to this question and could not find a good one. I've looked into several database-restore articles and a few on rollbacks too, but still no success.
My situation is: I have a very large database in which I executed a wrong update query against a single column of a single table, and I have a full backup of this database from yesterday (which is more than enough to correct the problem). But the other tables of this same DB were updated in the meantime, and I need them to keep their current values.
So after all the reading, my plan was: restore the full backup to a new location, then get the values of the column I need and apply them to the current database.
My problem is: I'm not able to restore this full backup without affecting the production DB. When I try to restore it, Management Studio says the .mdf file can't be overwritten (which is good, because I'll still be using that database), and then I saw some articles telling me to use the MOVE option of RESTORE. But if I do use it, won't the .mdf files of the original/production database be relocated, thus affecting that database?
I also saw a few articles telling me to roll it back if I have transaction log backups. I wasn't actually able to tell whether I have those, or even what they are, even after googling it.
Any thoughts on how I should proceed?
Sorry if it is a newbie question, but I'm not originally a programmer, yet I have been doing this for work and I really need it done fast! So any help would be strongly appreciated.
I'm using SQL Server 2005 Standard with SQL Server Management Studio 2008.
Restore the backup with a different name, like DB_Temp, to any location.
Copy the table from the running DB using SELECT INTO.
Import records from the newly restored DB's (DB_Temp) table into the running DB.
Delete the database DB_Temp.
Check the differences between the recently copied table and the original one.
Update records accordingly.
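A rough T-SQL sketch of those steps follows; the file paths, logical file names, and table/column names are hypothetical, so check the logical names in your backup with RESTORE FILELISTONLY first:

-- 1. See the logical file names inside the backup (needed for MOVE).
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb_Full.bak';

-- 2. Restore the backup under a new name and to new physical files,
--    so the production .mdf/.ldf are never touched.
RESTORE DATABASE DB_Temp
FROM DISK = N'D:\Backups\MyDb_Full.bak'
WITH MOVE N'MyDb_Data' TO N'D:\Data\DB_Temp.mdf',
     MOVE N'MyDb_Log'  TO N'D:\Data\DB_Temp.ldf';

-- 3. Keep a safety copy of the damaged table in the production DB.
SELECT * INTO dbo.MyTable_BeforeFix FROM dbo.MyTable;

-- 4. Pull the old column values back in from the restored copy,
--    matched on the primary key, leaving all other columns alone.
UPDATE t
SET    t.BadColumn = r.BadColumn
FROM   dbo.MyTable AS t
JOIN   DB_Temp.dbo.MyTable AS r
  ON   r.Id = t.Id;

-- 5. When everything checks out, drop the temporary database.
-- DROP DATABASE DB_Temp;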
Thanks

PostgreSQL live and test database

I've been working with PostgreSQL for a few months now. So far we have used the live database for almost everything (creating new columns in the live tables, executing update and insert queries, etc.). But now we want to go live, and we have to do things differently before we do that. The best way is to have a test database and a live database.
Now I have created a copy of the live database so we have a test database to run tests on. The problem is that the data is stale after 24 hours, so we actually need to create a fresh copy every 24 hours, which is not really practical to do manually.
So my question is, are there people over here who know a proper way to handle this issue?
I think the most ideal way is:
- copy a selection of tables from the live database to the test database (skipping tables like users);
- make it possible to add columns, rename them or even delete them, and when we deploy a new version of the website, transfer those changes from the test database to the live database (not necessary, but it would be a good feature).
If your database structure is changing, you do NOT want it automatic. You will blow away dev work and data. You want it manual.
I once managed a team that had a similar situation: a multi-TiB database, updated daily, with a need to do testing and development against that up-to-date data. Here is how we solved it:
In our database, we defined a function called TODAY(). In our live system, this was a wrapper for NOW(). In our test system, it read from a one-column table whose only row was a date that we could set. This meant that our test system was a time machine that could pretend any date was the current one.
This meant that every function or procedure we wrote had to be time-aware. Should I care about future-scheduled events? How far in the future? This made our functions extremely robust, and made it dead simple to test them against a huge variety of historical data. This helped catch a large number of bugs that we would have never thought would happen, but we saw would indeed occur in our historical data. It's like functional programming for your database!
We would still schedule database updates from a live backup, about every month or so. This had the benefit of more data AND testing our backup/restore procedure. Our DBA would run a "post-test-sync" script that would set permissions for developers, so we were damn sure that anything we ran on the test system would work on the live one as well. This is what helped us build our deployment database scripts.
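A minimal sketch of the TODAY() idea, with hypothetical names and returning a date for simplicity (the live system wraps the real clock, the test system reads a one-row table):

-- Live system: TODAY() is simply the real current date.
CREATE OR REPLACE FUNCTION today() RETURNS date AS $$
    SELECT current_date;
$$ LANGUAGE sql STABLE;

-- Test system: TODAY() reads a one-row table so tests can "time travel".
CREATE TABLE fake_clock (today date NOT NULL);
INSERT INTO fake_clock VALUES (current_date);

CREATE OR REPLACE FUNCTION today() RETURNS date AS $$
    SELECT today FROM fake_clock;
$$ LANGUAGE sql STABLE;

-- Changing the pretend date for a test run:
-- UPDATE fake_clock SET today = '2005-01-01';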

Dropped and recreated the table in SQL - Data not being read after that

I am a beginner-level programmer who was assigned to update the data content in a UI. This UI references a database table, so I went ahead and began updating the table as per its constraints. I had a backup of the data taken and had the CREATE script saved as well before running the modification queries in SQL Server Management Studio 2008.
A misleading update statement corrupted the table when it updated the whole table instead of just 4 rows, and I could not pinpoint what data ended up being modified and what remained the same. Long story short, I had to delete the records and eventually drop the table and then reconstruct it. Everything went well: I recreated the schema, inserted the data from the backup and continued querying.
However, the UI which was populating its display section from my table went all blank after the incident, and I am at a loss as to where exactly I went wrong. It is a small database and this table was NOT referencing any other table. The permissions look the same as they were before. I can't really understand what has gone wrong. Queries work well.
If you have had the patience to read my tediously long narrative till now, can you please tell me what it is that I am missing here?
I apologize for the overdrawn description but the context felt more important than the problem statement itself.

What is SQL ReportServer GetMyRunningJobs in my SQL Profiler?

While running SQL Profiler on a client site I noticed GetMyRunningJobs running over and over, bogging down their system in the morning from about 5:30 am to 6:30 am. I know it runs all the time, but for some reason it appears to run 4 times in a row every couple of seconds in the morning. I'm not really sure what it is used for; though I've read a lot on SQL Profiler, I can't find much on SQL Report Server.
Can I stop it or change the frequency, or is there something else going on that I can check? Also, what is TABLOCKX, and is this related?
Thanks. Any help appreciated!
To answer your secondary question, TABLOCKX is a SQL Server table hint that applies an exclusive table lock. I'd think this would be related to your problem only if something is holding the lock for an unusually long time during the timeframe you indicated.
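For reference, TABLOCKX is applied as a table hint in a WITH clause on the table reference; a minimal sketch with a hypothetical table name:

-- Takes an exclusive lock on the whole table, blocking other readers and
-- writers until the transaction ends.
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.Orders WITH (TABLOCKX);
-- ... work that needs the table to itself ...
COMMIT TRANSACTION;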

SQL Server 2005 multiple database deployment/upgrading software suggestions

We've got a product which utilizes multiple SQL Server 2005 databases with triggers. We're looking for a sustainable solution for deploying and upgrading the database schemas on customer servers.
Currently, we're using Red Gate's SQL Packager, which appears to be the wrong tool for this particular job. Not only does SQL Packager appear to be geared toward individual databases, but the particular (old) version we own has some issues with SQL Server 2005. (Our version of SQL Packager worked fine with SQL Server 2000, even though we had to do a lot of workarounds to make it handle multiple databases with triggers.)
Can someone suggest a product which can create an EXE or a .NET project to do the following things?
* Create a main database with some default data.
* Create an audit trail database.
* Put triggers on the main database so audit data will automatically be inserted into the audit trail database.
* Create a secondary database that has nothing to do with the main database and audit trail database.
And then, when a customer needs to update their database schema, the product can look at the changes between the original set of databases and the updated set of databases on our server. Then the product can create an EXE or .NET project which can, on the customer's server...
* Temporarily drop triggers on the main database so alterations can be made.
* Alter database schemas, triggers, stored procedures, etc. on any of the original databases, while leaving the customer's data alone.
* Put the triggers back on the main database. (A sketch of these trigger steps appears below.)
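To show what we mean by the trigger steps, a minimal T-SQL sketch with hypothetical table and column names (disabling is usually enough, rather than actually dropping and re-creating the triggers):

-- Temporarily disable the audit triggers on the main table...
ALTER TABLE dbo.Customers DISABLE TRIGGER ALL;

-- ...make the schema change...
ALTER TABLE dbo.Customers ADD PreferredName nvarchar(100) NULL;

-- ...then re-enable the triggers.
ALTER TABLE dbo.Customers ENABLE TRIGGER ALL;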
Basically, we're looking for a product similar to SQL Packager, but one which will handle multiple databases easily. If no such product exists, we'll have to make our own.
Thanks in advance for your suggestions!
I was looking for this product myself, knowing that the Red Gate solution worked fine for "one" DB; unfortunately, I have been unable to find such a tool :(
In the end, I had to roll my own solution to do something "similar". It was a pain in the… but it worked.
My scenario was way simpler than yours, as we didn't have triggers and T-SQL.
Later, I decided to take a different approach:
Every DB change had a SCRIPT. Numbered. 001_Create_Table_xXX.SQL, 002_AlterTable_whatever.SQL, etc.
No matter how small the change is, there's got to be a script. The new version of the updater does this:
Makes a backup of the customer DB (just in case).
Starts executing scripts in alphabetical order (001, 002, ...).
If a script fails, it drops the DB, logs the script error, script number, etc., and restores the customer's DB from the backup.
If it finishes, it makes another backup of the customer's DB (after the "migration") and updates a table where we store the DB version; this table is checked by the app to make sure that the DB and the app are in sync.
Shows a nice success message.
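A bare-bones sketch of the pieces such an updater touches, with hypothetical database, path, and table names:

-- Safety backup before running any numbered scripts.
BACKUP DATABASE CustomerDb TO DISK = N'D:\Backups\CustomerDb_pre_upgrade.bak';

-- Version table that the application checks at startup.
CREATE TABLE dbo.SchemaVersion
(
    Version   int      NOT NULL,
    AppliedOn datetime NOT NULL DEFAULT (GETDATE())
);

-- Each numbered script ends by recording the version it brings the DB to,
-- e.g. at the bottom of 002_AlterTable_whatever.SQL:
INSERT INTO dbo.SchemaVersion (Version) VALUES (2);

-- The updater (and the app) check the current version before proceeding:
SELECT MAX(Version) AS CurrentVersion FROM dbo.SchemaVersion;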
This turned out to be a little more "manual", but it has been working with little effort for three years now.
The secret lies in keeping a few testing DBs to try the "upgrade" against before deploying. But apart from a few isolated DBs where some scripts failed because of data inconsistencies, this has worked fine.
Since your scenario is a bit more complex, I don't know if this kind of approach will work for you.
As of this writing (June 2009) there's still no product on the market that'll do all this for multiple databases. I work for Quest Software, makers of Change Director for SQL Server, another database change automation system. Ours doesn't handle multiple databases like you're after, and I've seen the others out there. No dice.
I wouldn't hold out hope for it either, given the directions I've seen in SQL Server management. Things are going more toward packaged applications being contained in a single database, and most of the code is focusing on that.