I have a Microsoft SQL Server instance Instance1 with a database called Maintenance, with a table called TempWorkOrder. I have another SQL Server instance Instance2 that has a database called MaintenanceR1 with a table called WorkOrder.
Instance2 is set up as a linked server on Instance1. I want to copy any changed or new records from Instance2.MaintenanceR1.WorkOrder to Instance1.Maintenance.TempWorkOrder every hour.
I thought about creating a job that deletes all of the records in Instance1.Maintenance.TempWorkOrder and repopulates it from Instance2.MaintenanceR1.WorkOrder every hour. I am afraid this approach will let the log file grow out of control.
Would I be better off dropping the table and re-creating it to keep the log file size reasonable? The table contains about 30,000 rows of data.
30,000 rows really shouldn't cause anything to get out of control. If you are really worried about log size, you can TRUNCATE the table instead of deleting, and use a minimally logged bulk insert to repopulate it.
https://www.mssqltips.com/sqlservertip/1185/minimally-logging-bulk-load-inserts-into-sql-server/
https://learn.microsoft.com/en-us/sql/t-sql/statements/truncate-table-transact-sql
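As a minimal sketch, the hourly job step could look like this, assuming the linked server is named Instance2, both tables use the dbo schema, and the Maintenance database is in the SIMPLE or BULK_LOGGED recovery model (needed for the insert to be minimally logged):

-- Assumes linked server [Instance2] and dbo schema on both sides.
-- TRUNCATE TABLE is minimally logged; WITH (TABLOCK) lets the insert be
-- minimally logged under the SIMPLE or BULK_LOGGED recovery model.
TRUNCATE TABLE Maintenance.dbo.TempWorkOrder;

INSERT INTO Maintenance.dbo.TempWorkOrder WITH (TABLOCK)
SELECT *
FROM [Instance2].MaintenanceR1.dbo.WorkOrder;

Schedule that as a SQL Server Agent job step running every hour.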
I need to perform some calculations using a few columns from a table. This database table, which gets updated every couple of hours, generates duplicates on a couple of columns every other day. There is no way to tell which row was inserted first, which affects my calculations.
Is there a way to copy these rows into a new table automatically as data gets added every couple of hours and perform calculations on the fly? This way whatever comes first will be captured into a new table for a dashboard and for other business use cases.
I thought of creating a stored procedure and using a job scheduler to perform this. But I do not have admin access and cannot schedule jobs. Is there another way of doing this efficiently? Much appreciated!
Edit: My request for admin access is being approved.
Another approach, in addition to what is stated in the other answers, is:
Make a temp table.
Make a prod table.
Use a stored procedure to copy everything from the temp table into the prod table after any load has been done.
Use the same stored procedure to clean out the temp table after the load is done.
I don't know if this will work for you, but this is in general how we deal with a large amount of load on a daily basis.
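A rough sketch of such a procedure, using hypothetical tables dbo.StagingData and dbo.ProdData with placeholder columns (adjust the column list and the duplicate rule to your own schema):

CREATE PROCEDURE dbo.usp_LoadProdFromStaging
AS
BEGIN
    SET NOCOUNT ON;

    -- Placeholder table and column names; KeyCol1/KeyCol2 stand in for the
    -- columns that define a duplicate in your data.  Only rows not already
    -- present in prod are copied, so the first version to arrive is kept.
    INSERT INTO dbo.ProdData (KeyCol1, KeyCol2, ValueCol)
    SELECT s.KeyCol1, s.KeyCol2, s.ValueCol
    FROM dbo.StagingData AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.ProdData AS p
                      WHERE p.KeyCol1 = s.KeyCol1
                        AND p.KeyCol2 = s.KeyCol2);

    -- Clean the staging table once the load is done.
    TRUNCATE TABLE dbo.StagingData;
END;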
I need to alter the size of a column on a large table (millions of rows). It will be changed to an nvarchar(n) rather than nvarchar(max), so from what I understand it will not be a long change. But since I will be doing this in production, I wanted to understand the ramifications in case it does take long.
Should I just hit F5 from SSMS like I execute normal queries? What happens if my machine crashes? Or goes to sleep? What's the general best practice for doing long running updates? Should it be scheduled as a job on the server maybe?
Thanks
Please DO NOT just hit F5. I did this once and lost all the data in the table. Depending on the change, the script SSMS generates for you actually stores the data in memory, drops the table, creates the new one with the change you want, and repopulates it from memory. However, in my case one of the changes was adding a unique constraint, so the repopulation failed, and once the statement was over the data in memory was discarded. This left me with a new, empty table.
I would create the table you are changing, with the change(s) you want, as a new table. Then SELECT * INTO the new table, and rename the tables in a single transaction. If there is potential for data to be entered into the table while this is running and that is an issue, you may want to lock the table.
Depending on the size of the table and the duration of the statement, you may want to save the locking and renaming for later: after the initial population of the new table, do a differential load of any new data and then rename the tables.
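A hedged outline of that approach, using a hypothetical table dbo.BigTable whose Notes column is being shrunk from nvarchar(max) to nvarchar(500):

-- 1. Create the new table with the narrower column (placeholder names/columns).
CREATE TABLE dbo.BigTable_New
(
    Id    int           NOT NULL PRIMARY KEY,
    Notes nvarchar(500) NOT NULL
);

-- 2. Copy the data over (run this on the server itself if timeouts are a concern).
--    LEFT() guards against values longer than the new limit; drop it if none exist.
INSERT INTO dbo.BigTable_New (Id, Notes)
SELECT Id, LEFT(Notes, 500)
FROM dbo.BigTable;

-- 3. Swap the tables inside one transaction so there is no window without a table.
BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
    EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
COMMIT TRANSACTION;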
Sorry for the long post.
Edit:
Also, if the connection times out because of the duration, run the insert statement locally on the DB server. You could also create a job and run that, but it is essentially the same thing.
I needed to load 100,000 rows of data from an Excel file into a temporary table that I created using "on commit preserve rows". But somehow the most efficient methods did not seem to populate the temporary table, apparently due to session issues?
I used Toad's Import Table Data and it showed that x number of records were imported. But when I selected from the temp table, it was empty. Then I generated a bunch of insert scripts, saved them in a notepad.sql, called it from the Toad editor using #/script/location/notepad.sql, and hit F5. It ran and showed how many records were inserted. Again, the temp table was somehow still empty. So I decided to run one random insert script manually in the editor, and that row showed up in the temp table. I believe the methods that didn't work are not considered to be the same session?
I haven't tried SQLLDR, but I am assuming it will not work either, judging from the methods I tried. Can someone confirm? I can't access SQLLDR, so I have no way to check.
Is there any way to get this to work? I can't run the insert scripts manually: that would be time consuming, and Toad can't take that many scripts at the same time.
Oracle temp tables created with ON COMMIT PRESERVE ROWS are session-specific, so the data put into them is only visible within a single session, and for the duration of that session. Toad may be creating a separate session for each window and thus data which is populated from one window/session isn't visible from another window/session. The fact that you can run an insert script and then select the data back suggests this may be the case if both operations were done from the same window. I expect you'd see the same behavior if you used SQL*Loader to load the tables because the load would run in one session and the data would be discarded when the session terminated. Best of luck.
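To illustrate, here is a sketch (not your exact DDL, placeholder columns) of why the rows seem to disappear:

-- Rows survive COMMIT, but are visible only to the session that inserted them.
CREATE GLOBAL TEMPORARY TABLE my_temp_data
(
    id          NUMBER,
    description VARCHAR2(200)
)
ON COMMIT PRESERVE ROWS;

-- Session A (one Toad window):
INSERT INTO my_temp_data VALUES (1, 'loaded here');
COMMIT;
SELECT COUNT(*) FROM my_temp_data;  -- returns 1 in this session

-- Session B (a different Toad window or a SQL*Loader run):
SELECT COUNT(*) FROM my_temp_data;  -- returns 0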
I'm trying to figure out if there's a method for copying the contents of a main schema into a table of another schema, and then, somehow updating that copy or "refreshing" the copy as the main schema gets updated.
For example:
schema "BBLEARN", has table users
SELECT * INTO SIS_temp_data.dbo.bb_users FROM BBLEARN.dbo.users
This selects and inserts 23k rows into the table bb_users in my placeholder schema SIS_temp_data.
Thing is, the users table in the BBLEARN schema gets updated on a constant basis, whether new users get added, accounts get updated, or accounts get disabled or enabled, etc. The main reason for copying the table into a temp table is for data integration purposes and is unrelated to the question at hand.
So, is there a method in SQL Server that will allow me to "update" my new table in the spare schema based on when the data in the main schema gets updated? Or do I just need to run a scheduled task that does a SELECT * INTO every few hours?
Thank you.
You could create a trigger which updates the spare table whenever an update or insert is performed on the main table.
see http://msdn.microsoft.com/en-us/library/ms190227.aspx
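A rough sketch of such a trigger, created inside the BBLEARN database (a trigger has to live in the same database as the table it watches); the columns user_id and user_name are placeholders for whatever your users table actually contains:

-- Run in the BBLEARN database.  Placeholder columns: user_id, user_name.
CREATE TRIGGER dbo.trg_users_sync_copy
ON dbo.users
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Push the changed rows into the copy in SIS_temp_data.
    MERGE SIS_temp_data.dbo.bb_users AS tgt
    USING (SELECT user_id, user_name FROM inserted) AS src
        ON tgt.user_id = src.user_id
    WHEN MATCHED THEN
        UPDATE SET tgt.user_name = src.user_name
    WHEN NOT MATCHED THEN
        INSERT (user_id, user_name)
        VALUES (src.user_id, src.user_name);
END;

Keep in mind a trigger adds overhead to every insert/update on the source table; the scheduled SELECT * INTO you mention is a reasonable alternative if a small lag is acceptable.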
I am very new to Microsoft SQL Server and I am not so into databases.
Yesterday I made an error and deleted all the rows in the wrong table (I should have deleted the records in another table).
So now it is very important to me to restore in some way all the deleted records in this table (only these records and not the whole DB, if that is possible in some way).
For completeness, the table is named dbo.VulnerabilityWorkaround and has the following fields:
Id: int not null (is the PK)
Description: varchar(max), not null
I think that SQL Server retains the information related to the deleted records in a log file (or in something like it, maybe a DB table... I don't know).
Can I in some way restore my original dbo.VulnerabilityWorkaround with a query or something like it?
There is the transaction log, but as far as I know whether it can be used depends on the backup strategy of the database instance, and it would mean firing up a restore operation.
Other than restoring a previous backup, I don't think you have many options.
Since you just need one table it could be easier to restore a backup to a different server and then copy/move only the data you need using SSIS or Bulk Import/Export.
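As a sketch of one variation of that idea: if you can restore the most recent full backup under a different database name on the same instance, the copy itself can be a plain INSERT ... SELECT (otherwise use SSIS or bcp as suggested above). The backup path, logical file names, and database names here are placeholders:

-- Restore the backup as a separate database; path and logical file names are placeholders.
RESTORE DATABASE MyDb_Restore
FROM DISK = N'C:\Backups\MyDb_Full.bak'
WITH MOVE N'MyDb'     TO N'C:\Data\MyDb_Restore.mdf',
     MOVE N'MyDb_log' TO N'C:\Data\MyDb_Restore_log.ldf',
     RECOVERY;

-- Copy back only the rows that are now missing from the live table.
-- If Id is an IDENTITY column, wrap this in SET IDENTITY_INSERT ... ON / OFF.
INSERT INTO MyDb.dbo.VulnerabilityWorkaround (Id, Description)
SELECT r.Id, r.Description
FROM MyDb_Restore.dbo.VulnerabilityWorkaround AS r
WHERE NOT EXISTS (SELECT 1
                  FROM MyDb.dbo.VulnerabilityWorkaround AS o
                  WHERE o.Id = r.Id);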