How do I make an exact duplicate copy of a database in MySQL?
Something like create database test1 from test?
If this is not possible, how do I copy a table from one database into another?
One way to do it is to take a dump of the source database and import it into the destination database, like this (dest_db has to exist already; mysqladmin create dest_db will create an empty one):
mysqldump src_db > src.sql
mysql dest_db < src.sql
You can't, per se. You need to create the new database, create the tables, and then do the inserts, i.e. INSERT INTO newdatabase.table1 SELECT * FROM olddatabase.table1;
Alternatively, you could do a full database dump to a SQL file and then load that file into a new database.
A third option (the easiest, really, if you have it) is phpMyAdmin's copy database function. It creates the new database, creates the tables, and inserts the data, all automatically.
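A minimal sketch of the per-table approach, assuming a source database olddatabase with tables table1 and table2 (substitute your own names; the full table list comes from SHOW TABLES IN olddatabase):
CREATE DATABASE newdatabase;
-- recreate each table's structure, then copy its rows
CREATE TABLE newdatabase.table1 LIKE olddatabase.table1;
INSERT INTO newdatabase.table1 SELECT * FROM olddatabase.table1;
CREATE TABLE newdatabase.table2 LIKE olddatabase.table2;
INSERT INTO newdatabase.table2 SELECT * FROM olddatabase.table2;
Repeat for every table in the source database.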
mysqlhotcopy db_name /path/to/new_directory
Keep in mind that the users will be blocked from updating tables while they are being copied, since mysqlhotcopy locks them.
I don't know if mysqlhotcopy locks them all at once or one at a time. If it locks them all at once, then all users will be blocked until the copy completes.
If it locks them one at a time while users are modifying them, you have a risk of the tables getting out of sync with each other. Like this:
mysqlhotcopy locks table a, and begins copy of table a
client attaches to database and tries to update tables a and c. It is temporarily blocked because of lock on table a.
mysqlhotcopy completes copy, unlocks table a, locks table b, and begins copy of table b
client makes related updates to tables a and c.
mysqlhotcopy completes copy, unlocks table b, locks table c, and copies table c
The problem is that the copied version of table a is from before the client's modification, while the copied version of table c is from after the client's modification, which leaves the backup in an inconsistent state.
Update
I just looked at mysqlhotcopy (MySQL 4.1.9). It locks all the tables at once.
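For InnoDB tables there is another option worth noting: mysqldump's --single-transaction flag takes a consistent snapshot without blocking writers at all, for example:
mysqldump --single-transaction src_db > src.sql
For MyISAM tables, mysqldump's --lock-all-tables option gives comparable all-at-once locking to mysqlhotcopy.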
I am using MonetDB (MDB) for OLAP queries. I store the source data in PostgreSQL (PGSQL) and sync it to MonetDB in batches with a Python script.
In PGSQL there is a wide table with an ID column (non-unique) and a few other columns. Every few seconds the Python script takes a batch of 10k records that changed in PGSQL and uploads them to MDB.
The process of uploading to MDB is as follows:
Create a staging table in MDB.
Use the COPY command to upload the 10k records into the staging table.
DELETE from the destination table all IDs that are in the staging table.
INSERT into the destination table all rows from the staging table.
So it is basically a DELETE & INSERT. I cannot use a MERGE statement because I do not have a PK - one ID can have multiple rows in the destination - so I need to do a delete and a full insert for all IDs currently being synced.
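A sketch of one batch in MonetDB SQL, assuming a destination table destination(id, val) and one CSV file per batch (the table, column, and file names are placeholders, and the COPY options may need adjusting for your setup):
CREATE TABLE staging (id INT, val VARCHAR(100));
-- bulk-load the 10k changed records from the batch file
COPY INTO staging FROM '/tmp/batch.csv' USING DELIMITERS ',', '\n';
-- remove every row whose ID appears in this batch, then re-insert the current rows
DELETE FROM destination WHERE id IN (SELECT id FROM staging);
INSERT INTO destination SELECT id, val FROM staging;
DROP TABLE staging;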
Now to the problem: the DELETE is slow.
When I do the DELETE on the destination table, deleting 10k records from a table of 25M rows takes about 500 ms.
However, if I first run a simple SELECT * FROM destination WHERE id = 1 and THEN do the DELETE, it takes 2 ms.
I think it has something to do with the automatic creation of auxiliary indices, but this is where my knowledge ends.
I tried to exploit this "pre-heating" by doing the lookup myself, and it works - but only for the first DELETE after the pre-heat.
Once I do a DELETE and INSERT, the next DELETE is slow again, and pre-heating before every DELETE makes no sense because the pre-heat itself takes 500 ms.
Is there any way to sync data into MDB without invalidating the auxiliary indices that have already been built? Or to make the DELETE fast without the pre-heat? Or should I use a different technique to sync data into MDB without a PK (does MERGE have the same problem?)?
Thanks!
I have a Microsoft SQL Server instance Instance1 with a database called Maintenance, with a table called TempWorkOrder. I have another SQL Server instance Instance2 that has a database called MaintenanceR1 with a table called WorkOrder.
Instance2 is set up as a linked server on Instance1. I want to copy any changed or new records from Instance2.MaintenanceR1.WorkOrder to Instance1.Maintenance.TempWorkOrder every hour.
I thought about creating a job that deletes all of the records in Instance1.Maintenance.TempWorkOrder and repopulates it from Instance2.MaintenanceR1.WorkOrder every hour. I am afraid this approach will let the log file grow out of control.
Would I be better off dropping the table and re-creating it to keep the log file size reasonable? The table contains about 30,000 rows of data.
30,000 rows really shouldn't cause anything to get out of control. If you are really worried about log size, you can TRUNCATE instead of DELETE and then bulk load the table with a minimally logged INSERT.
https://www.mssqltips.com/sqlservertip/1185/minimally-logging-bulk-load-inserts-into-sql-server/
https://learn.microsoft.com/en-us/sql/t-sql/statements/truncate-table-transact-sql
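A rough sketch of what the hourly job could run, using the linked-server names from the question (this assumes the dbo schema and that the recovery model allows minimal logging; the TABLOCK hint is what enables it):
-- empty the local copy without logging individual row deletes
TRUNCATE TABLE Maintenance.dbo.TempWorkOrder;
-- repopulate it from the linked server with a minimally logged insert
INSERT INTO Maintenance.dbo.TempWorkOrder WITH (TABLOCK)
SELECT * FROM Instance2.MaintenanceR1.dbo.WorkOrder;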
I'm working with MVC (still fairly new to it). Once a user deletes a record from table A, I want to simply move it to table B (a history of deleted records) along with information such as who deleted it, their IP, the timestamp it was deleted, etc.
What is the best way to record deletes?
I'm using VB.NET
This can be accomplished on the back end (database): you simply select the deleted value and insert it into the history table. To obtain the user information, you can get what you need from HttpContext.Current.Request.Browser (HttpContext.Browser) on the request.
I would recommend setting up a stored procedure that inserts and then deletes the row inside a transaction.
Another way to do this is with a trigger: on delete, use the deleted table to insert the row into the history table. I would not recommend this approach, since triggers add overhead, but if you cannot modify the existing code and still need this, it is the way to go.
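A sketch of the stored-procedure approach, using hypothetical table and column names (TableA, TableB_History, Id, Col1) and taking the user details captured in the web layer as parameters:
CREATE PROCEDURE dbo.DeleteRecordWithHistory
    @Id INT,
    @DeletedBy NVARCHAR(100),
    @DeletedFromIp NVARCHAR(45)
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;  -- roll back the whole transaction if anything fails
    BEGIN TRANSACTION;
        -- copy the row into the history table first
        INSERT INTO dbo.TableB_History (Id, Col1, DeletedBy, DeletedFromIp, DeletedAt)
        SELECT Id, Col1, @DeletedBy, @DeletedFromIp, SYSUTCDATETIME()
        FROM dbo.TableA
        WHERE Id = @Id;
        -- then remove it from the live table
        DELETE FROM dbo.TableA WHERE Id = @Id;
    COMMIT TRANSACTION;
END
A trigger would do the same thing by reading from the deleted pseudo-table, but it cannot see the web user's details, which is another reason the stored procedure fits better here.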
I'm trying to figure out whether there's a method for copying the contents of a table in a main schema into a table in another schema, and then somehow updating or "refreshing" that copy as the main schema gets updated.
For example:
schema "BBLEARN", has table users
SELECT * INTO SIS_temp_data.dbo.bb_users FROM BBLEARN.dbo.users
This selects and inserts 23k rows into the table bb_course_users in my placeholder schema SIS_temp_data.
The thing is, the users table in the BBLEARN schema gets updated constantly - new users get added, accounts get updated, disabled, or enabled, etc. The main reason for copying the table into a temp table is data integration, which is unrelated to the question at hand.
So, is there a method in SQL Server that will let me "update" my new table in the spare schema whenever the data in the main schema gets updated? Or do I just need to run a scheduled task that does a SELECT * INTO every few hours?
Thank you.
You could create a trigger which updates the spare table whenever an update or insert is performed on the main schema;
see http://msdn.microsoft.com/en-us/library/ms190227.aspx
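A rough sketch of such a trigger, assuming users has a key column pk1 (a placeholder - use the table's real key) and that bb_users has the same column layout; it is created inside the BBLEARN database but writes into the copy:
-- run this in the BBLEARN database
CREATE TRIGGER dbo.trg_users_sync_copy
ON dbo.users
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- drop the affected rows from the copy (covers updates and deletes)
    DELETE t
    FROM SIS_temp_data.dbo.bb_users AS t
    WHERE t.pk1 IN (SELECT pk1 FROM deleted)
       OR t.pk1 IN (SELECT pk1 FROM inserted);
    -- re-insert the current version of inserted/updated rows
    INSERT INTO SIS_temp_data.dbo.bb_users
    SELECT * FROM inserted;
END
Keep in mind the trigger runs as part of every write to users, so the scheduled SELECT * INTO job is often the simpler choice if a few hours of lag is acceptable.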
Suppose I have two identical databases, A and B, but with different data. The tables, stored procedures, etc. are originally the same in both.
Now I start making changes to the definition of database A, like adding a new table or deleting a column from another table. After I finish all these changes, I would like to have them "recorded" so I can rerun them on database B, so that database B's definition stays the same as database A's (the data in the two will still differ).
How can I record the changes made in A and rerun them in B? Note that the changes I'm talking about are not to the data but to the database definition.