DB copy within MySQL that is faster than `mysqldump`? [closed] - sql

I have a production DB that I'd like to copy to dev. Unfortunately, it takes about an hour to do this via `mysqldump | mysql`, and I'm curious whether there is a faster way using direct SQL commands within MySQL, since the copy stays in the same DBMS rather than moving to another DBMS elsewhere.
Any thoughts or ideas on a streamlined process, performed inside the DBMS, that would eliminate the long wait?
NOTE: The primary goal here is to avoid hour-long copies, as we need some data from production in the dev DB very quickly. This is not a question about locking or replication; I wanted to clarify, since some comments latched onto extra info and ancillary remarks I shouldn't have included initially.
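For reference, a copy that stays entirely inside the server avoids the dump-and-reload round trip. A minimal sketch, with made-up schema and table names (`prod_db`, `dev_db`, `orders`); you would repeat this per table, or generate the statements from `information_schema.tables`:

```sql
-- Hypothetical names: prod_db, dev_db, orders.
-- CREATE TABLE ... LIKE copies the table definition (including indexes);
-- INSERT ... SELECT copies the rows without the data ever leaving the server.
CREATE DATABASE IF NOT EXISTS dev_db;
CREATE TABLE dev_db.orders LIKE prod_db.orders;
INSERT INTO dev_db.orders SELECT * FROM prod_db.orders;
```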

You could set up a slave to replicate the production db, then take dumps from the slave. This would allow your production database to continue operating normally.
After the slave is done performing a backup, it will catch back up with the master.
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-backups-mysqldump.html
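For example, on the replica you might pause replication around the backup so it captures a consistent snapshot (a sketch; the dump itself runs from the shell):

```sql
-- Run on the replica. Pausing the SQL thread freezes the data at a
-- consistent point while the dump runs; the I/O thread keeps receiving
-- events, and the replica catches back up once the thread is restarted.
STOP SLAVE SQL_THREAD;
-- From the shell: mysqldump --all-databases --routines > backup.sql
START SLAVE SQL_THREAD;
```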

Related

Is sharing the same database between two programming languages possible? [closed]

Program A is good at collecting data while Program B, in another language, is good at creating REST APIs. Is it possible to connect these two with a single database that A and B will read and write to? Performance for database operations is not really an issue in my case.
Sure, this is possible. Databases can typically handle multiple connections from different programs/clients, and a database doesn't care what language the tool making the connection is written in.
Short edit:
Also, most databases support transactions, which ensure that different connected clients don't break the consistency of your application data while reading and writing in parallel.
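For instance, either program can wrap related statements in a transaction so the other never observes a half-finished update (a sketch; `accounts` is a made-up table):

```sql
-- Both statements become visible to other connections atomically at COMMIT;
-- no other client can see the money "in flight" between the two rows.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```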

Periodically update a table with new records from another server [closed]

I have databases on two different servers. I need to regularly retrieve new records from a table on server A and process them in order to update a table on server B (which has a different schema). I was going to use a trigger for this, but if the trigger fails, the inserts on server A are rolled back. The inserts on server A must not fail, so the update of server B needs to be as decoupled from it as possible. I am now thinking of using a scheduled sproc on server B that retrieves the new rows from server A and updates server B, running every 30 seconds. Is there anything wrong with this approach, or is there a better or more 'correct' way of achieving this?
I think creating a scheduled job in SQL Server Agent is the way to go here. The job can execute a simple stored procedure (if the logic is relatively simple) or an SSIS package (where it is more complex).
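A minimal sketch of such a procedure, assuming a hypothetical linked server `[ServerA]` and a local watermark table to keep each run incremental (all names here are placeholders):

```sql
-- Runs on server B, scheduled by a SQL Server Agent job.
CREATE PROCEDURE dbo.SyncFromServerA
AS
BEGIN
    DECLARE @LastId int;
    SELECT @LastId = LastSourceId FROM dbo.SyncWatermark;

    -- Pull only rows we have not seen yet, transforming as needed
    -- for server B's different schema.
    INSERT INTO dbo.TargetTable (SourceId, ProcessedValue)
    SELECT s.Id, UPPER(s.RawValue)
    FROM [ServerA].SourceDb.dbo.SourceTable AS s
    WHERE s.Id > @LastId;

    -- Advance the watermark so the next run starts where this one ended.
    UPDATE dbo.SyncWatermark
    SET LastSourceId = (SELECT ISNULL(MAX(SourceId), @LastId)
                        FROM dbo.TargetTable);
END
```

Because this runs on server B, a failure here cannot roll back anything on server A, which gives you the decoupling you're after.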
Just a final note on triggers: where possible I have always tried to avoid them. They can have what appear to be "unintended" or "mysterious" side effects, they can be difficult to debug, and developers often forget to check for triggers when trying to resolve an issue. That's not to say they don't offer benefits too - but I think you need to be wary of them.

Getting intermediate spool output [closed]

I'm using Oracle 11g, and I have a SQL file with `spool on` that runs for at least 7+ hours, since it has to spool a huge amount of data. The spool output is written only when the whole script is finished. Is there any way to check the progress of my SQL, or to see the data spooled up to a point in time, so I can be reassured the script is running properly as expected? Please help with your inputs.
It sounds like you are using DBMS_OUTPUT, which only writes its buffered output after the procedure completes.
If you want real-time (or near-real-time) monitoring of progress, you have three options:
Use UTL_FILE to write to an OS file. You will need access to the DB server's file system for this.
Write to a table, and use PRAGMA AUTONOMOUS_TRANSACTION so you can commit the log entries without impacting your main transaction (see the sketch below). This is easy to implement and readily accessible; implemented well, it can become a de facto standard for all your procedures. You may then need some sort of housekeeping to stop the log table getting too big and unwieldy.
A quick-and-dirty, transient option is to call DBMS_APPLICATION_INFO.SET_CLIENT_INFO and then query v$session.client_info. This works well, is good for keeping track of things, and is fairly unobtrusive; because it is a memory structure, it is fast.
DBMS_OUTPUT really is limited.
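A sketch of the second option, with made-up names (`progress_log`, `log_progress`); the pragma gives the procedure its own transaction, so its COMMIT doesn't touch the caller's work:

```sql
CREATE TABLE progress_log (
    logged_at  TIMESTAMP DEFAULT SYSTIMESTAMP,
    message    VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_progress(p_message IN VARCHAR2) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO progress_log (message) VALUES (p_message);
    COMMIT;  -- commits only this autonomous transaction
END;
/
```

Call `log_progress('fetched 100000 rows')` at milestones in the long-running script and query `progress_log` from another session. The third option works the same way in spirit: call `DBMS_APPLICATION_INFO.SET_CLIENT_INFO('...')` in the script, then run `SELECT client_info FROM v$session` from another session.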

Read/Write safe file based storage for local or network shared data. (SQL CE?) [closed]

We need a datastore of some form with the following properties:
Relocatable, on local or remote systems.
Capable of multiple readers/writers; new queries should see recent updates.
Decentralized; no server would be required.
Capable of holding at least 16 MB of data.
SQL CE seems capable, but I'm not sure which technologies would go into integrating such a solution, as I don't really have a SQL background.
Has anyone tackled a problem like this? What solutions have worked for you?
For point #1, do you want to be able to access the SQL CE database remotely on a share? If so, I don't believe you want to do this, as SQL CE is not targeted at that scenario; see this link for some details. I think it would be fine for the other three items, if I'm understanding you properly.

SQL Server: What is the best way to Data Migration? [closed]

I want to migrate data from one database to another in Microsoft SQL Server 2005, and I need to verify the retrieved rows before I insert them into the destination database's tables. Which approach is reasonable for this kind of thing?
I am trying to use two DataSets in my VB.NET program. Is that reasonable? Can you suggest anything?
Thanks in advance,
RedsDevils
It depends on how much data you're talking about, but I'd tend to pass on .NET DataSets for a migration task, as that means pulling all the data into memory. If you must do this via a .NET client program, at least use a DataReader instead. But what's even better is to keep it all in SQL Server via SQL Server Integration Services (SSIS).
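A server-side sketch of that idea, with made-up database and table names; the verification happens in the WHERE clause rather than in client memory:

```sql
-- Copy only rows that pass validation and are not already present
-- in the destination. Nothing is pulled down to the client.
INSERT INTO DestDb.dbo.Customers (CustomerId, Email)
SELECT s.CustomerId, s.Email
FROM SourceDb.dbo.Customers AS s
WHERE s.Email IS NOT NULL              -- verification rules go here
  AND NOT EXISTS (SELECT 1
                  FROM DestDb.dbo.Customers AS d
                  WHERE d.CustomerId = s.CustomerId);
```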