Restore of MySQL backup just gets stuck

I have a backup file created with mysqldump. It's about 15GB and contains a lot of blobs; the maximum size per blob is 30MB.
mysqldump -uuser -ppass --compress --quick --skip-opt supertext > supertext.sql
Now when I try to restore the backup, the process just gets stuck.
mysql -uuser -ppass dev_supertext < supertext.sql
It gets stuck while writing back the biggest table, the one with the blobs. There is no error message and mysqld is still running fine.
This is on a 64-bit MySQL 5.1.48 Community Edition on Windows Server.
max_allowed_packet is set to 40MB and is not the problem; I've run into that one before.
Any other settings I could check or something I can monitor during the restore?
I didn't see anything special in the query log or the error log. Maybe there is a timeout?
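For reference, the relevant server settings can be checked from a mysql shell with (the second statement lists every timeout-related variable):
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE '%timeout%';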
Just FYI:
I've already posted this question in the MySQL Forum, but got no response.
http://forums.mysql.com/read.php?28,377143
Thanks for any tips.

Are you positive it's only the big table with the blobs? Try running the dump without that table, then do that table individually, and if it still gets stuck, break it up.
Split the inserts into 3-4 groups and see if any go through. Process of elimination will help narrow down whether there's a row-specific issue (i.e. corrupted data?) or whether MySQL is simply taking a while to write; see the sketch below.
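A rough sketch of how that split might look using mysqldump's --where option, assuming the big table is called big_blob_table (a made-up name) and has a numeric id column:
mysqldump -uuser -ppass --quick --skip-opt --where="id < 100000" supertext big_blob_table > blobs_part1.sql
mysqldump -uuser -ppass --quick --skip-opt --where="id >= 100000 AND id < 200000" supertext big_blob_table > blobs_part2.sql
Each part can then be restored separately, which narrows down where the restore stalls.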
I'd advise opening a second mysql shell, or using phpMyAdmin to refresh the table view, and seeing whether new records are being written. MySQL isn't verbose during restores; it may simply be taking a while to load all the inserts.
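For example, from a second mysql shell (the table name is just a placeholder):
SHOW PROCESSLIST;
SELECT COUNT(*) FROM dev_supertext.big_blob_table;
If the restore connection still shows an active INSERT in the process list and the row count keeps climbing between runs, it's loading, not stuck.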

Related

RavenDB taking forever to show updates

I'm starting to assess our company using RavenDB for storing some stuff that doesn't really belong in a relational database (we're traditionally a SQL Server shop). I installed RavenDB locally on my machine, created a database, added a document. Nice!
Being a DBA, I decided to see how backups/restores work. I backed up my database, deleted it, then restored it from the backup. After refreshing my admin screen, I saw my database. I clicked on it, and got a message that the database doesn't exist.
After a couple of hours, I tried again. Still doesn't exist. A full day later, I walk into work and try again. This time the database works. I've had similar situations with updating documents: an update seems to take anywhere from 1 second to several hours to show up...
Is this normal for RavenDB?? Am I completely misconfigured?? I run SQL Server on my local machine and it's lightning-fast, so I can't imagine updating a single document could take that long. As-is, I can't imagine recommending we use RavenDB for anything.
Are you querying using indexes or getting documents by ID? Documents should be updated immediately (ACID). If indexes are slow to update (check their status in RavenDB Studio), it could be a configuration problem, or something external like anti-virus software could be causing them to update slowly.
Apparently, at least for the document-update latency, query caching is enabled by default, so I was getting cached results.
Jeffery,
No, that isn't normal by a long shot. You should be able to see what was changed immediately.
Note that certain AV products will interfere with the HTTP pipeline and can affect RavenDB's usage. The Studio also auto-updates things only every 5 seconds (to reduce UI jitter), but that is about it.
Restoring a database (on the same machine) should take only as long as it takes to copy the files (a purely I/O-bound operation).
If this is from another machine using a different version of Windows, we might need to run a check on the file, which can take a bit of time, but that doesn't sound like your scenario.

Can I restore the previous value of a cell of a row in SQL Server 2008?

I made a wrong entry in a cell and committed it. Later I found that the entry was actually supposed to go in the cell of the row below, but I don't remember the previous value. Can I still find it anywhere so I can make the correction?
A backup is the best way to approach this. If the database is properly in full recovery mode and the transaction is still in the transaction log, the value can be pulled out and decoded manually, although the effort to do this is non-trivial.
I've written an example of doing this for an update. http://sqlfascination.com/2010/02/03/how-do-you-decode-a-simple-entry-in-the-transaction-log-part-1/
This is really not a mechanism you should ever rely on to recover data, though; suitable backups/transactions or even paper backups would be better.
The only real way that I can think of would be to restore a backup and use the value of that row from the backup; a sketch of that approach is below.
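A minimal sketch, with made-up file, database, table and column names (the logical file names for WITH MOVE come from RESTORE FILELISTONLY):
RESTORE DATABASE mydb_copy
FROM DISK = 'C:\Backups\mydb.bak'
WITH MOVE 'mydb_data' TO 'C:\Data\mydb_copy.mdf',
     MOVE 'mydb_log' TO 'C:\Data\mydb_copy_log.ldf';

SELECT the_column
FROM mydb_copy.dbo.the_table
WHERE id = 42;
You then copy the old value back into the live table by hand.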
Note: I'm retaining the references to the transaction log restore method using SQL Log Rescue in case someone with SQL Server 2000 ever runs into a similar issue.
I can add to all these good answers that the SQL LiteSpeed tool also has a very nice feature called Log Reader, which can help you restore past values from log backups (taken with LiteSpeed) and even from the online transaction log (not yet backed up). I think a trial version of LiteSpeed will let you look into your online transaction log file, provided your database is in full recovery mode. Worth a try.

Killing the mysqld process

I have a table with ~800k rows. I ran:
update users set hash = SHA1(CONCAT({about eight fields})) where 1;
Now I have a hung Sequel Pro process and I'm not sure about the mysqld process.
This is two questions:
What harm can possibly come from killing these programs? I'm working on a separate database, so no damage should come to other databases on the system, right?
Assume you had to update a table like this. What would be a quicker / more reliable method of updating without writing a separate script?
I just checked with phpMyAdmin and it appears as though the query is complete. I still have Sequel Pro using 100% of both my cores though...
If you're using InnoDB, which is backed by a transaction log for recovery and rollback purposes, then you can get away with a lot, especially in a non-production environment.
The easiest way to terminate a renegade query is to use the MySQL shell as the root user:
SHOW PROCESSLIST;
This will give you a list of the current connections and a process ID for each one. To terminate any given query, such as number 19, use:
KILL 19;
Usually this will undo and roll back the query. In some cases this is not sufficient and you may have to force-quit the MySQL server process with kill -9. Under most circumstances you should be able to restart the server right away, and the DB will be in the last fully committed state.
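If you go the KILL route and the rollback seems to take a while, you can keep an eye on it with:
SHOW ENGINE INNODB STATUS;
The TRANSACTIONS section should show the killed transaction in a ROLLING BACK state while InnoDB undoes its changes.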
To get the thread IDs (it'll show the query alongside):
mysqladmin proc
To safely kill the query thread:
mysqladmin kill [id]
You'll end up with a partially updated table unless you use InnoDB, but you should be fine. Details from the MySQL documentation:
During UPDATE or DELETE operations, the kill flag is checked after each block read and after each updated or deleted row. If the kill flag is set, the statement is aborted. Note that if you are not using transactions, the changes are not rolled back.
As for your second question, there is no better way to update a table if one is not allowed to write a separate script (to, say, throttle the updates; see the sketch below).
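If a separate script (or even a manual loop) were an option, a common way to throttle this kind of update is to do it in batches so each chunk commits quickly. A rough sketch, assuming the hash column starts out NULL so unprocessed rows can be identified:
UPDATE users SET hash = SHA1(CONCAT({about eight fields})) WHERE hash IS NULL LIMIT 10000;
Repeat the statement until it reports 0 rows affected.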

Ensuring data integrity of mysqldump <-> rsync

I use rsync to back up the files on my server, and mysqldump to back up my database. Here's my concern:
A mysqldump of my database takes about 30 seconds. I have a table called photos that stores info about the images a user has uploaded, including the path to each file. I am worried about what will happen when photos are uploaded or deleted during the 30 seconds it takes to complete the mysqldump. If that happened and I then restored the rsync'd files and the mysqldump data, I could be looking at a database that contains rows pointing to deleted photos, or that is missing rows for photos that were successfully uploaded.
How can I make sure the mysqldump exactly matches the rsync?
Thanks in advance,
Brian
Use LOCK TABLES to block any write activity from the tables you're backing up. Then unlock them once your mysqldump is finished.
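A minimal sketch of that, using the photos table from the question:
LOCK TABLES photos READ;
(keep this session open and run mysqldump, plus the rsync of the photo files, from another shell)
UNLOCK TABLES;
The READ lock still lets mysqldump read the table, but it blocks other sessions from writing to it until you unlock.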
I think the answer is simple: just run rsync AFTER you complete the mysqldump. :) This way, at worst, you will have a couple of NEW files that are not in the DB dump, but you will not have inconsistent DB entries.
You could MD5 the resulting mysqldump (on the server) and the copy transferred locally by rsync, then compare the two hashes to ensure they match.
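For example (paths are illustrative):
md5sum /var/backups/db_dump.sql      # on the server
md5sum ~/backups/db_dump.sql         # on the local machine, after the rsync
The two checksums should be identical if the transfer was clean.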
Another alternative is to keep the mysqldump output in a version-controlled file (with git or svn or your favorite VCS). The advantage with git, for example, is that you could easily set up post-commit hooks to push the changes to a remote server, and the upload would be just the differences between versions, not the entire dump. This way you could consider decreasing the backup interval.

SQL Server 2000 - tempdb growing very large

We have a SQL Server 2000 production environment where suddenly (i.e. over the last 3 days) something has caused the tempdb data file to grow very large (45 GB, for a database which is only 10 GB).
Yesterday, after it happened again we shrank the database and ran the major batch processes individually without any problems. However, this morning the database was back up to 45 gigs.
Is there a simple way to find out what is causing this database to grow so large? Ideally, something which could be looked at today but if that is not available something which can be set to get that information tomorrow.
BTW: Shrinking the database gets back the space within a few seconds.
Agreed with Jimmy: you need to use SQL Profiler to find out which temporary objects are being created so intensively. These may be temporary tables used by some reports or something similar.
I wanted to thank everyone for their answers as they definitely led to the cause of the problem.
We turned on SQL Profiler and, sure enough, a large bulk load showed up. As we are working on a project to move the "offending" job to MySQL as well, we will probably just watch things for now.
Do you have a job running that rebuilds indexes? It is possible that it uses SORT_IN_TEMPDB. Any other large queries that do sorting might also expand tempdb.
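For reference, an index rebuild that spills its sort into tempdb looks roughly like this on SQL Server 2000 (index and table names are made up):
CREATE INDEX ix_orders_date ON dbo.orders (order_date) WITH DROP_EXISTING, SORT_IN_TEMPDB
While it runs, tempdb needs roughly as much sort space as the size of the index being built.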
This may have something to do with the recovery model that the TempDB is set to. It may be set to FULL instead of BULK-LOGGED. FULL recovery increases the transaction log size until a backup is performed.
Look at the data file size vs. the transaction log size.
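A quick way to see where the space is going on SQL Server 2000:
USE tempdb
SELECT name, size / 128 AS size_mb FROM sysfiles
EXEC sp_spaceused
The sysfiles query shows how big the data and log files have grown, and sp_spaceused shows how much of that space is actually in use.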
I'm not a DBA, but some thoughts:
- Is it possible that there are temp tables being created but not dropped? ##tempTable?
- Is it possible that there is a large temp table being created (and dropped) but the space isn't reclaimed?
- Are you doing any sort of bulk loading where the system might use the temp table?
- (I'm not sure if you can, but) can you turn on Auto-Shrink for the tempdb?
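If you suspect temp tables are piling up, something like this against SQL Server 2000's system tables will list them with their creation times:
SELECT name, crdate FROM tempdb..sysobjects WHERE type = 'U' AND name LIKE '#%' ORDER BY crdate
Long-lived entries there would point to temp tables being created but never dropped.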