I have a big transactional table with 80,000,000 records and about 1,000 TPS in Informix. How can I replicate it without losing data?
- Using load/unload to skip the refresh before mirroring ends with data loss.
- Using refresh before mirroring, the subscription stops after replicating 12,000,000 records with SQL error 242.
There is a procedure to do that, using the commands dmmarkexternalunloadstart and dmmarkexternalunloadend. I think these are the only two commands that cannot be executed through the GUI (Management Console). Try the following procedure for external replication (a worked example with placeholder values follows the steps):
1) Invoke the command on the source system to mark the starting point of the Refresh (for each table):
dmmarkexternalunloadstart -I -s -t
2) Start refreshing the table(s):
dmrefresh -I -a -s [-t ]
3) When the Refresh has completed, mark the end point of the Refresh for each table:
dmmarkexternalunloadend -I -s -t
4) Start Mirroring the changes for the table that was just refreshed:
dmstartmirror -I -n -s
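As a rough sketch, for a single table the full sequence might look like this; the instance name (cdcinst), subscription name (SUB1), and table name (DB.INFORMIX.BIGTABLE) are purely illustrative placeholders, so substitute your own values and check the exact argument syntax against your CDC version's documentation:
dmmarkexternalunloadstart -I cdcinst -s SUB1 -t DB.INFORMIX.BIGTABLE
dmrefresh -I cdcinst -a -s SUB1
dmmarkexternalunloadend -I cdcinst -s SUB1 -t DB.INFORMIX.BIGTABLE
dmstartmirror -I cdcinst -n -s SUB1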
I tried to copy a database within the same PostgreSQL server.
I tried the below query
CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER dbuser;
And got the below error
ERROR: source database "originaldb" is being accessed by 1 other user
So, I executed the below command
SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'originaldb' AND pid <> pg_backend_pid();
Now none of us can log in or connect back to the database.
When I provide the below command
psql -h 192.xx.xx.x -p 9763 -d originaldb -U postgres
It prompts for a password and, after I enter the password, it doesn't return any response.
Why does this happen? How can I connect back to the database? How do I restart things, or otherwise get the system to let us log back in?
Can someone help us with this?
It sounds like something is holding an access exclusive lock on a shared catalog, such as pg_database. If that is the case, no one will be able to log in until that lock gets released. I wouldn't think the session-killing code you ran would cause such a situation, though. Maybe it was just a coincidence.
If you can't find an active session, you can try using system tools to figure out what is going on, like ps -efl|fgrep postgre. Or you can just restart the whole database instance, using whatever method you would usually use to do that, like pg_ctl restart -D <data_directory> or sudo service postgresql restart or some GUI method if you are on an OS that does that.
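If you still have one working superuser session open somewhere, a query along these lines (just a sketch) can show whether any backend is holding or waiting on a lock on pg_database, and which one it is:
-- list sessions holding or waiting for locks on the shared pg_database catalog
SELECT l.pid, l.mode, l.granted, a.state, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'pg_database'::regclass;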
I need to execute a large INSERT script, about 75 MB in size, in my database. I am using the built-in SQL command tool to run this script, but it still throws the same error: "There is insufficient system memory in resource pool 'internal' to run this query."
sqlcmd -S .\SQLEXPRESS -d TestDB -i C:\TestData.sql
How can I resolve this memory issue when even the last resort of running the script through sqlcmd does not work?
Note - Increasing the Maximum Server Memory (in the Server Properties) did not resolve this problem.
I faced the same issue recently. What I did was add GO statements after every 1,000 inserts. This worked perfectly for me.
The GO statement divides the script into separate batches, so every batch is treated as a separate insertion. Hope this helps you in some way.
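For illustration, the batched script ends up looking something like this (the table and column names here are made up); each GO terminates a batch, so sqlcmd sends the INSERTs to the server in smaller chunks instead of one enormous statement block:
INSERT INTO dbo.TestData (Id, Name) VALUES (1, 'row 1');
INSERT INTO dbo.TestData (Id, Name) VALUES (2, 'row 2');
-- ... roughly 1,000 INSERT statements per batch ...
GO
INSERT INTO dbo.TestData (Id, Name) VALUES (1001, 'row 1001');
-- ... the next ~1,000 INSERT statements ...
GO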
I use the sqlcmd utility to import a 7 GB SQL dump file into a remote SQL Server. The command I use is this:
sqlcmd -S IP address -U user -P password -t 0 -d database -i file.sql
After about 20-30 min the server regularly responds with:
Sqlcmd: Error: Scripting error.
Any pointers or advice?
I assume file.sql is just a bunch of INSERT statements. For a large amount of rows, I suggest using the BCP command-line utility. This will perform orders of magnitude faster than individual INSERT statements.
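As a sketch, assuming the rows were first exported to a flat file (TestData.dat is a hypothetical name), the import could look like this; -c uses character format, -t sets the field terminator, and -b commits in batches of the given size:
bcp TestDB.dbo.TestData in C:\TestData.dat -S <server> -U <user> -P <password> -c -t"," -b 10000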
You could also bulk insert data using the T-SQL BULK INSERT command. In that case, the file path needs to be accessible by the database server (i.e. UNC path or copied to a drive on the server) and along with needed permissions. See http://msdn.microsoft.com/en-us/library/ms188365.aspx.
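A minimal sketch of that T-SQL route, assuming the file has been copied somewhere the SQL Server service account can read and that the data is comma-separated (table name and path are illustrative):
BULK INSERT dbo.TestData
FROM '\\fileserver\share\TestData.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 10000);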
Why not use SSIS? While I am certified as a DBA, I always try to use the right tool for the job.
Here are some reasons to use SSIS.
1 - You can still use fast load (bulk copy). Make sure you set the batch size.
2 - Error handling is much better.
However, if you are using fast-load, either the batch commits or it gets tossed.
If you are using single record, you can direct each error row to a separate destination.
3 - You can perform transformations on the source data before loading it into the destination.
In short: Extract, Transform, Load.
4 - SSIS loves memory and buffers. If you want to get really in depth, read some articles from Matt Mason or Brian Knight.
Last but not least, the LAN/WAN always plays a factor if the job is not running on the target server with the input file on a local disk.
If you are on the same backbone with a good pipe, things go fast.
In summary, yeah you can use BCP. It is great for little quick jobs. Anything complicated with robust error handling should be done with SSIS.
Good luck,
I am on a Unix server which is set up to remotely connect to another DB2 Unix server.
I was able to connect to DB2 using following script:
db2 "connect to <server name> user <user name> using <pass>";
Then I ran the following command to save the results of the SQL query to a file:
db2 "select * from <tablename>" > /myfile.txt
The script starts executing but never ends. I tried using -x before the SELECT too, but same result: it never finishes. The table is small and has only one record. When I forcefully end execution, the table header gets saved to the file with the following error:
SQL0952N Processing was cancelled due to an interrupt. SQLSTATE=57014
Please help, I am stuck on this riddle.
You could monitor the connection and the output file in order to know what is happening.
Before starting the monitoring, get the current application ID:
db2 "values SYSPROC.MON_GET_APPLICATION_ID()"
Open a second terminal and execute db2top against your database. Check the current sessions (L) and take a look at your connection (the application ID from the previous step). If you see a Lock Wait status, it is because another connection has put a lock on that table, and it is not possible to read it concurrently.
db2top -d myDB
Try executing the same query with another isolation level:
db2 "select * from <tablename> WITH UR"
If that is the problem, you should analyze which other processes are running (modifying data) on the database.
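One way to see who is holding the lock and who is waiting (a sketch; the MON_LOCKWAITS administrative view is available on DB2 9.7 and later) is:
db2 "SELECT * FROM SYSIBMADM.MON_LOCKWAITS"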
Open another terminal, and do a
tail -f /myfile.txt
If you see the file is changing, it is just because the output is too big. Just wait.
I have 2 databases, X ("production") and Y ("testing").
Database X should be identical to Y in structure. However, they are not, because I made many changes to production.
Now, I need to somehow export X and import it into Y without breaking any replication.
I am thinking of doing a mysqldump, but I don't want to cause any issues with replication, which is why I am asking this question to confirm.
Here are the steps that I want to follow:
back up production. (ie. mysqldump -u root -p --triggers --routines X > c:/Y.sql)
Restore it. (ie. mysql -u root -p Y < c:\Y.sql)
Will this cause any issues to the replication?
I believe the dump will execute everything and save it into its binlog, and the slave will be able to see it and replicate it with no problems.
Is what I am trying to do correct? Will it cause any replication issues?
thanks
Yes, backing up from X and restoring to Y is a normal operation. We often call this "reinitializing the replica."
This does interrupt replication. There's no reliable way to restore the data at the same time as letting the replica continue to apply changes, because the changes the replica is processing are not in sync with the snapshot of data represented by the backup. You could overwrite changed data, or miss changes, and this would make the replica totally out of sync.
So you have to stop replication on the replica while you restore.
Here are the steps for a typical replica reinitialization (a command-level sketch follows the list):
mysqldump from the master, with the --master-data option so the dump includes the binary log position that was current at the moment of the dump.
Stop replication on the replica.
Restore the dump on the replica.
Use CHANGE MASTER to alter what binary log coordinates the replica starts at. Use the coordinates that were saved in the dump.
Start replication on the replica.
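Put together, and borrowing the X and Y names from the question (dump X on the master, restore it as Y on the replica), the commands could look roughly like this. Adding --single-transaction is an extra assumption on my part, useful for InnoDB tables to get a consistent snapshot without locking; with --master-data=2 the coordinates are written into the dump as a commented-out CHANGE MASTER statement that you then run by hand (with --master-data=1 it is applied automatically during the restore):
mysqldump -u root -p --single-transaction --master-data=2 --triggers --routines X > X.sql   # on the master
mysql -u root -p -e "STOP SLAVE;"                                                           # on the replica
mysql -u root -p Y < X.sql                                                                  # restore on the replica
mysql -u root -p -e "CHANGE MASTER TO MASTER_LOG_FILE='<file from dump>', MASTER_LOG_POS=<pos from dump>; START SLAVE;"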
Re your comment:
Okay, I understand what you need better now.
Yes, there's an option for mysqldump --no-data so the output only includes the CREATE TABLE and other DDL, but no INSERT statements with data. Then you can import that to a separate database on the same server. And you're right, by default DDL statements are added to the binary log, so any replication replicas will automatically run the same statements.
You can even do the export & import in just two steps like this:
$ mysqladmin create newdatabase
$ mysqldump --no-data olddatabase | mysql newdatabase