When I run:
gbak -r
what will it do?
In Firebird < 2.0, -r will replace your current database file with the one restored from the backup. In FB >= 2.0, you need to specify -rep for that. Take care to avoid replacing an active database.
-r[ecreate_database] o[verwrite]
http://www.destructor.de/firebird/gbak.htm
[Firebird 2.0] Restores over an existing database. This can only be performed by SYSDBA or the owner of the database that is overwritten. Do NOT restore over a database that is in use!
Without the o[verwrite] option, -r is equivalent to -c; only the "overwrite" option will restore over an existing database.
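For illustration, a restore that overwrites an existing database could look like the hypothetical command below (the SYSDBA credentials and file paths are made up, not taken from the question):
gbak -rep -user SYSDBA -password masterkey /backups/mydb.fbk /data/mydb.fdb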
It replaces the database, i.e. overwrites it.
http://pwet.fr/man/linux/commandes/gbak
Related
I am trying to restore a backup of the database, but it gives me an error about roles. I found out that we first have to take a backup of the roles/users and then take the complete backup, but I don't know which command to use. Can anyone help me with the command?
You can use pg_dumpall for that with the --globals-only option:
pg_dumpall --globals-only --file=all_roles_and_users.sql -U postgres -h ...
The file all_roles_and_users.sql will contain all roles and role memberships currently defined in the instance (aka "cluster") you connect to.
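To load those roles into another instance later, the file can simply be fed to psql (add the same -h/-U connection options you used for the dump; connecting to the default postgres maintenance database is just one option):
psql -f all_roles_and_users.sql postgres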
I have a dev version and a production version running in Django.
I recently started populating it with a lot of data and found that Django's loaddata tries to load everything into memory before adding it to the db, and my files will be too big for that.
What is the proper way to push my data from my dev machine to my production?
I did...
pg_dump -U user -W db > ./filename.sql
and then on the production server I did...
psql dbname < filename.sql
It seems like it worked, all the data is there, but it came up with some errors such as
relation xxx already exists
constraint xxx for relation xxx already exists
and there were quite a few of them, but like I said everything appears to be there. Is this the right way to do it?
Edit: I already have data in the database on the production machine and I don't want to truncate the tables before importing.
This is the script that I use:
pg_dump -d DATABASE_NAME -U postgres --format plain --inserts > /FILE.sql
Edit: Since you say in the comments that you don't want to truncate the tables before importing, you can't do this type of import into your production database. I suggest emptying your production database before importing the dev database dump.
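If overwriting the existing objects in production is acceptable, another option (sketched here with the same placeholder names as above) is a custom-format dump restored with pg_restore --clean, which drops each object before recreating it:
pg_dump -U postgres -Fc -d DATABASE_NAME -f /FILE.dump
pg_restore -U postgres --clean --if-exists -d DATABASE_NAME /FILE.dump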
I am transferring my magento site from an old domain to a new domain.
I have exported the database file from the old server and I have done all the necessary changes.
Now I'm trying to import the exported file into the new database, but SQL has been stuck loading for almost an hour.
Please somebody help.
See the attached screenshot of the loading screen.
Thank you.
I would suggest making a backup of the whole cPanel and then re-importing it. This way you won't mess anything up with the database. If you still need to export and re-import the database, make sure you disable foreign key checks by adding these statements before and after your database dump:
SET foreign_key_checks = 0;
SET foreign_key_checks = 1;
And to successfully import a large database, you must increase the memory limit for MySQL in its .ini configuration file.
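As a rough sketch, the setting most often raised for large imports is max_allowed_packet in the [mysqld] section of my.ini (or my.cnf); the value below is only an example, not a recommendation for your server:
[mysqld]
max_allowed_packet = 256M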
I wouldn't do it through a GUI interface. Do you have SSH access? If so, here's how you can run it from the command line, which won't be limited by browser processing.
dump:
mysqldump -u '<<insert user>>' -p --single-transaction --databases <<database name>> > data_dump.sql
load:
mysql -p -u '<<insert user>>' <<database name>> < data_dump.sql
It's best to do this as the root user so you don't have any trouble.
On import, if you are getting errors that the definer is not a user, you can either create the definer user or run this sed command, which will replace the definer in your file with the MySQL user of your choice:
sed -E 's/DEFINER=`[^`]+`@`[^`]+`/DEFINER=<<new user name>>/g' data_dump.sql > cleaned_data_dump.sql
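Then import the cleaned file the same way as above:
mysql -p -u '<<insert user>>' <<database name>> < cleaned_data_dump.sql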
As espradley and damek132 said, combine both answers. Disable foreign key checks if they aren't disabled already, though mostly that statement is already there in the exported SQL dump.
And use the mysql command line through SSH. You should be up and running in half an hour.
I am trying to restore a database backup through the client interface of OpenERP. A message appeared: "Could not restore DB". I am using PostgreSQL 8.4.1.
Please help!
You can restore your db directly from pgAdmin by creating a blank db and restoring your backup into it.
The other way is from the command prompt, using the PostgreSQL commands given below.
To create: createdb db_name
To restore: psql db_name < backup_file
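For example, with placeholder names (substitute your own database name and backup file):
createdb openerp_prod
psql openerp_prod < openerp_backup.sql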
Which version of OpenERP and PostgreSQL are you using? Even if this message appears, check in PostgreSQL; you may find your database was restored anyway.
Are you able to create backups on the same server? I've had similar problems on new installations when I haven't created a .pgpass file. The db_user and db_password configuration parameters are used during regular database access, but can't be used for PostgreSQL backup and restore operations. For backup and restore, you need to set up a .pgpass file.
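For reference, each line in ~/.pgpass has the format hostname:port:database:username:password. The entry below uses placeholder values, and the file must be protected with chmod 600 ~/.pgpass or PostgreSQL will ignore it:
localhost:5432:*:openerp:your_password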
Using Toad for Oracle, I can generate full DDL files describing all tables, views, source code (procedures, functions, packages), sequences, and grants of an Oracle schema. A great feature is that it separates each DDL declaration into different files (a file for each object, be it a table, a procedure, a view, etc.) so I can write code and see the structure of the database without a DB connection. The other benefit of working with DDL files is that I don't have to connect to the database to generate a DDL each time I need to review table definitions. In Toad for Oracle, the way to do this is to go to Database -> Export and select the appropriate menu item depending on what you want to export. It gives you a nice picture of the database at that point in time.
Is there a "batch" tool that exports
- all table DDLs (including indexes, check/referential constraints)
- all source code (separate files for each procedure, function)
- all views
- all sequences
from SQL Server?
What about PostgreSQL?
What about MySQL?
What about Ingres?
I have no preference as to whether the tool is Open Source or Commercial.
For SQL Server:
In SQL Server Management Studio, right click on your database and choose 'Tasks' -> 'Generate Scripts'.
You will be asked to choose which DDL objects to include in your script.
In PostgreSQL, simply use the -s option to pg_dump. You can get it as a plain SQL script (one file for the whole database) or in a custom format that you can then run a script against to get one file per object if you want.
The PgAdmin tool will also show you each object's SQL dump, but I don't think there's a nice way to get them all at once from there.
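For example, a schema-only dump into a single SQL file might look like this (the database name is a placeholder):
pg_dump -s -f schema.sql your_database
or, into the custom format for splitting into per-object files afterwards:
pg_dump -s -Fc -f schema.dump your_database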
For MySQL, I use mysqldump. The command is pretty simple.
$ mysqldump [options] db_name [tables]
$ mysqldump [options] --databases db_name1 [db_name2 db_name3...]
$ mysqldump [options] --all-databases
Plenty of options for this. Take a look here for a good reference.
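Since the question asks for DDL only, a schema-only dump that still includes stored routines and triggers could look like this (the database name is a placeholder):
mysqldump --no-data --routines --triggers db_name > db_schema.sql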
In addition to the "Generate Scripts" wizard in SSMS you can now use mssql-scripter which is a command line tool to generate DDL and DML scripts.
It's an open-source, Python-based tool that you can install via:
pip install mssql-scripter
Here's an example of what you can use to script the database schema and data to a file.
mssql-scripter -S localhost -d AdventureWorks -U sa --schema-and-data > ./adventureworks.sql
More guidelines: https://github.com/Microsoft/sql-xplat-cli/blob/dev/doc/usage_guide.md
And here is the link to the GitHub repository: https://github.com/Microsoft/sql-xplat-cli
MySQL has a great tool called MySQL Workbench that lets you reverse- and forward-engineer databases, as well as synchronize them, which I really like. You can view the DDL when executing these functions.
I wrote SMOscript which does what you are asking for (referring to MSSQL Server)
Following what Daniel Vassallo said, this worked for me:
pg_dump -f c:\filename.sql -C -n public -O -s -d Moodle3.1 -h localhost -p 5432 -U postgres -w
Try this Python-based tool: Yet another script to split PostgreSQL dumps into object files