reload a .sql schema without restarting mysqld - sql

Is it possible to reload a schema file without having to restart mysqld? I am working in just one db in a sea of many and would like to have my changes refreshed without doing a cold-restart.

When you say "reload a schema file", I assume you're referring to a file that has all the SQL statements defining your database schema, i.e. creating tables, views, stored procedures, etc.?
The solution is fairly simple: keep all the SQL that creates the tables, etc. in a single file, and before each CREATE statement add a DROP/DELETE statement to remove what's already there. Then when you want to do a reload, just do:
cat myschemafile.sql | mysql -u userid -p databasename
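For illustration, a minimal sketch of such a schema file (the table and column names are just placeholders):
DROP TABLE IF EXISTS users;              -- remove the old definition first
CREATE TABLE users (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
Using IF EXISTS keeps the script runnable even against a database where the table hasn't been created yet.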

Related

How to export a database creation script in MongoDB?

In MS SQL Server, after completely creating a database, we have a script.sql file, and anyone who wants to create our database just needs to run that script.sql file.
I don't know whether we can export that kind of file from MongoDB?
MongoDB is a "schemaless" database system, meaning that you can insert data without the initial schema definition you would need in MS SQL Server. You may create your indexes at a later stage and prepare a .js file to execute every time if you want the same indexes in new deployments, or you can make an initial empty mongodump and do a mongorestore every time you need the same indexes.
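A hedged sketch of the dump/restore approach (the database name mydb and the output directory are hypothetical, and --db reflects classic mongodump/mongorestore usage):
mongodump --db mydb --out ./empty_with_indexes         # dump the prepared (still empty) database, indexes included
mongorestore --db mydb ./empty_with_indexes/mydb       # recreate the same collections and indexes on a new deployment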

How to rename database using phpMyAdmin tool?

I created a fresh database in phpMyAdmin which does not contain any tables yet since it's fresh; however, I accidentally made a typo. How can I rename the database?
If this happens to me I usually just execute the SQL command:
DROP DATABASE dbname;
and create another database. But is it possible to rename it? I was already searching SO but found nothing helpful.
I found two possible solutions.
Rename it via the phpMyAdmin backend UI (preferable), using the database's Operations tab ("Rename database to").
Or just execute this SQL (only use it if the database is fresh and does not contain any data yet, otherwise it will be lost!)
CREATE DATABASE newname;
DROP DATABASE oldname;
(Note that ALTER DATABASE oldName MODIFY NAME = newName is SQL Server syntax and will not work in MySQL/phpMyAdmin.)
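If the database had already contained tables, another approach often used with MySQL (just a sketch with placeholder names, not taken from the answers here) is to create the new database and move each table across with RENAME TABLE, which works across databases on the same server:
CREATE DATABASE newname;
RENAME TABLE oldname.sometable TO newname.sometable;   -- repeat for each table
DROP DATABASE oldname;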
I don't think you can do this. I think you'll need to dump that database, create the newly named one and then import the dump.
If this is a live system you'll need to take it down. If you cannot, then you will need to set up replication from this database to the new one.
If you want to see the commands, try this link: Rename MySQL database
Try using an auxiliary temporary DB (as a copy of the original):
$ mysqldump dbname > dbname_dump.sql     # create a backup
$ mysqladmin create dbname_new           # create your new db with the desired name
$ mysql dbname_new < dbname_dump.sql     # restore the backup to the new one
$ mysqladmin drop dbname                 # drop the old one

Can importing MySQL tables from another database into a live site with mysqldump cause trouble?

Scenario: I want to replicate MySQL tables from one database to other database.
Possible best solution: May be to use MySQL Replication feature.
Current solution I'm working on as a workaround (mysqldump), because I can't spend time learning about replication under the current deadline.
So currently I'm using command like this:
mysqldump -u user1 -ppassword1 --single-transaction SourceDb TblName | mysql -u user2 -ppassword2 DestinationDB
Based on some tests, it seems to be working fine.
While running the above command, I ran ab with 1000 requests against the destination site and also tried accessing the site from a browser.
My concern is for the destination live site, into which we are importing the whole table (which will internally drop the existing table and create a new one with the new data).
Can I be sure that the live site won't break during this process, or is there any risk factor?
If there is, can it be resolved?
You already admitted replication is the best solution here, and I'd agree with that.
You said you have 1000 requests on the "Destination" side? Are these 1000 connections to Destination read-only?
Of course, dropping and recreating the table isn't the right choice here for active connections.
I can suggest one improvement: instead of loading directly into the table, load into a different database and swap the tables. This should be quicker as far as connections to the Destination database/tables are concerned.
Create the new table in a different database:
mysqldump -u user1 -ppassword1 --single-transaction -hSOURCE_HOST SourceDb TblName | mysql -uuser2 -ppassword2 -hDESTINATION_HOST DB_New
(Are you sure you don't need a "-h" host option in your original command as well?)
Swap the tables
rename table DestinationDB.TblName to DestinationDB.old_TblName, DB_New.TblName to DestinationDB.TblName;
If you're on the same host (which I don't think you are), you might want to use pt-online-schema-change and swap tables that way!

Create database explicitly before restoring to it?

When I set up my PostgreSQL server, one of the first things I do is import a database from an external source. Which of the following is the right way to do it?
1. Create a database called "NEWDB" on the PostgreSQL server and then import my external "BACKUPDB" database from my pg_dump into "NEWDB".
2. Don't create a database on the PostgreSQL server, and import the "NEWDB" database, thereby automatically creating "NEWDB" on the PostgreSQL server.
I guess my question is, if I want to import an existing database to the PostgreSQL server, do I first need to create a database for it to go into?
You don't have to. It depends on what you want to achieve. If you dump a single database with pg_dump, CREATE DATABASE and ALTER DATABASE commands are not included. You are expected to connect to an existing database. So you have to create it first.
I quote advice from the manual:
If your database cluster has any local additions to the template1
database, be careful to restore the output of pg_dump into a truly
empty database; otherwise you are likely to get errors due to
duplicate definitions of the added objects. To make an empty database
without any local additions, copy from template0 not template1, for
example:
CREATE DATABASE foo WITH TEMPLATE template0;
And also:
The dump file also does not contain any ALTER DATABASE ... SET
commands; these settings are dumped by pg_dumpall, along with database
users and other installation-wide settings.
pg_dumpall, on the other hand, dumps the whole DB cluster including meta-objects like users. It includes CREATE DATABASE statements and connects to each DB when restoring. You can even include DROP DATABASE statements with the -c (--clean) option. Careful with that.
Every instance of PostgreSQL has a default maintenance db named "postgres" that you can connect to - to create databases for instance or start a full restore (from pg_dumpall). But a single-DB dump (from pg_dump) has to be run against its target database.
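For the cluster-wide case, a minimal sketch of dumping and restoring with pg_dumpall (the file name cluster.sql is just a placeholder):
pg_dumpall > cluster.sql            # dumps every database plus roles and cluster-wide settings
psql -d postgres -f cluster.sql     # restore by connecting to the maintenance database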
Finally:
Once restored, it is wise to run ANALYZE on each database so the
optimizer has useful statistics. You can also run vacuumdb -a -z to
analyze all databases.
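Putting the single-database case together, a minimal sketch (the dump file backup.sql and target database newdb are hypothetical names):
createdb -T template0 newdb         # create a truly empty database from template0, per the advice above
psql -d newdb -f backup.sql         # run the pg_dump output against the new database
vacuumdb -a -z                      # analyze all databases so the optimizer has fresh statistics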

How to view tables in SQL?

Hi, I am a beginner with databases.
I have a .sql file which contains some tables of data; I want to know how to import them and how to view the list of tables.
Presently I'm using the following:
software or editor: Navicat Lite
server: localhost
database file format: .sql
Maybe you can try executing the script in SQL Server, then type
select * from [database_name].information_schema.tables
to view tables and relevant information.
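If the server behind Navicat Lite is actually MySQL rather than SQL Server (the localhost setup doesn't say which), the equivalent would be:
SHOW TABLES;                               -- lists the tables in the current database
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'databasename';       -- the same information via information_schema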
Remember that a .sql file is not really a database; it is a script. You can run the script from any tool, but I'd use the command line. Is this Navicat connected to MySQL?
mysql -u username -p databasename < script.sql
password: **
And then the results can be seen using Navicat or any other tool.
If the .sql file has statements such as "CREATE TABLE..." and then later on "INSERT INTO..." then the script is possibly creating the tables and inserting the data.
To allow that to happen, the tables must not already exist in the database. You can then run the script and it will create the tables and fill in the data.
If the tables do exist, you can always either delete them, or change the CREATE to an ALTER and the script should then run.
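As a small, hedged sketch of a re-run-safe alternative (the table name mytable is just a placeholder): MySQL also accepts CREATE TABLE IF NOT EXISTS, so the script skips tables that are already there instead of failing:
CREATE TABLE IF NOT EXISTS mytable (
    id INT PRIMARY KEY
);
INSERT INTO mytable (id) VALUES (1);   -- the INSERTs still run against the existing table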
Hope that helps.