Alter table schema while loading mysqldump in MySQL - sql

I want to load a mysqldump file into a server.
While loading the dump, I want to change a few column values and update the schema.
For example, we declared the guid column as varchar(100), and I now want to change it to binary(16); that means I need changes to both the table schema and the table values.
Can I make these changes while loading the dump file into the new server?
Thanks

No, basically you can't do anything while loading the dump. As mentioned in the comments, you have two options:
Edit the SQL in the dump
Load the dump and after that execute a script with the needed fixes (a sketch of such a script follows below)
If you have access to the initial database, you can produce another dump with the needed changes.
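A minimal sketch of the second option, run after the dump has been loaded; the table name my_table is an assumption, and it assumes the guid values are stored as hex text (with or without dashes):
-- hypothetical post-load fix-up script
ALTER TABLE my_table ADD COLUMN guid_bin BINARY(16);
UPDATE my_table SET guid_bin = UNHEX(REPLACE(guid, '-', ''));
ALTER TABLE my_table DROP COLUMN guid;
ALTER TABLE my_table CHANGE COLUMN guid_bin guid BINARY(16);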

Related

How to rename database using phpMyAdmin tool?

I created a fresh database in phpMyAdmin which does not contain any tables yet, since it's fresh; however, I accidentally made a typo. How can I rename the database?
If this happens to me I usually just execute the SQL command:
DROP DATABASE dbname;
and create another database. But is it possible to rename it? I already searched SO but found nothing helpful.
I found two possible solutions.
Rename it via the phpMyAdmin backend UI (preferable).
Or just execute this SQL (only use it if the database is fresh and does not contain any data yet, otherwise the data will be lost!):
CREATE DATABASE newname;
DROP DATABASE oldname;
ALTER DATABASE oldName MODIFY NAME = newName
(Note: this last statement is SQL Server syntax and does not work in MySQL.)
I don't think you can do this. I think you'll need to dump that database, create the newly named one and then import the dump.
If this is a live system you'll need to take it down. If you cannot, then you will need to set up replication from this database to the new one.
If you want to see the commands, try this link: Rename MySQL database
Try using an auxiliary temporary database (as a copy of the original):
$ mysqldump dbname > dbname_dump.sql    # create a backup
$ mysqladmin create dbname_new          # create your new db with the desired name
$ mysql dbname_new < dbname_dump.sql    # restore the backup into the new one
$ mysqladmin drop dbname                # drop the old one

Extract and recreate DDL of database schema elsewhere

I guess I just cannot formulate the search query appropriately, but I cannot find an answer to the following simple question: how do I use extracted DDL pieces to recreate tables, views, etc. in a different database or a different schema?
For example, when I extract table DDL with
SELECT dbms_metadata.get_ddl('TABLE', 'TABLE_NAME', 'SCHEMA_NAME') FROM dual;
I get output with FOREIGN KEY in it. If I now naively issue the resulting CREATE TABLE statements on a different database in, e.g., alphabetical order of table names, I get "table or view doesn't exist" errors, because the constraints reference not-yet-created tables.
What is the normal procedure for using such DDL? Is it (easily) possible to recreate the full schema structure (short of a full database dump) without using external tools?
You can use the Data Pump export CONTENT option to only export the metadata for a schema:
CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]
ALL unloads both data and metadata. This is the default.
DATA_ONLY unloads only table row data; no database object definitions are unloaded.
METADATA_ONLY unloads only database object definitions; no table row data is unloaded. Be aware that if you specify CONTENT=METADATA_ONLY, then when the dump file is subsequently imported, any index or table statistics imported from the dump file will be locked after the import.
The import process will create the objects and constraints, taking the dependencies into account.
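For example, a minimal export sketch, assuming a schema named SCOTT and an existing directory object DUMP_DIR (both names are assumptions):
$ expdp scott/password DIRECTORY=DUMP_DIR DUMPFILE=scott_meta.dmp SCHEMAS=scott CONTENT=METADATA_ONLY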
If you want to see the DDL, and optionally run it manually, you can use the Data Pump import SQLFILE option to put the DDL into a file instead of executing it:
Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
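A sketch of generating the DDL file from that dump instead of running it (same assumed names):
$ impdp scott/password DIRECTORY=DUMP_DIR DUMPFILE=scott_meta.dmp SQLFILE=scott_ddl.sql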
You can do similar things through SQL Developer and other clients, but those are 'external tools', whereas Data Pump might not fall into that category, even if you have to run it from the command line. There is a Data Pump API, so you can even avoid the command line if you want to, though in some ways it's more complicated than using the expdp and impdp utilities.

How do I dump an entire impala database

Is there a way to dump all the schema/data of an Impala database so I can recreate it in a new database instance?
Something akin to what mysqldump does?
Yes,
you can take all the data from the Impala warehouse (usually /user/hive/warehouse),
use distcp to copy it from one cluster to the other cluster at the same location,
then fire SHOW CREATE TABLE to get the schema of each table and just change the location to the destination location. A sketch of these steps follows below.
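A rough sketch of those steps; the cluster hosts and the database/table names are assumptions:
$ hadoop distcp hdfs://src-cluster/user/hive/warehouse/mydb.db hdfs://dst-cluster/user/hive/warehouse/mydb.db
$ impala-shell -q 'SHOW CREATE TABLE mydb.mytable'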
Since there is no DUMP command (or something similar):
http://www.cloudera.com/content/cloudera/en/documentation/cloudera-impala/latest/topics/impala_shell_commands.html
I think the best solution would be to use only external tables in one database.
That way, you know where your data is saved, and can potentially copy it to another place.
CREATE EXTERNAL TABLE table_name(one_field INT, another_field BIGINT,
another_field1 STRING)
COMMENT 'This is an external table'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054'
STORED AS TEXTFILE
LOCATION '<my_hdfs_location>';

Change the database's table in hive or hcatalog

Is there a way to move a table to another database in Hive or HCatalog?
For instance, I have the table foo in the database default, and I want to put this table in the database bar. I tried this, but it doesn't work:
ALTER TABLE foo RENAME TO bar.foo
Thanks in advance
AFAIK there is no way in HiveQL to do this. A ticket was raised long back, but the issue is still open.
An alternative could be to use the EXPORT/IMPORT feature provided by Hive. With this feature, we can export the data of a table to an HDFS location along with the metadata using the EXPORT command. The metadata is stored in JSON format. Data exported this way can be imported back into another database (even another Hive instance) using the IMPORT command.
More on this can be found in the Hive IMPORT/EXPORT manual.
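A minimal sketch, assuming an HDFS staging directory of /tmp/foo_export (the path is an assumption):
USE default;
EXPORT TABLE foo TO '/tmp/foo_export';   -- writes data plus metadata
USE bar;
IMPORT TABLE foo FROM '/tmp/foo_export'; -- recreates the table in bar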
HTH
Thanks for your response. I found another way to change the database:
USE db1; CREATE TABLE db2.foo LIKE foo;
Note that CREATE TABLE ... LIKE copies only the table definition; moving the rows needs a separate insert, as sketched below.
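A sketch of the follow-up copy (assuming the rows should be moved as well):
INSERT INTO TABLE db2.foo SELECT * FROM db1.foo;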

Extract specific data from full mysqldump backup

I am making regular backups of my MySQL database with mysqldump. This gives me a .sql file with CREATE TABLE and INSERT statements, allowing me to restore my database on demand. However, I have yet to find a good way to extract specific data from this backup, e.g. extract all rows from a certain table matching certain conditions.
Thus, my current approach is to restore the entire file into a new temporary database, extract the data I actually want with a new mysqldump call, delete the temporary database and then import the extracted lines into my real database.
Is this really the best way to do this? Is there some sort of script that can directly parse the .sql file and extract the relevant lines? I don't think there is an easy solution with grep and friends unfortunately, as mysqldump generates INSERT statements that insert many values per line.
The solution to this just ended up being to import the whole file, extract the data I needed, and drop the database again.
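For reference, a sketch of that round trip; the database names, the table name orders, and the WHERE condition are all assumptions:
$ mysql -e 'CREATE DATABASE tmp_restore;'                 # temporary scratch database
$ mysql tmp_restore < backup.sql                          # restore the full dump
$ mysqldump tmp_restore orders --no-create-info --where="created_at >= '2023-01-01'" > extracted.sql
$ mysql realdb < extracted.sql                            # import the extracted rows
$ mysql -e 'DROP DATABASE tmp_restore;'                   # clean up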