MySQL dump by query - sql

Is it possible to do a mysqldump with a single SQL query?
I mean dump the whole database, as phpMyAdmin does when you export to SQL.

Not mysqldump, but the mysql CLI:
mysql -e "select * from myTable" -u myuser -pxxxxxxxxx mydatabase
You can redirect it out to a file if you want:
mysql -e "select * from myTable" -u myuser -pxxxxxxxx mydatabase > mydumpfile.txt
Update:
The original post asked whether the database could be dumped by query. What was asked and what was meant were different: the poster really just wanted to mysqldump all tables.
mysqldump --tables myTable --where="id < 1000"

This should work:
mysqldump --databases X --tables Y --where="1 limit 1000000"
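Note that both snippets above still need the usual connection options and an output redirect. A complete invocation might look like this (user, database, and file names here are placeholders):
mysqldump -u myuser -p mydatabase --tables myTable --where="id < 1000" > myTable_dump.sql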

Dump a table using a WHERE clause:
mysqldump mydatabase mytable --where="mycolumn = myvalue" --no-create-info > data.sql
Dump an entire table:
mysqldump mydatabase mytable > data.sql
Notes:
Replace mydatabase, mytable, and the WHERE clause with your desired values.
By default, mysqldump includes DROP TABLE and CREATE TABLE statements in its output. If you do not want to delete all the data in your table when restoring from the saved data file, make sure you use the --no-create-info option.
You may need to add the appropriate -h, -u, and -p options to the example commands above in order to specify your desired database host, user, and password, respectively.
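For example, a full command with an explicit host and user (the hostname and credentials here are placeholders) would be:
mysqldump -h db.example.com -u myuser -p mydatabase mytable --where="mycolumn = myvalue" --no-create-info > data.sql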

You can dump a query as CSV like this:
SELECT * from myTable
INTO OUTFILE '/tmp/querydump.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
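To load such a file back in, the matching LOAD DATA statement is the mirror image (a sketch; the target table must already exist, and the server's secure_file_priv setting may restrict which directories can be read or written):
LOAD DATA INFILE '/tmp/querydump.csv'
INTO TABLE myTable
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';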

You can use the --where option on mysqldump to produce the output you are after:
mysqldump -u root -p test t1 --where="1=1 limit 100" > arquivo.sql
At most 100 rows from table t1 in database test will be dumped.
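To restore those rows later, feed the file back to the mysql client:
mysql -u root -p test < arquivo.sql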

If you want to export your last n records into a file, you can run the following:
mysqldump -u user -p -h localhost --where "1=1 ORDER BY id DESC LIMIT 100" database table > export_file.sql
The above will save the last 100 records into export_file.sql, assuming the table you're exporting from has an auto-incremented id column.
You will need to alter the user, localhost, database and table values. You may optionally alter the id column and export file name.

MySQL Workbench also has this feature neatly in the GUI. Simply run a query and click the save icon next to Export/Import.
Then choose "SQL INSERT statements (*.sql)" in the list.
Enter a name, click Save, confirm the table name, and you will have your dump file.

Combining much of the above, here is my real-world example, selecting records based on both MeterID and TIMESTAMP. I have needed this command for years, and it executes really quickly:
mysqldump -uuser -ppassword main_dbo trHourly --where="MeterID =5406 AND TIMESTAMP<'2014-10-13 05:00:00'" --no-create-info --skip-extended-insert | grep '^INSERT' > 5406.sql

Export query results from the mysql command line:
mysql -h120.26.133.63 -umiyadb -proot123 miya -e "select * from user where id=1" > mydumpfile.txt

If you want to dump specific fields from a table, this can be handy:
1/ Create a temporary table with your query:
create table tmptable select field1, field2, field3 from mytable where filter1 and filter2;
2/ Dump the whole temporary table; then you have a dump file containing only your specific fields:
mysqldump -u user -p mydatabase tmptable > my-quick-dump.sql
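Since CREATE TABLE ... SELECT creates an ordinary table rather than a true TEMPORARY table, remember to drop it when you are done:
mysql -u user -p mydatabase -e "DROP TABLE tmptable;"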

To dump a specific table:
mysqldump -u root -p dbname -t tablename --where="id<30" > post.sql

Here is my mysqldump invocation to select the same relation from different tables:
mysqldump --defaults-file=~/.mysql/datenbank.rc -Q -t -c --hex-blob \
--default-character-set=utf8 --where="`cat where-relation-ids-in.sql`" \
datenbank table01 table02 table03 table04 > recovered-data.sql
where-relation-ids-in.sql:
relation_id IN (6384291, 6384068, 6383414)
~/.mysql/datenbank.rc
[client]
user=db_user
password=db_password
host=127.0.0.1
Remark: if your relation_id file is huge, the WHERE-clause comment in the dump file will be cut short, but all data is selected correctly ;-)
I hope it helps someone ;-)
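If you need to generate the where-relation-ids-in.sql file from a query first, something along these lines could work (a sketch; the relations table and its needs_recovery column are hypothetical, and GROUP_CONCAT output is capped by group_concat_max_len):
mysql --defaults-file=~/.mysql/datenbank.rc -N -e "SELECT CONCAT('relation_id IN (', GROUP_CONCAT(relation_id), ')') FROM relations WHERE needs_recovery = 1" datenbank > where-relation-ids-in.sql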

Related

postgresql - export select query result using command

Just as we can take a backup of a database using the pg_dump command, can we similarly take a backup of a SELECT query's result?
For example, if I have a query select * from tablename;, I want to back up the query's result so that it can be restored somewhere.
You can use something like:
copy (select * from tablename) to 'path/to/file';
It will generate a file with the results in much the same manner as pg_dump does (in plain mode pg_dump actually runs COPY commands); add WITH CSV if you specifically want CSV output.
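Note that a server-side COPY writes to a path on the database server and needs appropriate privileges; from psql you can use the client-side \copy variant instead, which writes to a file on your own machine:
psql -d mydatabase -c "\copy (select * from tablename) to 'path/to/file' with csv"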
Update:
And if you want the DDL as well, you can:
create table specname as select * from tablename
and then:
pg_dump -s -t specname mydatabase
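Putting the whole round trip together (a sketch; mydatabase and the file names are placeholders):
psql -d mydatabase -c "create table specname as select * from tablename"
pg_dump -s -t specname mydatabase > specname_ddl.sql
psql -d mydatabase -c "\copy specname to 'specname_data.csv' with csv"
psql -d mydatabase -c "drop table specname"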

Impala: Show tables like query

I am working with Impala and fetching the list of tables from a database that match some pattern, like below.
Assume I have a database bank, and the tables under this database are as follows:
cust_profile
cust_quarter1_transaction
cust_quarter2_transaction
product_cust_xyz
....
....
etc
Now I am filtering like this:
show tables in bank like '*cust*'
It returns the expected results: the tables that have the word cust in their names.
Now my requirement is that I want all the tables which have cust in their name but do not have quarter2.
Can someone please help me solve this?
Execute from the shell and then filter:
impala-shell -q "show tables in bank like '*cust*'" | grep -v 'quarter2'
Or query the metastore directly (this assumes a MySQL-backed metastore):
mysql -u root -p -e "select TBL_NAME from metastore.TBLS where TBL_NAME like '%cust%' and TBL_NAME not like '%quarter2%'";

How can I find last modified timestamp for a table in Hive?

I'm trying to fetch the last modified timestamp of a table in Hive.
Please use the below command:
show TBLPROPERTIES table_name ('transient_lastDdlTime');
Get the transient_lastDdlTime from your Hive table:
SHOW CREATE TABLE table_name;
Then copy-paste the transient_lastDdlTime value into the query below to get it as a timestamp:
SELECT CAST(from_unixtime(your_transient_lastDdlTime_value) AS timestamp);
With the help of the above answers, I have created a simple solution for future developers:
# Pull the line containing transient_lastDdlTime out of SHOW CREATE TABLE output
time_column=`beeline --hivevar db=hiveDatabase --hivevar tab=hiveTable --silent=true --showHeader=false --outputformat=tsv2 -e 'show create table ${db}.${tab}' | egrep 'transient_lastDdlTime'`
# Strip punctuation and quotes, keeping only the epoch-seconds value
time_value=`echo $time_column | sed 's/[|,)]//g' | awk -F '=' '{print $2}' | sed "s/'//g"`
# Convert the epoch seconds to a human-readable date
tran_date=`date -d @$time_value +'%Y-%m-%d %H:%M:%S'`
echo $tran_date
I used a beeline alias. Make sure you set up the alias properly before invoking the script above. If no alias is used, replace beeline with the complete beeline command (including the JDBC connection string). Leave a question in the comments if anything is unclear.
There is already an answer above for how to see the last modified date of a Hive table. I am just sharing how to check the last modified date of a Hive table partition.
Connect to the Hive cluster to run Hive queries. In most cases, you can connect simply by running the hive command: hive
DESCRIBE FORMATTED <database>.<table_name> PARTITION(<partition_column>=<partition_value>);
In the response you will see something like this : transient_lastDdlTime 1631640957
SELECT CAST(from_unixtime(1631640957) AS timestamp);
You may get the timestamp by executing:
describe formatted table_name
Alternatively, you can execute the command below and convert the transient_lastDdlTime value in its output from a Unix timestamp to a date; it gives the last modified timestamp for the table:
show create table TABLE_NAME;
If you are using MySQL as the metastore database, use the following:
select TABLE_NAME, UPDATE_TIME, TABLE_SCHEMA from TABLES where TABLE_SCHEMA = 'employees';

run SQL script using Rocket Universe from AIX command line (uvsh)

Hopefully this is a simple question. I want to write a shell script that calls a SQL script to run some queries against a Rocket UniVerse database. I am doing this from the server command line (the same machine where the database resides).
With SQL Server I might do something like the following:
sqlcmd -S myServer\instanceName -i C:\myScript.sql
In Oracle, like this:
SQL> @/dir/test.sql
In UV I can't figure it out:
uvsh ??? some.sql file
So in the test.sql file I might have something like the following:
"SELECT ID, COL1, COL2 FROM PRODUCT WHERE #ID=91;"
"SELECT ID, COL1, COL2 FROM PRODUCT WHERE #ID=92;"
"SELECT ID, COL1, COL2 FROM PRODUCT WHERE #ID=93;"
So can this be done or am I going about this the wrong way? Maybe a different method is more optimal? -- Thanks!
You can send the list of commands to the UniVerse process with the following command:
C:\U2\UV\HS.SALES>type test.sql | ..\bin\uv
You will not need the double quotes around each statement that you described.
For the HS.SALES account the following SQL commands should work:
SELECT #ID, FNAME, LNAME FROM CUSTOMER WHERE #ID='2';
SELECT #ID, FNAME, LNAME FROM CUSTOMER WHERE #ID='3';
SELECT #ID, FNAME, LNAME FROM CUSTOMER WHERE #ID='4';
Note that this may not do what you want: it will display the results to standard out. Also, use caution when sending commands to UniVerse in this manner; if all of the UniVerse licenses are in use, the uv command will fail and the SQL commands will never execute.
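Since the question is about AIX, the Unix equivalent of that pipe, run from within the UniVerse account directory (mirroring the relative path in the Windows example above; adjust to wherever your uv binary actually lives), would be:
cat test.sql | ../bin/uv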

BCP... queryout execution erroneously returns zero rows where data is known to exist

I have written a BCP process with the queryout option. As a result, the query is executed and the results are (or should be) written to the designated output file. The query being used has been confirmed in SQL Query Analyzer (on MS SQL 2000) to generate a known result set.
However, when I execute the batch file with the BCP command, it returns zero rows (I get the response "0 rows copied"). Yet I can take this query and run it outside of the BCP process (in Query Analyzer) and get 42,745 rows. I can also create a view, and a simpler query against it works using the bcp ... queryout option. The query I am using joins information from two tables:
bcp "select obj_id, loc_code, CONVERT(VARCHAR(20), create_date, 20) AS build_date,
model_id, (len(build_string)/4) as feature_count, build_string
from my_db..builds a, my_db..models b
where a.model_id = b.model_id and obj_id like '_________C%' and obj_id not like '1G0%'" queryout z:\test.txt -U %1 -P %2 -S SQLSVR\VM_PROD -c
As you can see, the query is more complex than "select * from my_db..builds". Essentially, if I create a view using the more complex query and then run bcp ... queryout with a simple, straightforward query that retrieves the data from the view, it works fine. I can't figure out why the more complex query doesn't work directly in the BCP command, though. Could it be timing out before returning results, or does BCP not know how to handle a complex join-style query?
I find you can usually avoid issues with the bcp utility if you create a view of the query you want to execute and then select from that instead. In this case:
CREATE VIEW [MyOutputView] as
select obj_id, loc_code, CONVERT(VARCHAR(20), create_date, 20) AS build_date,
...
...
and obj_id not like '1G0%'
GO
Then the bcp command becomes:
bcp "SELECT * FROM MyDb.dbo.MyOutputView" queryout z:\test.txt -U %1 -P %2 -S SQLSVR\VM_PROD -c