Just as we can take a backup of a database using the pg_dump command, can we similarly take a backup of a SELECT query's result?
For example, if I have a query select * from tablename; then I want to take a backup of the query's result that can be restored somewhere.
You can use something like
copy (select * from tablename) to 'path/to/file' with (format csv);
It will generate a CSV file with the results, in much the same manner as pg_dump does (in fact, in plain mode pg_dump itself emits COPY commands). Without the (format csv) option, COPY writes tab-delimited text.
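One caveat: server-side COPY ... TO writes the file on the database server and requires elevated privileges there. From psql you can run the same query client-side with \copy instead; a minimal sketch, assuming psql and a database named mydb (placeholder):
psql -d mydb -c "\copy (select * from tablename) to 'path/to/file.csv' csv header"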
Update:
And if you want the DDL as well, you can
create table specname as select * from tablename;
and then dump just that table's schema:
pg_dump -s -t specname your_database
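For completeness, a round-trip sketch (mydb and otherdb are placeholder database names; adjust to taste): create the snapshot table, dump it with DDL and data, restore it elsewhere, then clean up.
psql -d mydb -c "create table specname as select * from tablename"
pg_dump -t specname mydb > specname.sql
psql -d otherdb -f specname.sql
psql -d mydb -c "drop table specname"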
How do I drop a few tables (e.g. 1 - 3) using the output of a SELECT statement for the table names? This is probably standard SQL, but specifically I'm using Apache Impala SQL accessed via Apache Zeppelin.
So I have a table called tables_to_drop with a single column called "table_name". This will have one to a few entries in it, each with the name of another temporary table that was generated as the result of other processes. As part of my cleanup I need to drop these temporary tables whose names are listed in the "tables_to_drop" table.
Conceptually I was thinking of an SQL command like:
DROP TABLE (SELECT table_name FROM tables_to_drop);
or:
WITH subquery1 AS (SELECT table_name FROM tables_to_drop) DROP TABLE * FROM subquery1;
Neither of these work (syntax errors). Any ideas please?
Even in standard SQL it is not possible to do it the way you showed.
Normally you would use dynamic SQL for this, which Impala doesn't support.
You could write an Impala script and run it in impala-shell, but that is complicated for such a task. If this is a one-time thing, I would prepare the DROP statements using a SELECT and run them manually:
select concat('DROP TABLE IF EXISTS ',table_name) dropstatements
from tables_to_drop
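If it is not a one-time thing, a two-step impala-shell sketch can automate it (flag spellings per the impala-shell versions I know; verify against yours):
impala-shell -B -q "SELECT concat('DROP TABLE IF EXISTS ', table_name, ';') FROM tables_to_drop" -o drop_tables.sql
impala-shell -f drop_tables.sql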
I want to export some data from SQL Server to Oracle, but I have this scenario:
I need to save the last exported ID from my SQL table in an SSIS parameter, because I want to run this package every 30 minutes.
So if the last ID exported to Oracle is 10, I will save it in an SSIS variable and use it to change my query, like this: "SELECT * FROM TABLE WHERE ID > PARAMETER".
But I have no idea how to do it. Help me.
You can't store a variable in SSIS and have it last between sessions. The ID is in a table, yes? You can do something along the lines of
select *
from table
where id > (select max(id) from <destination table>)
Or, if you're looking at completely different databases, you can populate a variable with the max(id) from your destination, and then plug that variable into your select from your source.
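To make that concrete, a sketch (hedged; all object and variable names here are placeholders, assuming the standard SSIS pattern of an Execute SQL Task feeding an OLE DB Source):
-- Execute SQL Task against the Oracle destination:
-- map the single-row result set to an SSIS variable such as User::LastId
SELECT MAX(ID) AS LAST_ID FROM ORACLE_DEST_TABLE
-- OLE DB Source against SQL Server: parameterized query,
-- with parameter 0 mapped to User::LastId in the Parameters dialog
SELECT * FROM dbo.SourceTable WHERE ID > ?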
I have written a BCP process with queryout as the option. As a result, the query is executed and the results are (or should be) written to the designated output file. The query being used has been confirmed in SQL Query Analyzer (using MS SQL 2000) to generate a known result set.
However, when I execute the batch file with the BCP command, it returns zero rows (I get the response "0 rows copied"). Yet I can take this query and run it outside of the BCP process (in Query Analyzer) and get 42,745 rows. I can also create a view, run a simpler query against it, and have it work with the bcp ... queryout option. The query I am using joins information from two tables:
bcp "select obj_id, loc_code, CONVERT(VARCHAR(20), create_date, 20) AS build_date,
model_id, (len(build_string)/4) as feature_count, build_string
from my_db..builds a, my_db..models b
where a.model_id = b.model_id and obj_id like '_________C%' and obj_id not like '1G0%'" queryout z:\test.txt -U %1 -P %2 -S SQLSVR\VM_PROD -c
As you can see, the query is more complex than "select * from my_db..builds". Essentially, if I create a view using the more complex query and then run bcp ... queryout with a simple, straightforward query against the view, it works fine. I can't figure out why the more complex query doesn't work in the BCP command. Could it be timing out before returning results, or does BCP not know how to handle a complex join query?
I find you can usually avoid issues with the bcp utility if you create a view of the query you want to execute and then select from that instead. In this case:
CREATE VIEW [MyOutputView] as
select obj_id, loc_code, CONVERT(VARCHAR(20), create_date, 20) AS build_date,
...
...
and obj_id not like '1G0%'
GO
Then the bcp command becomes:
bcp "SELECT * FROM MyDb.dbo.MyOutputView" queryout z:\test.txt -U %1 -P %2 -S SQLSVR\VM_PROD -c
I am using SQL Server 2005.
I have a table [say tblHistory] and this table contains 100 rows.
I have created the same table at the server, but that copy doesn't have the data. I want the data from tblHistory converted into
INSERT INTO tblHistory ------
statements, so that I can run the script on the server to fill the database.
To generate all the INSERT INTO statements you need based on table data, take a look at this project: http://www.codeproject.com/KB/database/sqlinsertupdategenerator.aspx
You need to create a linked server between the two servers, and then you do something like this:
INSERT INTO tblHistory
select * from LinkedServerName.DatabaseName.SchemaName.tblHistory
To add a linked server, read this: http://msdn.microsoft.com/en-us/library/ms190479.aspx
BTW, you can also use OPENROWSET, OPENDATASOURCE, SSIS, or bcp out and bcp in.
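For a one-off copy, an OPENROWSET sketch avoids setting up a permanent linked server (requires the 'Ad Hoc Distributed Queries' server option; the provider name and connection string are examples, adjust for your environment):
INSERT INTO tblHistory
SELECT *
FROM OPENROWSET('SQLNCLI',
    'Server=RemoteServerName;Trusted_Connection=yes;',
    'SELECT * FROM DatabaseName.dbo.tblHistory')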
Not sure I understand the question... you just want to copy the contents of one table into another?
INSERT INTO newTable SELECT * FROM tblHistory
If the new table doesn't already exist, you can use SELECT INTO:
SELECT *
INTO new_table
FROM tblHistory
But that's the caveat: it has to be a new table, with no data already in there.
Otherwise, use:
INSERT INTO new_table
SELECT x.* --preferable to actually define the column list rather than use *
FROM OLD_TABLE x
You should check out the SSMS Tools Pack - one of its features is the ability to generate those INSERT scripts you're looking for!
Get the SSMS Tool Pack from this site - it's an add-in for SQL Server Management Studio - highly recommended!
Is it possible to do a mysqldump with a single SQL query?
I mean, to dump the whole database, like phpMyAdmin does when you export to SQL.
Not mysqldump, but the mysql CLI...
mysql -e "select * from myTable" -u myuser -pxxxxxxxxx mydatabase
You can redirect it out to a file if you want:
mysql -e "select * from myTable" -u myuser -pxxxxxxxx mydatabase > mydumpfile.txt
Update:
The original post asked whether it was possible to dump from the database with a query. What he asked and what he meant were different: he really just wanted to mysqldump all tables.
mysqldump mydatabase --tables myTable --where="id < 1000"
This should work:
mysqldump --databases X --tables Y --where="1 limit 1000000"
(The option is pasted into the dump query as WHERE 1 limit 1000000, so at most 1,000,000 rows per table are dumped.)
Dump a table using a where query:
mysqldump mydatabase mytable --where="mycolumn = myvalue" --no-create-info > data.sql
Dump an entire table:
mysqldump mydatabase mytable > data.sql
Notes:
Replace mydatabase, mytable, and the where statement with your desired values.
By default, mysqldump will include DROP TABLE and CREATE TABLE statements in its output. Therefore, if you wish to not delete all the data in your table when restoring from the saved data file, make sure you use the --no-create-info option.
You may need to add the appropriate -h, -u, and -p options to the example commands above in order to specify your desired database host, user, and password, respectively.
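To load such a dump back in, feed it to the mysql client (same placeholder names as above):
mysql -u user -p mydatabase < data.sql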
You can dump a query as CSV like this:
SELECT * from myTable
INTO OUTFILE '/tmp/querydump.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
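Keep in mind that INTO OUTFILE writes the file on the database server and requires the FILE privilege. If you need the file on the client side, a rough alternative is to pipe the mysql client's tab-separated batch output through tr (naive conversion; it does not quote embedded commas):
mysql -u myuser -p -B -e "SELECT * FROM myTable" mydatabase | tr '\t' ',' > querydump.csv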
You could use the --where option on mysqldump to produce the output you are expecting:
mysqldump -u root -p test t1 --where="1=1 limit 100" > arquivo.sql
At most 100 rows from the t1 table in the test database will be dumped.
If you want to export your last n records into a file, you can run the following:
mysqldump -u user -p -h localhost --where "1=1 ORDER BY id DESC LIMIT 100" database table > export_file.sql
The above will save the last 100 records into export_file.sql, assuming the table you're exporting from has an auto-incremented id column.
You will need to alter the user, localhost, database and table values. You may optionally alter the id column and export file name.
MySQL Workbench also has this feature neatly in the GUI. Simply run a query, click the save icon next to Export/Import:
Then choose "SQL INSERT statements (*.sql)" in the list.
Enter a name, click save, confirm the table name and you will have your dump file.
Combining much of the above, here is my real practical example, selecting records based on both meter ID and timestamp. I have needed this command for years; it executes really quickly.
mysqldump -uuser -ppassword main_dbo trHourly --where="MeterID =5406 AND TIMESTAMP<'2014-10-13 05:00:00'" --no-create-info --skip-extended-insert | grep '^INSERT' > 5406.sql
Export the query results from the mysql command line:
mysql -h120.26.133.63 -umiyadb -proot123 miya -e "select * from user where id=1" > mydumpfile.txt
If you want to dump specific fields from a table, this can be handy:
1/ Create a temporary table with your query:
create table tmptable select field1, field2, field3 from mytable where filter1 and filter2;
2/ Dump the whole temporary table; then you have your dump file with your specific fields:
mysqldump -u user -p mydatabase tmptable > my-quick-dump.sql
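3/ Optionally drop the temporary table afterwards (tmptable from the step above):
drop table tmptable;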
To dump a specific table:
mysqldump -u root -p dbname -t tablename --where="id<30" > post.sql
(Here -t is mysqldump's shorthand for --no-create-info, so this dumps the data without a CREATE TABLE statement.)
Here is my mysqldump to select the same relation from different tables:
mysqldump --defaults-file=~/.mysql/datenbank.rc -Q -t -c --hex-blob \
--default-character-set=utf8 --where="`cat where-relation-ids-in.sql`" \
datenbank table01 table02 table03 table04 > recovered-data.sql
where-relation-ids-in.sql:
relation_id IN (6384291, 6384068, 6383414)
~/.mysql/datenbank.rc:
[client]
user=db_user
password=db_password
host=127.0.0.1
Remark: if your relation_id list is huge, the WHERE-clause comment in the dump file will be cut short, but all data is selected correctly ;-)
I hope it helps someone ;-)