When I run my command:
psql -h localhost -p 5432 -U meee -d my_db -f sqltest.sql
it displays:
CREATE VIEW
ALTER TABLE
However, I want it to show me output just like pgAdmin shows it (for example: "The query was executed successfully in 45 ms, but returns no results").
Add the command '\timing' at the beginning of 'sqltest.sql' and you will see the time of each command.
For example, script.sql:
\timing
select 2;
select 1;
create table tablax(i int);
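With that in place, psql prints a "Time: …" line after each statement. For the first select above, the output looks roughly like this (the timing value is invented):
Timing is on.
 ?column?
----------
        2
(1 row)

Time: 0.364 ms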
Or, if you want the total time from the beginning of the script to the end, add some commands to the script:
at the beginning:
create temp table tab (time1 time,time2 time); insert into tab (time1) select now()::time;
at the end:
update tab set time2=now()::time; select time2-time1 as time_Elapsed from tab;
for example:
create temp table tab (time1 time,time2 time);
insert into tab (time1) select now()::time;
...
your script code
...
update tab set time2=now()::time;
select time2-time1 as time_Elapsed from tab;
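Run that way, the final select prints the elapsed time as an interval, something like this (value invented):
 time_elapsed
--------------
 00:00:01.234
(1 row)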
Use the -a psql command-line parameter:
psql -h localhost -p 5432 -U meee -d my_db -af sqltest.sql
https://www.postgresql.org/docs/current/static/app-psql.html
And place '\timing on' at the top of your script.
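Put together, the top of sqltest.sql would be:
\timing on
create view my_view as select 1;
and since -a makes psql echo each line as it reads it, the output pairs each command with its elapsed time, roughly like this (view and timing invented):
\timing on
Timing is on.
create view my_view as select 1;
CREATE VIEW
Time: 2.148 ms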
Related
I have the following SQL script -
--This is a function
SELECT * FROM import ('test');
TRUNCATE table some_table;
cat <<SQL
\COPY (
SELECT * from large_query
)
TO '/tmp/dash.csv' WITH DELIMITER ',' CSV HEADER
SQL
;
I am getting a parse error when I run this script like this -
psql -h host -p port -U user db -f my_file.sql
Can regular SQL statements not be combined with the \COPY command?
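For what it's worth, \copy is a psql meta-command whose arguments end at the newline it is on, and the cat <<SQL heredoc is shell syntax that psql -f cannot parse. A sketch of the same script with the heredoc removed and the \copy collapsed onto a single line:
--This is a function
SELECT * FROM import ('test');
TRUNCATE table some_table;
\copy (SELECT * from large_query) TO '/tmp/dash.csv' WITH DELIMITER ',' CSV HEADER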
I created a script to export a table from my db into a .csv file
#!/usr/bin/bash
FILE="example.csv"
sqlplus -s abcd/abcd@XE <<EOF
SET PAGESIZE 50000
SET COLSEP ","
SET LINESIZE 200
SET FEEDBACK OFF
SPOOL $FILE
SELECT * FROM myTable;
SPOOL OFF
EXIT
EOF
and now I'd like to modify this script and export my table into another. How can I change my code?
By "exporting your table into another", do you mean copying data from one table to another? If you don't need indexes or keys or other features on your new table initially, i.e. if it's not for production use, it's quite simple:
#!/usr/bin/bash
TABLE="myOtherTable"
sqlplus -s abcd/abcd@XE <<EOF
CREATE TABLE $TABLE as SELECT * FROM myTable;
EXIT
EOF
You could also do a create table statement first, specifying columns, keys and storage options as any other table, and then have a separate line that does INSERT INTO $TABLE (SELECT * FROM myTable) to fill it with data copied from myTable.
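A minimal sketch of that two-step variant in the same heredoc style (the column definitions are invented placeholders; match them to myTable in practice):
#!/usr/bin/bash
TABLE="myOtherTable"
sqlplus -s abcd/abcd@XE <<EOF
CREATE TABLE $TABLE (
  id   NUMBER PRIMARY KEY,
  name VARCHAR2(50)
);
INSERT INTO $TABLE (SELECT * FROM myTable);
COMMIT;
EXIT
EOF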
I'm trying to insert data from an Excel sheet into a SQL Server database. The query is stored in a text file as follows:
insert into [demo].[dbo].[relative]
select *
from openrowset('Microsoft.Jet.OLEDB.4.0','Excel 8.0;Database=D:\relative.xls','select * from [sheet1$]');
When I execute the following command:
sqlcmd -S ADMIN-PC/SEXPRESS -i d:\demo.txt
it shows this error:
Msg 7357, Level 16, State 2, Server ADMIN-PC\SEXPRESS, Line 1
Can anyone please help me rectify this problem?
Try using the SQL Server Import Wizard to create a new table from the .xls file, and then insert that data into the existing table from there. The problem you are having may be due to the incompatibility between a 64-bit SQL Server instance and 32-bit Excel.
Or try using bcp
bcp demo.dbo.relative in "D:\relative.xls" -c -T
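(Here -c transfers the data as character data and -T connects with a trusted, i.e. Windows-authenticated, connection.)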
There is another way to get the same result.
Create a temp table:
create table #sometable (value varchar(50), translation varchar(max))
insert into #sometable select * from YOUR_DATABASE_TABLE with (nolock)
Then do your OPENROWSET, BCP, etc. from there.
You can create a shell script which will automatically read the insert commands from the .csv file and then write them to the database. If you want, I can help you with it. All you need to do is write all the insert statements in the .csv file.
#!/bin/ksh
# read the .csv file line by line; each line holds one complete insert statement
while read -r line; do
    # if you need to split a line into words first, awk can do it, e.g.:
    #   A=$(echo "$line" | awk '{print $1}')
    #   B=$(echo "$line" | awk '{print $2}')
    # ...and so on, depending on the number of words in the insert statements
    sqlplus -silent /nolog << EOF > /dev/null
connect username/pwd@"Connection String"
set linesize 0;
set pagesize 0;
set echo off;
$line
exit;
EOF
done < your_insert_statements.csv
It will read the .csv file and automatically insert the records in the database.
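For illustration, your_insert_statements.csv would then hold one complete statement per line, e.g. (table and values invented):
insert into employees values (1, 'Alice');
insert into employees values (2, 'Bob');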
I have been looking for a command to export an entire database's data (DB2) to CSV.
I googled it, but all that came up was the db2 export command, which only exports table by table.
For example
export to employee.csv of del select * from employee
Therefore I would have to do it for every table, which can be very annoying. Is there a way I can export an entire database in DB2 to CSV (or some other format that I can use with other databases)?
Thank you
You could read the SYSIBM.SYSTABLES table to get the names of all the tables, and generate an export command for each table.
Write the export commands to an SQL file.
Read the SQL file, and execute the export commands.
Edited to add: Warning: some of your foreign keys may not be synchronized, as the database can be changed while you're reading the various tables.
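A minimal sketch of that generation step (the schema name is an invented placeholder; the generated file can then be run with db2 -tvf):
-- emit one EXPORT command per ordinary table in the schema
SELECT 'EXPORT TO ' || RTRIM(CREATOR) || '.' || RTRIM(NAME) ||
       '.csv OF DEL SELECT * FROM ' || RTRIM(CREATOR) || '.' || RTRIM(NAME) || ';'
FROM SYSIBM.SYSTABLES
WHERE TYPE = 'T' AND CREATOR = 'MYSCHEMA';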
# loop through all the DB2 tables in bash and export them
# note the filter by schema name ...
db2 -x "select schemaname from syscat.schemata where schemaname like 'GOSALES%'" |
while read -r schema ; do
    db2 connect to GS_DB
    db2 -x "SELECT TABLE_NAME FROM SYSIBM.TABLES WHERE TABLE_CATALOG = 'GS_DB' AND TABLE_SCHEMA = '$schema'" |
    while read -r table ; do
        db2 connect to GS_DB
        echo -e "start table: $table \n"
        # -td# makes '#' the statement terminator, so the ';' after coldel
        # is taken as the column delimiter instead of ending the statement
        db2 -td# "EXPORT TO $schema.$table.csv of del modified by coldel; SELECT * FROM $schema.$table "
        echo -e " stop table: $table "
    done
done
wc -l *.csv | sort -nr
4185939 total
446023 GOSALES.ORDER_DETAILS.csv
446023 GOSALESDW.SLS_SALES_ORDER_DIM.csv
I know how to select which table I want to dump in SQL format with the shell command:
$ ./sqlite3 test.db '.dump mytable' > test.sql
But this command dumps all the data of "mytable".
Can I select which data in my table I want before dumping, and how?
In other words, I'm looking for a command like:
$ ./sqlite3 test.db '.dump select name from mytable' > test.sql
Obviously this command does not work :'(
The only way to do it within the sqlite console is to create a temporary table:
CREATE TABLE tmp AS SELECT field1, field2 ... FROM yourTable WHERE ... ;
.dump tmp
DROP TABLE tmp;
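The same idea can be driven non-interactively from the shell with a heredoc, matching the question's one-shot style (the column name mirrors the question's example):
sqlite3 test.db <<'EOF' > test.sql
CREATE TABLE tmp AS SELECT name FROM mytable;
.dump tmp
DROP TABLE tmp;
EOF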