How to use sqlcmd to execute many queries in one table? - sql

I have many SQL scripts; I want to execute them in one table using sqlcmd.
I have created a new database and a new table. How can I read several files from my PC and execute them in one table?

Refer to this:
1. Click Start, point to All Programs, point to Accessories, and then click Notepad.
2. Copy and paste the following Transact-SQL code into Notepad:
USE <DBName>;
GO
select * from table1
select * from table2
select * from table3
select * from table4
select * from table5
--add all your select queries here
GO
Save the file as myScript.sql in the C drive.
3. To run the script file, open a command prompt window.
4. In the Command Prompt window, type: sqlcmd -S myServer\instanceName -i C:\myScript.sql and press ENTER.
The result of the SQL file is written to the command prompt window.
To save this output to a text file instead:
5. In the Command Prompt window, type: sqlcmd -S myServer\instanceName -i C:\myScript.sql -o C:\EmpAdds.txt and press ENTER.
No output is returned in the Command Prompt window. Instead, the output is sent to the EmpAdds.txt file. You can verify this output by opening the EmpAdds.txt file.
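If you would rather keep each query in its own .sql file, sqlcmd's :r command lets a master script include other script files; a sketch, with purely hypothetical paths:
USE <DBName>;
GO
:r C:\scripts\script001.sql
:r C:\scripts\script002.sql
:r C:\scripts\script003.sql
GO
You would then run it the same way: sqlcmd -S myServer\instanceName -i C:\masterScript.sql -o C:\results.txt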

You can easily do this with SQLS*Plus from the command line:
SQLS>#script001
SQLS>#script002
SQLS>#script003
SQLS>#script004
You can also create a master script that contains calls to all these scripts.
FYI:
I wrote SQLS*Plus in my free time. It is a tool for SQL Server that is helpful to a lot of DBAs, especially those with an Oracle background - it is like Oracle SQL*Plus, but for SQL Server.
The tool is great for DBAs (I have shared some of the feedback I received on the web site), in my view much better than SQL Server's sqlcmd, and also great for command-line reporting and automation. There is a 100% free version for business users.
We already have a few great clients around the world using it in production. One client, for example, moved a large number of Oracle SQL*Plus reports to SQL Server in just a couple of hours.
It is on https://www.sqlsplus.com

Related

Is it possible to create a batch script with sql commands using sqlcmd?

I basically want to create a batch script that has embedded sql commands and I was wondering if there is a way to do this using sqlcmd. I'm using SQL Server 2008 R2 Management Studio and I've downloaded sqlcmd v2.0.
I made a batch script which attempted to connect to a database and execute a simple select statement, but when I ran the script it went into interactive mode after connecting to the database. It wouldn't execute the sql in the script, it would only allow a user to type in sql commands. The code is below:
sqlcmd -S <servername>\<instancename>
Select Number FROM Table1
GO
I changed the column/table/database etc. names as this is work-related but you get the idea. I'm quite new to batch scripting and don't have much experience, I have more experience with sql.
You could try to read the documentation. A synopsis of the documentation is available from the command line by typing sqlcmd -?
To run a single SQL-Server query from within a batch file, using the default database:
sqlcmd -S <servername>\<instancename> -Q "Select Number FROM Table1"
The standard way to feed input into a program is preparing the input and redirecting it via a | pipe. For example:
(
echo Select Number FROM Table1
echo GO
echo . . .
echo EXIT or QUIT or BYE...
) | sqlcmd -S <servername>\<instancename>
However, if the purpose of your Batch file is just to execute sql commands (and have no Batch logic), an easier way is to prepare a .txt file with the same input you would type via the keyboard:
sqlcmd -S <servername>\<instancename>
Select Number FROM Table1
GO
... and then feed that file into cmd.exe this way:
cmd < theFile.txt
In this case, don't forget to insert both the exit command for sql AND the exit command for cmd.exe!
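For instance, pulling the pieces together, a minimal batch file (server, instance, and database names are placeholders; -E uses Windows authentication) might be:
@echo off
rem Run one query non-interactively and write the result to a file
sqlcmd -S myServer\myInstance -E -d myDatabase -Q "SELECT Number FROM Table1" -o result.txt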

Different command lines used to load tables from an SQL dump file into MySQL

What is the difference between these two command lines used to load tables from an SQL file into a MySQL database?
C:> mysql -u user -p PASS database_name < ms.sql
And
mysql> source ms.sql ;
I used to use the former, and the database it created contained all the information, but this time it didn't work; the second worked fine.
Also, the MySQL documentation shows an example of setting the default character set for the first case, but I could not find an example for the second case on the MySQL home page. I am thankful for any help available.
Both of these commands can be referred to as batch commands. I am pointing out the differences between them below.
First Command
mysql -u user -p PASS database_name < ms.sql
The above command does two things at once: it logs in to MySQL, and it passes the script file to be executed using the OS redirection operator '<'.
After executing the command it displays the SQL result of the script and returns to the OS command prompt (it comes out of the SQL prompt).
If the database is not specified on the command line, a USE db_name statement is needed at the beginning of the file.
This way is useful when you want to execute a big script without logging in to mysql, and it is the form most often used.
Second Command
mysql> source ms.sql;
The above command is a mysql client command that executes the script contained in the SQL file.
It is used when you are already at the MySQL prompt; after executing the script it returns to the MySQL prompt.
You can also use the shorthand \. ms.sql, which is equivalent to source ms.sql.
For more information, please refer to the MySQL reference manual: https://dev.mysql.com/doc/refman/5.7/en/mysql-batch-commands.html
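Regarding the default character set the question mentions: as a rough sketch (user, database, and file names are placeholders), you can either pass it on the command line for the first form:
mysql --default-character-set=utf8 -u user -p database_name < ms.sql
or set it at the top of ms.sql so it applies to both forms:
SET NAMES utf8;
USE database_name;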

How do I import a sql data file into SQL Server?

I have a .sql file and I am trying to import it into SQL Server 2008. What is the proper way to do this?
If your file is a large file, 50MB+, then I recommend you use sqlcmd, the command line utility that comes bundled with SQL Server. It is easy to use and it handles large files well. I tried it yesterday with a 22GB file using the following command:
sqlcmd -S SERVERNAME\INSTANCE_NAME -i C:\path\mysqlfile.sql -o C:\path\output_file.txt
The command above assumes that your server name is SERVERNAME, that your SQL Server installation uses the instance name INSTANCE_NAME, and that Windows authentication is the default auth method. After execution, output_file.txt will contain something like the following:
...
(1 rows affected)
Processed 100 total records
(1 rows affected)
Processed 200 total records
(1 rows affected)
Processed 300 total records
...
use readfileonline.com if you need to see the contents of huge files.
UPDATE
This link provides more command line options and details such as username and password:
https://dba.stackexchange.com/questions/44101/importing-sql-server-database-from-a-sql-file
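For reference, a sketch of the same call using SQL Server authentication instead of Windows authentication (the user name, password, and paths are placeholders):
sqlcmd -S SERVERNAME\INSTANCE_NAME -U myUser -P myPassword -i C:\path\mysqlfile.sql -o C:\path\output_file.txt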
If you are talking about an actual database (an .mdf file), you would Attach it.
.sql files are typically run using SQL Server Management Studio. They are basically saved SQL statements, so could be anything. You don't "import" them. More precisely, you "execute" them. Even though the script may indeed insert data.
Also, to expand on Jamie F's answer, don't run a SQL file against your database unless you know what it is doing. SQL scripts can be as dangerous as unchecked exe's
Start SQL Server Management Studio
Connect to your database
File > Open > File and pick your file
Execute it
Try this process -
Open the Query Analyzer
Start --> Programs --> MS SQL Server --> Query Analyzer
Once it is open, connect to the database that you wish to run the script against.
Next, open the SQL file using File --> Open option. Select .sql file.
Once it is open, you can execute the file by pressing F5.
In order to import your .sql try the following steps
Start SQL Server Management Studio
Connect to your Database
Open the Query Editor
Drag and Drop your .sql File into the editor
Execute the import
A .sql file is a set of commands that can be executed against the SQL server.
Sometimes the .sql file will specify the database, other times you may need to specify this.
You should talk to your DBA or whoever is responsible for maintaining your databases. They will probably want to give the file a quick look. .sql files can do a lot of harm, even inadvertently.
See the other answers if you want to plunge ahead.
Get the names of the server and database in SSMS:
Run the following command in PowerShell or CMD:
sqlcmd -S "[SERVER NAME]" -d [DATABASE NAME] -i .\[SCRIPT].sql
There is no such thing as importing in MS SQL in that sense, but I understand what you mean. It is very simple: whenever you have a something.sql file, just double-click it and it will open directly in SQL Server Management Studio.

How to run a select statement using a batch file?

I need to query a SQL Server database using a batch file. I put these command lines in the batch file. When I run the batch file, the cursor just sits there after it makes the trusted connection.
OSQL -E
use db1
SELECT count(*) FROM table_01 t1
left join table_02 t2 on t1.tableID = t2.tableID
WHERE t1.Date < '20110724'
Go
Any suggestions please?
Here's how I do it.
First, build the SQL script that you want, and store it as a simple text file.
Next, use SQLCMD (or OSQL or, perish the thought, ISQL) to call that file, something like so:
SQLCMD -S %1 -E -b -h-1 -I -d tempdb -i BulkDeploy.txt > BulkDeploy_%DateString%.txt
Where:
-S specifies the SQL instance server (here, specified with the first batch parameter)
-E use NT authentication
-b if SQL hits an error, return a value that the batch ERRORLEVEL can pick up and process
-h-1 return no header rows (IF datasets are returned)
-I set QUOTED_IDENTIFIER on (this blew up in my face once, I forget how or why, and I've included it ever since)
-d database to connect to
-i execute the following script and exit when done
> directs any output to the specified file for subsequent processing
SQLCMD et al. have many parameters; check them out in Books Online. Further subtleties can be achieved with batch parameters.
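Putting that together, a small batch wrapper along these lines (the output file name is a simple placeholder where the original uses a %DateString% variable) can check the exit code that -b exposes:
@echo off
SQLCMD -S %1 -E -b -h-1 -I -d tempdb -i BulkDeploy.txt > BulkDeploy_out.txt
if errorlevel 1 (
    echo SQLCMD reported an error, see BulkDeploy_out.txt
    exit /b 1
)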
osql has a simple feature for this.
For example, I run an SQL command from e:\backupdb.txt with:
osql -S servername -U user -P password -i e:\backupdb.txt
it does the job

Save PL/pgSQL output from PostgreSQL to a CSV file

What is the easiest way to save PL/pgSQL output from a PostgreSQL database to a CSV file?
I'm using PostgreSQL 8.4 with pgAdmin III and PSQL plugin where I run queries from.
Do you want the resulting file on the server, or on the client?
Server side
If you want something easy to re-use or automate, you can use Postgresql's built in COPY command. e.g.
Copy (Select * From foo) To '/tmp/test.csv' With CSV DELIMITER ',' HEADER;
This approach runs entirely on the remote server - it can't write to your local PC. It also needs to be run as a Postgres "superuser" (normally called "postgres"), because Postgres can't stop it doing nasty things with that machine's local filesystem.
That doesn't actually mean you have to be connected as a superuser (automating that would be a security risk of a different kind), because you can use the SECURITY DEFINER option to CREATE FUNCTION to make a function which runs as though you were a superuser.
The crucial part is that your function is there to perform additional checks, not just by-pass the security - so you could write a function which exports the exact data you need, or you could write something which can accept various options as long as they meet a strict whitelist. You need to check two things:
Which files should the user be allowed to read/write on disk? This might be a particular directory, for instance, and the filename might have to have a suitable prefix or extension.
Which tables should the user be able to read/write in the database? This would normally be defined by GRANTs in the database, but the function is now running as a superuser, so tables which would normally be "out of bounds" will be fully accessible. You probably don’t want to let someone invoke your function and add rows on the end of your “users” table…
I've written a blog post expanding on this approach, including some examples of functions that export (or import) files and tables meeting strict conditions.
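As a rough illustration of the idea (the table, path, and checks here are hypothetical and deliberately minimal - a real function should validate its inputs much more strictly):
CREATE FUNCTION export_foo_csv() RETURNS void
LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
    -- Runs with the privileges of the function owner (a superuser),
    -- so only a fixed, known-safe query and file path are used here.
    COPY (SELECT * FROM foo) TO '/var/lib/postgresql/exports/foo.csv' WITH CSV HEADER;
END;
$$;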
Client side
The other approach is to do the file handling on the client side, i.e. in your application or script. The Postgres server doesn't need to know what file you're copying to, it just spits out the data and the client puts it somewhere.
The underlying syntax for this is the COPY TO STDOUT command, and graphical tools like pgAdmin will wrap it for you in a nice dialog.
The psql command-line client has a special "meta-command" called \copy, which takes all the same options as the "real" COPY, but is run inside the client:
\copy (Select * From foo) To '/tmp/test.csv' With CSV DELIMITER ',' HEADER
Note that there is no terminating ;, because meta-commands are terminated by newline, unlike SQL commands.
From the docs:
Do not confuse COPY with the psql instruction \copy. \copy invokes COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in a file accessible to the psql client. Thus, file accessibility and access rights depend on the client rather than the server when \copy is used.
Your application programming language may also have support for pushing or fetching the data, but you cannot generally use COPY FROM STDIN/TO STDOUT within a standard SQL statement, because there is no way of connecting the input/output stream. PHP's PostgreSQL handler (not PDO) includes very basic pg_copy_from and pg_copy_to functions which copy to/from a PHP array, which may not be efficient for large data sets.
There are several solutions:
1 psql command
psql -d dbname -t -A -F"," -c "select * from users" > output.csv
This has the big advantage that you can use it via SSH, like ssh postgres@host command - enabling you to run the export remotely and collect the result on your local machine (see the sketch after this list).
2 postgres copy command
COPY (SELECT * from users) To '/tmp/output.csv' With CSV;
3 psql interactive (or not)
>psql dbname
psql>\f ','
psql>\a
psql>\o '/tmp/output.csv'
psql>SELECT * from users;
psql>\q
All of them can be used in scripts, but I prefer #1.
4 pgadmin but that's not scriptable.
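For instance, a sketch of option 1 run over SSH (host, database, and table names are placeholders), so the CSV ends up on your local machine:
ssh postgres@db.example.com 'psql -d dbname -t -A -F"," -c "select * from users"' > output.csv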
In the terminal (while connected to the db), set the output to the CSV file:
1) Set field separator to ',':
\f ','
2) Set output format unaligned:
\a
3) Show only tuples:
\t
4) Set output:
\o '/tmp/yourOutputFile.csv'
5) Execute your query:
select * from YOUR_TABLE;
6) Output:
\o
You will then be able to find your csv file in this location:
cd /tmp
Copy it using the scp command or edit using nano:
nano /tmp/yourOutputFile.csv
CSV Export Unification
This information isn't really well represented. As this is the second time I've needed to derive this, I'll put this here to remind myself if nothing else.
Really the best way to do this (get CSV out of postgres) is to use the COPY ... TO STDOUT command. Though you don't want to do it the way shown in the answers here. The correct way to use the command is:
COPY (select id, name from groups) TO STDOUT WITH CSV HEADER
Remember just one command!
It's great for use over ssh:
$ ssh psqlserver.example.com 'psql -d mydb -c "COPY (select id, name from groups) TO STDOUT WITH CSV HEADER"' > groups.csv
It's great for use inside docker over ssh:
$ ssh pgserver.example.com 'docker exec -tu postgres postgres psql -d mydb -c "COPY groups TO STDOUT WITH CSV HEADER"' > groups.csv
It's even great on the local machine:
$ psql -d mydb -c 'COPY groups TO STDOUT WITH CSV HEADER' > groups.csv
Or inside docker on the local machine?:
docker exec -tu postgres postgres psql -d mydb -c 'COPY groups TO STDOUT WITH CSV HEADER' > groups.csv
Or on a kubernetes cluster, in docker, over HTTPS??:
kubectl exec -t postgres-2592991581-ws2td 'psql -d mydb -c "COPY groups TO STDOUT WITH CSV HEADER"' > groups.csv
So versatile, much commas!
Do you even?
Yes I did, here are my notes:
The COPYses
Using \copy effectively executes file operations on whatever system the psql command is running on, as the user who is executing it. If you connect to a remote server, it's simple to copy data files on the system executing psql to/from the remote server.
COPY executes file operations on the server as the backend process user account (default postgres), file paths and permissions are checked and applied accordingly. If using TO STDOUT then file permissions checks are bypassed.
Both of these options require subsequent file movement if psql is not executing on the system where you want the resultant CSV to ultimately reside. This is the most likely case, in my experience, when you mostly work with remote servers.
It is more complex to configure something like a TCP/IP tunnel over ssh to a remote system for simple CSV output, but for other output formats (binary) it may be better to \copy over a tunneled connection, executing a local psql. In a similar vein, for large imports, moving the source file to the server and using COPY is probably the highest-performance option.
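For such an import the idea is roughly the following, assuming the file has already been copied to a path the server-side backend process can read (the path here is hypothetical):
COPY groups FROM '/var/lib/postgresql/import/groups.csv' WITH CSV HEADER;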
PSQL Parameters
With psql parameters you can format the output like CSV but there are downsides like having to remember to disable the pager and not getting headers:
$ psql -P pager=off -d mydb -t -A -F',' -c 'select * from groups;'
2,Technician,Test 2,,,t,,0,,
3,Truck,1,2017-10-02,,t,,0,,
4,Truck,2,2017-10-02,,t,,0,,
Other Tools
No, I just want to get CSV out of my server without compiling and/or installing a tool.
New version - psql 12 - will support --csv.
psql - devel
--csv
Switches to CSV (Comma-Separated Values) output mode. This is equivalent to \pset format csv.
csv_fieldsep
Specifies the field separator to be used in CSV output format. If the separator character appears in a field's value, that field is output within double quotes, following standard CSV rules. The default is a comma.
Usage:
psql -c "SELECT * FROM pg_catalog.pg_tables" --csv postgres
psql -c "SELECT * FROM pg_catalog.pg_tables" --csv -P csv_fieldsep='^' postgres
psql -c "SELECT * FROM pg_catalog.pg_tables" --csv postgres > output.csv
If you're interested in all the columns of a particular table along with headers, you can use
COPY table TO '/some_destdir/mycsv.csv' WITH CSV HEADER;
This is a tiny bit simpler than
COPY (SELECT * FROM table) TO '/some_destdir/mycsv.csv' WITH CSV HEADER;
which, to the best of my knowledge, is equivalent.
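If you only want some of the columns, the subquery form is the one to use; for example (placeholder table and column names):
COPY (SELECT id, name FROM your_table) TO '/some_destdir/partial.csv' WITH CSV HEADER;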
I had to use the \COPY because I received the error message:
ERROR: could not open file "/filepath/places.csv" for writing: Permission denied
So I used:
\Copy (Select address, zip From manjadata) To '/filepath/places.csv' With CSV;
and it is functioning
psql can do this for you:
edd@ron:~$ psql -d beancounter -t -A -F"," \
-c "select date, symbol, day_close " \
"from stockprices where symbol like 'I%' " \
"and date >= '2009-10-02'"
2009-10-02,IBM,119.02
2009-10-02,IEF,92.77
2009-10-02,IEV,37.05
2009-10-02,IJH,66.18
2009-10-02,IJR,50.33
2009-10-02,ILF,42.24
2009-10-02,INTC,18.97
2009-10-02,IP,21.39
edd@ron:~$
See man psql for help on the options used here.
I'm working on AWS Redshift, which does not support the COPY TO feature.
My BI tool supports tab-delimited CSVs though, so I used the following:
psql -h dblocation -p port -U user -d dbname -F $'\t' --no-align -c "SELECT * FROM TABLE" > outfile.csv
In pgAdmin III there is an option to export to file from the query window. In the main menu it's Query -> Execute to file or there's a button that does the same thing (it's a green triangle with a blue floppy disk as opposed to the plain green triangle which just runs the query). If you're not running the query from the query window then I'd do what IMSoP suggested and use the copy command.
I tried several things but few of them were able to give me the desired CSV with header details.
Here is what worked for me.
psql -d dbname -U username \
-c "COPY ( SELECT * FROM TABLE ) TO STDOUT WITH CSV HEADER " > \
OUTPUT_CSV_FILE.csv
I've written a little tool called psql2csv that encapsulates the COPY query TO STDOUT pattern, resulting in proper CSV. Its interface is similar to psql.
psql2csv [OPTIONS] < QUERY
psql2csv [OPTIONS] QUERY
The query is assumed to be the contents of STDIN, if present, or the last argument. All other arguments are forwarded to psql except for these:
-h, --help show help, then exit
--encoding=ENCODING use a different encoding than UTF8 (Excel likes LATIN1)
--no-header do not output a header
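For example, going by the description above (the database name is a placeholder; the -d option is simply forwarded to psql and the last argument is taken as the query):
psql2csv -d mydb "SELECT * FROM users" > users.csv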
If you have a longer query and you like to use psql, then put your query into a file and use the following command:
psql -d my_db_name -t -A -F";" -f input-file.sql -o output-file.csv
To download a CSV file with the column names as a header, use this command:
Copy (Select * From tableName) To '/tmp/fileName.csv' With CSV HEADER;
Since Postgres 12, you can change the output format:
\pset format csv
The following formats are allowed:
aligned, asciidoc, csv, html, latex, latex-longtable, troff-ms, unaligned, wrapped
If you want to export the result of a request, you can use the \o filename feature.
Example :
\pset format csv
\o file.csv
SELECT * FROM table LIMIT 10;
\o
\pset format aligned
I found that psql --csv creates a CSV file with UTF8 characters but it is missing the UTF8 Byte Order Mark (0xEF 0xBB 0xBF). Without taking it into account, the default import of this CSV file will corrupt international characters such as CJK characters.
To fix it, I devised the following script:
# Define a connection to the Postgres database through environment variables
export PGHOST=your.pg.host
export PGPORT=5432
export PGDATABASE=your_pg_database
export PGUSER=your_pg_user
# Place credentials in $HOME/.pgpass with the format:
# ${PGHOST}:${PGPORT}:${PGDATABASE}:${PGUSER}:${PGPASSWORD}
# Populate long SQL query in a text file:
cat > /tmp/query.sql <<EOF
SELECT item.item_no,item_descrip,
invoice.invoice_no,invoice.sold_qty
FROM item
LEFT JOIN invoice
ON item.item_no=invoice.item_no;
EOF
# Generate CSV report with UTF8 BOM mark
printf '\xEF\xBB\xBF' > report.csv
psql -f /tmp/query.sql --csv | tee -a report.csv
Doing it this way, lets me script the CSV creation process for automation and allows me to succinctly maintain the script in a single source file.
This answer writes the result out as JSON rather than CSV; a minimal runnable sketch of it, assuming psycopg2 and a hypothetical connection string:
import json
import psycopg2

# Hypothetical connection details; adjust for your own database
conn = psycopg2.connect("dbname=mydb user=myuser")
cursor = conn.cursor()
qry = """ SELECT details FROM test_csvfile """
cursor.execute(qry)
rows = cursor.fetchall()
value = json.dumps(rows)
with open("/home/asha/Desktop/Income_output.json", "w+") as f:
    f.write(value)
print('Saved to File Successfully')
JackDB, a database client in your web browser, makes this really easy. Especially if you're on Heroku.
It lets you connect to remote databases and run SQL queries on them.
Once your DB is connected, you can run a query and export the result to CSV or TXT.
Note: I'm in no way affiliated with JackDB. I currently use their free services and think it's a great product.
Per the request of @skeller88, I am reposting my comment as an answer so that it doesn't get lost by people who don't read every response...
The problem with DataGrip is that it puts a grip on your wallet. It is not free. Try the community edition of DBeaver at dbeaver.io. It is a FOSS multi-platform database tool for SQL programmers, DBAs and analysts that supports all popular databases: MySQL, PostgreSQL, SQLite, Oracle, DB2, SQL Server, Sybase, MS Access, Teradata, Firebird, Hive, Presto, etc.
DBeaver Community Edition makes it trivial to connect to a database, issue queries to retrieve data, and then download the result set to save it to CSV, JSON, SQL, or other common data formats. It's a viable FOSS competitor to TOAD for Postgres, TOAD for SQL Server, or Toad for Oracle.
I have no affiliation with DBeaver. I love the price and functionality, but I wish they would open up the DBeaver/Eclipse application more and make it easy to add analytics widgets to DBeaver / Eclipse, rather than requiring users to pay for the annual subscription to create graphs and charts directly within the application. My Java coding skills are rusty and I don't feel like taking weeks to relearn how to build Eclipse widgets, only to find that DBeaver has disabled the ability to add third-party widgets to the DBeaver Community Edition.
Do DBeaver users have insight as to the steps to create analytics widgets to add into the Community Edition of DBeaver?