How to save a log of all psql queries AND the results

This question is similar to:
psql - write a query and the query's output to a file
However, the syntax given there doesn't work for me.
When I open a psql session from the command line, I'd like to save both the queries sent and the result.
The below code saves queries, but not output:
psql -h host -U username -p port -d database -L ~/file_to_save_output.txt

You can just redirect standard output (STDOUT) using the > symbol, as below. Redirection works both in Unix shells and in the Windows command prompt.
psql -h host -U username -p port -d database -L ~/file_to_save_output.txt > output.txt
From the Postgres docs:
--echo-queries
Copy all SQL commands sent to the server to standard output as well.
This is equivalent to setting the variable ECHO to queries.
So, to get both the queries and their results into a single file:
psql -h host -U username -p port -d database --echo-queries -L output_queries_and_results.txt
Additionally, you can save the queries and the query results in separate files:
psql -h host -U username -p port -d database --echo-queries -L output_queries_only.txt -o output_results_only.txt
Note: the first method will still show the queries and their results in the terminal; with the second, all results go to the file and won't show in the terminal.
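If you are running a script of queries rather than an interactive session and want everything both in the terminal and in one file, another option on Unix-like systems (a sketch; the file names are just examples) is to pipe the output through tee:
psql -h host -U username -p port -d database --echo-queries -f queries.sql 2>&1 | tee queries_and_results.txt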

Related

parallel ssh (pssh) with output stream

I have 3 servers and I want to run a command on all of them in parallel from a client and see the output as it streams.
I have tried using pssh, but it shows output only when the command exits. What I want is the output from all the servers on my client's stdout as it is produced, before the command exits.
For example, when I run "ping google.com" on all the servers, I get output only when I hit Ctrl+C like this:
My command looks like this:
pssh -h server_list -l userName -i pemFile.pem 'ping google.com'
How can I see the ping output from all 3 servers as they ping?
I was trying to achieve the same thing, and the best way for me was to specify an output directory and then follow the output files as they are written, like so:
We add -o /tmp/out -t 0 so that each host's output goes to the specified directory and there is no timeout.
pssh -h server_list -l userName -i pemFile.pem -o /tmp/out -t 0 'ping google.com'
Leave that running, and then follow the streams. Assuming you have host1, host2, host3, and host4 in your server_list, you can do the following:
tail -f /tmp/out/host{1,2,3,4}
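If you'd rather not juggle two terminals, a small wrapper script (the host list, key file, and output directory are just the examples from above) can start pssh in the background and then follow every per-host log:
#!/bin/bash
# Start pssh in the background, writing one output file per host.
OUTDIR=/tmp/out
mkdir -p "$OUTDIR"
pssh -h server_list -l userName -i pemFile.pem -o "$OUTDIR" -t 0 'ping google.com' &
# Give pssh a moment to create the per-host files, then stream them all.
sleep 2
tail -f "$OUTDIR"/*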

Running queries using osql

When executing any one of the following commands:
osql -E -S ComputerName\InstanceName
osql -E -S ComputerName\InstanceName -i MyScript.sql -o MyOutput.rpt
osql -E -q "SELECT * FROM Northwind.dbo.Shippers"
osql -E -Q "SELECT * FROM Northwind.dbo.Shippers" -o MyOutput.rpt
I am getting the following error:
[SQL Server Native Client 10.0]SQL Server Network Interfaces: Connection
string is not valid [87].
[SQL Server Native Client 10.0]Login timeout expired
[SQL Server Native Client 10.0]A network-related or instance-specific error
has occurred while establishing a connection to SQL Server. Server is not
found or not accessible. Check if instance name is correct and if SQL Server
is configured to allow remote connections. For more information see SQL Server
Books Online.
However, I am able to log in and run SELECT queries from SSMS without issue.
How do I run queries against SQL Server 2008 using osql?
Do you have your logged-in account set up as a user in SQL Server?
I usually work with specific accounts and SQL Server logins instead of Trusted Logins, and then just specify the database coordinates on the command line with the -S, -d, -U, and -P options:
osql -S %SERVERNAME% -U %USERNAME% -P %PASSWORD% -d %DBNAME%
For instance, if your server name is MyServer\SQL2008 and your user name is Foo and your password is Bar and your database is MyDB, then you'd use this:
osql -S MyServer\SQL2008 -U Foo -P Bar -d MyDB
And then continue on with the rest of your options after that.
If you really want to use your Trusted connection, you need to go to SQL Server Management Studio and ensure your current Windows login is added as a user and given appropriate permissions to your database, etc.
In SSMS, connect to your server manually (with the 'sa' user and password, perhaps), then expand the "Security" node and look at the logins. If your currently logged-in Windows user isn't listed, right-click, choose New Login, and add your current user.
Then you should be able to run with a Trusted Connection.
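If you'd rather not click through the SSMS GUI, roughly the same thing can be done from the command line with osql itself, using a SQL login that already has admin rights (the server, domain, user, and database names here are placeholders):
osql -S MyServer\SQL2008 -U sa -P SaPassword -Q "CREATE LOGIN [MYDOMAIN\MyUser] FROM WINDOWS"
osql -S MyServer\SQL2008 -U sa -P SaPassword -d MyDB -Q "CREATE USER [MYDOMAIN\MyUser] FOR LOGIN [MYDOMAIN\MyUser]; EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\MyUser'"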
You have to run the whole command on a single line, like this:
osql -E -S ComputerName\InstanceName -i MyScript.sql -o MyOutput.rpt
or
osql -E -S ComputerName\InstanceName -Q "SELECT * FROM Northwind.dbo.Shippers" -o MyOutput.rpt
You also have to check whether you can log in to SQL Server, whether the service is up, and whether the TCP/IP protocol is enabled.
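For those checks, assuming a default SQL Express instance (SQLEXPRESS is only an example name), you can run from a command prompt:
sc query "MSSQL$SQLEXPRESS"
sqlcmd -L
The first shows whether the SQL Server service is running; the second lists the SQL Server instances visible from your machine. TCP/IP itself is enabled or disabled in SQL Server Configuration Manager.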
Use the value in the Server name: field for ComputerName\InstanceName. (e.g. MYPC\SQLEXPRESS)
Type:
osql -E -S MYPC\SQLEXPRESS
You will see the interactive prompt.
Then enter your commands:
USE pubs
GO
Also, you can use sqlcmd:
sqlcmd -E -S MYPC\SQLEXPRESS

MySQLDump to local machine from remote server connected via SSH

mysqldump -h xxx.xxx.xxx.xxx -u username -ppassword databasename > C:\path\to\store\file
It seemed to work, since it paused as if the file were downloading; however, no file appears once it completes.
Do I have something wrong in the command line?
Use it like this:
mysqldump -P3306 -h192.168.20.151 -u root -p database > c:/my.sql
Hope this helps :)
For Linux:
mysqldump -u root -p databasename > ~/Downloads/filename.sql
Simply run mysqldump -h xxx.xxx.xxx.xxx -u username -ppassword databasename > C:\path\to\store\file from the command prompt on your local machine.
I don't understand why you involve ssh in your question but...
First, try the same command without redirecting it to a file, to check that you can connect to the database.
Second, make sure that you can write to that location (try to create and edit a file in the same path).
If those two work, your command should work.
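If the intent was to run mysqldump on the remote server over SSH and still end up with the file on your local machine, one way (host, credentials, and paths are placeholders) is to invoke it through ssh from your local command prompt, so the redirection happens locally:
ssh user@xxx.xxx.xxx.xxx "mysqldump -u username -ppassword databasename" > C:\path\to\store\file.sql
This needs an ssh client available locally (e.g. the OpenSSH client that ships with recent Windows versions).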

Add database name to sql file when exporting using mysqldump via linux

I need to backup a range of databases each day and I would like to do this via command line.
I'm using mysqldump to dump the db into a folder on the root of the server, with the date appended. I would like to add the name of the database dynamically to the exported filename, rather than hard-coding it into the command. Currently I have:
[~]# mysqldump -u user -h localhost -p unique_database_name > unique_database_name_1_$(date +%d%m%y).sql
The goal is to have 'unique_database_name' appended to the filename, so the script is a little more portable.
This script would do it:
#!/bin/bash
# Dump each database to its own file, with the database name and date in the filename.
dbs='firstdb seconddb thirddb'
echo -n 'Enter database password: '
read -s pw
echo
for db in $dbs
do
  # ${db} needs braces so the shell doesn't look for a variable called "db_1_"
  mysqldump -u user -h localhost -p"$pw" "$db" > "${db}_1_$(date +%d%m%y).sql"
done
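Assuming you save that as backup_dbs.sh (the name is arbitrary) and make it executable, a run on, say, 15 March 2024 would produce firstdb_1_150324.sql, seconddb_1_150324.sql, and thirddb_1_150324.sql in the current directory:
chmod +x backup_dbs.sh
./backup_dbs.sh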

mysql show processes

I don't have phpMyAdmin installed on an Ubuntu server, but I want to show all MySQL processes. I know that you can do a "show_processes" in phpMyAdmin, but how can I do it via the shell?
Thanks
mysql -u USER -pPASS -P PORT -h HOST -e "show processlist"
details:
-u = user
-p = password
-P = port number
-h = HOST
If you want to get the details via the information_schema views:
sql="select * from information_schema.processlist where ???"
mysql -u USER -pPASS -P PORT -h HOST -e "$sql"
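As a sketch, the ??? can be any WHERE condition you like; for example, to list only statements that have been running for more than 60 seconds (the threshold is arbitrary):
mysql -u USER -pPASS -P PORT -h HOST -e "select id, user, host, db, time, state, info from information_schema.processlist where time > 60"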
mysql -uuser -p -e "show full processlist"
or
mysql -uuser -p -e "show processlist"
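Another option, assuming the mysqladmin client that ships with MySQL is installed, is:
mysqladmin -u user -p processlist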