I want my unit test suite to load a SQL file into my database. I use a command like
"C:\Program Files\PostgreSQL\8.3\bin"\psql --host 127.0.0.1 --dbname unitTests --file C:\ZendStd\www\voo4\trunk\resources\sql\base_test_projectx.pg.sql --username postgres 2>&1
It runs fine on the command line, but it requires me to have a pgpass.conf. Since I need to run the unit test suite on each development PC and on the development server, I want to simplify the deployment process. Is there any command line option which includes the password?
Thanks,
Cédric
Try adding something like this to pg_hba.conf:
local all postgres trust
Of course, this allows anyone on the machine to connect as postgres, but it may do what you want.
EDIT:
You seem to be connecting to the localhost via TCP. You may need something like this instead:
host all postgres 127.0.0.1/32 trust
Again, I'm mostly guessing. I've never configured postgres quite this permissively.
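Note that pg_hba.conf changes only take effect after a configuration reload; something like this should do it (the data directory shown is the default for an 8.3 install, adjust it to yours):
"C:\Program Files\PostgreSQL\8.3\bin\pg_ctl" reload -D "C:\Program Files\PostgreSQL\8.3\data"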
You should run a bash script file "run_sql_commands.sh" with this content:
export PGHOST=localhost
export PGPORT=5432
export PGDATABASE=postgres
export PGPASSWORD=postgres
export PGUSER=postgres
psql -f file_with_sql_commands.sql
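To use it, make the script executable and run it before (or as part of) the test suite:
chmod +x run_sql_commands.sh
./run_sql_commands.sh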
You can use the PGPASSWORD environment variable, or the .pgpass file.
See http://www.postgresql.org/docs/8.4/static/libpq-envars.html
and http://www.postgresql.org/docs/8.4/static/libpq-pgpass.html
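Applied to the command from the question, that could look like this on Windows (secret is a placeholder for the real password):
set PGPASSWORD=secret
"C:\Program Files\PostgreSQL\8.3\bin\psql" --host 127.0.0.1 --dbname unitTests --file C:\ZendStd\www\voo4\trunk\resources\sql\base_test_projectx.pg.sql --username postgres 2>&1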
I am trying to run the sesu command on a Unix server from Python with the help of Paramiko's exec_command. However, when I run exec_command('sesu test'), I am getting
sh: sesu: not found
When I run a simple ls command it gives me the desired output. It is only the sesu command that is not working.
This is what my code looks like:
import paramiko
host = host
username = username
password = password
port = port
ssh=paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)
stdin,stdout,stderr=ssh.exec_command('sesu test')
stdin.write('Password\n')
stdin.flush()
outlines=stdout.readlines()
resp=''.join(outlines)
print(resp)
The SSHClient.exec_command by default does not run the shell in "login" mode and does not allocate a pseudo terminal for the session. As a consequence, a different set of startup scripts is (or might be) sourced than in your regular interactive SSH session (in particular, for non-interactive sessions .bash_profile is not sourced), and/or different branches in the scripts are taken, based on the absence or presence of the TERM environment variable.
Possible solutions (in preference order):
Fix the command not to rely on a specific environment. Use a full path to sesu in the command. E.g.:
/bin/sesu test
If you do not know the full path, on common *nix systems you can use the which sesu command in your interactive SSH session.
Fix your startup scripts to set the PATH the same for both interactive and non-interactive sessions.
Try running the script explicitly via login shell (use --login switch with common *nix shells):
bash --login -c "sesu test"
If the command itself relies on a specific environment setup and you cannot fix the startup scripts, you can change the environment in the command itself. Syntax for that depends on the remote system and/or the shell. In common *nix systems, this works:
PATH="$PATH;/path/to/sesu" && sesu test
Another (not recommended) approach is to force the pseudo terminal allocation for the "exec" channel using the get_pty parameter:
stdin,stdout,stderr = ssh.exec_command('sesu test', get_pty=True)
Using a pseudo terminal to automate command execution can bring nasty side effects. See for example Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
You may have a similar problem with LD_LIBRARY_PATH and locating shared objects.
See also:
Environment variable differences when using Paramiko
Certain Unix commands fail with "... not found", when executed through Java using JSch
Is there any command in ms-sql (on linux) to compare schemas between two databases?
I have very similar needs (I currently use PostgreSQL on Linux), and if it doesn't necessarily have to be an ms-sql command, I have 2 possible solutions:
Solution 1:
Use mssql-scripter from Microsoft (https://github.com/Microsoft/mssql-scripter)
You can get mssql-scripter via, for example:
pip install mssql-scripter
and then execute the following commands:
$ mssql-scripter -S serverName -d databaseSource -U user > ./source.sql
$ mssql-scripter -S serverName -d databaseTarget -U user > ./target.sql
$ diff source.sql target.sql
Solution 2:
If you have the possibility to use a desktop environment (as I do), I would use a comparison tool, which is much more user friendly in my opinion.
TiCodeX SQL Schema Compare (https://www.ticodex.com) is a nice tool that runs on Linux, Windows and Mac and can compare the schema of MS-SQL, MySQL and PostgreSQL databases. It is easy to use and effective. It may help you.
In order to use it:
Configure the source db (specifying servername, username, password, etc...)
Configure the target db
There are options in case you want to exclude database objects, or change the output
Press the comparison button
You will get the differences between the two databases, and you also get the migration scripts to make the target db identical to the source.
It can perhaps be done indirectly via sqlpackage for Linux.
First, a dacpac of each database has to be created:
sqlpackage.exe /Action:Extract /SourceServerName:XLW-CNU415CD8B /SourceDatabaseName:AdventureWorks2012 /TargetFile:AdventureWorks2012_v1.dacpac /p:IgnoreExtendedProperties=True /p:IgnorePermissions=False /p:ExtractApplicationScopedObjectsOnly=True
Then, the dacpacs can be compared:
sqlpackage /a:DeployReport /sf:AdventureWorks2012_v1.dacpac /tf:AdventureWorks2012_v2.dacpac /tdn:AdventureWorks2012.db /op:AdventureWorks2012_v1.xml
Please note that this example is based on the Windows version of the tool; I assume that the Linux port has the same list of arguments.
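If you want an actual migration script rather than just a report, the same tool's Script action should work on the same pair of dacpacs (a sketch; file names follow the example above):
sqlpackage /a:Script /sf:AdventureWorks2012_v1.dacpac /tf:AdventureWorks2012_v2.dacpac /tdn:AdventureWorks2012.db /op:AdventureWorks2012_upgrade.sql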
Please note that this is an AIX-related question.
I have a Jenkins server running on Red Hat which is running a node via SSH on an AIX server.
The commands are run non-interactively using SSH to a user on the AIX machine who has ksh as its standard shell.
The problem is that this build needs a number of environment variables, and I can't seem to get it to work.
I have tried:
Jenkins allows me to set some environment variables for the session, so I tried:
ENV="$HOME/.profile"
I tried creating a .kshrc file containing
. .profile
But none of these approaches seems to make KSH run the .profile script.
The .profile script contains the environment setup for the user I need.
How do I get an AIX implementation of KSH to run my .profile script before executing commands?
You need to specifically tell Jenkins that you want to execute the commands in a ksh shell.
By default, Jenkins runs them as sh <commands>.
Add a shebang to your shell command as the first line:
#!/bin/ksh
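A build step that both forces ksh and loads the user's environment might then look like this (a sketch; your_build_command stands in for the real build steps):
#!/bin/ksh
# load the environment variables defined in the user's .profile
. $HOME/.profile
# then run the actual build commands
your_build_command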
Most shells don't source their .profile files on non-interactive sessions. A simple solution is to source the .profile yourself as part of the command you are sending.
So instead of
yourcommand1; yourcommand2
you should send
. ~/.profile; yourcommand1; yourcommand2
over ssh
UPDATE after reading the comment about Jenkins controlling the ssh command
In the case your ssh command is performed by Jenkins you should have a look at https://wiki.jenkins-ci.org/display/JENKINS/SSH+Slaves+plugin, especially the 'Login profile files' paragraph.
I'd say one of these solutions is best:
Set all environment variables from Jenkins using the node's configure page. Install the EnvInject plugin to do this.
Write a wrapper around the java command on the slave that sources your profile script, and adjust the JavaPath (also on the node's configure page) to point to that wrapper; see the sketch below.
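A minimal sketch of such a wrapper, assuming /usr/java/bin/java is where the real JVM lives on the slave (adjust the path for your AIX box) and the wrapper itself is what JavaPath points to:
#!/bin/ksh
# load the user's environment, then hand off to the real java binary
# /usr/java/bin/java is an assumed location; point this at the real JVM
. $HOME/.profile
exec /usr/java/bin/java "$@"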
The only way I know of for setting environment variables that will apply for non-interactive shells on AIX is via /etc/environment. I believe this is the correct place, but it will of course then apply to all users and all shells.
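Entries in /etc/environment are plain NAME=value lines, for example (MY_BUILD_HOME is only an illustrative variable name):
MY_BUILD_HOME=/opt/build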
I'm trying to get my project running on Dancer (Perl 5.16.3 and CentOS 5.10), and so far it has been a pleasant experience - until I tried to deploy it on the server.
I've decided to do the simplest thing and run it as a CGI app with the help of the default dispatch.cgi script from the Dancer distribution.
I used the default Apache settings from the Dancer::Deployment manual, but something went wrong. After a day of struggling with a half-working project, I deduced the following strange thing: while running through dispatch.cgi, my project is able to read from the sqlite database, but it cannot write into the database, so Dancer::Session::DBI was not working properly, hence the problems.
If I run the project with the stand-alone app.pl or with
plackup -E production -p 80 bin/app.pl
it works fine and is able to insert data into the DB. I've tried changing the permissions on the sqlite db file to 0666, but it didn't help.
So why is there a problem with sqlite when running as CGI, and how do I fix it?
Well, it was a permissions problem, but not on the database file - on the directory containing that file!
Apparently, sqlite creates some temp files while updating databases.
Beware.
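For example, giving the web server user write access to the directory holding the database might look like this (a sketch; apache and /var/www/myapp/db are assumed names for the CGI user and the database directory):
chown apache:apache /var/www/myapp/db
chmod 775 /var/www/myapp/db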
I deploy some .bteq and .sql scripts on a Teradata database. To do this, I use a client on my desktop called BTEQWin version 13.10.0.03.
I get the .bteq/.sql files from a version control tool like PVCS/SVN etc., and once the files are in my workspace folder, all I do is drag and drop them from the Windows file browser into the BTEQWin client (which I connect to a database before the drag/drop, to run those scripts).
Now, I have to automate this whole process in UNIX.
I have written a shell (ksh/bash) script which gets all the .bteq/.sql files from a TAG/LABEL in the version control tool into a given UNIX folder. Now, all I need to do is pass these files one by one (I'll take care of the order) to a Teradata client.
My question: what client do I need to tell the Unix admin team to install on the Unix server, so that I can run something like the following:
someTeraDataCommand -u username -p password -h hostname -d database -f filenametoexectue | tee output_filename.log
Where someTeraDataCommand is the client/executable which will let me run Teradata scripts (like I was doing with BTEQWin on my desktop in a GUI session). Other parameters can be the username, password, which database to connect to on which server, and which file to run, or the file could be passed to the command using the "<" operator on the command line.
Any idea what client?
Assuming the complete Teradata Tools and Utilities package is installed on your UNIX server (which will have the connectivity tools to talk to Teradata), you should have access to bteq from the command line. Something like this:
bteq < script_file > output_file
Your script file should contain a .LOGON statement to establish the connection:
.LOGON yourTDPID/your_account,your_pw
You might also need to use other commands to set your default database or non-default session values.
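A minimal script_file might then look something like this (yourTDPID, your_account and your_database are placeholders; the DATABASE statement sets the default database for the session):
.LOGON yourTDPID/your_account,your_pw
DATABASE your_database;
-- the SQL from your .sql/.bteq files goes here
SELECT CURRENT_TIMESTAMP;
.QUIT 0;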
Another option would be to combine the SQL and call to BTEQ in a Korn shell script:
#!/usr/bin/ksh
##############
SHELL_NAME=`basename $0`
PRG_NAME=`basename ${SHELL_NAME} .ksh`
LOG_FILE=${PRG_NAME}.log
OUT_FILE=${PRG_NAME}.out
#
bteq <<EOBTQ > ${LOG_FILE} 2>&1
.LOGON {TDPID}/{USERID},{PWD};
--.RUN file=${LOGON}
/* Add your SQL/BTEQ commands here */
.QUIT 0;
EOBTQ
Edit
The double hyphen indicates a single-line comment. Typically, in a UNIX environment, you do not leave your password in plain text in a KSH script. The .RUN command would instead reference a text file in a sufficiently secure location containing the .LOGON {TDPID}/{USERID},{PWD}; command.
The .RUN command in BTEQ allows you to reference another text file containing a series of valid BTEQ commands that you want to run in the current BTEQ script.
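To feed your exported files to bteq one by one, as described in the question, a loop along these lines might work (a sketch; /path/to/sql_folder and logon.txt are assumed names, the logon file holds the .LOGON command as described above, and ordering of the files is left to you):
#!/usr/bin/ksh
# run each exported file through bteq, one at a time
for f in /path/to/sql_folder/*.sql
do
bteq <<EOBTQ > "${f}.log" 2>&1
.RUN file=/path/to/logon.txt
.RUN file=${f}
.QUIT 0;
EOBTQ
done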
The easiest way to set up the Solaris TTU is to request root sudo and run an interactive installation with the defaults as root. That would cure all client issues.