I have tried the following script to connect to the DB.
This approach works in Oracle, but I am not sure why it is not working for Sybase.
#!/usr/bin/perl
use strict;
use warnings;
my $result = qx { isql -Uxx -Pxxxxxxx -Dxxxxx <<EOF
select count(*) from XXX;
exit;
EOF
};
print "result is :";
print $result;
print "\nbye bye\n";
I am trying to connect to a Sybase DB without DBI.
Please don't tell me to use DBI. I know we can use DBI for this, but unfortunately DBI is not installed on my server and I am not an admin, so I have no authorization to install Perl modules; whatever is given to me I have to make full use of. But that's off topic here.
I repeat the question:
How do I connect to a Sybase DB from Perl without using DBI?
The output of the above script is:
> temp.pl
result is :
bye bye
When I manually execute the same thing:
> isql -Uxx -Pxxxxxxxx -Dxxxxx
1> select count(*) from XXX
2> go
-----------
26
(1 row affected)
1> exit
>
I found the solution:
It is because of the semicolons and the missing go statement. I modified the script as below and it's working now.
#!/usr/bin/perl
use strict;
use warnings;
my $result = qx { isql -Uxx -Pxxxxxxx -Dxxxx <<EOF
set nocount on
select count(*) from XXX
go
exit
EOF
};
print "result is :";
print $result;
print "\nbye bye\n";
Related
$START_PRI_AA=1;
$expression = "$SQLPLUS_DIR\\$SQLPLUS_EXEC -S $PLANSTAGE_DB_USER/$PLANSTAGE_DB_PASSWORD\#$PLANSTAGE_DB_ALIAS
'set pagesize 0
set feedback off
set verify off
set heading off
set echo off
select STATUS from jdaabppd.DFXHA_ENGINE_STATUS where ENGINE_NAME ='$ENV{PRI_AA_ENGINE}';
exit;
/'
";
print "\n\n expression is $expression \n\n";
$status = system($expression);
print "$status\n\n";
Why use SQLPLUS from within Perl, when Perl already has excellent modules to interact with databases?
First of all you need to install the modules DBI and DBD::Oracle, and then you can do something like:
use strict;
use warnings;
use DBI;
my $dbh = DBI->connect(
"dbi:Oracle:host=locahost;port=1521;sid=$PLANSTAGE_DB_ALIAS", # DSN of the database to connect
$PLANSTAGE_DB_USER, # username
$PLANSTAGE_DB_PASSWORD, # password
{ RaiseError => 1 } # die on any DBI error
);
my ($status) = $dbh->selectrow_array(
"select STATUS from jdaabppd.DFXHA_ENGINE_STATUS where ENGINE_NAME ='?", # your sql query
undef, # no specific options needed
$ENV{PRI_AA_ENGINE} # bind value
);
You may need to adjust the DSN according to your use case, I made a few assumptions based on the code snippet you showed. Read the DBI docs for more details.
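To round out the sketch (still using the same assumed connection details), you would then use the fetched value and close the handle:
# $status is undef if no row matched the bind value
print defined $status ? "engine status: $status\n" : "no matching row found\n";
$dbh->disconnect;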
I am unfamiliar with the Linux environment, so do pardon me if I make any mistakes; do comment if anything needs clarifying.
I have created a simple Perl script. It creates a .sql file and, as shown, executes the statements in that file to insert rows into the database.
#!/usr/bin/perl
use strict;
use warnings;
use POSIX 'strftime';
my $SQL_COMMAND;
my $HOST = "i";
my $USERNAME = "need";
my $PASSWORD = "help";
my $NOW_TIMESTAMP = strftime '%Y-%m-%d_%H-%M-%S', localtime;
open my $out_fh, '>>', "$NOW_TIMESTAMP.sql" or die 'Unable to create sql file';
printf {$out_fh} "INSERT INTO BOL_LOCK.test(name) VALUES ('wow');";
sub insert()
{
my $SQL_COMMAND = "mysql -u $USERNAME -p'$PASSWORD' ";
while( my $sql_file = glob '*.sql' )
{
my $status = system ( "$SQL_COMMAND < $sql_file" );
if ( $status == 0 )
{
print "pass";
}
else
{
print "fail";
}
}
}
insert();
This works if I execute it while I am logged in as a user (I do not have admin access). However, when I set a cron job to run this file, let's say at 10.08 am, by using the following line (in crontab -e):
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl > /dev/null 2>&1
I know the script is being executed, as the .sql file is created. However, no new rows are inserted into the database after 10.08 am. I've searched for solutions and some have suggested using the DBI module, but it's not available on the server.
EDIT: Didn't manage to solve it in the end. A root/admin account was used to execute the script, so that "solved" the problem.
First things first, get rid of the > /dev/null 2>&1 at the end of your crontab entry (at least temporarily) so you can actually see any errors that may be occurring.
In other words, change it temporarily to something like:
08 10 * * * perl /opt/lampp/htdocs/otpms/Data_Tsunami/scripts/test.pl >/tmp/myfile 2>&1
Then you can examine the /tmp/myfile file to see what's being output.
The most likely case is that mysql is not actually on the path in your cron job, because cron itself gives a rather minimal environment.
To fix that problem (assuming that's what it is), see this answer, which gives some guidelines on how best to expand the cron environment to give you what you need. That will probably just involve adding the MySQL executable directory to your PATH variable.
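For instance (purely a sketch, assuming a XAMPP-style install where the mysql client lives in /opt/lampp/bin), you could instead prepend that directory to PATH at the top of the Perl script itself:
# Assumption: the mysql client is in /opt/lampp/bin; cron's default PATH is
# minimal, so prepend that directory before the mysql command is run
$ENV{PATH} = "/opt/lampp/bin:$ENV{PATH}";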
The other thing you may want to consider is closing the out_fh file before trying to pass it to mysql - if the buffers haven't been flushed, it may still be an empty file as far as other processes are concerned.
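In the script that is one line (a minimal sketch reusing the question's variable and subroutine names), placed before the insert() call:
# Flush and close the generated .sql file so mysql sees its full contents
close $out_fh or die "Unable to close sql file: $!";
insert();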
The expression glob(".* *") matches all files in the current working directory.
- http://perldoc.perl.org/functions/glob.html
You should not rely on the working directory in a cron job. If you want to use a glob (or any file operation) with a relative path, set the working directory with chdir first.
Source: http://www.perlmonks.org/bare/?node_id=395387
So if your working directory is, for example, /home/user, you should insert
chdir('/home/user/');
before the while loop, i.e.:
sub insert()
{
my $SQL_COMMAND = "mysql -u $USERNAME -p'$PASSWORD' ";
chdir('/home/user/');
while( my $sql_file = glob '*.sql' )
{
...
Replace /home/user with wherever your .sql files are being created.
It's better to do as much processing within Perl as possible. It avoids the overhead of spawning a separate shell process and leaves everything under the control of the program, so that you can handle any errors much more simply.
Database access from Perl is done using the DBI module. This program demonstrates how to achieve what your mysql-based script does, using DBI instead. As you can see, it's also much more concise.
#!/usr/bin/perl
use strict;
use warnings;
use DBI;
my $host = "i";
my $username = "need";
my $password = "help";
my $dbh = DBI->connect("DBI:mysql:database=test;host=$host", $username, $password);
my $insert = $dbh->prepare('INSERT INTO BOL_LOCK.test(name) VALUES (?)');
my $rv = $insert->execute('wow');
print $rv ? "pass\n" : "fail\n";
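If you want DBI to raise an exception on any database error instead of returning undef, the RaiseError attribute is the usual idiom; a sketch of the same connect call:
# Identical connection, but any DBI error now dies with a message
my $dbh = DBI->connect("DBI:mysql:database=test;host=$host",
                       $username, $password, { RaiseError => 1 });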
I'm hoping someone can help with applying the output from a db2 command to a variable to use later on in a script.
So far I am at...
db2 "connect to <database> user <username> using <password>"
while read HowMany ;
do
Counter=$HowMany
echo $HowMany
done < <(db2 -x "SELECT COUNT(1) FROM SYSCAT.COLUMNS WHERE TABNAME = 'TableA' AND TABSCHEMA='SchemaA' AND GENERATED = 'A'")
When trying to reference $Counter outside of the while loop, it returns SQL1024N A database connection does not exist. SQLSTATE=08003 as does the echo $HowMany
I've tried another method using a pipe, which makes $HowMany show the correct value, but as that runs in a sub-shell, the value is lost afterwards.
I'd rather not use temp files (and have to remove them) if possible, as I don't like leftover files if a script aborts at any point.
The DB2 CLP on Linux and UNIX can handle command substitution without losing its database connection context, making it possible to capture query results into a local shell variable or treat it as an inlined block of text.
#!/bin/sh
# This script assumes the db2profile script has already been sourced
db2 "connect to <database> user <username> using <password>"
# Backtick command substitution is permitted
HowMany=`db2 -x "SELECT COUNT(1) FROM SYSCAT.COLUMNS WHERE TABNAME = 'TableA' AND TABSCHEMA='SchemaA' AND GENERATED = 'A'"`
# This command substitution syntax will also work
Copy2=$(db2 -x "SELECT COUNT(1) FROM SYSCAT.COLUMNS WHERE TABNAME = 'TableA' AND TABSCHEMA='SchemaA' AND GENERATED = 'A'")
# One way to get rid of leading spaces
Counter=`echo $HowMany`
# A while loop that is fed by process substitution cannot use
# the current DB2 connection context, but combining a here
# document with command substitution will work
while read HowMany ;
do
Counter=$HowMany
echo $HowMany
done <<EOT
$(db2 -x "SELECT COUNT(1) FROM SYSCAT.COLUMNS WHERE TABNAME = 'TableA' AND TABSCHEMA='SchemaA' AND GENERATED = 'A'")
EOT
As you have found, a DB2 connection in one shell is not available to sub-shells. You could use a sub-shell, but you'd have to put the CONNECT statement in that sub-shell.
So it's more of a simple rewrite, and don't use a sub-shell:
db2 "connect to <database> user <username> using <password>"
db2 -x "SELECT COUNT(1) FROM SYSCAT.COLUMNS WHERE TABNAME = 'TableA' AND TABSCHEMA='SchemaA' AND GENERATED = 'A'" | while read HowMany ; do
Counter=$HowMany
echo $HowMany
done
Something is going wrong with the execute statement. It just seems to hang forever when I run it from the command prompt. It doesn't die either. Does execute maybe need parameters?
#!/usr/bin/perl
use DBI;
use Data::Dumper;
$dbh = DBI->connect('DB', 'NAME', 'PASS',{ LongReadLen => 1000000} ) or die 'Error: cant connect to db';
$st= "update table set the_id = 7 where mid = 23 and tid = 22";
my $UpdateRecord = $dbh->prepare($st) or die "Couldn't prepare statement: $st".$dbh->errstr;
$UpdateRecord->execute() or die "can not execute $UpdateRecord".$dbh->errstr;
$UpdateRecord->finish;
$dbh->disconnect();
EDIT:
I tried binding in execute as well as using bind_param(), and it's still hanging up.
You need a do instead of a prepare:
my $UpdatedRecord = $dbh->do($st) or die "Statement fails: $st".$dbh->errstr;
From DBI:
This method is typically most useful for non-SELECT statements that either cannot be prepared in advance (due to a limitation of the driver) or do not need to be executed repeatedly. It should not be used for SELECT statements because it does not return a statement handle (so you can't fetch any data).
Also, it's always better to explicitly load the DB driver module you are using, right after the use DBI; statement:
use DBD::Oracle;
Also add
use strict;
use warnings;
The problem was that I had locked a bunch of objects by failing to put the disconnect in before I ran it once... yeah, don't do that.
We have a Perl script which runs a SQL statement and puts data into a table.
Now, instead of supplying a single SQL statement, we want to put a bunch of them together in a .sql file and pass that in. We know that our program will fail because it expects a single SQL statement, not a bunch of them (and from a .sql file at that). How do we make it work with a .sql file containing multiple INSERT statements? We are using the DBI package.
A small snippet of code:
$sth = $dbh->prepare("/home/user1/tools/mytest.sql");
$sth->execute || warn "Couldn't execute statement";
$sth->finish();
There is a sort of workaround for DDL. You need to slurp the SQL file first and then enclose its contents in BEGIN ... END; keywords. Like:
sub exec_sql_file {
    my ($dbh, $file) = @_;
    # Slurp the whole file into a single string
    my $sql = do {
        open my $fh, '<', $file or die "Can't open $file: $!";
        local $/;    # undef the input record separator so <$fh> reads everything
        <$fh>
    };
    # Wrap the statements in an anonymous block so they run as one unit
    $dbh->do("BEGIN $sql END;");
}
This subroutine allows you to run DDL (SQL) scripts with multiple statements inside (e.g. database dumps).
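A hypothetical call, assuming $dbh is an already connected handle and reusing the file path from the question:
# Run every statement in the file as a single anonymous block
exec_sql_file($dbh, '/home/user1/tools/mytest.sql');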
Not exactly sure what you want...
Once you create a DBI object, you can use it over and over again. Here I'm reading SQL statement after SQL statement from a file and processing each and every one in order:
use DBI;
my $sqlFile = "/home/user1/tools/mytest.sql";
my $dbh = DBI->connect($connect, $user, $password)
    or die("Can't access db");
# Open the file that contains the various SQL statements
# Assuming one SQL statement per line
open (SQL, "$sqlFile")
or die("Can't open file $sqlFile for reading");
# Loop though the SQL file and execute each and every one.
while (my $sqlStatement = <SQL>) {
    my $sth = $dbh->prepare($sqlStatement)
        or die("Can't prepare $sqlStatement");
    $sth->execute()
        or die("Can't execute $sqlStatement");
}
Notice that I'm putting the SQL statement in the prepare and not the file name that contains the SQL statement. Could that be your problem?
You don't need Perl for this at all. Just use the mysql command line client:
mysql -h [hostname] -u[username] -p[password] [database name] < /home/user1/tools/mytest.sql
Replace the [variables] with your information.
Note there is no space after -u or -p. If your MySQL server is running on the same machine, you can omit -h [hostname] (it defaults to localhost).
Here is how I've done it. In my case I don't assume one SQL statement per line, and I think my example is a bit better :)
sub get_sql_from_file {
    open my $fh, '<', shift or die "Can't open SQL file for reading: $!";
    local $/;    # slurp the whole file
    return <$fh>;
}
my $SQL = get_sql_from_file("SQL/file_which_holds_sql_statements.sql");
my $sth1 = $dbh1->prepare($SQL);
$sth1->execute();