In my perl "script" I'm collecting data and building a hashmap. The hashmap keys represent field names, and the values represent the value I want to insert into the corresponding field.
The hashmap is built, and then passed to the saveRecord() method which is supposed to build a SQL query and eventually it will execute it.
The idea here is to update the database once, rather than once per field (there are a lot of fields).
The problem: I'm having trouble passing the hashmap over to the sub and then pulling the fields and values back out of it. At this point my keys and values are blank. I suspect the data is getting lost when it is passed to the sub.
The output of the script indicates no keys and no values.
I need help passing the data to the sub in a way that lets me pull it back apart as shown, with join().
Thanks!
Code snippet:
for my $key (keys %oids) {
$thisField = $key;
$thisOID = $oids{$thisField};
# print "loop: thisoid=$thisOID field=$thisField\n";
# perform the SNMP query.
$result = getOID ($thisOID);
# extract the information from the result.
$thisResult = $result->{$thisOID};
# remove quotation marks from the data value, replace them with question marks.
$thisResult =~ s/\"|\'|/\?/g;
# TODO: instead of printing this information, pass it to a subroutine which writes it to the database (use an update statement).
# printf "The $thisField for host '%s' is '%s'.\n", $session->hostname(), $result->{$thisOID};
# add this key/value pair to the mydata hashmap.
$mydata{$thisField} = $thisResult;
# print "$thisField=$thisResult\n";
}
# write one record update for hashmap %mydata.
saveRecord (%mydata);
# write all fields to database at once...
sub saveRecord ($) {
my $refToFields=shift;
my @fieldlist = keys %$refToFields;
my @valuelist = values %$refToFields;
my $sql = sprintf ("INSERT INTO mytable (%s) VALUES (%s)",join(",",@fieldlist), join(",",@valuelist) );
# Get ID of record with this MAC, if available, so we can perform SQL update
my $recid = getidbymac ($MAC);
print "sql=$sql\n";
# TODO: use an insert or an update based on whether recid was available...
# TODO: ID available, update the record
# TODO: ID not available, insert record let ID be auto assigned.
}
I cleaned up your code a little. Your main problem was not using a reference when calling your sub. Also note the cleaned-up regex in the comment:
Code:
use strict;
use warnings;
# $thisResult =~ s/["']+/?/g;
my %mydata = ( 'field1' => 12, 'field2' => 34, );
saveRecord (\%mydata); # <-- Note the added backslash
sub saveRecord {
my $ref = shift;
my $sql = sprintf "INSERT INTO mytable (%s) VALUES (%s)",
join(',', keys %$ref),
join(',', values %$ref);
print "sql=$sql\n";
}
Output:
sql=INSERT INTO mytable (field1,field2) VALUES (12,34)
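If you go on to execute this, note that joining raw values into the SQL string leaves you open to quoting and injection problems. Here is a minimal sketch of the same insert using DBI placeholders, assuming a connected handle $dbh (the DSN below is hypothetical):
use strict;
use warnings;
use DBI;
# Hypothetical connection details; substitute your own DSN and credentials.
my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass', { RaiseError => 1 });
my %mydata = ( 'field1' => 12, 'field2' => 34, );
my @fields       = keys %mydata;
my $placeholders = join ',', ('?') x @fields;
my $sql          = sprintf 'INSERT INTO mytable (%s) VALUES (%s)',
                           join(',', @fields), $placeholders;
# The values travel separately from the SQL, so DBI quotes and escapes them.
$dbh->do($sql, undef, @mydata{@fields});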
I have a SQL INSERT statement in PDO like:
INSERT INTO property (id,name,address...) VALUES (:id,:name,:address...)
And I want to do an UPDATE statement with the same fields. The problem is that I have 150 fields and about 3 or 4 different statements, so writing the update syntax by hand would probably take a lot of time and produce a lot of mistakes. Is there any "automatic" way to convert it?
Thank you a lot.
The way I would do this is to have a function which dynamically builds the query based on key/value pairs passed in an array:
function updateTable($table, $values)
{
    // Set the base query
    $query = 'UPDATE ' . $table . ' SET ';
    // Build the query from the key/value pairs
    $pairs = array();
    foreach ($values as $key => $data) {
        $pairs[] = $key . ' = ' . $data;
    }
    $query .= implode(', ', $pairs);
    // Execute your query here
    // ...
}
Obviously you would need to bind your PDO parameters on each iteration of the loop, but I wanted to give you the basic layout of a loop to handle what you want to achieve. You could then call it like this:
updateTable('Products', array('product_name' => 'Apple', 'product_price' => 100.00));
This would then build the query:
UPDATE Products SET product_name = 'Apple', product_price = 100.00
You could easily extend this query to provide a WHERE parameter so you can refine your UPDATE query. Please remember this is currently insecure, so spend some time implementing proper sanitisation of the variables before committing to the DB!
Hope this helps.
How do I bind a variable to a SQL set for an IN query in Perl DBI?
Example:
my @nature = ('TYPE1','TYPE2'); # This is normally populated from elsewhere
my $qh = $dbh->prepare(
"SELECT count(ref_no) FROM fm_fault WHERE nature IN ?"
) || die("Failed to prepare query: $DBI::errstr");
# Using the array here only takes the first entry in this example; using an array ref gives no result
# bind_param and named bind variables gives similar results
$qh->execute(@nature) || die("Failed to execute query: $DBI::errstr");
print $qh->fetchrow_array();
The code as above returns only the count for TYPE1, while the required output is the sum of the counts for TYPE1 and TYPE2. Replacing the bind entry with a reference to @nature (\@nature) results in 0 results.
The main use-case for this is to allow a user to check multiple options using something like a checkbox group and it is to return all the results. A work-around is to construct a string to insert into the query - it works, however it needs a whole lot of filtering to avoid SQL injection issues and it is ugly...
In my case the database is Oracle; ideally I want a generic solution that isn't affected by the database.
There should be as many ? placeholders as there are elements in @nature, i.e. IN (?,?,...):
my #nature = ('TYPE1','TYPE2');
my $pholders = join ",", ("?") x @nature;
my $qh = $dbh->prepare(
"SELECT count(ref_no) FROM fm_fault WHERE nature IN ($pholders)"
) or die("Failed to prepare query: $DBI::errstr");
Currently I have a Perl script that accesses our database, performs certain queries and prints output to the terminal. Instead, I would like to output the results into a template LaTeX file before generating a PDF. For most of my queries I pull out numbers and store these as scalar variables (e.g. how often a particular operator carries out a given task):
foreach $op (@operator) {
$query = "SELECT count(task_name) FROM table WHERE date <= '$date_stop' and
date >= '$date_start' and task=\'$operator[$index]\';";
#execute query
$result=$conn->exec($query);
$conres = $conn->errorMessage;
if ($result->resultStatus eq PGRES_TUPLES_OK) {
if($result->ntuples > 0) {
($task[$index]) = $result->fetchrow;
}
printf("$operator[$index] carried out task: %d\n", $task[$index]);
} else {
die "Failed.\n$conres\n\n";
exit -1;
}
$index++;
}
printf("**********************************\n\n");
In the final report I will summarise how many times each operator completed each task in a table. In addition to this there will also be some incidents which must be reported. I can print these easily to the terminal using a command such as
$query = "SELECT operator, incident_type from table_name WHERE incident_type = 'Y'
and date <= '$date_stop' and date >= '$date_start';";
$result=$conn->exec($query);
$conres = $conn->errorMessage;
if ($result->resultStatus eq PGRES_TUPLES_OK) {
if($result->ntuples > 0) {
$result->print(STDOUT, 1, 1, 0, 0, 0, 1, "\t", "", "");
}
} else {
die "Failed.\n$conres\n\n";
exit -1;
}
An example of the output of this command is
operator | incident_type
-----------------------------
AB | Incomplete due to staff shortages
-------------------------------
CD | Closed due to weather
-----------------------------
How can I make my perl script pass the operator names and incidents into a string array rather than just sending the results to the terminal?
You should consider updating your script to use DBI. This is the standard for database connectivity in Perl.
DBI has a built in facility for inserting parameters into a query string. It is safer and faster than manually creating the string yourself. Before the loop, do this once:
#dbh is a database handle that you have already opened.
my $query = $dbh->prepare(
"SELECT count(task_name) FROM table WHERE date<=? and date>=? and task=?"
);
Then within the loop, you only have to do this each time:
$query->execute($date_stop,$date_start,$op);
Note that the parameters you pass to execute automatically get inserted in place of the ?'s in your statement. It handles the quoting for you.
Also in the loop, after you execute the statement, you can get the results like this:
my $array_ref = $query->fetchall_arrayref;
Now all of the rows are stored in a two-dimensional array structure. $array_ref->[0][0] would get the first column of the first row returned.
See the DBI documentation for more information.
As others have mentioned, there are quite a few other mistakes in your code. Make sure you start with use strict; use warnings;, and ask more questions if you need further help!
Lots of good feedback on your script, but nothing about your actual question.
How can I make my perl script pass the operator names and incidents into a string array rather than just sending the results to the terminal?
Have you tried creating an array and pushing items onto it?
my @array;
push (@array, "foo");
Or using nested arrays:
push (@array, ["operator", "incident"]);
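Applied to your incident query, a minimal sketch with DBI (assuming a connected handle $dbh and the table/column names from your question) could collect the rows like this:
my $sth = $dbh->prepare(
    "SELECT operator, incident_type FROM table_name
     WHERE incident_type = 'Y' AND date <= ? AND date >= ?"
);
$sth->execute($date_stop, $date_start);
my @incidents;
while ( my ($operator, $incident) = $sth->fetchrow_array ) {
    # Each element is a two-element array ref: [operator, incident].
    push @incidents, [ $operator, $incident ];
}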
With a simple model like this
class Model < ActiveRecord::Base
# ...
end
we can do queries like this
Model.where(["name = :name and updated_at >= :D", \
{ :D => (Date.today - 1.day).to_datetime, :name => "O'Connor" }])
The values in the hash will be substituted into the final SQL statement with proper escaping, depending on the underlying database engine.
I would like to know a similar feature for SQL execution like:
ActiveRecord::Base.connection.execute( \
  ["update models set name = :name, hired_at = :D where id = :id;", \
   { :id => 73465, :D => DateTime.now, :name => "O'My God" }] \
) # THIS CODE IS A FANTASY. NOT WORKING.
(Please do not solve the example by loading a Model object, modifying it and then saving it! The example is only an illustration of the feature I would like to have / know about. Concentrate on the subject!)
The original problem is that I want to insert a large amount of data (many thousands of lines) into the database. I want to use some features of the SQL abstraction of the ActiveRecord framework, but I don't want to use model objects based on ActiveRecord::Base because they are damn slow! (8 queries per second for my current problem.)
query = ActiveRecord::Base.connection.raw_connection.prepare("INSERT INTO users (name) VALUES(:name)")
query.execute(:name => 'test_name')
query.close
Extending @peufeu's solution with a concrete code example for bulk insert:
users_places = []
users_values = []
timestamp = Time.now.strftime('%Y-%m-%d %H:%M:%S')
params[:users].each do |user|
users_places << "(?,?,?,?)"
users_values << user[:name] << user[:punch_line] << timestamp << timestamp
end
bulk_insert_users_sql_arr = ["INSERT INTO users (name, punch_line, created_at, updated_at) VALUES #{users_places.join(", ")}"] + users_values
begin
sql = ActiveRecord::Base.send(:sanitize_sql_array, bulk_insert_users_sql_arr)
ActiveRecord::Base.connection.execute(sql)
rescue
"something went wrong with the bulk insert sql query"
end
Here is the reference to the sanitize_sql_array method in ActiveRecord::Base; it generates the proper query string by escaping the single quotes in the strings. For example, the punch_line "Don't let them get you down" will become "Don\'t let them get you down".
Yes you could do raw SQL, but checkout the ar-extensions gem that helps with batch inserts:
https://github.com/zdennis/ar-extensions
Here's a post on it, and various other techniques:
http://www.coffeepowered.net/2009/01/23/mass-inserting-data-in-rails-without-killing-your-performance/
For INSERTs, batching them using a long VALUES clause (as shown by Simon's link) is the fastest way (unless you want to generate a text file and load it in your database with MySQL's LOAD DATA INFILE). But you have to be very careful about escaping your text values (which is not done in the example).
I was asking "what database are you using" because it does matter for mass UPDATEs.
For instance, you can do this on Postgres (and, I believe, on SQL Server with "columnX" changed to "colX"):
UPDATE foo
SET bar = v.column2
FROM (VALUES (1,2),(3,4),... long list) v
WHERE foo.id = v.column1
And you can update a load of rows using a single statement, very fast.
If you don't need Ruby to perform some Ruby-specific magic on your data, the fastest way to transfer data from one DB to a different one is to export as a text file (CSV or tab separated), load it on the other DB (LOAD DATA INFILE on MySQL), perhaps in a temporary table, and bulk process using SQL.
EDIT: Here's how I do this in Python:
placeholders = []
values = []
for row in tuple_list:
    placeholders.append("(?,?,?,?)")
    values.extend(row)
sql = "INSERT INTO foo (column list) VALUES " + ",".join(placeholders)
Joining the placeholder groups with commas gives "INSERT INTO foo (column list) VALUES (?,?,?,?),(?,?,?,?),(?,?,?,?)", with the "(?,?,?,?)" repeated as many times as you have lines to insert.
Then "values" contains a list of (a1,b1,c1,d1,a2,b2,c2,d2,a3,b3,c3,d3) with an,bn,cn,dn being the tuples you want to insert for line n. Each one corresponds to a placeholder in the sql string.
Then pass this to the usual "execute query with parameters" function which will handle quoting and escaping as usual.
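The same pattern in Perl with DBI, sketched with hypothetical column names a, b, c, d (each element of @rows is an array ref of four values, and $dbh is assumed connected):
my (@placeholders, @values);
for my $row (@rows) {
    push @placeholders, '(?,?,?,?)';   # one group of placeholders per row
    push @values, @$row;               # flatten the row's values
}
my $sql = 'INSERT INTO foo (a,b,c,d) VALUES ' . join(',', @placeholders);
$dbh->do($sql, undef, @values);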
I encountered a similar issue recently when trying to insert 100K+ records into a MySQL database for a Rails 4 app using the mysql2 gem. The data included characters that had to be sanitized prior to insert.
The solution I ended up going with was a slightly modified version of Option 3 described at https://www.coffeepowered.net/2009/01/23/mass-inserting-data-in-rails-without-killing-your-performance/
Here's the relevant code block from the above link:
TIMES = 10000
inserts = []
TIMES.times do
inserts.push "(3.0, '2009-01-23 20:21:13', 2, 1)"
end
sql = "INSERT INTO user_node_scores (`score`, `updated_at`, `node_id`, `user_id`) VALUES #{inserts.join(", ")}"
The modification I made was using the public method ActiveRecord::Base.sanitize() on values that required it.
inserts = []
created = Time.now.strftime "%Y-%m-%d %H:%M:%S"
params[:audits].each do |audit|
inserts.push "(#{audit.user_id), #{created}," + ActiveRecord::Base.sanitize(audit.comment) + ", #{audit.status})"
end
sql = "INSERT INTO user_audits (`user_id`, `created_at`, `comment`, `status`) VALUES #{inserts.join(", ")}"
In the code below there is a hash which contains records with fields like name, pid, type and time1.
pid and name are repetitive fields which contain duplicates.
If a duplicate is found, I want to update the fields which need modification;
otherwise I want to insert a new record. Here name and pid have duplicates (repetitive fields).
The rest are unique. I also created the table with a unique field, Serial no. How should I go on? I have only done an insertion in this code. I don't know how to store the retrieved record into an array using Perl. Please guide me.
for my $test11 (sort keys %seen) {
my $test1 = $seen{$test11}{'name'};
my $test2 = $seen{$test11}{'pid'};
my $test3 = $seen{$test11}{'type'};
my $test4 = $seen{$test11}{'time1'};
print "$test11\t$test1$test2$test3$test4\n";
$db_handle = &getdb_handle;
$sth = $dbh->prepare("Select username,pid,sno from svn_log1");
$sth->execute() or die "SQL Error: $DBI::errstr\n";
my $ref = $sth->fetchall_arrayref();
print "hai";
print "****$ref";
$sth = $dbh->prepare("INSERT INTO svn_log1 values('$sno','$test11','$test1','$test4','$test2','$test3')");
$sth->execute() or die "SQL Error: $DBI::errstr\n";
}
I think what you're trying to say is that you don't want to try to insert some data if you already have that name/pid combination in the database, but I can't tell, so I can't help you out there.
However, here are a few things which can clear up your code. First, choose sensible variable names. Second, always, always, always use placeholders in your SQL statements to protect against SQL injection:
for my $test11 ( sort keys %seen ) {
my $name = $seen{$test11}{'name'};
my $pid = $seen{$test11}{'pid'};
my $type = $seen{$test11}{'type'};
my $time1 = $seen{$test11}{'time1'};
my $dbh = getdb_handle();
my $sth = $dbh->prepare("Select username,pid,sno from svn_log1");
$sth->execute() or die "SQL Error: $DBI::errstr\n";
my $ref = $sth->fetchall_arrayref();
# XXX why are we fetching this data and throwing it away?
$sth = $dbh->prepare("INSERT INTO svn_log1 values(?,?,?,?,?,?)");
$sth->execute( $sno, $test11, $name, $time1, $pid, $type )
or die "SQL Error: $DBI::errstr\n";
}
Assuming that you want to not insert something into the database if "$name" and "$pid" are there (and some cleanup to avoid preparing the same SQL over and over):
my $dbh = getdb_handle();
my $seen_sth = $dbh->prepare( "Select 1 from svn_log1 where username = ? and pid = ?");
# This really needs to be "INSERT INTO svn_log1 (@columns) VALUES (@placeholders)"
my $insert_sth = $dbh->prepare("INSERT INTO svn_log1 values(?,?,?,?,?,?)");
for my $test11 ( sort keys %seen ) {
my $name = $seen{$test11}{'name'};
my $pid = $seen{$test11}{'pid'};
my $type = $seen{$test11}{'type'};
my $time1 = $seen{$test11}{'time1'};
$seen_sth->execute($name, $pid) or die "SQL Error: $DBI::errstr\n";
my @seen = $seen_sth->fetchrow_array;
next if $seen[0];
$insert_sth->execute( $sno, $test11, $name, $time1, $pid, $type )
or die "SQL Error: $DBI::errstr\n";
}
That's not quite the way I would write this, but it's fairly clear. I suspect it's not really exactly what you want, but I hope it gets you closer to a solution.
You want to insert some data, but if it exists, then update the existing row?
How do you test that the data already exists in the database? Are you using username and pid?
If so, you may like to change the structure of your database:
ALTER TABLE svn_log1 ADD UNIQUE (username, pid);
This creates a composite, unique index on username and pid. This means that every username/pid combination must be unique.
This allows you to do the following:
INSERT INTO svn_log1 (username, pid, ...) VALUES (?, ?, ...) ON DUPLICATE KEY UPDATE time = NOW();
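In Perl with DBI, that statement can be prepared once with placeholders and reused; a sketch against MySQL (which supports ON DUPLICATE KEY UPDATE), trimmed to the fields from the question:
my $upsert = $dbh->prepare(
    "INSERT INTO svn_log1 (username, pid, type, time1)
     VALUES (?, ?, ?, ?)
     ON DUPLICATE KEY UPDATE type = VALUES(type), time1 = VALUES(time1)"
);
$upsert->execute($name, $pid, $type, $time1) or die "SQL Error: $DBI::errstr\n";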
What database is this?
My feeling is that you want an UPDATE or INSERT query, more commonly known as an UPSERT query.
If this is PostgreSQL you can create an upsert function to handle what you need. See the comments for a decent example. Otherwise, search Stack Overflow for "upsert" and you should find what you need.