Prepared Statement on PostgreSQL in Rails - sql

Right now I am in the middle of migrating from SQLite to PostgreSQL and I came across this problem. The following prepared statement works with SQLite:
id = 5
st = ActiveRecord::Base.connection.raw_connection.prepare("DELETE FROM my_table WHERE id = ?")
st.execute(id)
st.close
Unfortunately it is not working with PostgreSQL - it throws an exception at line 2.
I was looking for solutions and came across this:
id = 5
require 'pg'
conn = PG::Connection.open(:dbname => 'my_db_development')
conn.prepare('statement1', 'DELETE FROM my_table WHERE id = $1')
conn.exec_prepared('statement1', [ id ])
This one fails at line 3. When I print the exception like this
rescue => ex
ex contains this
{"connection":{}}
Executing the SQL on the command line works. Any idea what I am doing wrong?
Thanks in advance!

If you want to use prepare like that then you'll need to make a couple of changes:
The PostgreSQL driver wants to see numbered placeholders ($1, $2, ...), not question marks, and you need to give your prepared statement a name:
ActiveRecord::Base.connection.raw_connection.prepare('some_name', "DELETE FROM my_table WHERE id = $1")
The calling sequence is prepare followed by exec_prepared:
connection = ActiveRecord::Base.connection.raw_connection
connection.prepare('some_name', "DELETE FROM my_table WHERE id = $1")
st = connection.exec_prepared('some_name', [ id ])
The above approach works for me with ActiveRecord and PostgreSQL; your PG::Connection.open version should also work if you're connecting properly.
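For reference, the prepare/exec_prepared pair corresponds roughly to this at the SQL level - a sketch you can try in psql (the statement name is arbitrary, and it assumes id is an integer column):
PREPARE delete_by_id (integer) AS
  DELETE FROM my_table WHERE id = $1;
EXECUTE delete_by_id(5);
DEALLOCATE delete_by_id;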
Another way is to do the quoting yourself:
conn = ActiveRecord::Base.connection
conn.execute(%Q{
  delete from my_table
  where id = #{conn.quote(id)}
})
That's the sort of thing that ActiveRecord is usually doing behind your back.
Directly interacting with the database tends to be a bit of a mess with Rails since the Rails people don't think you should ever do it.
If you really are just trying to delete a row without interference, you could use delete:
delete()
[...]
The row is simply removed with an SQL DELETE statement on the record’s primary key, and no callbacks are executed.
So you can just say this:
MyTable.delete(id)
and you'll send a simple delete from my_tables where id = ... into the database.

Related

Dealing with multiple output results in UPSERT query [SQL]

I'm trying to do an update query in which a single row in the table is updated and, if nothing matched and was updated, a new row is inserted. In each case, I need the query to return the ID of the affected row.
The issue I'm having is that this query returns two separate result sets when the insert case is reached, one for each OUTPUT clause (the first empty, the second containing the ID). I'm running this query using SQLAlchemy in Python and I'm only able to see the first result set, which is empty.
UPDATE [Rights]
SET accessLevel = :access_level
OUTPUT inserted.rightsID
WHERE principal = :principal and [function] = :function
IF @@ROWCOUNT = 0
INSERT INTO Rights(principal, [function], accessLevel)
OUTPUT inserted.rightsID
VALUES(:principal, :function, :access_level)
And I'm calling it like so:
inserted_right_id = session.execute(sql_rights_update, right).fetchall()
Can anyone recommend a way of changing the query so that I can still use the UPSERT approach but only receive one of the IDs? I was considering storing the OUTPUT values in a table and returning that, or wrapping the whole thing in a SELECT, but hopefully there's something more elegant out there.
Thanks a million.
Feeling quite dumb. I simply switched to an IF EXISTS check:
IF EXISTS(SELECT * FROM Rights WHERE principal = :principal and [function] = :function)
UPDATE ...
ELSE
INSERT ...
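Spelled out in full, a sketch along those lines (untested, using the names and SQLAlchemy-style parameters from the question) that produces a single result set either way:
IF EXISTS (SELECT * FROM Rights WHERE principal = :principal AND [function] = :function)
    UPDATE [Rights]
    SET accessLevel = :access_level
    OUTPUT inserted.rightsID
    WHERE principal = :principal AND [function] = :function
ELSE
    INSERT INTO Rights (principal, [function], accessLevel)
    OUTPUT inserted.rightsID
    VALUES (:principal, :function, :access_level)
Whichever branch runs, only one statement with one OUTPUT clause executes, so there is a single rightsID to fetch. As with the original pattern, it is not safe against concurrent writers without extra locking (e.g. WITH (UPDLOCK, HOLDLOCK) on the existence check inside a transaction).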

Update multiple values in an Oracle table using values from an APEX collection

I am using APEX collections to store some values and pass them between pages in Oracle Application Express 4.2.3.
I would like to then perform an update statement on a table called "project" with the values from the collection.
My code so far is as follows:
update project
SET name=c.c002,
description=c.c007,
start_date=c.c004,
timeframe=c.c005,
status=c.c009
FROM
apex_collections c
WHERE
c.collection_name = 'PROJECT_DETAILS_COLLECTION'
and id = :p14_id;
where :p14_id is the value of a page item.
However, I am getting the following error:
ORA-00933: SQL command not properly ended
Anyone have any idea on how to approach this?
Thanks!
The UPDATE syntax you are using is not valid in Oracle; it does not allow you to use FROM in the way you are attempting.
The simplest way to do this in Oracle would be with a subquery:
update project
set (name, description, start_date, timeframe, status) =
(select c.c002, c.c007, c.c004, c.c005, c.c009
FROM
apex_collections c
WHERE
c.collection_name = 'PROJECT_DETAILS_COLLECTION'
)
WHERE
id = :p14_id
;
Note that if the subquery returns no rows, the columns in the target table will be updated to NULL; this could be avoided by adding a similar EXISTS condition in the predicate for the update. It could also be avoided by using a MERGE statement instead of an UPDATE.
If the subquery returns multiple rows, the statement will throw an error. It looks like that should not be the case here.
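For reference, the MERGE alternative might look something like this (a sketch, untested; column mappings copied from the subquery above). Because the update is driven by the source row, nothing is changed when the collection returns no row:
MERGE INTO project p
USING (
  SELECT c.c002 AS name,
         c.c007 AS description,
         c.c004 AS start_date,
         c.c005 AS timeframe,
         c.c009 AS status
  FROM apex_collections c
  WHERE c.collection_name = 'PROJECT_DETAILS_COLLECTION'
) src
ON (p.id = :p14_id)
WHEN MATCHED THEN UPDATE SET
  p.name        = src.name,
  p.description = src.description,
  p.start_date  = src.start_date,
  p.timeframe   = src.timeframe,
  p.status      = src.status;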

Execute raw SQL using ServiceStack.OrmLite

I am working with ServiceStack.OrmLite using MS SQL Server. I would like to execute raw SQL against the database, but the original documentation only describes how to do it with a SELECT statement. That is not enough for me.
I can't find a way to run anything as simple as this:
UPDATE table1
SET column1 = 'value1'
WHERE column2 = value2
Using, for example:
var two = db.Update(@"UPDATE table1
    SET column1 = 'value1'
    WHERE column2 = value2");
Running this expression with db.Update() or db.Update<> produces incomprehensible errors like
Incorrect syntax near the keyword 'UPDATE'.
I would like to use raw SQL because my real UPDATE expression uses a JOIN.
db.Update is for updating a model or partial model, as shown in OrmLite's Documentation on Update. You can choose to use the loose-typed API to build your update statement, e.g.:
db.Update(table: "table1",
          set: "column1 = {0}".Params("value1"),
          where: "column2 = {0}".Params("value2"));
The Params extension method escapes your values for you.
Otherwise, the way to execute any arbitrary raw SQL is to use db.ExecuteSql().
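For example, the JOIN-based UPDATE mentioned in the question could be written in SQL Server syntax like this (table, column, and join names below are made up for illustration) and passed to db.ExecuteSql() as a plain string:
UPDATE t1
SET t1.column1 = 'value1'
FROM table1 t1
JOIN table2 t2 ON t2.table1_id = t1.id
WHERE t2.column2 = 'value2';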
If it is a SELECT statement and you want to execute using raw sql you can use:
List<Person> results = db.SqlList<Person>("SELECT * FROM Person WHERE Age < @age", new { age = 50 });
Reference: https://github.com/ServiceStack/ServiceStack.OrmLite#typed-sqlexpressions-with-custom-sql-apis

Can I first SELECT and then DELETE records in one t-SQL transaction?

I can't figure out if this is an acceptable operation. I need to select records from the SQL Server 2008 database and then delete them, all as a single transaction from an ASP.NET code. Note that the .NET code must be able to retrieve the data that was first selected.
Something as such:
SELECT * FROM [tbl] WHERE [id] > 6;
DELETE FROM [tbl] WHERE [id] > 6
I'm trying it with the SQL Fiddle but then if I do:
SELECT * FROM [tbl]
I get the full table as if nothing was deleted.
EDIT: As requested below, here's the full .NET code to retrieve the records:
string strSQLStatement = "SELECT * FROM [tbl] WHERE [id] > 6;" +
                         "DELETE FROM [tbl] WHERE [id] > 6";
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(strSQLStatement, conn))
{
    conn.Open();
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            // Read values
            val0 = rdr.GetInt32(0);
            val3 = rdr.GetInt32(3);
            // etc.
        }
    }
}
This will do the select and the delete simultaneously:
delete from [tbl] output deleted.* WHERE [id] > 6
It is possible to select and delete in the same transaction as long as both operations are enlisted in it.
Look at this post
Transactions in .net
The "easiest" way to achieve transactions with a compatible provider (SQL Server works great!) is to use a TransactionScope. Just make sure the scope is created before the connection is opened so that everything is correctly enlisted.
The content of the SelectStuff and DeleteStuff methods doesn't matter much - just use the same connection, don't manually mess with the connection or with transactions, and perform the SQL operations however is best.
// Notes
// - Create scope OUTSIDE/BEFORE connection for automatic enlisting
// - Create only ONE connection inside to avoid DTC and "advanced behavior"
using (var ts = new TransactionScope())
using (var conn = CreateConnection()) {
    // Make sure stuff selected is MATERIALIZED:
    // If a LAZY type (Enumerable/Queryable) is returned and used later it
    // may cause access to the connection outside of when it is valid!
    // Use "ToList" as required to force materialization of such sequences.
    var selectedStuff = SelectStuff(conn);
    DeleteStuff(conn);
    // Commit
    ts.Complete();
    // Know stuff is deleted here, and access selected stuff.
    return selectedStuff;
}
The return value from multiple SQL statements is the result of the last statement run, which in this case is the DELETE. There are no rows returned from a DELETE, so there is nothing to read for val0 and val3.
There are two solutions I can think of here:
Change your code to expressly start a transaction, perform the SELECT, read the values, and then issue the DELETE, or
SELECT into a #temp table, execute the DELETE, and then SELECT from the #temp table, do what you need to with the rows, and then DROP the temp table (see the sketch below).
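A sketch of the second option in T-SQL, reusing the table and filter from the question (untested; wrap it in a transaction if rows can change between the SELECT and the DELETE):
SELECT * INTO #deleted_rows FROM [tbl] WHERE [id] > 6;
DELETE FROM [tbl] WHERE [id] > 6;
SELECT * FROM #deleted_rows;
DROP TABLE #deleted_rows;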

MySQL: "The SELECT would examine more than MAX_JOIN_SIZE rows"

I am using PHP and MySQL. In my program there is a select query involving joins. When I run it on localhost it works fine, but when I upload it to my server and try to execute it, it generates the following error:
The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET SQL_MAX_JOIN_SIZE=# if the SELECT is okay
How can I correct this?
When using PHP, SQL_BIG_SELECTS=1 should be set in a separate query before your main query. For example:
$mysqli = new mysqli("localhost", "root", "password", "db");
$mysqli->query("SET SQL_BIG_SELECTS=1"); //Set it before your main query
$results = $mysqli->query("SELECT a, b, c FROM test");
while($row = $results->fetch_assoc()){
echo '<pre>';
print_r ($row);
echo '</pre>';
}
Try running this as a query before executing your select:
SET SQL_BIG_SELECTS=1
Is this really executing over a huge dataset? If not, this should be solved in a different way.
The parameter's effect is documented at http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_join_size.
You should filter the involved records more strictly (so there are fewer records involved in each part of the query). If possible, start with the table where you can filter out the most records with a simple WHERE clause.
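For instance (hypothetical table and column names), push the most selective condition onto the driving table so the join touches fewer rows:
SELECT t1.a, t2.b
FROM table1 t1
JOIN table2 t2 ON t2.table1_id = t1.id
WHERE t1.created_at >= '2014-01-01'
  AND t2.status = 'active';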
I've run into the same problem. It's a Drupal site, so no surprise that it fell over.
It was an old-style query, i.e. SELECT blah FROM table1, table2, table3 WHERE table1.id = table2.id AND table2.some = 'thing'
As @VolkerK says, the solution was to move the WHERE clauses that filtered the table2 results ahead of the ones matching table1 to table2 (effectively the join clauses), thus decreasing the number of records in table2 that needed to be matched against table1.
For me, the solution was to add an index on all the columns the joins used to match.
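For example (hypothetical table and column names), an index on the column the join matches against typically brings the examined-row estimate back under max_join_size:
ALTER TABLE table2 ADD INDEX idx_table2_table1_id (table1_id);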
If you are using the PDO driver, set PDO::MYSQL_ATTR_INIT_COMMAND in your driver_options array when constructing a new database handle,
like so:
$dbh = new PDO('mysql:host=xxx;port=xxx;dbname=xxx', 'xxx', 'xxx', array(PDO::MYSQL_ATTR_INIT_COMMAND => 'SET SESSION SQL_BIG_SELECTS=1'));