How to configure a timeout for an SQL query in Groovy?

How do I set a timeout for this operation?
def db = Sql.newInstance("jdbc:mysql://${mysql_host}:3306/${dbName}",
        user, pass, 'com.mysql.jdbc.Driver')
db.eachRow(query) { row ->
    // do something with the row
}

I believe the correct way would be something like this:
def sql = Sql.newInstance("jdbc:oracle:thin:@localhost:1521:XE", "user",
        "pwd", "oracle.jdbc.driver.OracleDriver")
sql.withStatement { stmt ->
    stmt.queryTimeout = 10   // timeout in seconds
}
sql.eachRow("select * from someTable") {
    println it
}
Of course, this example uses Oracle, but I hope it gives you the idea.

I believe there might not be a general answer, but rather a database/driver-specific one, set via parameters on the connection URL.
E.g. for MySQL, adding connectTimeout=something&socketTimeout=something to the connection URL might do the trick.
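For example, a minimal sketch of the MySQL variant, assuming Connector/J (where connectTimeout and socketTimeout are driver properties given in milliseconds); mysql_host, dbName, user, pass and query are the variables from the question, and the timeout values are only illustrative:

import groovy.sql.Sql

// Driver-level timeouts go on the JDBC URL (values are assumptions for illustration).
def url = "jdbc:mysql://${mysql_host}:3306/${dbName}?connectTimeout=5000&socketTimeout=30000"
def db = Sql.newInstance(url, user, pass, 'com.mysql.jdbc.Driver')

// The statement-level timeout (in seconds) from the first answer can be combined with this.
db.withStatement { stmt -> stmt.queryTimeout = 10 }

db.eachRow(query) { row ->
    // do something with the row
}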

Related

Do strings need to be escaped inside parametrized queries?

I'm discovering Express by creating a simple CRUD app without an ORM.
The issue is that I'm not able to find any record through the Model.findBy() function:
class User {
  static async findBy(payload) {
    try {
      let attr = Object.keys(payload)[0]
      let value = Object.values(payload)[0]
      let user = await pool.query(
        `SELECT * FROM users WHERE $1::text = $2::text LIMIT 1;`,
        [attr, value]
      );
      return user.rows; // empty :-(
    } catch (err) {
      throw err
    }
  }
}
User.findBy({ email: 'foo@bar.baz' }).then(console.log);
User.findBy({ name: 'Foo' }).then(console.log);
I have no issue using psql directly if I surround the value with single quotes ', like:
SELECT * FROM users WHERE email = 'foo@bar.baz' LIMIT 1;
Though that's not possible inside parametrized queries. I've tried things like '($2::text)' (and escaped variations), but that looks far from what the documentation recommends.
I must be missing something. Is the emptiness of user.rows related to the way I fetch attr and value, or is some kind of escaping required when passing string parameters?
"Answer":
As stated in the comment section, the issue isn't related to string escaping, but to dynamic column names.
Column names are identifiers, not values; query parameters can only supply values, so a column name cannot be set dynamically through a parameter.
See: https://stackoverflow.com/a/50813577/11509906
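A common workaround is to whitelist the column names you allow and interpolate only the validated identifier, while the value stays a bound parameter. A minimal sketch (the whitelist is an assumption for illustration):

const ALLOWED_COLUMNS = ['email', 'name']; // hypothetical list of searchable columns

class User {
  static async findBy(payload) {
    const attr = Object.keys(payload)[0];
    const value = Object.values(payload)[0];
    if (!ALLOWED_COLUMNS.includes(attr)) {
      throw new Error(`Unknown column: ${attr}`);
    }
    // The identifier is interpolated only after validation; the value is still parametrized.
    const result = await pool.query(
      `SELECT * FROM users WHERE ${attr} = $1::text LIMIT 1;`,
      [value]
    );
    return result.rows;
  }
}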

Retry SQL UPDATE query after waiting x seconds

I am using a RichSinkFunction to execute a SQL UPDATE query on an existing record.
This function assumes that a record already exists in the DB. However, in certain scenarios that record arrives late and is not yet there when the update runs.
To work around this lateness, I have added a Thread.sleep() to make the function wait and then retry the DB update.
Sample code provided below for reference.
class RichSinkFact extends RichSinkFunction[FulfillmentUsagesOutput] {

  private def updateFactUpcoming(
      r: FulfillmentUsagesOutput,
      schemaName: String
  ): Unit = {
    var updateStmt: PreparedStatement = null
    val sqlStatement =
      s"""
         |UPDATE $schemaName.$factUpcomingTableName
         |SET unit_id = ?
         |WHERE pledge_id = ?
         |;
         """.stripMargin
    try {
      updateStmt = connection.prepareStatement(sqlStatement)
      updateStmt.setLong(1, r.unit_id)
      updateStmt.setString(2, r.pledge_id)
      val rows = updateStmt.executeUpdate()
      if (rows == 0) {
        logger.warn(s"Retrying update for ${r}")
        // retry update
        Thread.sleep(retrySleepTime)
        val retriedRows = updateStmt.executeUpdate()
        if (retriedRows == 0) {
          // raise error
          logger.error(s"Unable to update row: ${r}")
        }
      }
    } finally {
      if (updateStmt != null) {
        updateStmt.close()
      }
    }
  }
}
Question: Since Flink already implements timers and internal time-processing functions, is this the right way of retrying a DB update?
Thanks
As you suspected, sleeping in a Flink user function can cause problems and should be avoided. In this case there is a better solution: take a look at Sink.ProcessingTimeService. It lets you register timers that invoke a callback you provide when they fire.
Thanks to David for the original idea behind this approach.
Sink.ProcessingTimeService is only present from Flink 1.12 onwards. So, for anyone on a previous version of Flink looking to implement a similar solution, ProcessingTimeCallback can be used to implement timers in a Sink application.
I have included a sample approach here:
https://gist.github.com/soumoks/f73694c64169c8b3494ba1842fa61f1b

DBArrayList to List<Map> Conversion after Query

Currently, I have a SQL query that returns information to me in a DBArrayList.
It returns data in this format : [{id=2kjhjlkerjlkdsf324523}]
For the next step, I need it to be in a List<Map> format without the id: [2kjhjlkerjlkdsf324523]
The Datatypes being used are DBArrayList, and List.
If it helps any, the next step is a function that collects the list and replaces any single quotes [SQL-injection prevention]:
listMap = listMap.collect { "'" + Util.removeSingleQuotes(it) + "'" }

public static String removeSingleQuotes(s) {
    return s ? s.replaceAll(/'/, '') : s
}
I spent this morning working on it, and I found out that I needed to collect the DBArrayList like this:
listMap = dbArrayList.collect { it.getAt('id') }
If you're in a bind like I was and constrained to a specific schema this might help, but @ou_ryperd has the correct answer!
While using a DBArrayList is not wrong, Groovy's idiom is to use the db result as a collection. I would suggest you use it that way directly from the db:
Map myMap = [:]
dbhandle.eachRow("select fieldSomeID, fieldSomeVal from yourTable;") { row ->
    myMap[row.fieldSomeID] = row.fieldSomeVal.replaceAll(/'/, '')
}
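And if what you ultimately need is the flat list of quoted values rather than a map, a minimal sketch of the same idea (assuming a groovy.sql.Sql instance named db and a column named id, as in the question; the table name is a placeholder):

// Sql.rows() returns a List<GroovyRowResult>, so no DBArrayList conversion is needed.
def ids = db.rows("select id from yourTable").collect { it.id }

// Quote each value for the next step, stripping any embedded single quotes first.
def quoted = ids.collect { "'" + it.toString().replaceAll(/'/, '') + "'" }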

Is there any way to do multiple inserts/updates in Slick?

In sql we can do something like this:
INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);
Is there any way to do multiple/bulk/batch inserts or updates in Slick?
Can we do something similar, at least using plain SQL queries?
For inserts, as Andrew answered, you use insertAll.
def insertAll(items: Seq[MyCaseClass])(implicit session: Session) = {
  items.size match {
    case s if s > 0 =>
      try {
        // baseQuery is the TableQuery object
        baseQuery.insertAll(items: _*)
      } catch {
        case e: Exception => e.printStackTrace()
      }
      Some(items(0))
    case _ => None
  }
}
For updates, you're SOL. Check out Scala slick 2.0 updateAll equivalent to insertALL? for what I ended up doing. To paraphrase, here's the code:
private def batchUpdateQuery = "update table set value = ? where id = ?"

/**
 * Dropping to JDBC b/c Slick doesn't support this batched update.
 */
def batchUpdate(batch: List[MyCaseClass])(implicit session: Session) = {
  val pstmt = session.conn.prepareStatement(batchUpdateQuery)
  batch.foreach { myCaseClass =>
    pstmt.setString(1, myCaseClass.value)
    pstmt.setString(2, myCaseClass.id)
    pstmt.addBatch()
  }
  session.withTransaction {
    pstmt.executeBatch()
  }
}
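The same drop-to-JDBC batching also covers the plain-SQL, multi-row insert case from the question; a minimal sketch mirroring the update code above (table and column names are placeholders, and the implicit Session is the same as before):

private def batchInsertQuery = "insert into tbl_name (a, b, c) values (?, ?, ?)"

// Collects many inserts into one JDBC batch instead of issuing them one by one.
def batchInsert(batch: List[(Int, Int, Int)])(implicit session: Session) = {
  val pstmt = session.conn.prepareStatement(batchInsertQuery)
  batch.foreach { case (a, b, c) =>
    pstmt.setInt(1, a)
    pstmt.setInt(2, b)
    pstmt.setInt(3, c)
    pstmt.addBatch()
  }
  session.withTransaction {
    pstmt.executeBatch()
  }
}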
In Slick, you are able to use the insertAll method for a Table. An example of insertAll is given in the Getting Started page on Slick's website.
http://slick.typesafe.com/doc/0.11.1/gettingstarted.html

PDO login script won't work

I changed this login script to PDO. Now it passes the username but gets stuck at the fetchAll line. I need help please. Thanks.
<?php
session_start();
include_once "includes/config.php";
if (isset($_POST['admin_login'])) {
    $admin_user = trim($_POST['admin_user']);
    $admin_pw = trim($_POST['admin_pw']);
    if ($admin_user == NULL OR $admin_pw == NULL) {
        $final_report .= "Please complete all the fields below..";
    } else {
        $check_user_data = $db->prepare("SELECT * FROM `admin`
            WHERE `admin_user`='$admin_user'");
        $check_user_data->execute();
        if ($check_user_data->fetchColumn() == 0) {
            $final_report .= "This admin username does not exist..";
        } else {
            $get_user_data = $check_user_data->fetchAll($check_user_data);
            if ($get_user_data['admin_pw'] == $admin_pw) {
                $start_idsess = $_SESSION['admin_user'] = "".$get_user_data['admin_user']."";
                $start_passsess = $_SESSION['admin_pw'] = "".$get_user_data['admin_pw']."";
                $final_report .= "You are about to be logged in, please wait a few moments...";
                header('Location: admin.php');
            }
        }
    }
}
?>
You're not checking the return value of prepare() or execute() for false. You need to check for SQL errors and handle them, stopping the code instead of continuing on blithely.
You're not using query parameters in the prepared statement; you're still interpolating $_POST content into the query unsafely. You're missing the benefit of switching to PDO, and leaving yourself vulnerable to SQL injection attack.
You're storing passwords in plaintext, which is unsafe. See You're Probably Storing Passwords Incorrectly.
Do you really need to SELECT * if you only use the admin_pw column? Hint: no.
PDOStatement::fetchAll() returns an array of arrays, not just one array for a row. Read the examples in the documentation for fetchAll().
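Putting those points together, a minimal sketch of a corrected flow, assuming admin_pw holds a hash created with password_hash() (table and column names are taken from the question; the error messages are placeholders):

<?php
session_start();
include_once "includes/config.php";

if (isset($_POST['admin_login'])) {
    $admin_user = trim($_POST['admin_user']);
    $admin_pw   = trim($_POST['admin_pw']);

    if ($admin_user === '' || $admin_pw === '') {
        $final_report = "Please complete all the fields below.";
    } else {
        // With PDO::ERRMODE_EXCEPTION set in config.php, failed queries throw instead of failing silently.
        // The username is bound as a parameter instead of being interpolated into the SQL.
        $stmt = $db->prepare("SELECT `admin_user`, `admin_pw` FROM `admin` WHERE `admin_user` = ? LIMIT 1");
        $stmt->execute([$admin_user]);
        $row = $stmt->fetch(PDO::FETCH_ASSOC);

        // Compare against a password_hash() value, never a plaintext password.
        if ($row && password_verify($admin_pw, $row['admin_pw'])) {
            $_SESSION['admin_user'] = $row['admin_user'];
            header('Location: admin.php');
            exit;
        } else {
            $final_report = "Invalid username or password.";
        }
    }
}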