How to use ScalaQuery to insert a BLOB field?

I'm using ScalaQuery with Scala.
If I have an Array[Byte] object, how do I insert it into the table?
object TestTable extends BasicTable[Test]("test") {
  def id = column[Long]("mid", O.NotNull)
  def extInfo = column[Blob]("mbody", O.Nullable)
  def * = id ~ extInfo <> (Test, Test.unapply _)
}
case class Test(id: Long, extInfo: Blob)
Can I instead define the column as def extInfo = column[Array[Byte]]("mbody", O.Nullable), and how do I perform UPDATE, INSERT and SELECT operations on the BLOB field?
BTW: no ScalaQuery tag

Since the BLOB field is nullable, I suggest changing its Scala type to Option[Blob], giving the following definition:
object TestTable extends Table[Test]("test") {
  def id = column[Long]("mid")
  def extInfo = column[Option[Blob]]("mbody")
  def * = id ~ extInfo <> (Test, Test.unapply _)
}
case class Test(id: Long, extInfo: Option[Blob])
You can use a raw, nullable Blob value if you prefer, but then you need to use orElse(null) on the column to actually get a null value out of it (instead of throwing an Exception):
def * = id ~ extInfo.orElse(null) <> (Test, Test.unapply _)
Now for the actual BLOB handling. Reading is straightforward: you just get a Blob object in the result, as implemented by the JDBC driver, e.g.:
Query(TestTable) foreach { t =>
  println("mid=" + t.id + ", mbody = " +
    Option(t.extInfo).map { b => b.getBytes(1, b.length.toInt).mkString })
}
If you want to insert or update data, you need to create your own BLOBs. A suitable implementation for a stand-alone Blob object is provided by JDBC's RowSet feature:
import javax.sql.rowset.serial.SerialBlob
TestTable insert Test(1, null)
TestTable insert Test(2, new SerialBlob(Array[Byte](1,2,3)))
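Updates work the same way; here is a minimal sketch, assuming ScalaQuery's query-based update support and an open session (the id value 1 is just an example):
// select the BLOB column of the row to change, then update it with a fresh SerialBlob
val q = for (t <- TestTable if t.id === 1L) yield t.extInfo
q.update(Some(new SerialBlob(Array[Byte](4, 5, 6))))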
Edit: And here's a TypeMapper[Array[Byte]] for Postgres (whose BLOBs are not yet supported by ScalaQuery):
implicit object PostgresByteArrayTypeMapper extends
    BaseTypeMapper[Array[Byte]] with TypeMapperDelegate[Array[Byte]] {
  def apply(p: BasicProfile) = this
  val zero = new Array[Byte](0)
  val sqlType = java.sql.Types.BLOB
  override val sqlTypeName = "BYTEA"
  def setValue(v: Array[Byte], p: PositionedParameters) {
    p.pos += 1
    p.ps.setBytes(p.pos, v)
  }
  def setOption(v: Option[Array[Byte]], p: PositionedParameters) {
    p.pos += 1
    if (v eq None) p.ps.setBytes(p.pos, null) else p.ps.setBytes(p.pos, v.get)
  }
  def nextValue(r: PositionedResult) = {
    r.pos += 1
    r.rs.getBytes(r.pos)
  }
  def updateValue(v: Array[Byte], r: PositionedResult) {
    r.pos += 1
    r.rs.updateBytes(r.pos, v)
  }
  override def valueToSQLLiteral(value: Array[Byte]) =
    throw new SQueryException("Cannot convert BYTEA to literal")
}

I'm just posting updated code for newer Scala and ScalaQuery versions; maybe it will save somebody some time:
object PostgresByteArrayTypeMapper extends
    BaseTypeMapper[Array[Byte]] with TypeMapperDelegate[Array[Byte]] {
  def apply(p: org.scalaquery.ql.basic.BasicProfile) = this
  val zero = new Array[Byte](0)
  val sqlType = java.sql.Types.BLOB
  override val sqlTypeName = "BYTEA"
  def setValue(v: Array[Byte], p: PositionedParameters) {
    p.pos += 1
    p.ps.setBytes(p.pos, v)
  }
  def setOption(v: Option[Array[Byte]], p: PositionedParameters) {
    p.pos += 1
    if (v eq None) p.ps.setBytes(p.pos, null) else p.ps.setBytes(p.pos, v.get)
  }
  def nextValue(r: PositionedResult) = {
    r.nextBytes()
  }
  def updateValue(v: Array[Byte], r: PositionedResult) {
    r.updateBytes(v)
  }
  override def valueToSQLLiteral(value: Array[Byte]) =
    throw new org.scalaquery.SQueryException("Cannot convert BYTEA to literal")
}
and then usage, for example:
...
// defining a column
def content = column[Array[Byte]]("page_Content")(PostgresByteArrayTypeMapper)
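For completeness, a minimal usage sketch; the PageTable below is a hypothetical table (not from the original post), and an open session plus the usual driver imports are assumed:
// hypothetical table; the mapper is passed explicitly since the object above is not implicit
object PageTable extends Table[(Long, Array[Byte])]("pages") {
  def id = column[Long]("id")
  def content = column[Array[Byte]]("page_Content")(PostgresByteArrayTypeMapper)
  def * = id ~ content
}
// insert a row and read the byte array back
PageTable.insert((1L, Array[Byte](1, 2, 3)))
Query(PageTable) foreach { case (id, bytes) => println(id + ": " + bytes.mkString(",")) }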

Related

Issue in def variable while performing sort in Karate Framework

I have noticed that when passing a def variable as a list to a Java method for simple sorting, the original variable gets sorted as well, not just the returned value.
* def original = ['a','b','c']
* def javaInstance = new (Java.type('package.subpackage.StringSort'))
* def sortedContent = javaInstance.m1(original,'desc');
* print sortedContent
* print original
Both "sortedContent" & "original" def variables are sorted.
Below is the java fn:
public class StringSort {
    public List<String> m1(List<String> s, String order) {
        Collections.sort(s);
        if (order.equals("asc"))
            return s;
        else {
            Collections.reverse(s);
            return s;
        }
    }
}
Output:
sortedContent = ['c','b','a']
original = ['c','b','a']
I don't understand why the original def variable is sorted.
That's how Java works: the list is passed by reference and Collections.sort() sorts it in place, so the original is modified too. Create a clone:
* def original = ['a','b','c']
* copy temp = original

Groovy testing groovy.sql.Sql - MockSql

Need some help mocking sql.eachRow().
I have a service I'd like to test, which is as follows:
class MyService {
    def dataSource

    def method1(id) {
        def map = [:]
        def sql = new Sql(dataSource)
        def queryString = 'select col1, col2 from tbl_1 where id = ?'
        sql.eachRow(queryString, [1]) { row ->
            map.put(row.val1, row.val2)
        }
        return map
    }
}
I'm trying to test this with MockFor
Code:
class MyServiceTest extends Specification {
    @Test
    def "test method1"() {
        setup:
        def row = [:]
        row["col1"] = "1"
        row["col2"] = "val1"
        def mockResult = [row]
        Sql.metaClass.constructor = { dataSource -> return new MockSql("") }
        def mockSql = new MockFor(Sql.class)
        mockSql.demand.newInstance { def datasource ->
            return mockSql
        }
        mockSql.demand.eachRow { def arg1, def arg2, closure ->
            // run the closure over the mock array
            mockResult.each(closure)
        }
        when:
        def result = service.method1(1)
        then:
        result == ["1":"val1"]
    }
}
I'm getting the below error about the eachRow argument types:
groovy.lang.MissingMethodException: No signature of method: com.kenexa.assess.MockSql.eachRow() is applicable for argument types: (java.lang.String, java.util.ArrayList, xyz_closure103) values: [select col1, col2 from tbl_1 where id = ?]
Possible solutions: eachRow(java.lang.Object, groovy.lang.Closure)

Use method in sql interpolator

Using Scala 2.11 and Slick 2.11.
In a Scala class, I have 2 methods:
getSQL, which returns the SQL as a String
getSqlStreamingAction, which returns a composed SqlStreamingAction built with the sql interpolator
The code:
def getSQL(id: Int): String = {
  var fields_string = ""
  for ((k, v) <- field_map) fields_string += k + ", "
  fields_string = fields_string.dropRight(2) // remove last ", "
  "SELECT " + fields_string + " FROM my_table WHERE id = " + id
}

def getSqlStreamingAction(id: Int): SqlStreamingAction[Vector[OtherObject], OtherObject, Effect] = {
  val r = GetResult(r => OtherObject(r.<<, r.<<))
  // this works
  var fields_string = ""
  for ((k, v) <- field_map) fields_string += k + ", "
  sql"""SELECT #$fields_string FROM my_table WHERE id = #$id""".as(r)
  // But I want to use the method getSQL to retrieve the SQL String
  // I imagine something like this, but of course it doesn't work :)
  //sql"getSQL($id)".as(r)
}
I want to keep the methods separate for unit-testing purposes, so I'd like to use the getSQL method inside the sql interpolator.
So, how can I use a method's result with Slick's sql interpolator?
Note: I'm pretty new to Scala.
Solution:
def getSqlStreamingAction(id: Int): SqlStreamingAction[Vector[OtherObject], OtherObject, Effect] = {
  val r = GetResult(r => OtherObject(r.<<, r.<<))
  val sql_string: String = getSQL(id)
  sql"""#$sql_string""".as(r)
}

Squeryl geo queries with a postgres backend?

How can I perform geo queries using Squeryl with a Postgres backend? The sort of query I want to run is "return all users within x kilometres", etc.
If geo queries aren't supported directly or through a plugin, how can I run raw SQL queries? I saw one gist, and it looked complicated.
Update
Specifically I want to run the following query:
SELECT events.id, events.name FROM events
WHERE earth_box( {current_user_lat}, {current_user_lng},
  {radius_in_metres}) @> ll_to_earth(events.lat, events.lng);
This is taken from http://johanndutoit.net/searching-in-a-radius-using-postgres/
This object should solve your problem.
import scala.collection.mutable.ArrayBuffer
import org.squeryl.Session

object RawSql {

  def q(query: String, args: Any*) =
    new RawTupleQuery(query, args)

  class RawTupleQuery(query: String, args: Seq[Any]) {

    private def prep = {
      val s = Session.currentSession
      val st = s.connection.prepareStatement(query)

      def unwrap(o: Any) = o match {
        case None => null
        case Some(ob) => ob.asInstanceOf[AnyRef]
        case null => null
        case a: AnyRef => a
        case a => a.asInstanceOf[AnyRef] // box any remaining primitive value
      }

      for (z <- args.zipWithIndex) {
        st.setObject(z._2 + 1, unwrap(z._1))
      }
      st
    }

    def toSeq[A1]()(implicit f1: TypedExpressionFactory[A1, _]) = {
      val st = prep
      val rs = st.executeQuery
      try {
        val ab = new ArrayBuffer[A1]
        val m1 = f1.thisMapper.asInstanceOf[PrimitiveJdbcMapper[A1]]
        while (rs.next)
          ab.append(m1.convertFromJdbc(m1.extractNativeJdbcValue(rs, 1)))
        ab
      }
      finally {
        rs.close()
        st.close()
      }
    }

    def toTupleSeq[A1, A2]()(implicit f1: TypedExpressionFactory[A1, _], f2: TypedExpressionFactory[A2, _]) = {
      val st = prep
      val rs = st.executeQuery
      try {
        val ab = new ArrayBuffer[(A1, A2)]
        val m1 = f1.thisMapper.asInstanceOf[PrimitiveJdbcMapper[A1]]
        val m2 = f2.thisMapper.asInstanceOf[PrimitiveJdbcMapper[A2]]
        while (rs.next)
          ab.append(
            (m1.convertFromJdbc(m1.extractNativeJdbcValue(rs, 1)),
             m2.convertFromJdbc(m2.extractNativeJdbcValue(rs, 2))))
        ab
      }
      finally {
        rs.close()
        st.close()
      }
    }
  }
}
I got it from this gist:
https://gist.github.com/max-l/9250053
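As a usage sketch for the radius query from the question, using the usual earthdistance form earth_box(ll_to_earth(lat, lng), radius); the column types, the toTupleSeq type parameters and the variable names are my assumptions, and it must run inside a Squeryl transaction so that Session.currentSession is available:
// events.id assumed to be a bigint/Long, events.name a varchar/String;
// currentUserLat, currentUserLng and radiusInMetres are plain runtime values
val nearby = RawSql.q(
  "SELECT events.id, events.name FROM events " +
  "WHERE earth_box(ll_to_earth(?, ?), ?) @> ll_to_earth(events.lat, events.lng)",
  currentUserLat, currentUserLng, radiusInMetres
).toTupleSeq[Long, String]()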

Find Index of all values of an array in another array and collect the value in that index from a third Array

I would like to find the indexes at which values from the descendentList appear in the parentIdList, then add the values at those indexes in the idList to the descendentList, and then once again check the parentIdList for the indexes of all the matching values.
I am essentially trying to create a looping structure that keeps doing this until no more matches are added.
This seems to work, but only if you can allow descendentList to be a Set. If not, I am not sure what the terminating condition would be; it would just keep adding the values of the same indexes over and over. I think a Set is appropriate considering what you said in your comment above: "I would like to loop through this until no more matches are added to descendentList"
Set descendentList = [2]
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]

/**
 * First: I would like to find the index of matches from the descendentList in the
 * parentIdList
 */
def findIndexMatches(Set descendentList, List parentIdList, List idList) {
    List indexes = []
    def size = descendentList.size()
    descendentList.each { descendent ->
        indexes.addAll(parentIdList.findIndexValues { it == descendent })
    }
    addExistingValuesToFromIdListToDecendentList(descendentList, idList, indexes)
    // Then once again check the parentIdList for the index of all the matching values.
    if (size != descendentList.size()) { // new values were added to descendentList, so keep going
        findIndexMatches(descendentList, parentIdList, idList)
    }
}

/**
 * and then add the value which exists in that index from the
 * idList to the descendentList
 */
def addExistingValuesToFromIdListToDecendentList(Set descendentList, List idList, List indexes) {
    indexes.each {
        descendentList << idList[it as int]
    }
}

findIndexMatches(descendentList, parentIdList, idList)
println descendentList // outputs [2,3,4,5]
Something like the following seems to work. I haven't written any tests though, so it may fail with different use cases; it's just a simple, idiomatic recursive solution.
def descendentList = [2]
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]
def solve( List descendentList, List parentIdList, List idList ) {
    List matchedIds = descendentList.inject( [] ) { result, desc ->
        result + idList[ parentIdList.findIndexValues { it == desc } ]
    }
    if ( matchedIds ) {
        descendentList + solve( matchedIds, parentIdList, idList )
    } else {
        descendentList
    }
}
println solve( descendentList, parentIdList, idList )
You can also do this without recursion, using an iterator:
class DescendantIterator<T> implements Iterator<T> {
    private final List<T> parents
    private List<T> output
    private final List<T> lookup
    private List<T> next

    DescendantIterator(List<T> output, List<T> parents, List<T> lookup) {
        this.output = output
        this.parents = parents
        this.lookup = lookup
    }

    boolean hasNext() { output }

    Integer next() {
        def ret = output.head()
        parents.findIndexValues { it == ret }.with { v ->
            if (v) { output += lookup[v] }
        }
        output = output.drop(1)
        ret
    }

    void remove() {}
}
def descendentList = [2]
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]
def values = new DescendantIterator<Integer>(descendentList, parentIdList, idList).collect()
After this, values == [2, 3, 5, 4]
First build a map from each parent id to all its child ids. Then find the results for the input and keep iterating over the newly found results until there are no more.
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]
tree = [parentIdList, idList].transpose().groupBy{it[0]}.collectEntries{ [it.key, it.value*.get(1)] }
def childs(l) {
    l.collect{ tree.get(it) }.findAll().flatten().toSet()
}

def descendants(descendentList) {
    def newresults = childs(descendentList)
    def results = [].toSet() + descendentList
    while (newresults.size()) {
        results.addAll(newresults)
        newresults = childs(newresults) - results
    }
    return results
}
assert descendants([2]) == [2,3,4,5].toSet()
assert descendants([2,1]) == [1,2,3,4,5].toSet()
assert descendants([3]) == [3,4].toSet()