How to convert query result to case class? - slick-2.0

I'm using Slick 2.0 and I have the following User case class:
case class User(id: Option[Long], email: String, password: String)
class Users(tag: Tag) extends Table[User](tag, "user") {
def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
def email = column[String]("email")
def password = column[String]("password")
def * = (id.?, email, password) <> ((User.apply _).tupled, User.unapply _)
}
and I have the following function getting a user via their database id:
def findOneById(id: Long) = DB.withSession { implicit session =>
val result = (for {
u <- users if u.id === id
} yield (u.id, u.email, u.password)).first
User(Option(result._1), result._2, result._3)
}
Is there an easier way to convert the query result back into the User case class?

Ok, I found my answer. I got it in part from #benji and in part from this post: Converting scala.slick.lifted.Query to a case class.
Instead of using a for comprehension, the following returns an Option[User], which is exactly what I need:
def findOneById(id: Long): Option[User] = DB.withSession { implicit session =>
users.filter(_.id === id).firstOption
}

A for comprehension is easier to use with SQL joins. Example:
def findOneById(id: Long) = DB.withSession { implicit session =>
val query = for {
u <- users if u.id === id
t <- token if t.user_id === u.id && t.isActive === `A`
} yield u
query.firstOption
}

To convert the result to a list of User case class instances, you can simply do
result.map(User.tupled)
However, that will only work if your data model is consistent. For example, your User case class has id as optional, whereas it is a primary key in the DB, which is inconsistent.
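For example, against the table defined in the question, lifting the id column to an Option with .? makes the tuple line up with the User fields. A minimal sketch, assuming the users query and DB object from above (findAll is just an illustrative name):
def findAll: List[User] = DB.withSession { implicit session =>
  // (Option[Long], String, String) matches User(id, email, password)
  val rows = (for (u <- users) yield (u.id.?, u.email, u.password)).list
  rows.map(User.tupled)
}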

Related

Kotlin Exposed - selecting based on sub-query count

In my data model I have a very simple one-to-many relationship between challenges and their whitelist items.
I am trying to select challenges filtered by whitelist. Basically, a challenge should be selected when it either has no whitelist entries at all or has a whitelist entry matching by name.
This can be achieved with quite simple SQL query:
select c.* from challenge c, challenge_whitelist w where (c.id = w."challengeId" and w."userName" = 'testuser') or ((select count(*) where c.id = w."challengeId") = 0);
I am unable to translate it to Exposed though:
// will not compile
fun listAll(userName: String) {
ExposedChallenge.wrapRows(
ChallengeTable.innerJoin(ChallengeWhitelistTable)
.slice(ChallengeTable.columns)
.select((ChallengeWhitelistTable.userName eq userName) or (ChallengeTable.innerJoin(ChallengeWhitelistTable).selectAll().count() eq 0))
).toList()
}
The userName check works correctly, but ChallengeTable.innerJoin(ChallengeWhitelistTable).selectAll().count() eq 0 is not a valid expression (it will not compile).
Note that the mappings are super-simple:
object ChallengeTable : IntIdTable() {
val createdAt = datetime("createdAt")
}
class ExposedChallenge(id: EntityID<Int>) : IntEntity(id) {
companion object : IntEntityClass<ExposedChallenge>(ChallengeTable)
var createdAt by ChallengeTable.createdAt
val whitelist by ExposedChallengeWhitelist referrersOn ChallengeWhitelistTable.challenge
}
object ChallengeWhitelistTable : IntIdTable(name = "challenge_whitelist") {
var userName = varchar("userName", 50)
var challengeId = integer("challengeId")
val challenge = reference("challengeId", ChallengeTable).uniqueIndex()
}
class ExposedChallengeWhitelist(id: EntityID<Int>) : IntEntity(id) {
companion object : IntEntityClass<ExposedChallengeWhitelist>(ChallengeWhitelistTable)
val challengeId by ChallengeWhitelistTable.challengeId
val challenge by ExposedChallenge referencedOn ChallengeWhitelistTable.challenge
}
Any help would be appreciated.
Your SQL query is invalid, as you use select count(*) without a from clause.
But it can be rewritten with the Exposed DSL like this:
ChallengeTable.leftJoin(ChallengeWhitelistTable).
slice(ChallengeTable.columns).
selectAll().
groupBy(ChallengeTable.id, ChallengeWhitelistTable.userName).having {
(ChallengeWhitelistTable.userName eq "testUser") or
(ChallengeWhitelistTable.id.count() eq 0)
}
Another way is to use just a left join:
ChallengeTable.leftJoin(ChallengeWhitelistTable).
slice(ChallengeTable.columns).
select {
(ChallengeWhitelistTable.userName eq "testUser") or
(ChallengeWhitelistTable.id.isNull())
}

Squeryl geo queries with a postgres backend?

How can I perform geo queries using Squeryl with a postgres backend? The sort of queries I want to run are "return all users within x kilometres", etc.
If geo queries aren't supported directly/through a plugin, how can I run raw SQL queries? I saw one gist and it looked complicated.
Update
Specifically I want to run the following query:
SELECT events.id, events.name FROM events
WHERE earth_box( {current_user_lat}, {current_user_lng},
{radius_in_metres}) #> ll_to_earth(events.lat, events.lng);
This is taken from http://johanndutoit.net/searching-in-a-radius-using-postgres/
This object should solve your problem.
import scala.collection.mutable.ArrayBuffer
import org.squeryl.Session

object RawSql {
def q(query: String, args: Any*) =
new RawTupleQuery(query, args)
class RawTupleQuery(query: String, args: Seq[Any]) {
private def prep = {
val s = Session.currentSession
val st = s.connection.prepareStatement(query)
def unwrap(o: Any) = o match {
case None => null
case Some(ob) => ob.asInstanceOf[AnyRef]
case null => null
case a: AnyRef => a
case a => a
}
for(z <- args.zipWithIndex) {
st.setObject(z._2 + 1, unwrap(z._1))
}
st
}
def toSeq[A1]()(implicit f1 : TypedExpressionFactory[A1,_]) = {
val st = prep
val rs = st.executeQuery
try {
val ab = new ArrayBuffer[A1]
val m1 = f1.thisMapper.asInstanceOf[PrimitiveJdbcMapper[A1]]
while(rs.next)
ab.append(m1.convertFromJdbc(m1.extractNativeJdbcValue(rs, 1)))
ab
}
finally {
rs.close()
st.close()
}
}
def toTupleSeq[A1,A2]()(implicit f1 : TypedExpressionFactory[A1,_], f2 : TypedExpressionFactory[A2,_]) = {
val st = prep
val rs = st.executeQuery
try {
val ab = new ArrayBuffer[(A1,A2)]
val m1 = f1.thisMapper.asInstanceOf[PrimitiveJdbcMapper[A1]]
val m2 = f2.thisMapper.asInstanceOf[PrimitiveJdbcMapper[A2]]
while(rs.next)
ab.append(
(m1.convertFromJdbc(m1.extractNativeJdbcValue(rs, 1)),
m2.convertFromJdbc(m2.extractNativeJdbcValue(rs, 2))))
ab
}
finally {
rs.close()
st.close()
}
}
}
}
I got this from the following gist:
https://gist.github.com/max-l/9250053
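For completeness, here is a hedged sketch of how this helper could be used for the earth_box query from the question. The SQL is taken verbatim from the question (only parameterised), so whether earth_box/#> is the right operator combination depends on the earthdistance setup in the linked post; the coordinates and radius are placeholder values, and the implicit TypedExpressionFactory instances come from the usual PrimitiveTypeMode import:
import org.squeryl.PrimitiveTypeMode._ // inTransaction and the implicit TypedExpressionFactory instances

// placeholder values for the current user's position and search radius
val (currentUserLat, currentUserLng, radiusInMetres) = (52.52, 13.40, 5000.0)

val nearbyEvents: Seq[(Long, String)] = inTransaction {
  RawSql.q(
    "SELECT events.id, events.name FROM events " +
    "WHERE earth_box(?, ?, ?) #> ll_to_earth(events.lat, events.lng)",
    currentUserLat, currentUserLng, radiusInMetres
  ).toTupleSeq[Long, String]()
}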

Translate nested join and groupby query to Slick 3.0

I'm implementing a todo list. A user can have multiple lists and a list can have multiple users. I want to be able to retrieve all the lists for a user, where each of these lists contains the list of users it is shared with (including the owner). I'm not succeeding in implementing this query.
The table definitions:
case class DBList(id: Int, uuid: String, name: String)
class Lists(tag: Tag) extends Table[DBList](tag, "list") {
def id = column[Int]("id", O.PrimaryKey, O.AutoInc) // This is the primary key column
def uuid = column[String]("uuid")
def name = column[String]("name")
// Every table needs a * projection with the same type as the table's type parameter
def * = (id, uuid, name) <> (DBList.tupled, DBList.unapply)
}
val lists = TableQuery[Lists]
case class DBUser(id: Int, uuid: String, email: String, password: String, firstName: String, lastName: String)
// Shared user projection, this is the data of other users which a user who shared an item can see
case class DBSharedUser(id: Int, uuid: String, email: String, firstName: String, lastName: String, provider: String)
class Users(tag: Tag) extends Table[DBUser](tag, "user") {
def id = column[Int]("id", O.PrimaryKey, O.AutoInc) // This is the primary key column
def uuid = column[String]("uuid")
def email = column[String]("email")
def password = column[String]("password")
def firstName = column[String]("first_name")
def lastName = column[String]("last_name")
def * = (id, uuid, email, password, firstName, lastName) <> (DBUser.tupled, DBUser.unapply)
def sharedUser = (id, uuid, email, firstName, lastName) <> (DBSharedUser.tupled, DBSharedUser.unapply)
}
val users = TableQuery[Users]
// relation n:n user-list
case class DBListToUser(listUuid: String, userUuid: String)
class ListToUsers(tag: Tag) extends Table[DBListToUser](tag, "list_user") {
def listUuid = column[String]("list_uuid")
def userUuid = column[String]("user_uuid")
def * = (listUuid, userUuid) <> (DBListToUser.tupled, DBListToUser.unapply)
def pk = primaryKey("list_user_unique", (listUuid, userUuid))
}
val listToUsers = TableQuery[ListToUsers]
I created an additional class to hold the database list object + the users, my goal is to map the query result somehow to instances of this class.
case class DBListWithSharedUsers(list: DBList, sharedUsers: Seq[DBSharedUser])
This is the SQL query for most of it: it first gets all the lists for the user (the inner query), then joins list with list_user and user to get the rest of the data and the users for each list, filtering with the inner query. It doesn't contain the group by part:
select * from list inner join list_user on list.uuid=list_user.list_uuid inner join user on user.uuid=list_user.user_uuid where list.uuid in (
select (list_uuid) from list_user where user_uuid=<myUserUuid>
);
I tested it and it works. I'm trying to implement it in Slick but I'm getting a compiler error. I also don't know if the structure in that part is correct, but haven't been able to come up with a better one.
def findLists(user: User) = {
val listsUsersJoin = listToUsers join lists join users on {
case ((listToUser, list), user) =>
listToUser.listUuid === list.uuid &&
listToUser.userUuid === user.uuid
}
// get all the lists for the user (corresponds to inner query in above SQL)
val queryToGetListsForUser = listToUsers.filter(_.userUuid===user.uuid)
// map to uuids
val queryToGetListsUuidsForUser: Query[Rep[String], String, Seq] = queryToGetListsForUser.map { ltu => ltu.listUuid }
// create query that mirrors SQL above (problems):
val queryToGetListsWithSharedUsers = (for {
listsUuids <- queryToGetListsUuidsForUser
((listToUsers, lists), users) <- listsUsersJoin
if lists.uuid.inSet(listsUuids) // error: inSet requires a Scala collection, but listsUuids is a column (Rep[String]) from another query
} yield (lists))
// group - doesn't compile because above doesn't compile:
queryToGetListsWithSharedUsers.groupBy {case (list, user) =>
list.uuid
}
...
}
How can I fix this?
Thanks in advance
Edit:
I put together this emergency solution (at least it compiles), where I execute the query using raw SQL and then do the grouping programmatically; it looks like this:
case class ResultTmp(listId: Int, listUuid: String, listName: String, userId:Int, userUuid: String, userEmail: String, userFirstName: String, userLastName: String, provider: String)
implicit val getListResult = GetResult(r => ResultTmp(r.nextInt, r.nextString, r.nextString, r.nextInt, r.nextString, r.nextString, r.nextString, r.nextString, r.nextString))
val a = sql"""select (list.id, list.uuid, list.name, user.id, user.uuid, user.email, user.first_name, user.last_name, user.provider) from list inner join list_user on list.uuid=list_user.list_uuid inner join user on user.uuid=list_user.user_uuid where list.uuid in (
select (list_uuid) from list_user where user_uuid=${user.uuid}
);""".as[ResultTmp]
val result: Future[Vector[ResultTmp]] = db.run(a)
val res: Future[Seq[DBListWithSharedUsers]] = result.map {resultsTmp =>
val myMap: Map[String, Vector[ResultTmp]] = resultsTmp.groupBy { resultTmp => resultTmp.listUuid }
val r: Iterable[DBListWithSharedUsers] = myMap.map {case (listUuid, resultsTmp) =>
val first = resultsTmp(0)
val list = DBList(first.listId, listUuid, first.listName)
val users: Seq[DBSharedUser] = resultsTmp.map { resultTmp =>
DBSharedUser(resultTmp.userId, resultTmp.userUuid, resultTmp.userEmail, resultTmp.userFirstName, resultTmp.userLastName, resultTmp.provider)
}
DBListWithSharedUsers(list, users)
}
r.toSeq
}
But that's just horrible, how do I get it working the normal way?
Edit 2:
I'm experimenting with monadic joins but also stuck here. For example something like this would get all the lists for a given user:
val listsUsersJoin = for {
list <- lists
listToUser <- listToUsers
user_ <- users if user_.uuid === user.uuid
} yield (list.uuid, list.name, user.uuid, user.firstName ...)
but this is not enough because I also need to get all the users for those lists, so I need two queries. I need to first get the lists for the user and then find all the users for those lists, something like:
val queryToGetListsForUser = listToUsers.filter(_.userUuid===user.uuid)
val listsUsersJoin = for {
list <- lists
listToUser <- listToUsers
user_ <- users /* if list.uuid is in queryToGetListsForUser result */
} yield (list.uuid, list.name, user.uuid, user.firstName ... )
But I don't know how to pass that to the join. I'm not even sure groupBy, at least at the database level, is the right tool; so far I've only seen it used to aggregate results to a single value, like count or avg, whereas I need them in a collection.
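One idea (an untested sketch against the table definitions above): inSet expects a Scala collection, but Slick also has an in operator that takes another Query, so the inner query could be pushed straight into the join condition:
// uuids of all lists shared with the given user (the inner query from the SQL above)
val listUuidsForUser = listToUsers.filter(_.userUuid === user.uuid).map(_.listUuid)

// flat (list, user) rows for those lists, joined through list_user
val listsWithUsers = for {
  listToUser <- listToUsers if listToUser.listUuid in listUuidsForUser
  list <- lists if list.uuid === listToUser.listUuid
  user_ <- users if user_.uuid === listToUser.userUuid
} yield (list, user_)
Grouping the flat rows into DBListWithSharedUsers would then still happen in Scala after db.run, since a SQL group by can only aggregate each group down to single values.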
Edit 3:
I don't know yet if this is right but the monadic join may be the path to the solution. This compiles:
val listsUsersJoin = for {
listToUser <- listToUsers if listToUser.userUuid === user.uuid // get the lists for the user
list <- lists if list.uuid === listToUser.listUuid // join with list
listToUser2 <- listToUsers if list.uuid === listToUser2.listUuid // get all the users for the lists
user_ <- users if user_.uuid === listToUser2.userUuid // join with user
} yield (list.uuid, list.name, user_.uuid, user_.email, user_.firstName, user_.lastName)
Ah, look at that, I came up with a solution. I still have to test whether it works, but at least the compiler stopped shouting at it. I'll edit this later if necessary.
val listsUsersJoin = for {
listToUser <- listToUsers if listToUser.userUuid === user.uuid
list <- lists if list.uuid === listToUser.listUuid
listToUser2 <- listToUsers if list.uuid === listToUser2.listUuid
user_ <- users if user_.uuid === listToUser2.userUuid
} yield (list.id, list.uuid, list.name, user_.id, user_.uuid, user_.email, user_.firstName, user_.lastName, user_.provider)
val grouped = listsUsersJoin.groupBy(_._2)
val resultFuture = db.run(grouped.result).flatMap {groupedResults =>
val futures: Seq[Future[DBListWithSharedUsers]] = groupedResults.map {groupedResult =>
val listUuid = groupedResult._1
val valueQuery = groupedResult._2
db.run(valueQuery.result).map {valueResult =>
val first = valueResult(0) // if there's a grouped result this should never be empty
val list = DBList(first._1, listUuid, first._3)
val users = valueResult.map {value =>
DBSharedUser(value._4, value._5, value._6, value._7, value._8, value._9)
}
DBListWithSharedUsers(list, users)
}
}
Future.sequence(futures)
}
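A possible simplification of the above (an untested sketch reusing listsUsersJoin, and assuming an implicit ExecutionContext as in the earlier snippets): run the flat join once and do the grouping in Scala, which avoids issuing one extra query per group:
val resultFuture: Future[Seq[DBListWithSharedUsers]] =
  db.run(listsUsersJoin.result).map { rows =>
    rows.groupBy(_._2).map { case (listUuid, grouped) =>
      val first = grouped.head
      val list = DBList(first._1, listUuid, first._3)
      val sharedUsers = grouped.map { r =>
        DBSharedUser(r._4, r._5, r._6, r._7, r._8, r._9)
      }
      DBListWithSharedUsers(list, sharedUsers)
    }.toSeq
  }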

optimize a SQL query for a couchdb view

How can I optimize this SQL query for a CouchDB view?
SELECT * FROM db WHERE user = '$userid' OR userFollowed = '$userid'
The couchdb database contains this structure of documents:
_id
user
userFollowed
This is because a user can follow another and vice versa, and my goal is to get all followers of user A whom user A follows in turn, for example:
A follows B
B follows A
In this example I need to get B, establishing that both users follow each other... I know it's complex to explain and understand, but I'll try to show it with what I'm doing with node.js and cradle.
The view map:
function (doc) {
emit(doc.user, doc.userFollowed)
}
The node.js code:
db.view("followers/getFollowers", function(err, resp) {
if (!err) {
var followers = [];
resp.forEach(function(key, value, id) {
var bothFollow = false;
if (key == userID) {
if (followers.indexOf(value) == -1) {
resp.forEach(function(key, value, id) {
if (value == userID)
bothFollow = true;
});
if (bothFollow)
followers.push(value);
}
} else if (value == userID) {
if (followers.indexOf(key) == -1) {
resp.forEach(function(key, value, id) {
if (key == userID)
bothFollow = true;
});
if (bothFollow)
followers.push(key);
}
}
});
console.log(followers);
}
});
So in the code I first check whether the A or B value corresponds to the other user, then check with another loop whether there is a relationship, putting the follower in the list.
All this code works, but I don't think it's the correct procedure and maybe I'm doing something wrong :( can you help me please?
It is easier to emit both users in the view function:
function (doc) {
emit(doc.user, null);
emit(doc.userFollowed, null);
}
Then you can just call the view and you will get a plain list:
http://localhost:5984/db/_design/app/_view/followers?include_docs=true

How to use ScalaQuery to insert a BLOB field?

I used ScalaQuery and Scala.
If I have an Array[Byte] object, how do I insert it into the table?
object TestTable extends BasicTable[Test]("test") {
def id = column[Long]("mid", O.NotNull)
def extInfo = column[Blob]("mbody", O.Nullable)
def * = id ~ extInfo <> (Test, Test.unapply _)
}
case class Test(id: Long, extInfo: Blob)
Can I define the column as def extInfo = column[Array[Byte]]("mbody", O.Nullable)? How do I operate (UPDATE, INSERT, SELECT) on the BLOB-typed field?
BTW: no ScalaQuery tag
Since the BLOB field is nullable, I suggest changing its Scala type to Option[Blob], as in the following definition:
object TestTable extends Table[Test]("test") {
def id = column[Long]("mid")
def extInfo = column[Option[Blob]]("mbody")
def * = id ~ extInfo <> (Test, Test.unapply _)
}
case class Test(id: Long, extInfo: Option[Blob])
You can use a raw, nullable Blob value if you prefer, but then you need to use orElse(null) on the column to actually get a null value out of it (instead of throwing an Exception):
def * = id ~ extInfo.orElse(null) <> (Test, Test.unapply _)
Now for the actual BLOB handling. Reading is straightforward: you just get a Blob object in the result, as implemented by the JDBC driver, e.g.:
Query(TestTable) foreach { t =>
println("mid=" + t.id + ", mbody = " +
Option(t.extInfo).map { b => b.getBytes(1, b.length.toInt).mkString })
}
If you want to insert or update data, you need to create your own BLOBs. A suitable implementation for a stand-alone Blob object is provided by JDBC's RowSet feature:
import javax.sql.rowset.serial.SerialBlob
TestTable insert Test(1, null)
TestTable insert Test(2, new SerialBlob(Array[Byte](1,2,3)))
Edit: And here's a TypeMapper[Array[Byte]] for Postgres (whose BLOBs are not yet supported by ScalaQuery):
implicit object PostgresByteArrayTypeMapper extends
BaseTypeMapper[Array[Byte]] with TypeMapperDelegate[Array[Byte]] {
def apply(p: BasicProfile) = this
val zero = new Array[Byte](0)
val sqlType = java.sql.Types.BLOB
override val sqlTypeName = "BYTEA"
def setValue(v: Array[Byte], p: PositionedParameters) {
p.pos += 1
p.ps.setBytes(p.pos, v)
}
def setOption(v: Option[Array[Byte]], p: PositionedParameters) {
p.pos += 1
if(v eq None) p.ps.setBytes(p.pos, null) else p.ps.setBytes(p.pos, v.get)
}
def nextValue(r: PositionedResult) = {
r.pos += 1
r.rs.getBytes(r.pos)
}
def updateValue(v: Array[Byte], r: PositionedResult) {
r.pos += 1
r.rs.updateBytes(r.pos, v)
}
override def valueToSQLLiteral(value: Array[Byte]) =
throw new SQueryException("Cannot convert BYTEA to literal")
}
I'm just posting updated code for Scala and ScalaQuery; maybe it will save somebody some time:
object PostgresByteArrayTypeMapper extends
BaseTypeMapper[Array[Byte]] with TypeMapperDelegate[Array[Byte]] {
def apply(p: org.scalaquery.ql.basic.BasicProfile) = this
val zero = new Array[Byte](0)
val sqlType = java.sql.Types.BLOB
override val sqlTypeName = "BYTEA"
def setValue(v: Array[Byte], p: PositionedParameters) {
p.pos += 1
p.ps.setBytes(p.pos, v)
}
def setOption(v: Option[Array[Byte]], p: PositionedParameters) {
p.pos += 1
if(v eq None) p.ps.setBytes(p.pos, null) else p.ps.setBytes(p.pos, v.get)
}
def nextValue(r: PositionedResult) = {
r.nextBytes()
}
def updateValue(v: Array[Byte], r: PositionedResult) {
r.updateBytes(v)
}
override def valueToSQLLiteral(value: Array[Byte]) =
throw new org.scalaquery.SQueryException("Cannot convert BYTEA to literal")
}
and then usage, for example:
...
// defining a column
def content = column[Array[Byte]]("page_Content")(PostgresByteArrayTypeMapper)
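And a hedged sketch of end-to-end usage with that column (the Pages table and its column names are hypothetical, and a database session is assumed to be in scope as in the earlier examples):
// hypothetical table storing raw bytes in a Postgres BYTEA column
object Pages extends Table[(Long, Array[Byte])]("page") {
  def id = column[Long]("id")
  def content = column[Array[Byte]]("page_content")(PostgresByteArrayTypeMapper)
  def * = id ~ content
}

// insert raw bytes and read them back
Pages.insert((1L, Array[Byte](1, 2, 3)))
Query(Pages) foreach { case (id, bytes) =>
  println("id=" + id + ", content=" + bytes.mkString(","))
}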