My problem is simple.
I have a column seqNum: Double that is NOT NULL DEFAULT 1 in the CREATE TABLE statement, as follows:
CREATE TABLE some_table
(
...
seq_num DECIMAL(18,10) NOT NULL DEFAULT 1,
...
);
The user may or may not enter a value for seqNum in the UI, so the accepting Play form looks like this:
case class SomeCaseClass(..., seqNum: Option[Double], ...)
val secForm = Form(mapping(
...
"seqNum" -> optional(of[Double]),
...
)(SomeCaseClass.apply)(SomeCaseClass.unapply))
The Slick table schema and objects look like this:
case class SomeSection (
...
seqNum: Option[Double],
...
)
class SomeSections(tag: Tag) extends Table[SomeSection](tag, "some_table") {
def * = (
...
seqNum.?,
...
) <> (SomeSection.tupled, SomeSection.unapply _)
...
def seqNum = column[Double]("seq_num", O.NotNull, O.Default(1.0))
...
}
object SomeSections {
val someSections = TableQuery[SomeSections]
val autoInc = someSections returning someSections.map(_.sectionId)
def insert(s: SomeSection)(implicit session: Session) = {
autoInc.insert(s)
}
}
When seqNum is sent from the UI, everything works fine, but when it is None, the insert breaks, saying that NULL cannot be inserted into a NOT NULL column, which is correct. This question explains why.
But how do I solve this problem using Slick? I can't work out where I should check for None. I'm creating an object of SomeSection and sending it to the insert method of the SomeSections object.
I'm using SQL Server, if it matters.
Using the default requires not inserting a value at all, rather than inserting NULL. This means you will need a custom projection to insert into.
people.map(_.name).insert("Chris") will use defaults for all other fields. The limitations of Scala's native tuple and case class transformations can make this a bit of a hassle. Things like Slick's HLists, Shapeless, Scala Records or Scala XR can help, but are either non-trivial or very experimental at the moment.
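For example, with a hypothetical people table in the question's Slick 2.x style, inserting through a projection that omits seq_num leaves the column out of the generated INSERT, so the database applies DEFAULT 1 (a sketch, not the poster's actual schema; assumes the usual driver import, e.g. scala.slick.driver.JdbcDriver.simple._):
class People(tag: Tag) extends Table[(Int, String, Double)](tag, "people") {
  def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def seqNum = column[Double]("seq_num", O.NotNull, O.Default(1.0))
  def * = (id, name, seqNum)
}
val people = TableQuery[People]
// Only name appears in the INSERT statement; seq_num falls back to its default:
people.map(_.name).insert("Chris") // needs an implicit Session, as in the question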
Either you resolve the Option passed to Slick by suffixing it with .getOrElse(theDefault), or you make the DB accept NULL (from a None value) and default it using a trigger.
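As a minimal sketch of the first option, assuming section is the SomeSection built from the form, the copy below falls back to 1.0 to mirror the column's DEFAULT 1:
val toInsert = section.copy(seqNum = section.seqNum.orElse(Some(1.0)))
SomeSections.insert(toInsert) // seq_num never receives NULL this way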
I’m going to duplicate some records in table tbl.
It looks like
INSERT INTO tbl SELECT id+100, name FROM tbl
in plain SQL.
I expected that it could look like
db.run(
tableQuery.forceInsertQuery(
tableQuery.map{rec=>rec.copy(id=rec.id+100)}
))
in Slick, where
rec is an instance of Table[ScalaCaseClassForTbl]
with
val id = column[Int]("id", O.PrimaryKey)
val name = column[String]("name")
and
override def * : ProvenShape[ScalaCaseClassForTbl] =
But I do not understand how to write the map.
Thank you for any ideas.
The problem with...
tableQuery.map{rec=>rec.copy(id=rec.id+100)}
...is that rec is not a case class, so there is no copy method.
What you can do is map to a tuple of the column (Rep[T]) values and then convert that to a case class.
For example:
tableQuery.map{ rec =>
(rec.id+100, rec.name).mapTo[YourCaseClass]
}
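Putting it together with forceInsertQuery from the question (a sketch assuming ScalaCaseClassForTbl(id: Int, name: String); mapTo is available since Slick 3.2):
val duplicate = tableQuery.forceInsertQuery(
  tableQuery.map { rec =>
    (rec.id + 100, rec.name).mapTo[ScalaCaseClassForTbl]
  }
)
db.run(duplicate) // runs as a single INSERT INTO tbl SELECT id + 100, name FROM tbl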
I am using Exposed in a project, and I have a table, let's call it TableX, with two properties:
property1 and x, where x is nullable.
I added TableX.x.isNotNull() to my query so that null rows are ignored.
I also have Object1 with the same two properties as TableX, property1 and x, except that x is not null in Object1.
When I create Object1 out of the rows from the query, the compiler complains about x, because it must not be null while we are receiving a nullable x from TableX.
So I added !! when setting x in Object1, since I am sure the query will never return a row where x is null, thanks to the constraint I added.
But I still receive a KotlinNullPointerException sometimes. How is this possible?
I suspected some compatibility issue between MySQL and Exposed, but couldn't find any.
var result = listOf<Object1>() // var: reassigned inside the transaction block
transaction {
val query = TableX.select {
TableX.property1.eq(123) and
TableX.x.isNotNull()
}
.fetchSize(1000)
result = query.map {
Object1(
property1 = it[TableX.property1],
x = it[TableX.x]!!
)
}
}
I'm facing the same problem at the company I'm working at now, and it looks like you have a race condition: a row's x may be set to null by another transaction between the moment the query filters the rows and the moment you read the value. Try to explicitly check for null before building the result list; at least you won't get an exception.
In my use case I have a createdDate field that I would like to preserve in the event that the record already exists.
case class Record(id: Long, value: String, createdDate: DateTime, updatedDate: DateTime)
Is it possible to use a TableQuery.insertOrUpdate(record) such that only parts of the record are updated in the event the record already exists?
In my case I'd want only the value and updatedDate fields to change. Using plain SQL in a stored procedure, I'd do something like:
merge Record r
using (
    select @id,
           @value
) as source (
    id,
    value
)
on r.id = source.id
when matched then
    update set value = source.value, updatedDate = getDate()
when not matched then
    insert (id, value, createdDate, updatedDate) values
    (source.id, source.value, getDate(), getDate());
Can Slick's insertOrUpdate modify a subset of columns?
No, I don't believe this is possible with the insertOrUpdate function. This has been requested as a feature but it is not currently implemented.
How can we work around this?
Since the update function does support updating a specific list of columns, we can write our own upsert logic instead of using the insertOrUpdate function. It might work like this:
def insertOrUpdate(record: Record): Future[Int] = {
  // needs an implicit ExecutionContext in scope for the DBIO composition
  val insertOrUpdateAction = (for {
    recordOpt <- records.filter(_.id === record.id).result.headOption
    action    <- recordOpt.map(_ => updateRecord(record)).getOrElse(insertRecord(record))
  } yield action).transactionally // run the read and the write atomically
  connection.run(insertOrUpdateAction)
}
private def updateRecord(record: Record) = {
  val query = for {
    r <- records.filter(_.id === record.id)
  } yield (r.value, r.updatedDate) // list of columns which can be updated
  query.update((record.value, record.updatedDate)) // update takes a single tuple
}
private def insertRecord(record: Record) = records += record
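As a usage sketch, calling it twice with the same id inserts once and then touches only the updatable columns (illustrative values; now stands for some DateTime):
insertOrUpdate(Record(1L, "first", now, now))  // no row yet: inserts the full record
insertOrUpdate(Record(1L, "second", now, now)) // row exists: value and updatedDate change, createdDate survives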
I have a Field<T>. I want to create a named value for that field, to be able to use it in a query. The name of the value should be the name of the field.
select value as field from ...
Is this the correct way to do it?
public <T> Field<T> namedValue(Field<T> field, T value) {
return DSL.val(value, field).as(field);
}
Although it works, I was wondering whether there is a shorter way to do this. I might be being pedantic here :).
Update:
I am creating the following construction:
UPDATE table SET x = alias.x, y = alias.y
FROM (SELECT <constant value for x>, table2.y FROM table2 WHERE ...) AS alias
Let's simplify this (for the sake of this example) to focus on the constant selection:
SELECT
FROM (SELECT <constant value for x>) AS alias
First, I started with:
Select s1 = context.select(DSL.val("TEST"));
Select s2 = context.select(s1.fields()).from(s1);
This resulted in an incorrect query:
select "alias_66794930"."TEST" from (select 'TEST') as "alias_66794930"
(I am not really sure if this is correct behavior from jOOQ.)
So, I added an alias:
Select s1 = context.select(DSL.val("TEST").as(X));
Select s2 = context.select(s1.fields()).from(s1);
This resulted in:
select "alias_76324565"."x" from (select 'TEST' as "x") as "alias_76324565"
This works fine. Then, I ran into problems when the constant value was null:
Select s1 = context.select(DSL.val(null).as(X));
Select s2 = context.select(s1.fields()).from(s1);
This resulted in:
select "alias_85795854"."x" from (select cast(? as varchar) as "x") as "alias_85795854"
1400 [localhost-startStop-1] TRACE org.jooq.impl.DefaultBinding - Binding variable 1 : null (class java.lang.Object)
This makes sense: the field type is not known. So I added the field (with its type) as follows:
Select s1 = context.select(DSL.val(null, X).as(X));
Select s2 = context.select(s1.fields()).from(s1);
Binding is now correct:
1678 [localhost-startStop-1] TRACE org.jooq.impl.DefaultBinding - Binding variable 1 : null (class java.lang.String)
All done!
I don't think you can get much shorter than what you already have. I mean, your SQL reads:
value as field
And your Java/jOOQ code reads:
DSL.val(value, field).as(field)
You could of course static import DSL.val or DSL.*:
import static org.jooq.impl.DSL.*;
And then shorten things to:
val(value, field).as(field)
And if you're very sure about value's type, you don't need to coerce it to that of field:
val(value).as(field)
Now, you definitely can't go any shorter, and there's no more need for your namedValue() function...
I would like to use a Groovy closure to process data coming from a SQL table. For each new row, the computation would depend on what has been computed previously. However, new rows may become available on further runs of the application, so I would like to be able to reload the closure, initialised with the intermediate state it had when the closure was last executed in the previous run of the application.
For example, a closure intending to compute the moving average over 3 rows would be implemented like this:
def prev2Val = null
def prevVal = null
def prevId = null
Closure c = { row ->
println([ prev2Val, prevVal, prevId])
def latestVal = row['val']
if (prev2Val != null) {
def movMean = (prev2Val + prevVal + latestVal) / 3
sql.execute("INSERT INTO output(id, val) VALUES (?, ?)", [prevId, movMean])
}
sql.execute("UPDATE test_data SET processed=TRUE WHERE id=?", [row['id']])
prev2Val = prevVal
prevVal = latestVal
prevId = row['id']
}
test_data has 3 columns: id (an auto-incremented primary key), val and processed. A moving mean is calculated from the current value and the two previous ones, and inserted into the output table against the id of the previous row. Processed rows are flagged with processed=TRUE.
If all the data was available from the start, this could be called like this:
sql.eachRow("SELECT id, val FROM test_data WHERE processed=FALSE ORDER BY id", c)
The problem comes when new rows become available after the application has already been run. This can be simulated by processing a small batch each time (e.g. using LIMIT 5 in the previous statement).
I would like to be able to dump the full state of the closure at the end of the execution of eachRow (saving the intermediate data somewhere in the database for example) and re-initialise it again when I re-run the whole application (by loading those intermediate variable from the database).
In this particular example, I can do this manually by storing the values of prev2Val, prevVal and prevId, but I'm looking for a generic solution where knowing exactly which variables are used wouldn't be necessary.
Perhaps something like c.getState() which would return [ prev2Val: 1, prevVal: 2, prevId: 6] (for example), and where I could use c.setState([ prev2Val: 1, prevVal: 2, prevId: 6]) next time the application is executed (if there is a state stored).
I would also need to exclude sql from the list. It seems this can be done using c.@sql = null.
I realise this is unlikely to work in the general case, but I'm looking for something sufficiently generic for most cases. I've tried to dehydrate, serialize and rehydrate the closure, as described in this Groovy issue, but I'm not sure how to save and restore all the @ fields in a single operation.
Is this possible? Is there a better way to remember state between executions, assuming the list of variables used by the closure isn't necessarily known in advance?
I'm not sure this will work in the long run, and you might be better off returning a list containing the values to pass to the closure to get the next set of data, but you can interrogate the binding of the closure.
Given:
def closure = { row ->
a = 1
b = 2
c = 4
}
If you execute it:
closure( 1 )
You can then compose a function like:
def extractVarsFromClosure( Closure cl ) {
cl.binding.variables.findAll {
!it.key.startsWith( '_' ) && it.key != 'args'
}
}
Which when executed:
println extractVarsFromClosure( closure )
prints:
['a':1, 'b':2, 'c':4]
However, any 'free' variables defined in the local binding (without a def) will be in the closure's binding too, so:
fish = 42
println extractVarsFromClosure( closure )
will print:
['a':1, 'b':2, 'c':4, 'fish':42]
But
def fish = 42
println extractVarsFromClosure( closure )
will not print the value of fish