Liftweb - Maximum value of a SQL table field with Mapper

I would like to find a simple way to access the maximum value of a mapped field in Liftweb. Here is an example of what I currently do:
Mapper part
class MappedEntity extends LongKeyedMapper[MappedEntity] with IdPK {
  def getSingleton = MappedEntity
  object targetRaw extends MappedInt(this)
}

object MappedEntity extends MappedEntity with LongKeyedMetaMapper[MappedEntity]
Search part
val max = MappedEntity.findAllByInsecureSql(
    "SELECT MAX(targetRaw) AS targetRaw FROM MappedEntity",
    IHaveValidatedThisSQL("chris", "2011,11,14")
  ).head.targetRaw.get
Supposing that I work with the SQL table called MappedEntity, I want max to end up containing, as a string or an int, the maximum value stored in targetRaw.
If you have any suggestion, or any question about the setup, I will be happy to answer.

I don't believe that lift-mapper has a built-in way of running this query. In fact, it's very short on any sort of aggregate functions. All I see are some count methods.
The find* methods are only suitable for returning objects of the Mapper's own type, as you can see from their return types.
Given that there's no great way to do this in Lift as it stands, you have several options to choose from.
Use lift-squeryl-record instead of lift-mapper. Squeryl is a more complete ORM, and supports group and aggregate functions.
Create your own trait which adds max functions to a MetaMapper. This would be a bit of work, but you can use the implementation of count as a guide; a sketch follows below.
Technically, there could be a more general implementation that handles all of the aggregate functions (max, min, sum, count, ...). That may be what we in the business call 'overkill'.
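Sticking to just max, a minimal sketch of such a trait might look like the following. It is built on DB.runQuery rather than on MetaMapper's internals, pulls the column and table names from the mapper itself, and assumes a numeric column and a non-empty table:

import net.liftweb.mapper._

// Sketch only: mix into a LongKeyedMetaMapper to get a generic max helper.
trait AggregateMax[A <: LongKeyedMapper[A]] {
  self: LongKeyedMetaMapper[A] =>

  def maxOf(field: MappedField[_, A]): Option[Long] =
    DB.runQuery("SELECT MAX(" + field.dbColumnName + ") FROM " + dbTableName)
      ._2.headOption.flatMap(_.headOption).map(_.toLong)
}

// Usage: object MappedEntity extends MappedEntity
//   with LongKeyedMetaMapper[MappedEntity] with AggregateMax[MappedEntity]
// then:  MappedEntity.maxOf(MappedEntity.targetRaw)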
Just write some SQL. Lift offers a loan-pattern way of obtaining a connection to the database. It also has loan-pattern helpers for preparing statements and executing queries in such a way that everything is automagically closed when you're done with it.
DB.use(DefaultConnectionIdentifier) { conn =>
  // execute query
}
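For example, the MAX query end to end, assuming Lift's DB.exec loan helper (it hands the ResultSet to the closure and closes the statement afterwards):

val max: Option[Int] =
  DB.use(DefaultConnectionIdentifier) { conn =>
    DB.exec(conn, "SELECT MAX(targetRaw) FROM MappedEntity") { rs =>
      if (rs.next) Some(rs.getInt(1)) else None
    }
  }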
Find the object holding the value you're looking for, then just retrieve that field. This has the distinct disadvantage of being ugly, slow and brittle.
val max: Option[Int] = MappedEntity.findAll(
    BySql("targetRaw IN (SELECT MAX(targetRaw) FROM MappedEntity)",
      IHaveValidatedThisSQL("chris", "2011,11,14"))
  ).map(_.targetRaw.is).headOption

Here is the solution I finally used:
val max = DB.runQuery("SELECT YEAR(MAX(targetRaw)) FROM targetTable")._2.head.head.toInt
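For reference, DB.runQuery returns a pair of (column names, rows of strings), so the head.head chain above will throw if the query returns no rows. A slightly more defensive variant (a sketch) would be:

val max: Option[Int] =
  DB.runQuery("SELECT YEAR(MAX(targetRaw)) FROM targetTable")
    ._2.headOption.flatMap(_.headOption).map(_.toInt)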


Golang SQL rows.Scan function for all fields of generic type

I want to use the Scan() function from the sql package to execute a select statement that might (or might not) return multiple rows, and return those results from my function.
I'm new to Golang generics, and am confused about how to achieve this.
Usually, we would use the Scan function on a *sql.Rows and provide references to all fields of the expected 'result type' that we want to read the rows into, e.g.:
var alb Album
rows.Scan(&alb.ID, &alb.Title, &alb.Artist,
    &alb.Price, &alb.Quantity)
where Album is a struct type with those five fields shown.
Now, to avoid writing a near-identical function N times for the N different SQL tables I have, I want to use a generic type R instead. R is of the generic interface type Result, which I will define as one of N different structs:
type Result interface {
    StructA | StructB | StructC
}

func ExecSelect[R Result](conn *sql.DB, cmd Command, template R) []R
How can I now write rows.Scan(...) so that the Scan operation is applied to all fields of R's concrete type? E.g. I would want rows.Scan(&res.Field1, &res.Field2, ...), where res is of type R and Scan receives all fields of the current concrete type of R. And do I actually need to provide a 'template' argument of R's concrete type, so that at runtime it becomes clear which struct is relevant?
Please correct me on any mistakes I'm making with the generics.
This is a poor use case for generics.
The arguments to sql.Rows.Scan are supposed to be the scan destinations, i.e. your struct fields, one for each column in the result set, and within the generic function body you do not have access to the fields of the R type parameter.
Even if you did, the structs in your Result constraint likely have different fields, so how do you envision writing generic code that works with each different set of fields?
You might accomplish what you want with a package that provides arbitrary struct scanning, like sqlx with facilities such as StructScan, but that uses reflection under the hood to map the struct fields onto sql.Rows.Scan arguments, so you are not getting any benefit at all from generics.
If anything, you are making it worse, because now you have the additional performance overheads of using type parameters.
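To make the sqlx route concrete, here is a sketch; the Album struct comes from the question's example, while the db tags, table and column names are assumptions:

package main

import "github.com/jmoiron/sqlx"

// Album mirrors the question's struct; `db` tags drive StructScan's
// reflection-based column-to-field mapping.
type Album struct {
    ID       int64   `db:"id"`
    Title    string  `db:"title"`
    Artist   string  `db:"artist"`
    Price    float64 `db:"price"`
    Quantity int     `db:"quantity"`
}

func loadAlbums(db *sqlx.DB) ([]Album, error) {
    rows, err := db.Queryx("SELECT id, title, artist, price, quantity FROM album")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var albums []Album
    for rows.Next() {
        var a Album
        if err := rows.StructScan(&a); err != nil {
            return nil, err
        }
        albums = append(albums, a)
    }
    return albums, rows.Err()
}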

Using another list in an Entity Framework query

Looking to achieve the below, but it is failing because locations.Any() is treated as an IEnumerable rather than an IQueryable, and scalar functions invoked via EF require IQueryable. I need this filter to happen at the database level (not materialize the list first).
How can I get locations.Any() to be treated as an IQueryable here? I understand the list doesn't exist in the database, but is there a way for Entity Framework to understand this Any and build an AND statement with nested ORs in SQL?
public Address GetAddresses(List<Location> locations)
{
    return _context.Addresses
        .Where(a => locations.Any(l =>
            MyContext.CustomFunction(l.PropA, l.PropB, a.PropA, a.PropB) > 1))
        .FirstOrDefault(); // return a single match to satisfy the signature
}

[DbFunction("fn_DistanceBetweenCoordinates", "dbo")]
public static decimal CustomFunction(decimal SourceLatitude, decimal SourceLongitude,
    decimal TargetLatitude, decimal TargetLongitude)
{
    throw new NotImplementedException();
}
You could achieve this by moving CustomFunction into the database and using that server-side function when querying from EF.
Please read User defined function mapping and try to adapt the sample according to your use case.
We haven't seen the body of CustomFunction, so it's impossible to tell whether moving it from a client-side UDF to a server-side one is viable.
We also don't know how the locations list is populated; depending on how that is done, adapting the example code might become more cumbersome.
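For illustration, the mapping described in those docs would look roughly like this, assuming EF Core and the MyContext class from the question:

// Sketch: register the server-side UDF so EF translates calls to
// CustomFunction into dbo.fn_DistanceBetweenCoordinates in generated SQL.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder
        .HasDbFunction(typeof(MyContext).GetMethod(nameof(CustomFunction)))
        .HasName("fn_DistanceBetweenCoordinates")
        .HasSchema("dbo");
}

Note that even with the UDF mapped, the in-memory locations list remains a hurdle: EF cannot translate Any() over a local list of complex objects, so the coordinate pairs either need to be few enough to unroll into a chain of OR conditions, or need to live on the server side (e.g. a temp table or table-valued parameter).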

Are extensible records useless in Elm 0.19?

Extensible records were one of Elm's most amazing features, but since v0.16 adding and removing fields is no longer possible. And this puts me in an awkward position.
Consider an example. I want to give a name to a random thing t, and extensible records provide me a perfect tool for this:
type alias Named t = { t | name: String }
"Okay," says the compiler. Now I need a constructor, i.e. a function that equips a thing with a specified name:
equip : String -> t -> Named t
equip name thing = { thing | name = name } -- Oops! Type mismatch
Compilation fails, because the { thing | name = ... } syntax assumes thing is a record that already has a name field, and the type system can't assure this. In fact, with Named t I've tried to express the opposite: t should be a record type without its own name field, and the function adds this field to the record. Either way, field addition is necessary to implement the equip function.
So it seems impossible to write equip in a polymorphic manner, but that's probably not such a big deal. After all, any time I'm going to give a name to some concrete thing, I can do it by hand. Much worse, the inverse function extract : Named t -> t (which erases the name of a named thing) requires a field-removal mechanism, and thus is not implementable either:
extract : Named t -> t
extract thing = thing -- Error: No implicit upcast
This would be an extremely important function, because I have tons of routines that accept old-fashioned unnamed things, and I need a way to use them on named things. Of course, massively refactoring those functions is not a viable solution.
At last, after this long introduction, let me state my questions:
Does modern Elm provide some substitute for the old, deprecated field addition/removal syntax?
If not, is there some built-in function like equip and extract above? For every custom extensible record type, I would like to have a polymorphic analyzer (a function that extracts its base part) and a polymorphic constructor (a function that combines the base part with the additions and produces the full record).
Negative answers to both (1) and (2) would force me to implement Named t in a more traditional way:
type Named t = Named String t
In this case, I can't see the purpose of extensible records. Is there a positive use case, a scenario in which extensible records play a critical role?
Type { t | name : String } means a record that has a name field. It does not extend the t type but, rather, extends the compiler’s knowledge about t itself.
So in fact the type of equip is String -> { t | name : String } -> { t | name : String }.
What is more, as you noticed, Elm no longer supports adding fields to records, so even if the type system allowed what you want, you still could not do it. The { thing | name = name } syntax only supports updating records of type { t | name : String }.
Similarly, there is no support for deleting fields from a record.
If you really need types from which you can add or remove fields, you can use a Dict, as sketched below. The other options are writing the transformers manually, or creating and using a code generator (this was the recommended solution for JSON decoding boilerplate for a while).
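For illustration, a Dict-based equip/extract could look like this; the obvious price is that all values must share a single type (String here):

import Dict exposing (Dict)

equip : String -> Dict String String -> Dict String String
equip name thing =
    Dict.insert "name" name thing

extract : Dict String String -> Dict String String
extract thing =
    Dict.remove "name" thing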
And regarding extensible records: Elm does not really support the "extensible" part much any more – the only remaining piece is the { t | name : u } -> u projection, so perhaps they should be called just scoped records. The Elm docs themselves acknowledge that the extensibility is not very useful at the moment.
You could just wrap the t type together with a name, but it wouldn't make a big difference compared to the approach with a custom type:
type alias Named t = { val : t, name : String }

equip : String -> t -> Named t
equip name thing = { val = thing, name = name }

extract : Named t -> t
extract thing = thing.val
Is there a positive use case, a scenario in which extensible records play a critical role?
Yes. They are useful when your application's Model grows too large and you face the question of how to scale out your application. Extensible records let you slice up the model in arbitrary ways, without committing to particular slices long term. If you sliced it up by splitting it into several smaller nested records, you would be committed to that particular arrangement, which tends to lead to nested TEA and the 'out message' pattern; usually a bad design choice.
Instead, use extensible records to describe slices of the model, and group functions that operate over particular slices into their own modules. If you later need to work across different areas of the model, you can create a new extensible record for that; see the sketch at the end of this answer.
It's described by Richard Feldman in his Scaling Elm Apps talk:
https://www.youtube.com/watch?v=DoA4Txr4GUs&ab_channel=ElmEurope
I agree that extensible records can seem a bit useless in Elm, but it is a very good thing that they are there to solve the scaling issue in this way.
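To make the slicing idea concrete, here is a small sketch (the model and field names are invented for illustration):

type alias Model =
    { name : String
    , age : Int
    , cartItems : Int
    }

-- A slice: accepts any record with these two fields, including Model itself.
type alias Profile r =
    { r | name : String, age : Int }

describe : Profile r -> String
describe profile =
    profile.name ++ " (" ++ String.fromInt profile.age ++ ")"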

F# Record vs Class

I used to think of a Record as a container for (immutable) data, until I came across some enlightening reading.
Given that functions can be seen as values in F#, record fields can hold function values as well. This offers possibilities for state encapsulation.
module RecordFun =
    type CounterRecord = { GetState : unit -> int; Increment : unit -> unit }

    // Constructor
    let makeRecord () =
        let count = ref 0
        { GetState = (fun () -> !count); Increment = (fun () -> incr count) }

module ClassFun =
    // Equivalent
    type CounterClass() =
        let count = ref 0
        member x.GetState() = !count
        member x.Increment() = incr count
Usage:
let counter = RecordFun.makeRecord ()
counter.GetState()
counter.Increment()
counter.GetState()
It seems that, apart from inheritance, there's not much you can do with a class that you couldn't do with a record and a helper function, and the record plays better with functional concepts such as pattern matching, type inference, higher-order functions, and generic equality.
Analyzing further, the record could be seen as an interface implemented by the makeRecord() constructor. This applies a sort of separation of concerns: the logic in the makeRecord function can be changed without risk of breaking the contract, i.e. the record fields.
This separation becomes apparent when replacing the makeRecord function with a module that matches the type's name (ref Christmas Tree Record).
module RecordFun =
    type CounterRecord = { GetState : unit -> int; Increment : unit -> unit }

    // Module showing allowed operations
    [<CompilationRepresentation(CompilationRepresentationFlags.ModuleSuffix)>]
    module CounterRecord =
        let private count = ref 0
        let create () =
            { GetState = (fun () -> !count); Increment = (fun () -> incr count) }
Q's: Should records be looked upon as simple containers for data, or does state encapsulation make sense? Where should we draw the line; when should we use a class instead of a record?
Note the model from the linked post is pure, whereas the code above is not.
I do not think there is a single universal answer to this question. It is certainly true that records and classes overlap in some of their potential uses and you can choose either of them.
The one difference that is worth keeping in mind is that the compiler automatically generates structural equality and structural comparison for records, which is something you do not get for free for classes. This is why records are an obvious choice for "data types".
The rules that I tend to follow when choosing between records & classes are:
Use records for data types (to get structural equality for free)
Use classes when I want to provide C#-friendly or .NET-style public API (e.g. with optional parameters). You can do this with records too, but I find classes more straightforward
Use records for types used locally - I think you often end up using records directly (e.g. creating them) and so adding/removing fields is more work. This is not a problem for records that are used within just a single file.
Use records if I need to create clones using the { ... with ... } syntax. This is particularly nice if you are writing some recursive processing and need to keep state.
I don't think everyone would agree with this and it is not covering all choices - but generally speaking, using records for data and local types and classes for the rest seems like a reasonable method for choosing between the two.
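As a small illustration of the first and last points, with invented names:

type Point = { X : int; Y : int }

let a = { X = 1; Y = 2 }
let b = { X = 1; Y = 2 }
let same = (a = b)        // true: structural equality comes for free
let c = { a with Y = 3 }  // copy-and-update clone, keeping X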
If you want to achieve data hiding in a record, I feel there are better ways of going about it, like the abstract data type "pattern".
Take a look at this:
type CounterRecord =
    private { mutable count : int }

    member this.Count = this.count
    member this.Increment() = this.count <- this.count + 1
    static member Make() = { count = 0 }
The record constructor is private, so the only way of constructing an instance is through the static Make member.
The count field is mutable - not something to be proud of, but I'd say fair game for your counter example. It is also not accessible from outside the module where it's defined, due to the private modifier; to read it from outside, you have the read-only Count property.
Like in your example, there's an Increment function on the record that mutates the internal state.
Unlike your example, you can compare CounterRecord instances using auto-generated structural comparisons - as Tomas mentioned, the selling point of records.
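For completeness, usage of this encapsulated record looks like this:

let counter = CounterRecord.Make()
counter.Increment()
let state = counter.Count  // 1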
As for records-as-interfaces, you might see that sometimes in the field, though I think it's more of a JavaScript/Haskell idiom. Unlike those languages, F# has the interface system of .NET, made even stronger when coupled with object expressions. I feel there's not much reason to repurpose records for that.

Implement LINQ to SQL expressions for a database with custom date/time format

I'm working with an MS-SQL database with tables that use a customized date/time format stored as an integer. The format maintains time order, but is not one-to-one with ticks. Simple conversions are possible from the custom format to hours / days / months / etc. - for example, I could derive the month with the SQL statement:
SELECT ((CustomDateInt / 60 / 60 / 24) % 13) AS Month FROM HistoryData
From these tables, I need to generate reports, and I'd like to do this using LINQ-to-SQL. I'd like to have the ability to choose from a variety of grouping methods based on these dates (by month / by year / etc.).
I'd prefer to use the group command in LINQ, targeting one of these grouping methods. For performance, I would like the grouping to be performed in the database, rather than pulling all my data into POCO objects first and custom-grouping them afterwards. For example:
var results = from row in myHistoryDataContext.HistoryData
              group row by CustomDate.GetMonth(row.CustomDateInt) into grouping
              select new int?[] { grouping.Key, grouping.Count() };
How do I implement my grouping functions (like CustomDate.GetMonth) so that they will be transformed into SQL commands automatically and performed in the database? Do I need to provide them as Func<int, int> objects or Expression<> objects, or by some other means?
You can't write a method and expect L2S to automatically know how to translate it to SQL. L2S knows about some of the more common methods provided as part of the .NET Framework for primitive types; anything beyond that, and it will not know how to perform the translation.
If you have to keep your db model as is:
You can define methods for interacting with the custom format and use them in queries. However, you'll have to help L2S with the translation. To do this, you would look for calls to your methods in the expression tree generated for your query and replace them with an implementation L2S can translate. One way to do this is to provide a proxy IQueryProvider implementation that inspects the expression tree for a given query and performs the replacement before passing it off to the L2S IQueryProvider for translation and execution. The expression tree L2S will see can be translated to SQL because it only contains the simple arithmetic operations used in the definitions of your methods.
If you have the option to change your db model:
You might be better off using a standard DateTime column type for your data. Then you could model the column as System.DateTime and use its methods (which L2S understands). You could achieve this by modifying the table itself, or by providing a view that performs the conversion and having L2S interact with the view.
Update:
Since you need to keep your current model, you'll want to translate your methods for L2S. Our objective is to replace calls to some specific methods in a L2S query with a lambda L2S can translate. All other calls to these methods will of course execute normally. Here's an example of one way you could do that...
static class DateUtils
{
    public static readonly Expression<Func<int, int>> GetMonthExpression =
        t => (t / 60 / 60 / 24) % 13;

    static readonly Func<int, int> GetMonthFunction;

    static DateUtils()
    {
        GetMonthFunction = GetMonthExpression.Compile();
    }

    public static int GetMonth(int t)
    {
        return GetMonthFunction(t);
    }
}
Here we have a class that defines a lambda expression for getting the month from an integer time. To avoid defining the math twice, you could compile the expression and then invoke it from your GetMonth method as shown here. Alternatively, you could take the body of the lambda and copy it into the body of the GetMonth method. That would skip the runtime compilation of the expression and likely execute faster -- up to you which you prefer.
Notice that the signature of the GetMonthExpression lambda matches the GetMonth method exactly. Next we'll inspect the query expression using System.Linq.Expressions.ExpressionVisitor, find calls to GetMonth, and replace them with our lambda, having substituted t with the value of the first argument to GetMonth.
class DateUtilMethodCallExpander : ExpressionVisitor
{
    protected override Expression VisitMethodCall(MethodCallExpression node)
    {
        LambdaExpression Substitution = null;

        // Check if the method call is one we should replace
        if (node.Method.DeclaringType == typeof(DateUtils))
        {
            switch (node.Method.Name)
            {
                case "GetMonth": Substitution = DateUtils.GetMonthExpression; break;
            }
        }

        if (Substitution != null)
        {
            // We'd like to replace the method call; we'll need to wire up the
            // method call arguments to the parameters of the lambda
            var Replacement = new LambdaParameterSubstitution(Substitution.Parameters, node.Arguments)
                .Visit(Substitution.Body);
            return Replacement;
        }

        return base.VisitMethodCall(node);
    }
}
class LambdaParameterSubstitution : ExpressionVisitor
{
    readonly ReadOnlyCollection<ParameterExpression> Parameters;
    readonly ReadOnlyCollection<Expression> Replacements;

    public LambdaParameterSubstitution(
        ReadOnlyCollection<ParameterExpression> parameters,
        ReadOnlyCollection<Expression> replacements)
    {
        Parameters = parameters;
        Replacements = replacements;
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        // See if the parameter is one we should replace
        int p = Parameters.IndexOf(node);
        if (p >= 0)
        {
            return Replacements[p];
        }
        return base.VisitParameter(node);
    }
}
The first class here will visit the query expression tree and find references to GetMonth (or any other method requiring substitution) and replace the method call. The replacement is provided in part by the second class, which inspects a given lambda expression and replaces references to its parameters.
Having transformed the query expression, L2S will never see calls to your methods, and it can now execute the query as expected.
In order to intercept the query before it hits L2S in a convenient way, you can create your own IQueryProvider that acts as a proxy in front of L2S. You would perform the above replacements in your implementation of Execute and CreateQuery, and then pass the new query expression to the L2S provider.
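A bare-bones sketch of such a proxy (error handling and the IQueryable wrapper that routes queries through it are elided):

class RewritingQueryProvider : IQueryProvider
{
    readonly IQueryProvider Inner;

    public RewritingQueryProvider(IQueryProvider inner) { Inner = inner; }

    // Rewrite DateUtils calls before the L2S provider sees the tree.
    Expression Rewrite(Expression e) { return new DateUtilMethodCallExpander().Visit(e); }

    public IQueryable CreateQuery(Expression e) { return Inner.CreateQuery(Rewrite(e)); }
    public IQueryable<T> CreateQuery<T>(Expression e) { return Inner.CreateQuery<T>(Rewrite(e)); }
    public object Execute(Expression e) { return Inner.Execute(Rewrite(e)); }
    public T Execute<T>(Expression e) { return Inner.Execute<T>(Rewrite(e)); }
}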
I think you can register your custom function in the DataContext and use it in the LINQ query. It is explained very well in this post: http://msdn.microsoft.com/en-us/library/bb399416.aspx
Hope it helps.
Found a reference to some existing code which implements an IQueryable provider as Michael suggests.
http://tomasp.net/blog/linq-expand.aspx
Assuming that code works, the other lingering issue is that you would have to have an Expression property for each type that contains the date.
The code for avoiding that appears to be a bit cumbersome, though it would still avoid the sort of errors you're trying to prevent by putting the calculation in a method:
Group Expression:
group row by CustomDate.GetMonth(row, x => x.customdate).Compile().Invoke(row)
Method to Return Group Expression:
public class CustomDate
{
    public static Expression<Func<TEntity, int>> GetMonth<TEntity>(TEntity entity, Func<TEntity, int> func)
    {
        return x => ((func.Invoke(entity) / 60 / 60 / 24) % 13);
    }
}
I'm not entirely sure whether that nested .Invoke would cause problems with the expandable expression, or whether the concept would have to be tweaked a bit more, but that code seems to offer an alternative to building a custom IQueryProvider for simple mathematical expressions.
There doesn't appear to be any way to instruct LINQ-to-SQL to call your SQL UDF directly. However, I believe you can encapsulate a reusable C# implementation in System.Linq.Expressions.Expression trees...
public class CustomDate
{
    public static readonly Expression<Func<int, int>> GetMonth =
        customDateInt => (customDateInt / 60 / 60 / 24) % 13;
}

var results = from row in myHistoryDataContext.HistoryData
              group row by CustomDate.GetMonth(row.CustomDateInt) into grouping
              select new int?[] { grouping.Key, grouping.Count() };
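Note that L2S cannot invoke an Expression<> field directly as written above; an expression-expansion step is still needed. With LINQKit (the library that grew out of the linq-expand post referenced earlier), the query might look like this sketch, where AsExpandable and Invoke are LINQKit extensions:

var results = from row in myHistoryDataContext.HistoryData.AsExpandable()
              group row by CustomDate.GetMonth.Invoke(row.CustomDateInt) into grouping
              select new int?[] { grouping.Key, grouping.Count() };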