ColdFusion EntityLoad reading a calculated field - ORM

I have an Entity with a lot of fields.
<cfscript>
component persistent="true" output="false" {
...
property name="placeholder" default = 0;
property name="expired" update=false insert=false;
property name="admin" default = 0;
property name="partner" default = 0;
Much later, but in the same request, I do this:
if (!arguments.Account.getPlaceholder() ) local.arRoles.append("account");
if (!arguments.Account.getExpired() ) local.arRoles.append("event");
if (arguments.Account.getAdmin() ) local.arRoles.append("admin");
if (arguments.Account.getPartner() ) local.arRoles.append("partner");
And I get an error when it hits the getExpired() check. A dump of the object looks like it should be OK.

Expired is not like the other fields: it is backed by a calculation done in the database. That is why it is declared as
property name="expired" update=false insert=false;
Furthermore
Account = EntityLoadByPK("Accounts", arguments.id);
may not have what is expected; a read from the DB must be forced:
Account = EntityLoadByPK("Accounts", arguments.id);
EntityReload(Account);
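Putting the pieces together, a minimal sketch of the working sequence (names taken from the snippets above): load the entity, force a re-read so the database-calculated expired column is populated, then build the role array.

// Force a fresh read so insert=false/update=false columns are populated
Account = EntityLoadByPK("Accounts", arguments.id);
EntityReload(Account);

local.arRoles = [];
if (!Account.getPlaceholder()) local.arRoles.append("account");
if (!Account.getExpired())     local.arRoles.append("event");
if (Account.getAdmin())        local.arRoles.append("admin");
if (Account.getPartner())      local.arRoles.append("partner");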

Related

Data class .copy only if nullable parameter is not null

I have a front-end application that sends me data to update my User (updatedUser). Since I don't want to send the whole user data, I'm only sending the fields that have changed. Now I want to update my user data with the changes provided, so I'd like to know if there is a more elegant way to do this than just a list of ifs/lets. I'm quite new to Kotlin, so don't expect too much from me^^
Not so elegant way:
changeData.firstname?.let { updatedUser.firstname = it }
changeData.lastname?.let { updatedUser.lastname = it }
...
Expected (doesn't work - type mismatch):
updatedUser.copy(
firstname = changeData?.firstname,
lastname = changeData?.lastname,
...)
The reason you get a type mismatch is that Kotlin distinguishes between a non-nullable String type and a nullable String? type:
var variableName: String = "myData" // if you want a non-nullable String
var variableName: String? = "myDataThatCouldBeNull" // if you want a String that could be null
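A sketch of one way to keep copy() and avoid the mismatch, assuming User is a data class with non-nullable fields and changeData itself is not null: fall back to the current value whenever an incoming field is null, using the elvis operator.

// Each argument is non-null: take the incoming value, or keep the existing one
val mergedUser = updatedUser.copy(
    firstname = changeData.firstname ?: updatedUser.firstname,
    lastname = changeData.lastname ?: updatedUser.lastname
    // ...same pattern for the remaining fields
)

Each "field ?: fallback" expression has the non-nullable type that copy() expects, which is what removes the type mismatch.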

In VTD-XML, how do I add a new attribute to a tag with existing attributes?

I'm using VTD-XML to update XML files. As part of this, I am trying to find a flexible way of maintaining attributes on an element. So if my original element is:
<MyElement name="myName" existattr="orig" />
I'd like to be able to update it to this:
<MyElement name="myName" existattr="new" newattr="newValue" />
I'm using a Map to manage the attribute/value pairs in my code and when I update the XML I'm doing something like the following:
private XMLModifier xm = new XMLModifier();

xm.bind(vn);
for (String key : attr.keySet()) {
    int i = vn.getAttrVal(key);
    if (i != -1) {
        xm.updateToken(i, attr.get(key));
    } else {
        xm.insertAttribute(key + "='" + attr.get(key) + "'");
    }
}
vn = xm.outputAndReparse();
This works for updating existing attributes; however, when the attribute doesn't already exist, it hits the insert (insertAttribute) and I get a "ModifyException":
com.ximpleware.ModifyException: There can be only one insert per offset
at com.ximpleware.XMLModifier.insertBytesAt(XMLModifier.java:341)
at com.ximpleware.XMLModifier.insertAttribute(XMLModifier.java:1833)
My guess is that, as I'm not manipulating the offset directly, this might be expected. However, I can see no function to insert an attribute at a specific position in the element (at the end).
My suspicion is that I will need to do it at the "offset" level using something like xm.insertBytesAt(int offset, byte[] content). As this is an area I haven't needed to get into yet, is there a way to calculate the offset at which I can insert (just before the end of the tag)?
Of course I may be mis-using VTD in some way here - if there is a better way of achieving this, I'm happy to be directed.
Thanks
That's an interesting limitation of the API I hadn't encountered yet. It would be great if vtd-xml-author could elaborate on technical details and why this limitation exists.
As a solution to your problem, a simple approach would be to accumulate your key-value pairs to be inserted as a String, and then to insert them in a single call after your for loop has terminated.
I've tested that this works as per your code:
private XMLModifier xm = new XMLModifier();

xm.bind(vn);
String insertedAttributes = "";
for (String key : attr.keySet()) {
    int i = vn.getAttrVal(key);
    if (i != -1) {
        xm.updateToken(i, attr.get(key));
    } else {
        // Store the key-values to be inserted as attributes
        insertedAttributes += " " + key + "='" + attr.get(key) + "'";
    }
}
if (!insertedAttributes.equals("")) {
    // Insert all new attributes in a single call
    xm.insertAttribute(insertedAttributes);
}
This will also work if you need to update the attributes of multiple elements: simply nest the above code inside while (autoPilot.evalXPath() != -1) and be sure to reset insertedAttributes = ""; for each element, as sketched below.
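For example, a sketch of that nesting (assuming the usual com.ximpleware imports, the same vn and attr variables as above, and an XPath that selects the elements to update):

// Apply the attribute map to every element matched by the XPath, collecting
// new attributes so insertAttribute() is called only once per element.
public VTDNav updateAttributes(VTDNav vn, Map<String, String> attr, String xpath) throws Exception {
    AutoPilot autoPilot = new AutoPilot(vn);
    autoPilot.selectXPath(xpath);

    XMLModifier xm = new XMLModifier();
    xm.bind(vn);

    while (autoPilot.evalXPath() != -1) {
        String insertedAttributes = "";
        for (String key : attr.keySet()) {
            int i = vn.getAttrVal(key);
            if (i != -1) {
                xm.updateToken(i, attr.get(key));   // existing attribute: update in place
            } else {
                insertedAttributes += " " + key + "='" + attr.get(key) + "'";   // collect new attributes
            }
        }
        if (!insertedAttributes.equals("")) {
            xm.insertAttribute(insertedAttributes); // one insert per element
        }
    }
    return xm.outputAndReparse();                   // hand back a nav over the updated document
}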
Hope this helps.

HibernateException: Errors in named query

When running a particular unit-test, I am getting the exception:
Caused by: org.hibernate.HibernateException: Errors in named queries: UPDATE_NEXT_FIRE_TIME
at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:437)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1385)
at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:954)
at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:891)
... 44 more
for the named query defined here:
@Entity(name="fireTime")
@Table(name="qrtz_triggers")
@NamedQueries({
    @NamedQuery(
        name="UPDATE_NEXT_FIRE_TIME",
        query="update fireTime t set t.next_fire_time = :epochTime where t.trigger_name = 'CalculationTrigger'")
})
public class JpaFireTimeUpdaterImpl implements FireTimeUpdater {

    @Id
    @Column(name="next_fire_time", insertable=true, updatable=true)
    private long epochTime;

    public JpaFireTimeUpdaterImpl() {}

    public JpaFireTimeUpdaterImpl(final long epochTime) {
        this.epochTime = epochTime;
    }

    @Override
    public long getEpochTime() {
        return this.epochTime;
    }

    public void setEpochTime(final long epochTime) {
        this.epochTime = epochTime;
    }
}
After debugging as deep as I could, I've found that the exception occurs in w.statement(hqlAst) in QueryTranslatorImpl:
private HqlSqlWalker analyze(HqlParser parser, String collectionRole) throws QueryException, RecognitionException {
    HqlSqlWalker w = new HqlSqlWalker( this, factory, parser, tokenReplacements, collectionRole );
    AST hqlAst = parser.getAST();

    // Transform the tree.
    w.statement( hqlAst );

    if ( AST_LOG.isDebugEnabled() ) {
        ASTPrinter printer = new ASTPrinter( SqlTokenTypes.class );
        AST_LOG.debug( printer.showAsString( w.getAST(), "--- SQL AST ---" ) );
    }

    w.getParseErrorHandler().throwQueryException();

    return w;
}
Is there something wrong with my query or annotations?
A @NamedQuery must be written in JPQL, but this query mixes names of persistent attributes with names of database columns. Database column names cannot be used in JPQL.
In this case, instead of next_fire_time, the name of the persistent attribute, epochTime, should be used. Also, trigger_name looks more like the name of a database column than the name of a persistent attribute, and it does not seem to be mapped in your current class at all. Once it is mapped, the query is as follows:
update fireTime t set t.epochTime = :epochTime
where t.triggerName = 'CalculationTrigger'
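For illustration, the corrected mapping could look roughly like this (a sketch only; the name and type of the trigger_name field are assumed, since it is not shown in the original class):

import javax.persistence.*;

@Entity(name="fireTime")
@Table(name="qrtz_triggers")
@NamedQueries({
    @NamedQuery(
        name="UPDATE_NEXT_FIRE_TIME",
        query="update fireTime t set t.epochTime = :epochTime where t.triggerName = 'CalculationTrigger'")
})
public class JpaFireTimeUpdaterImpl implements FireTimeUpdater {

    @Id
    @Column(name="next_fire_time", insertable=true, updatable=true)
    private long epochTime;

    // Assumed mapping for the column used in the WHERE clause
    @Column(name="trigger_name")
    private String triggerName;

    // constructors, getters and setters as in the original class
}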
If a SQL query is preferred, then @NamedNativeQuery should be used instead.
As a side note, the JPA 2.0 specification doesn't encourage changing the primary key:
The application must not change the value of the primary key[10]. The
behavior is undefined if this occurs.[11]
In general, entities are not aware of changes made via JPQL queries. That gets especially interesting when trying to refresh an entity that does not exist anymore (because its primary key was changed).
Additionally, the naming is a little bit confusing:

- The name of the class looks more like the name of a service class than the name of an entity.
- Starting the name of an entity with a lower-case letter is a rather rare style.
- The name of the entity, the name of the table, and the name of the class do not match too well.

Proper Way to Retrieve More than 128 Documents with RavenDB

I know variants of this question have been asked before (even by me), but I still don't understand a thing or two about this...
It was my understanding that one could retrieve more documents than the 128 default setting by doing this:
session.Advanced.MaxNumberOfRequestsPerSession = int.MaxValue;
And I've learned that a WHERE clause should be an ExpressionTree instead of a Func, so that it's treated as Queryable instead of Enumerable. So I thought this should work:
public static List<T> GetObjectList<T>(Expression<Func<T, bool>> whereClause)
{
    using (IDocumentSession session = GetRavenSession())
    {
        return session.Query<T>().Where(whereClause).ToList();
    }
}
However, that only returns 128 documents. Why?
Note, here is the code that calls the above method:
RavenDataAccessComponent.GetObjectList<Ccm>(x => x.TimeStamp > lastReadTime);
If I add Take(n), then I can get as many documents as I like. For example, this returns 200 documents:
return session.Query<T>().Where(whereClause).Take(200).ToList();
Based on all of this, it would seem that the appropriate way to retrieve thousands of documents is to set MaxNumberOfRequestsPerSession and use Take() in the query. Is that right? If not, how should it be done?
For my app, I need to retrieve thousands of documents (that have very little data in them). We keep these documents in memory and use them as the data source for charts.
** EDIT **
I tried using int.MaxValue in my Take():
return session.Query<T>().Where(whereClause).Take(int.MaxValue).ToList();
And that returns 1024. Argh. How do I get more than 1024?
** EDIT 2 - Sample document showing data **
{
"Header_ID": 3525880,
"Sub_ID": "120403261139",
"TimeStamp": "2012-04-05T15:14:13.9870000",
"Equipment_ID": "PBG11A-CCM",
"AverageAbsorber1": "284.451",
"AverageAbsorber2": "108.442",
"AverageAbsorber3": "886.523",
"AverageAbsorber4": "176.773"
}
It is worth noting that since version 2.5, RavenDB has an "unbounded results API" to allow streaming. The example from the docs shows how to use this:
var query = session.Query<User>("Users/ByActive").Where(x => x.Active);

using (var enumerator = session.Advanced.Stream(query))
{
    while (enumerator.MoveNext())
    {
        User activeUser = enumerator.Current.Document;
    }
}
There is support for standard RavenDB queries and Lucene queries, and there is also async support.
The documentation can be found here. Ayende's introductory blog article can be found here.
The Take(n) function will only give you up to 1024 by default. However, you can change this default in Raven.Server.exe.config:
<add key="Raven/MaxPageSize" value="5000"/>
For more info, see: http://ravendb.net/docs/intro/safe-by-default
The Take(n) function will only give you up to 1024 results by default. However, you can use it together with Skip(n) to get all of them:
var points = new List<T>();
var nextGroupOfPoints = new List<T>();
const int ElementTakeCount = 1024;
int i = 0;
int skipResults = 0;
RavenQueryStatistics stats;   // needed for the out parameter below

do
{
    // Page through the results, compensating for any results the server skipped
    nextGroupOfPoints = session.Query<T>()
        .Statistics(out stats)
        .Where(whereClause)
        .Skip(i * ElementTakeCount + skipResults)
        .Take(ElementTakeCount)
        .ToList();
    i++;
    skipResults += stats.SkippedResults;

    points = points.Concat(nextGroupOfPoints).ToList();
}
while (nextGroupOfPoints.Count == ElementTakeCount);

return points;
RavenDB Paging
The number of requests per session is a separate concept from the number of documents retrieved per call. Sessions are short-lived and are expected to have only a few calls issued over them.
If you are getting more than 10 of anything from the store (even fewer than the default 128) for human consumption, then something is wrong, or your problem requires different thinking than a truckload of documents coming from the data store.
RavenDB indexing is quite sophisticated. There is a good article about indexing here and about facets here.
If you need to perform data aggregation, create a map/reduce index that produces the aggregated data, e.g.:
Index:

// Map
from post in docs.Posts
select new { post.Author, Count = 1 }

// Reduce
from result in results
group result by result.Author into g
select new
{
    Author = g.Key,
    Count = g.Sum(x => x.Count)
}
Query:
session.Query<AuthorPostStats>("Posts/ByUser/Count").Where(x => x.Author == author).ToList();
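For comparison, roughly the same index written as a class, in the style of the AbstractIndexCreationTask example further down (a sketch; the Post and AuthorPostStats shapes with Author/Count properties are assumed):

// Map/reduce index counting posts per author
public class Posts_ByUser_Count : AbstractIndexCreationTask<Post, AuthorPostStats>
{
    public Posts_ByUser_Count()
    {
        // Map: emit one entry per post
        Map = posts => from post in posts
                       select new { post.Author, Count = 1 };

        // Reduce: sum the counts per author
        Reduce = results => from result in results
                            group result by result.Author into g
                            select new
                            {
                                Author = g.Key,
                                Count = g.Sum(x => x.Count)
                            };
    }
}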
You can also use a predefined index with the Stream method. You may use a Where clause on indexed fields.
var query = session.Query<User, MyUserIndex>();
// or, filtering on an indexed field:
var query = session.Query<User, MyUserIndex>().Where(x => !x.IsDeleted);

using (var enumerator = session.Advanced.Stream<User>(query))
{
    while (enumerator.MoveNext())
    {
        var user = enumerator.Current.Document;
        // do something
    }
}
Example index:
public class MyUserIndex : AbstractIndexCreationTask<User>
{
    public MyUserIndex()
    {
        this.Map = users =>
            from u in users
            select new
            {
                u.IsDeleted,
                u.Username,
            };
    }
}
Documentation: What are indexes?
Session : Querying : How to stream query results?
Important note: the Stream method will NOT track objects. If you change objects obtained from this method, SaveChanges() will not be aware of any change.
Other note: you may get the following exception if you do not specify the index to use.
InvalidOperationException: StreamQuery does not support querying dynamic indexes. It is designed to be used with large data-sets and is unlikely to return all data-set after 15 sec of indexing, like Query() does.

Problems with CF9's EntityLoad()

I've only just started out with CF9's ORM features, and have run into a problem.
I've got a single table set up - member - which has 2 records in it.
If I try:
<cfscript>
members = EntityLoad("member");
writedump(members);
</cfscript>
...I should get an array of member objects; but I get the error:
unexpected token: member near line 1, column 6 [from member]
The error occurred in \\vmware-host\Shared Folders\Web\sites\testbed\webroot\orm\index.cfm: line 2
1 : <cfscript>
2 : members = EntityLoad("member");
3 : writedump(members);
4 : </cfscript>
If I try:
<cfscript>
members = EntityLoad("member", {});
writedump(members);
</cfscript>
...I get the expected array of 2 member objects - but it takes 5-10 seconds to return it.
But if I request a unique object:
<cfscript>
members = EntityLoad("member", 1, true);
writedump(members);
</cfscript>
...I get the result returned instantaneously.
Any ideas as to what the problem(s) is/are?
member.cfc:
component output="false" persistent="true"
{
    // identifier
    property name="memberid" fieldtype="id";

    // properties
    property name="firstname";
    property name="lastname";
    property name="address1";
    property name="address2";
    property name="city";
    property name="postcode";
    property name="country";
    property name="email";
    property name="telephone";
    property name="uuid";
    property name="password";
}
OK, I've figured it out...
It turns out that "member" is a (semi-)reserved word in Hibernate: https://forum.hibernate.org/viewtopic.php?f=1&t=1005886&start=0
Changing the object and table names to "sitemember" fixed the problem.
I would guess that it works fine if in the underlying HQL query there's a WHERE clause following the "SELECT FROM member"; but if you just have the basic entityload("member") then it doesn't have this WHERE clause.
I wonder if there are any other names I need to steer clear of?
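If renaming the table every time a reserved word turns up isn't appealing, one possible workaround (an untested sketch) is to keep the table called member and only change the entity name that the generated HQL sees, via the entityname and table attributes:

// member.cfc - the table stays "member", but the ORM entity is "siteMember"
component persistent="true" entityname="siteMember" table="member" output="false"
{
    property name="memberid" fieldtype="id";
    // ...remaining properties as before
}

Loading would then use the entity name instead, e.g. entityLoad("siteMember").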
Thanks for the help, Henry!