NHibernate HiLo algorithm re-using hi as lo? - nhibernate

I'm using NHibernate 3.1 with FluentNHibernate 1.2.0.712.
We're using the HiLo generator to generate Ids - with standard settings except max_lo is set to 100 (default 1000).
Our mappings all have this line in the ctor:
Id(m => m.Id)
.GeneratedBy.HiLo("100");
However, when we start fresh with a new SessionFactory and the first item is saved - let's say the next hi is 12 - it gets Id 1212 (I would have expected 1200 or 1201). Is this intended behaviour, or am I missing some vital part of the configuration?
I've tried using the default value ("1000") for max_lo, but then the above results in 12012 - still not exactly what I would expect.

I read through the NHibernate code-base. This is apparently intended behaviour: for the initial set, it 'clocks over' (for a reason beyond me, but it probably has something to do with keeping parity with Hibernate, since that has the exact same comment :-)).
For all subsequent increments, everything performs as expected.
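For anyone else hitting this, here is a minimal sketch of roughly what the generator's arithmetic appears to be (my own paraphrase for illustration, not the actual NHibernate source); it reproduces both the 1212 (max_lo = 100) and 12012 (max_lo = 1000) values from above:
using System;

// Rough paraphrase of the HiLo arithmetic - a sketch, not the real TableHiLoGenerator.
class HiLoSketch
{
    private readonly long maxLo;
    private readonly Func<long> fetchNextHi; // reads and increments the hi value stored in the database
    private long lo;
    private long hi;

    public HiLoSketch(long maxLo, Func<long> fetchNextHi)
    {
        this.maxLo = maxLo;
        this.fetchNextHi = fetchNextHi;
        lo = maxLo + 1; // forces a "clock over" on the very first call
    }

    public long Next()
    {
        if (lo > maxLo)
        {
            long hiVal = fetchNextHi(); // e.g. 12
            lo = hiVal == 0 ? 1 : 0;
            hi = hiVal * (maxLo + 1);   // 12 * 101 = 1212 (or 12 * 1001 = 12012)
        }
        return hi + lo++;               // first id 1212, then 1213, 1214, ...
    }
}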
So, closing down this question.

Related

Php: Saving Server resources - theoretical

(Using Laravel)
I have saved rows in my table. On updating, if I uncheck a checkbox, the value must be reset to its default. My checkbox submits only checked values, so I take all rows from the database, set them to the default first, and then set the needed value only for the checked rows. This is done in two lines of code:
\DB::table('tablename')->where('val', '=', $id)->update(['val' => 0]);
\DB::table('tablename')->whereIn('id', $request->checked)->update(['val' => $id]);
This is one way, but it feels awful to reset all rows to the default when only the unchecked ones need it. The other option is to use a foreach, which I don't like in controllers. So question 1 (theoretical and important): which takes more resources, a huge SQL statement or a foreach? Question 2 (practical): how would you solve this problem, considering it's Laravel?
You could add the inverse of whereIn to your first query.
\DB::table('tablename')
->whereNotIn('id', $request->checked)
->where('val', '=', $id)
->update(['val' => 0]);
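If you keep both statements, you could also wrap them in a transaction so the reset and the re-set are applied atomically (a sketch reusing the hypothetical tablename, val column and $request->checked from the question):
\DB::transaction(function () use ($id, $request) {
    // reset only the rows currently set to $id that are no longer checked
    \DB::table('tablename')
        ->whereNotIn('id', $request->checked)
        ->where('val', '=', $id)
        ->update(['val' => 0]);
    // set the checked rows
    \DB::table('tablename')
        ->whereIn('id', $request->checked)
        ->update(['val' => $id]);
});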

Can't retrieve db Model getId() anymore in activejdbc 1.4.12

I was using ActiveJDBC 1.4.9 and the following sample code was running just fine:
Client client = new Client();
client.save();
Assert.assertNotNull(client.getId());
Since I upgraded to 1.4.12, client.getId() always returns null when save() inserts a new record, i.e. the id is not being refreshed.
Did anyone notice this as well? Do I have to do anything different using this version to get the newly created id?
I cannot confirm this with version 1.4.12. For instance, I wrote this example: https://github.com/javalite/simple-example/blob/new_id. Check out the code in Main.java. As you can see, the code is identical to yours, but on line 21 it prints out a real value for the new ID.
If you can put together a simple example that replicates your issue, I will take a look.
EDIT:
Now that you have provided more info in the comments below, the problem is that you are setting the ID to an empty string: "". Because the ID is no longer null, the save() method uses an update rather than an insert. The update then uses the value of the ID to update an "existing" record and, as a result, does not do anything. Messing with the ID value is possible but not advised. Please see this for more information: http://javalite.io/surrogate_primary_keys
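To illustrate the difference (a sketch; Client is assumed to be an ordinary ActiveJDBC model with a surrogate primary key):
// Works: let ActiveJDBC assign the surrogate key on insert
Client client = new Client();
client.save();                        // issues an INSERT; the generated id is fetched back
System.out.println(client.getId());  // prints the new id

// Problematic: pre-setting the id makes save() treat the record as existing
Client broken = new Client();
broken.setId("");                     // the id is no longer null...
broken.save();                        // ...so save() issues an UPDATE, which matches no row
System.out.println(broken.getId());  // no generated id is fetched back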

phalcon querybuilder total_items always returns 1

I build a query via createBuilder(), and when executing it (getQuery()->execute()->toArray()) I get 10946 elements. I want to paginate it, so I pass it to:
$paginator = new \Phalcon\Paginator\Adapter\QueryBuilder(array(
"builder" => $builder,
"limit" => $limit,
"page" => $current_page
));
$limit is 25 and $current_page is 1, but when doing:
$page = $paginator->getPaginate();
$page->total_items;
the result is 1.
Is that a bug or am I missing something?
UPD: It seems that when counting the items it uses the generated SQL including the limit. No matter what the limit is, limit divided by items per page always equals 1. I might be mistaken.
UPD2: A colleague helped me figure this out; the bug is in the query Phalcon produces: the count() of the GROUP BY counts the grouped elements. So a workaround looks like:
$dataCount = $builder->getQuery()->execute()->count();
$page->next = $page->current + 1;
$page->before = $page->current - 1 > 0 ? $page->current - 1 : 1;
$page->total_items = $dataCount;
$page->total_pages = ceil($dataCount / 100);
$page->last = $page->total_pages;
I know this isn't much of an answer, but this is most likely a bug. The great guys at Phalcon took on a massive job that is too big to do properly in their little free time, and things like PHQL, Volt and other big but non-core components do not receive as much attention as we'd like. Also, given that most of the time in the past 6 months was spent on v2, there are nearly 500 open bugs about issues like that, and counting. I came across considerable issues in the ORM, Volt, Validation and Session, which in the end made me stick with other, less cool but more proven solutions. When v2 comes out I'm sure all attention will be on the bug list and testing; until then we are mostly on our own. Given that it's all C right now, only a few enthusiasts get involved; with v2 this will also change.
If this is the only problem you are hitting, the best approach is to update your query to get the information you need yourself, without getPaginate().
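A minimal sketch of that approach (reusing $builder, $limit and $current_page from the question; limit() with an offset is the standard query builder call):
// count the real number of rows yourself...
$dataCount = $builder->getQuery()->execute()->count();
// ...then page manually with limit + offset
$items = $builder
    ->limit($limit, ($current_page - 1) * $limit)
    ->getQuery()
    ->execute();
$total_pages = (int) ceil($dataCount / $limit);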

Number of results of find-feature in sails.js restful api (in newer versions)

I started using the sails.js framework a few months ago because I need its RESTful API.
In the first version, a simple "http://domain.com:1337/mymodel" returned all datasets of the connected MySQL database; however, after an update to v0.10.xx it returns only the first 30 results.
I searched the sails.js changelog, documentation and various examples around the web and tried several ideas, but I can't figure out how to force sails.js to return *all* results again.
Has anybody a solution for this?
Use sails.config.blueprints.defaultLimit for general record limits. This also serves as the default limit for populated associations. There's technically no way at the moment to specify "no limit" for blueprints, but you can set the limit to the max number value as long as you don't have more than 9 quadrillion records :)
config/blueprints.js
defaultLimit: Number.MAX_VALUE // Set to highest possible value
Use populate_limit in your route config options to set the populate limit on a per-route basis.
config/routes.js
"GET /user": {blueprint: populate_limit: 10}
Use populate_[alias]_limit in your route config options to set the populate limit for a particular association on a per-route basis (e.g. populate_pets_limit: 10)
config/routes.js
"GET /user": {blueprint: 'find', limit: 20, populate_limit: 10, populate_pets_limit: 5}
I'll make sure this all gets added to the docs!
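Putting those together, the two config files might look something like this (values are illustrative, not recommendations):
// config/blueprints.js
module.exports.blueprints = {
  // used when a request does not specify its own limit
  defaultLimit: Number.MAX_VALUE
};

// config/routes.js
module.exports.routes = {
  'GET /user': {
    blueprint: 'find',
    limit: 20,               // limit for the main result set
    populate_limit: 10,      // default limit for populated associations
    populate_pets_limit: 5   // limit for the "pets" association only
  }
};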
defaultLimit: -1 brings back all rows
If you only need to change the populate limit, you can use populate_limit in sails.config.blueprints:
// defaultLimit: 30
populate_limit: 999 // default value for the populate limit

NHibernate not finding named query result sets in 2nd level cache

I have a simple unit test where I execute the same NHibernate named query twice (in a different session each time) with an identical parameter. It's a simple int parameter, and since my query is a named query, I assume these two calls are identical and the results should be cached.
In fact, I can see in my log that the results ARE being cached, but with different keys. So my second query's results are never found in the cache.
Here's a snippet from my log (note how the keys are different):
(first query)
DEBUG NHibernate.Caches.SysCache2.SysCacheRegion [(null)] <(null)> -
adding new data: key= [snipped]... parameters: ['809']; named
parameters: {}#743460424 &
value=System.Collections.Generic.List`1[System.Object]
(second query)
DEBUG NHibernate.Caches.SysCache2.SysCacheRegion [(null)] <(null)> -
adding new data: key=[snipped]... parameters: ['809']; named
parameters: {}#704749285 &
value=System.Collections.Generic.List`1[System.Object]
I have NHibernate set up to use the query cache, and I have these queries set to cacheable=true. I don't know where else to look. Does anyone have any suggestions?
Thanks
-Mike
Okay - I figured this out. I was executing my named query using the following syntax:
IQuery q = session.GetNamedQuery("MyQuery")
.SetResultTransformer(Transformers.AliasToBean(typeof(MyDTO)))
.SetCacheable(true)
.SetCacheRegion("MyCacheRegion");
(which, I might add, is EXACTLY how the NHibernate docs tell you to do it.. but I digress ;) )
If you create a new AliasToBean transformer for every query, then each query object (which is the key into the cache) will be unique and you will never get a cache hit. So, in short, if you do it the way the NHibernate docs say, caching won't work.
Instead, create your transformer once in a static member variable and then use that for your query, and caching will work - like this:
private static IResultTransformer myTransformer = Transformers.AliasToBean(typeof(MyDTO));
...
IQuery q = session.GetNamedQuery("MyQuery")
.SetResultTransformer(myTransformer)
.SetCacheable(true)
.SetCacheRegion("MyCacheRegion");