Can you access statistics for a RavenDB document session?

I am doing some performance work in my unit tests and wondering whether it is possible to get access to statistics for my RavenDB session (similar to NHibernate's session statistics).
I want to know things like total query count and number of trips to the server.

Berko,
Yes, you can.
Look at the session.Advanced property; you have a number of things there. The most important is probably NumberOfRequests, the number of requests this session has made to the server.
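For example, in a unit test (a minimal sketch; the document store, the User class, and the stored documents are assumed to be set up elsewhere in your fixture):

using (var session = store.OpenSession())
{
    var user = session.Load<User>("users/1");                          // request 1
    var admins = session.Query<User>().Where(u => u.IsAdmin).ToList(); // request 2

    // Total round-trips this session has made to the server
    Console.WriteLine(session.Advanced.NumberOfRequests);              // prints 2
}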

The RavenDB Profiler can give you some of the information you need. See here:
http://ayende.com/blog/38913/ravendb-mvc-profiler-support

Related

Azure SQL DW - Control Resource Class on Query Level

I am running some ETL on my Azure SQL DW at DW500, so I have 20 concurrency slots available. Some of my queries would require resource class xlargerc, some largerc, etc., so the expected load can vary from query to query. Is there any option to control the assigned resource class directly in the query, e.g. using OPTION or any other hints? The only workaround I could find so far is to create separate users with different resource classes assigned, which is not really feasible.
Thanks in advance,
-gerhard
There is currently no option to control this at query level. You have to be logged in as the appropriate user with the appropriate resource class (smallrc, mediumrc, largerc, and xlargerc) assigned to them.
DWU500 is pretty low, with a max of 20 concurrent queries and only 20 concurrency slots. Remember an xlargerc user would take 16 of those slots, so you could only have one other mediumrc user or four smallrc users running at the same time; i.e. you could not have one largerc and one xlargerc query running at the same time. These queries would queue.
Can you tell us a bit more about your scenario? For example, why switch users during ETL? What ETL tool are you using, e.g. SSIS, Azure Data Factory, etc.?
If you think this is a worthwhile option, consider making a feedback request.
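For completeness, the separate-users workaround looks roughly like this (a hypothetical sketch; the logins, passwords, and statements are made up — a resource class is granted by adding the user to the corresponding database role, e.g. EXEC sp_addrolemember 'xlargerc', 'etl_xlarge'):

using System.Data.SqlClient;

class EtlRunner
{
    // Each statement runs under the resource class of the user it connects as
    static void RunAs(string user, string password, string sql)
    {
        var cs = "Server=yourserver.database.windows.net;Database=yourdw;" +
                 "User ID=" + user + ";Password=" + password;
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
            using (var cmd = new SqlCommand(sql, conn))
                cmd.ExecuteNonQuery();
        }
    }

    static void Main()
    {
        RunAs("etl_small", "***", "UPDATE dbo.DimDate SET IsCurrent = 0");                // 1 slot
        RunAs("etl_xlarge", "***", "INSERT INTO dbo.FactSales SELECT * FROM dbo.Stage"); // 16 slots
    }
}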

Sql query over Ignite CacheStore or over database

I am a beginner with Ignite, so I have some puzzles, one of which is as follows: when I query the cache, does it only look at what is in memory? If a value is not in memory, will the query fall back to the database? If not, how can I achieve that?
Please help me if you know. Thanks.
Queries work over in-memory data only. You can either use key access (operations like get(), getAll(), etc.) and utilize automatic read-through from the persistence store, or manually preload the data before running queries. For information on how to efficiently load a large data set into the cache, see this page: https://apacheignite.readme.io/docs/data-loading
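To illustrate the two access paths with the Ignite.NET API (a rough sketch; it assumes a cache store such as CacheJdbcPojoStore or a custom ICacheStore, read-through, and the query entities are already configured on the cache):

using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Query;

class Person
{
    public string Name { get; set; }
}

class Demo
{
    static void Main()
    {
        using (var ignite = Ignition.Start())
        {
            var cache = ignite.GetOrCreateCache<int, Person>("persons");

            // Key-based access: on a cache miss, Ignite calls the configured
            // store, so the value is loaded from the database (read-through).
            var person = cache.Get(42);

            // SQL queries only see what is already in memory, so preload
            // first; this delegates to ICacheStore.LoadCache on every node.
            cache.LoadCache(null);
            var cursor = cache.Query(new SqlQuery(typeof(Person), "Name like ?", "A%"));
            foreach (var entry in cursor)
                Console.WriteLine(entry.Value.Name);
        }
    }
}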

Postgres Paginating a FTS Query

What is the best way to paginate an FTS query? LIMIT and OFFSET spring to mind. However, I am concerned that by using limit and offset I'd be running the same query over and over (i.e., once for page 1, another time for page 2, etc.).
Will PostgreSQL be smart enough to transparently cache the query result, thus satisfying the subsequent pagination queries from a cache? If not, how do I paginate efficiently?
Edit:
The database is for single-user desktop analytics, but I still want to know what the best way is if this were a live OLTP application. I have addressed the problem in the past with SQL Server by creating an ordered set of document ids and caching the query parameters against the IDs in a separate table, clearing the cache every few hours (so as to allow new documents to enter the result set).
Perhaps this approach is viable for Postgres. But I still want to know what mechanics are present in the database and how best to leverage them. If I were a DB developer, I'd enable the query-response cache to work with the FTS system.
A server-side SQL cursor can be effectively used for this if a client session can be tied to a specific db connection that stays open during the entire session. This is because cursors cannot be shared between different connections. But if it's a desktop app with a unique connection per running instance, that's fine.
The doc for DECLARE CURSOR explains how the resultset is going to be materialized when the cursor is declared WITH HOLD in a committed transaction.
Locking shouldn't be a concern at all. Should the data be modified while the cursor is already materialized, it wouldn't affect the reader nor block the writer.
Other than that, there is no implicit query cache in PostgreSQL. The LIMIT/OFFSET technique implies a new execution of the query for each page, which may be as slow as the initial query depending on the complexity of the execution plan and the effectiveness of the buffer cache and disk cache.
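As an illustration, the cursor approach could look like this from a desktop app (a rough sketch using Npgsql; the table, tsvector column, and search term are made up):

using System;
using Npgsql;

class FtsPager
{
    static void Main()
    {
        using (var conn = new NpgsqlConnection("Host=localhost;Database=docs"))
        {
            conn.Open();

            // Materialize the result set once; WITH HOLD keeps the cursor
            // alive after the declaring transaction commits.
            using (var tx = conn.BeginTransaction())
            using (var declare = new NpgsqlCommand(
                "DECLARE fts_cur CURSOR WITH HOLD FOR " +
                "SELECT id, title FROM documents " +
                "WHERE tsv @@ plainto_tsquery('english', @q) " +
                "ORDER BY ts_rank(tsv, plainto_tsquery('english', @q)) DESC",
                conn, tx))
            {
                declare.Parameters.AddWithValue("q", "search terms");
                declare.ExecuteNonQuery();
                tx.Commit();
            }

            // Each page is then a cheap FETCH against the materialized rows.
            using (var page = new NpgsqlCommand("FETCH FORWARD 20 FROM fts_cur", conn))
            using (var reader = page.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
        }
    }
}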
Well, to be honest, what you may want is for your query to return a live cursor that you can then reuse to fetch certain portions of the results it represents. Now, I don't know if PostgreSQL supports this; MongoDB does, and I've tried going down that road, but it's not cool. For example: do you know how much time will pass between when a query is done and a second page of results from that query is demanded? Can the cursor stay open for that amount of time? And if it can, what does that mean exactly: will it block resources, such that if you have many lazy users, who start queries but take a long time to navigate through pages, your server might be bogged down by locked cursors?
Honestly, I think redoing a paginated query each time someone asks for a certain page is OK. First of all, you'll be returning a small number of entries (no need to display more than 10-20 entries at a time), and that's going to be pretty fast. Second, you should tune up your server so that it executes frequent requests fast (add indexes, put it behind a Solr server if necessary, etc.) rather than letting those queries run slow and caching their results.
Finally, if you really want to speed up full text searches, and have fancy indexing like case insensitivity, prefix and suffix matching, etc., you should take a look at Lucene, or better yet Solr (which is Lucene on steroids), as a search and indexing layer between your users and your persistence tier.

NHibernate mapping very slow

I am using NHibernate to create a collection of immutable domain objects from a legacy Oracle DB. Some simple lookups using the Criteria API take over 60 seconds. Subsequent runs of the same lookup are very fast, usually less than 300ms (100ms in the db and the rest in NHibernate; I don't have the second-level cache or query cache enabled, and all queries do go to the DB, which I checked using NHibernate Profiler). If however I leave the app idle for a couple of minutes and run the lookup again, it usually takes 50-60 secs.
I have used NHibernate Profiler and in every case it clearly shows at most 100ms spent in the database, so I figure the rest of the time must be taken by NHibernate; I can't understand why.
Some background info:
- I am using dynamic-component to map 20 columns into key/value pairs.
- Using NHibernate 2.1.
- I am using dynamic-component in the mapping.
- Once retrieved, the data is never modified; in the mapping I am using the mutable=false flag.
- It's a legacy db, so I am using a composite key in the mapping.
- I am only retrieving around 50 objects in each lookup.
- When I open the session I set FlushMode=Never.
- I also tried a stateless session (still slow on the initial lookup).
- I do not define or use any custom user types in the mapping.
I am clearly doing something wrong or have missed something. Any ideas?
I suggest downloading a C# performance profiler such as dotTrace. You will be able to quickly get a more accurate understanding of where your performance problem is. I'm pretty sure it is not an NHibernate mapping issue.
How is the lifetime of your SessionFactory being managed? Is it possible that your SessionFactory is being disposed of after some period of inactivity?
It is most likely not an NHibernate issue.
1. Use the code below to figure out the amount of time it takes to get your data back (db + network latency + NHibernate execution).
2. Once you are positive that there is no app-related latency involved, check the database by looking at query plan caching and query result caching. The first time the query runs (a cache miss), your db will invest in time-consuming and intensive operations to generate the result set.
3. If 1 and 2 don't yield any useful information, check your network. Maybe some network pressure is causing heavy latency.
As mentioned by JeffreyABecker, also study how your session factories get disposed/created. Find usages of ISessionFactory.Dispose() or configuration.BuildSessionFactory(). Building ISessionFactory objects is an expensive operation and, typically, you should create them on application start and dispose of them on application shutdown. 60s is still a plausible figure for ISessionFactory instantiation.
// Time just the NHibernate call to isolate where the latency is.
// Stopwatch lives in System.Diagnostics.
Stopwatch stopwatch = new Stopwatch();

// Begin timing
stopwatch.Start();

// NHibernate-specific stuff ONLY in here, e.g. (hypothetical query):
// var results = session.CreateCriteria<LookupItem>().List<LookupItem>();
// Depending on your setup, do a session.Flush() here if possible.

// End timing
stopwatch.Stop();

// Write result - console/log4net/Diagnostics.Debug/etc.
Console.WriteLine("Time elapsed: {0}", stopwatch.Elapsed);

NHibernate - counters with concurrency and second-level-caching

I'm new to NHibernate and am having difficulties setting it up for my current website. This website will run on multiple webservers with one database server, and this leaves me facing some concurrency issues.
The website will have an estimated 50,000 users or so registered, and each user will have a profile page. On this page, other users can 'like' another user, much like Facebook. This is where the concurrency problem kicks in.
I was thinking of using the second-level cache, most likely with the Memcached provider since I'll have multiple webservers. What is the best way to implement such a 'Like' feature using NHibernate? I was thinking of three options:
1. Use a simple Count() query. There would be a table User_Likes where each row represents a like from one user to another. To display the number of likes, I would simply ask for the number of likes for a user, which would be translated to the database as a simple SELECT COUNT(*) FROM USER_LIKES WHERE ID = x or something. However, I gather this would come with a great performance penalty, as every time a user visits a profile page and likes another user, the number of likes would have to be recalculated, second-level cache or not.
2. Use an additional NumberOfLikes column in the User table and increment/decrement this value when a user likes or dislikes another user. This however gives me concurrency issues. Using a simple for-loop, I tested it by liking a user 1000 times on each of two servers, and the result in the db was around 1100 likes total. That's a difference of 900. Whether a realistic test or not, this is of course not an option. Now, I looked at optimistic and pessimistic locking as a solution (is it?), but my current Repository pattern is, at the moment, not suited to use this I'm afraid, so before I fix that, I'd like to know if this is the right way to go.
3. Like 2, but using custom HQL and writing the update statement myself, something along the lines of UPDATE User SET NumberOfLikes = NumberOfLikes + 1 WHERE id = x. This won't give me any concurrency issues in the database, right? However, I'm not sure if I'll have any data mismatch on my multiple servers due to the second-level caching.
So... I really need some advice here. Is there another option? This feels like a common situation and surely NHibernate must support this in an elegant manner.
I'm new to NHibernate so a clear, detailed reply is both necessary and appreciated :-) Thanks!
I suspect you will see this issue in more locations. You could solve this specific issue with 3., but that leaves other locations where you're going to encounter concurrency issues.
What I would advise is to implement pessimistic locking. The usual way to do this is to just apply a transaction to the entire HTTP request: in BeginRequest in your Global.asax, you start a session and a transaction; in EndRequest you commit it; and in the Error event, you take the alternative path of doing a rollback and discarding the session.
This is quite an accepted manner of applying NHibernate. See for example http://dotnetslackers.com/articles/aspnet/Configuring-NHibernate-with-ASP-NET.aspx.
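The shape of it is roughly this (a sketch; it assumes a static SessionFactory built once in Application_Start):

using System;
using System.Web;
using NHibernate;

public class Global : HttpApplication
{
    public static ISessionFactory SessionFactory; // built in Application_Start

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        var session = SessionFactory.OpenSession();
        session.BeginTransaction();
        Context.Items["nh.session"] = session;
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var session = (ISession)Context.Items["nh.session"];
        if (session == null) return;
        try { session.Transaction.Commit(); }
        finally { session.Dispose(); }
    }

    protected void Application_Error(object sender, EventArgs e)
    {
        // Alternative path: roll back and discard the session
        var session = (ISession)Context.Items["nh.session"];
        if (session == null) return;
        session.Transaction.Rollback();
        session.Dispose();
        Context.Items["nh.session"] = null;
    }
}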
I'd go with 3. I believe that in this kind of application it's not so critical if some pages show a slightly outdated value for a while.
IIRC, HQL updates do not invalidate the entity cache entry, so you might have to do it manually.
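For what it's worth, option 3 would look something like this (a sketch; the entity and property names are made up, and the manual eviction covers the caveat above):

using (var tx = session.BeginTransaction())
{
    // One atomic UPDATE in the database; safe across multiple webservers
    session.CreateQuery(
        "update User set NumberOfLikes = NumberOfLikes + 1 where Id = :id")
        .SetParameter("id", likedUserId)
        .ExecuteUpdate();
    tx.Commit();
}

// The HQL update may leave a stale entry in the second-level cache,
// so evict it manually:
sessionFactory.Evict(typeof(User), likedUserId);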