NHibernate - counters with concurrency and second-level caching

I'm new to NHibernate and am having difficulties setting it up for my current website. This website will run on multiple webservers with one database server, and this leaves me facing some concurrency issues.
The website will have an estimated 50,000 users or so registered, and each user will have a profile page. On this page, other users can 'like' another user, much like Facebook. This is where the concurrency problem kicks in.
I was thinking of using the second-level cache, most likely with the Memcached provider since I'll have multiple webservers. What is the best way to implement such a 'Like' feature using NHibernate? I was thinking of three options:
Use a simple Count() query. There will be a table 'User_Likes' where each row represents a like from one user to another. To display the number of likes, I would simply ask for the number of likes for a user, which would be translated to the database as a simple SELECT COUNT(*) FROM USER_LIKES WHERE ID = x or something. However, I gather this would come with a great performance penalty, as every time a user visits a profile page and likes another user, the number of likes would have to be recalculated, second-level cache or not.
Use an additional NumberOfLikes column in the User table and increment/decrement this value when a user likes or un-likes another user. This, however, gives me concurrency issues. Using a simple for-loop, I tested it by liking a user 1000 times from two servers, and the result in the db was around 1100 likes total. That's a difference of 900. Whether a realistic test or not, this is of course not an option. Now, I looked at optimistic and pessimistic locking as a solution (is it?), but my current Repository pattern is, at the moment, not suited to this, I'm afraid, so before I fix that, I'd like to know if this is the right way to go.
Like option 2, but using custom HQL and writing the update statement myself, something along the lines of UPDATE User SET NumberOfLikes = NumberOfLikes + 1 WHERE id = x. This won't give me any concurrency issues in the database, right? However, I'm not sure whether I'll have any data mismatch across my multiple servers due to the second-level caching.
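To be concrete about option 3, something like the following rough sketch is what I have in mind (assuming a mapped User entity with Id and NumberOfLikes properties, and an open ISession called session):

// DML-style HQL update; entity and property names are just my assumptions from above.
using (var tx = session.BeginTransaction())
{
    session.CreateQuery("update User set NumberOfLikes = NumberOfLikes + 1 where Id = :id")
           .SetParameter("id", likedUserId)
           .ExecuteUpdate();
    tx.Commit();
}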
So... I really need some advice here. Is there another option? This feels like a common situation and surely NHibernate must support this in an elegant manner.
I'm new to NHibernate, so a clear, detailed reply is both necessary and appreciated :-) Thanks!

I suspect you will see this issue in more locations. You could solve this specific issue with option 3, but that leaves other locations where you're going to encounter concurrency issues.
What I would advise is to implement pessimistic locking. The usual way to do this is to apply a transaction to the entire HTTP request. In BeginRequest in your Global.asax, you start a session and a transaction. Then, in EndRequest, you commit it. In the Error event, you take the alternative path of doing a rollback and discarding the session.
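A rough sketch of that wiring, assuming a configured ISessionFactory exposed as a static SessionFactory property (that property name and the HttpContext storage key are my assumptions):

// In Global.asax.cs: open a session + transaction per request, commit on success, roll back on error.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    var session = SessionFactory.OpenSession();
    session.BeginTransaction();
    HttpContext.Current.Items["nh.session"] = session;
}

protected void Application_EndRequest(object sender, EventArgs e)
{
    var session = (NHibernate.ISession)HttpContext.Current.Items["nh.session"];
    if (session == null) return;
    try
    {
        if (session.Transaction.IsActive)
            session.Transaction.Commit();
    }
    finally
    {
        session.Dispose();
    }
}

protected void Application_Error(object sender, EventArgs e)
{
    var session = (NHibernate.ISession)HttpContext.Current.Items["nh.session"];
    if (session == null) return;
    if (session.Transaction.IsActive)
        session.Transaction.Rollback();
    session.Dispose();
}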
This is quite an accepted manner of applying NHibernate. See for example http://dotnetslackers.com/articles/aspnet/Configuring-NHibernate-with-ASP-NET.aspx.

I'd go with 3. I believe that in this kind of application it's not so critical if some pages show a slightly outdated value for a while.
IIRC, HQL updates do not invalidate the entity cache entry, so you might have to do it manually.
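If so, something along these lines right after the HQL update should take care of it (a sketch; the sessionFactory reference and the entity type are assumptions):

// Evict the now-stale second-level cache entry for that particular user.
sessionFactory.Evict(typeof(User), likedUserId);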

Related

Can memcached be used for locking?

memcached can be used for caching static data, which reduces database lookups; typical usage is memcached.get(id) and memcached.set(id, value).
However, is it fine to use it for a locking mechanism? Do memcached.set and memcached.get always return the data if it is present, or will they just return None if the request takes too much time?
I want to avoid concurrent access to a particular resource identified by an id, and I use this logic:
def access(id, username):
    # 'memcache' is assumed to be an already-connected client object
    if memcache.get(id):
        # someone already holds the lock: decline access
        return False
    else:
        # no lock yet: record this user as the holder and allow access
        memcache.set(id, username)
        return True
If any user tries to access that resource and memcache.get(id) returns a value, we decline access; otherwise we do memcache.set(id, username) to stop subsequent access and allow access for the current user.
Is it fine to use memcached like this? Will set and get actually give the data if it's available, regardless of the time it takes, or do they give the best possible result in the least possible time? What I have found so far (for example: Guaranteed memcached lock) assumes the former, and might not work for locking, though it might work 99% of the time.
Can anyone clarify and if there are alternative locking mechanisms?
For anyone interested in this, I have created a thread on the Memcache Github at Will memcached work reliably for implementing a locking mechanism?. It explains some of the common caveats of using get and set and how to avoid them by using add. Some blogs also explain this problem; search for distributed locking using memcache on your favorite search engine.
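To illustrate the add-based approach, here is a rough sketch in C# using the Enyim.Caching memcached client (the client library, key prefix, and expiry are my assumptions; the same idea works in any client that exposes add):

using Enyim.Caching;
using Enyim.Caching.Memcached;

// add() only succeeds if the key does not already exist, so only one caller wins the race.
public bool TryLock(MemcachedClient client, string resourceId, string username)
{
    // StoreMode.Add returns false when the key is already present (someone holds the lock).
    return client.Store(StoreMode.Add, "lock:" + resourceId, username,
                        TimeSpan.FromSeconds(30)); // expiry so a crashed holder can't block forever
}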
There is also a related question, Memcached, Locking and Race Conditions, which might help in getting more clarity on memcached race conditions.
Here are more discussions on this at the Memcache Forum:
Thread 1 and Thread 2

Should I create a separate SQL Server database for each user?

I am working on an ASP.NET MVC web application; the back end is SQL Server 2012.
This application will provide billing, accounting, and inventory management. Users will create an account by signing up, just like http://www.quickbooks.in. Each user will create some masters and various transactions. There is no limit; a user can create unlimited records in the database.
I want database performance to remain stable after heavy data loads. I am maintaining proper indexing and primary keys, but there will be a heavy per-user load on the database.
So, should I create a separate database for each user, or should I maintain one database with a UserID, adding a UserID column to each table and partitioning based on UserID?
I am not an expert in SQL Server, so please provide suggestions with clear specifications.
Please inform me if there is any lack of information.
A DB per user is what happens when customers need to be able to pack up and leave, taking the actual database with them. Think of a self-hosted WordPress website. Or when there are incredible risks of one user accidentally seeing another user's data, so it's safer to rely on the server's security model than on remembering to add the UserId filter to all your queries. I can't imagine a scenario like that, but who knows: maybe if the privacy laws allowed for jail time, I would rather have data partitioned by security rules than carefully write WHERE clauses.
If you did do user-per-database, creating a new user would be 10x more effort. While INSERT, UPDATE and so on stay the same from version to version, the syntax for database creation, user creation, permission granting and so on evolves enough to break those scripts with each SQL Server version upgrade.
Also, this will multiply your migration headaches by the number of users. Let's say you have 5000 users and you need to add some new columns, change a column's data type, update a trigger, and so on. Instead of needing to run that change script once, you need to run it 5000 times.
Per-user DBs also probably waste disk space. Each of those databases is going to have a transaction log sitting idle, taking up at least the minimum log space.
As for load, if collectively your 5000 users are doing 1 billion inserts, updates and so on per day, my intuition tells me that it's going to be faster on one database, unless there is some sort of contention issue (everyone reading and writing to the same table, and the same pages of that table, at the same time). Each database uses machine resources (probably threads and memory) for per-database housekeeping, so these extra DBs can't be free.
Anyhow, the best thing to do is to simulate the two architectures and use a random data generator to simulate load and see how they perform.
It's not an easy answer to give.
First, there is logical design to be considered. Then you have integrity, security, management and performance (in this very order).
A database is a logical unit of data, self contained. Ideally, you should be able to take a database, move it to another instance, probably change the connection strings and be running again.
All the constraints are database-level. No foreign keys can exist referencing some object outside the database.
So, try thinking in these terms first.
How would you reliably prevent one user from messing up another user's data? Keep in mind that it's just a matter of time before someone opens an Excel sheet and fires up queries against the database, bypassing your application. Row-level security in SQL Server is something you don't want to deal with.
Multiple databases mean that all management tasks should be scripted out and executed on all databases. Yes, there is some overhead to it, but once you set it up it's just a matter of monitoring. If a database goes suspect, it's a single customer down, not all of them. You can even have different versions for different customers if each customer has their own database. Additionally, if you roll out an upgrade, you can do it per customer, so the impact will be much smaller.
Performance is the least relevant factor here. Of course, it really depends on how many customers and how much data, but proper indexing will solve these issues. Scale-out is much easier with multiple databases.
BTW, partitioning, as you mentioned it, is never a performance booster; it's simply a management feature, allowing for faster loading and eviction of data from a table.
I'd probably put each customer in a separate database, but eventually it's up to you to make the decision for yourself. Hope I've helped some with this.

How to make a VB.NET application work as multi-user?

I am developing a VB.Net application. That application might be running on a LAN, with MS Access used as the back end. I have developed many single-user applications, but I don't know about multi-user setups, LANs, managing the DB, etc. How do I make the program multi-user on a LAN, where data will be accessed at the same time? How do I manage such things?
Please give me some help and Guidance.
Thanks
Your VB application does not care how many people run it.
Your database, with MS Access, has some serious issues with multiple users. Get away from it if you can. SQL Server has a free version called SQL Express. If you only plan on 2 people, you might be OK with Access for a while but be prepared to support it more.
That was all the easy stuff; now you have to think about how you are going to handle multiple users trying to access and update the same data (concurrency).
Imagine this: you are a user looking at employee record 1, and so is someone else. You change the birthday and save. Then the other user changes their supervisor and saves. How do you know something changed? What do you do if something changed? These are questions I cannot answer for you; you must decide based on your situation.
There are 2 main types of concurrency, optimistic and pessimistic. See this link for a great explanation and discussion of them: optimistic-vs-pessimistic-locking
You can look at this on a table-by-table basis.
If a table is never updated, you don't have to worry about concurrency.
If a table is rarely updated, like a table of states, you can decide if it is worth the extra effort to add concurrency handling.
Pretty much everything else should have some type of concurrency handling.
Now, the million dollar question, how?
You will find as many ways to handle concurrency as you will find colors in the rainbow. Here are some of the ones I like:
A simple number that you increment with each save. Small and easy. (There is a sketch of this approach just after this list.)
DateTime stamp - As long as you don't expect to ever have 2 people save the same record during the same second, this is easy. (I personally don't like it by itself.)
User Name - Pretty simple; it gives a little bit of an audit by recording who last inserted/edited the record, but it doesn't handle an issue I have seen too often. Imagine the same scenario as above, but it is you who has 2 instances of record 1 open. Now you change the data again, maybe the supervisor, and when you save, you overwrite the changes from your first save with those of the second save.
Guid - VB can create a GUID, SQL Server can create a GUID, and so can Access. It is nice and unique and, most importantly, you can create it on the client so you don't have to requery the database after you save the record to get a refreshed copy.
Combination of these. I like 2 and 3 together myself. It gives a mini audit and is unique to the user.
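As a rough illustration of option 1 (the simple version number), here is a sketch in C# with plain ADO.NET against SQL Server (translate to VB.NET as needed; the table and column names "Employee", "Supervisor" and "RowVersion" are made up for the example):

using System.Data.SqlClient;

// Returns false when someone else saved the record after we loaded it.
static bool SaveSupervisor(SqlConnection connection, int employeeId, string newSupervisor, int loadedVersion)
{
    const string sql =
        @"UPDATE Employee
             SET Supervisor = @supervisor, RowVersion = RowVersion + 1
           WHERE Id = @id AND RowVersion = @loadedVersion";

    using (var cmd = new SqlCommand(sql, connection))
    {
        cmd.Parameters.AddWithValue("@supervisor", newSupervisor);
        cmd.Parameters.AddWithValue("@id", employeeId);
        cmd.Parameters.AddWithValue("@loadedVersion", loadedVersion);

        // 0 rows affected means the version changed underneath us: a concurrency conflict.
        return cmd.ExecuteNonQuery() == 1;
    }
}

When this returns false, you are back to the decision above: reload and show the other user's changes, merge, or ask the user what to do.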
If you use a DataAdapter, by default MS will assume concurrency checking means comparing EVERY field to make sure it did not change. This works, but it is completely unscalable and should not be done.
All of this depends on the size of your application and how you see it being used. Definitely do some more research before you settle on a decision.
There are a number of solutions here.
If I may suggest a drastic alternative: have you considered pairing the client running on the user's computer with a server component (through a web service)? A simpler alternative would be for the client to talk directly to a SQL Server (or other database) instance over the network.*
*I'm not a fan of having client-side apps talk directly to the database. It will mean maintenance headaches in the future, but I included it to give you options.
I found this random example via Google, so YMMV.

Executing untrusted SQL with a SELECT-only user

I am creating an application that allows users to construct complex SELECT statements. The SQL that is generated cannot be trusted, and is totally arbitrary.
I need a way to execute the untrusted SQL in relative safety. My plan is to create a database user who only has SELECT privileges on the relevant schemas and tables. The untrusted SQL would be executed as that user.
What could possibly go wrong with that? :)
If we assume postgres itself does not have critical vulnerabilities, the user could do a bunch of cross joins and overload the database. That could be mitigated with a session timeout.
I feel like there is a lot more that could go wrong, but I'm having trouble coming up with a list.
EDIT:
Based on the comments/answers so far, I should note that the number of people using this tool at any given time will be very near 0.
SELECT queries cannot change anything in the database. The lack of DBA privileges guarantees that global settings cannot be changed. So, overload is truly the only concern.
Overload can be the result of complex queries or of too many simple queries.
Overly complex queries can be ruled out by setting statement_timeout in postgresql.conf.
Receiving lots of simple queries can be limited too. First, you can set a per-user connection limit (ALTER USER ... WITH CONNECTION LIMIT). And if you have some interface program between the user and PostgreSQL, you can additionally (1) add some extra wait after each query completes, and (2) introduce a CAPTCHA to avoid an automated DoS attack.
ADDITION: PostgreSQL's public system functions give many possible attack vectors. They can be called like SELECT pg_advisory_lock(1), and every user has the privilege to call them. So, you should restrict access to them. A good option is creating a whitelist of all "callable words", or, more precisely, identifiers that may be followed by a (, and ruling out all queries that include a call-like construct identifier( where the identifier is not in the whitelist.
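A very rough sketch of such a pre-check in C# (the regex and the whitelist contents are only illustrative assumptions; a real SQL parser would be more robust, and this is not a substitute for database-side permissions):

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class SqlCallWhitelist
{
    // Whitelist contents are an assumption; extend it with whatever you want to allow.
    static readonly HashSet<string> AllowedCallables =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        { "count", "sum", "avg", "min", "max", "lower", "upper", "coalesce" };

    // Rule out any query containing identifier( where the identifier is not whitelisted.
    public static bool LooksSafe(string sql)
    {
        foreach (Match m in Regex.Matches(sql, @"([A-Za-z_][A-Za-z0-9_]*)\s*\("))
        {
            if (!AllowedCallables.Contains(m.Groups[1].Value))
                return false;
        }
        return true;
    }
}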
Things that come to mind, in addition to having the user SELECT-only and revoking privileges on functions:
Read-only transaction. When a transaction is started with BEGIN READ ONLY, or with SET TRANSACTION READ ONLY as its first instruction, it cannot write anything, independently of the user's permissions.
At the client side, if you want to restrict it to one SELECT, better to use a SQL submission function that does not accept several queries bundled into one. For instance, the swiss-knife PQexec function of the libpq API does accept such bundled queries, and so does every driver function built on top of it, like PHP's pg_query.
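Putting those points together on the client side, here is a hedged sketch using the Npgsql .NET driver (the driver choice, connection string placeholders, restricted user, and 5-second timeout are all assumptions):

using Npgsql;

// Runs one untrusted SELECT as the restricted user inside a read-only transaction
// with a statement timeout; everything is rolled back at the end.
static void RunUntrustedSelect(string untrustedSelect)
{
    using (var conn = new NpgsqlConnection("Host=...;Database=...;Username=readonly_user;Password=..."))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            using (var readOnly = new NpgsqlCommand("SET TRANSACTION READ ONLY", conn, tx))
                readOnly.ExecuteNonQuery();
            using (var timeout = new NpgsqlCommand("SET LOCAL statement_timeout = 5000", conn, tx))
                timeout.ExecuteNonQuery();

            using (var cmd = new NpgsqlCommand(untrustedSelect, conn, tx))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // ... stream the rows back to the user ...
                }
            }

            tx.Rollback(); // nothing to commit for a read-only query
        }
    }
}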
http://sqlfiddle.com/ is a service dedicated to running arbitrary SQL statements, which may be seen as something of a proof of concept that this is doable without being hacked or DDoS'ed all day long.
The problem with this is that I'm not sure whether the SQL itself will continue to run in the background after a session timeout (I can't really find much evidence either way via Google, and I haven't attempted it myself either). If you're limiting access to SELECT only, I think this is about the worst that could happen. The real issue is what happens if you get a hundred users trying to do complex cross joins: session timeout dropping the query or not, it will put a very heavy load on the database (which could easily be enough to pull the database down entirely).
The only way (from my point of view) to protect yourself against a DoS of the main server via crafted queries is to set up a read-only replica of the Postgres DB and a special limited user on this replica DB. This way the main Postgres server won't be affected by queries on the replica.
Also, you will get a hot standby / continuously replicated DB for the case when the main DB fails for some reason.

Log user activity - which is better

I am using action filter attributes for logging user activity on certain actions that interact with the SQL database. Similarly, I could log the activity in the SQL tables using triggers fired on each table activity. I would like to know which of the above two methods is the better practice (performance-wise).
I think that the action filter is certainly the cleanest and most best-practice approach, since it is in the application layer. Part of the benefit of being there is that it is managed code, and if something breaks you can easily locate the problem. There is also the benefit that all your code is in one spot.
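For illustration, a minimal sketch of such a filter (how the entry is persisted is up to you; Trace is used here only to keep the example self-contained):

using System;
using System.Web.Mvc;

// A minimal activity-logging action filter for ASP.NET MVC.
public class LogUserActivityAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var user = filterContext.HttpContext.User.Identity.Name;
        var controller = filterContext.ActionDescriptor.ControllerDescriptor.ControllerName;
        var action = filterContext.ActionDescriptor.ActionName;

        // Replace with your own persistence (ideally queued/async so it doesn't slow the request).
        System.Diagnostics.Trace.TraceInformation(
            "{0:u} {1} executed {2}/{3}", DateTime.UtcNow, user, controller, action);

        base.OnActionExecuting(filterContext);
    }
}

You would then decorate only the actions you want audited with [LogUserActivity].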
Database triggers are a big no-no in many companies, since they have a habit of causing infinite loops when an unknowing programmer creates some logic that steps on the trigger over and over again, causing the database to fail. Some companies do allow triggers, but only when very well documented and very lightly used. Hope this helps.
Performance of logging depends greatly on the system architecture. If you have 3 load balanced web servers hitting one main database, triggers would have to handle all the load while Action Filters would split the load in three. In that scenario, Action Filters would be better.
In terms of best practices, I wouldn't use either of those approaches. I would set up Transactional Replication to another SQL server. This approach would run without impacting performance at all. The transaction log is already being generated and replication would just spin up a separate process that's reading that log.