I am using NHibernate and am really new to it. My dilemma is this: when
I open the page in a web browser, it shows the table data. Meanwhile another person opens the page in another browser and so reads the same existing data from the database.
I then make changes on my page and save them, and the other user saves his changes afterwards. When I reload the page, I no longer find my data, only the other user's, i.e. his save was the latest and mine was overwritten.
How can I avoid this issue?
You need to implement optimistic concurrency control: http://nhibernate.info/doc/nhibernate-reference/transactions.html#transactions-optimistic
The most performant way is to add a <version> mapping to your entities (see http://nhibernate.info/doc/nhibernate-reference/mapping.html#mapping-declaration-version)
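As a rough sketch of how this plays out in code (the Customer class, its properties, and the repository below are made up for illustration; the Version property is assumed to be mapped with a <version> element in the .hbm.xml file):

using NHibernate;

// Hypothetical entity; the Version property is mapped with <version>
// so NHibernate can detect concurrent modifications of the same row.
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual int Version { get; set; }
}

public class CustomerRepository
{
    private readonly ISessionFactory sessionFactory;

    public CustomerRepository(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    // Saves an edited customer. If another user updated the same row
    // since it was loaded, the version check fails and NHibernate throws
    // StaleObjectStateException instead of silently overwriting the
    // other user's changes.
    public void Save(Customer customer)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            try
            {
                session.Update(customer);
                tx.Commit();
            }
            catch (StaleObjectStateException)
            {
                tx.Rollback();
                // Tell the user their copy is out of date, reload the
                // entity and let them re-apply their changes.
                throw;
            }
        }
    }
}

With this in place, the second save in your scenario fails loudly instead of quietly replacing the first user's data, and you can decide how to merge or retry.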
I have a split database, and I have duplicated the front-end file to make multiple copies for different users. Every time a change is made in a form in one front-end, I want the same forms in the other front-ends to always drop their changes. How can I trap this write conflict so that the changes are always dropped, perhaps through VBA if possible?
Not quite sure what you mean by "drop changes" - the frontend should never be redesigned during normal use.
You must distribute a new copy of the frontend to the users.
A smooth and proven method using a shortcut and a script is described in my article:
Deploy and update a Microsoft Access application with one click
(If you don't have an account, browse for the link: Read the full article)
Edit:
If it is the data that is updated by several users, and you update via VBA, you may study another of my articles:
Handle concurrent update conflicts in Access silently
Though simple to use, the code is a bit too much to post here. It is also on GitHub:
VBA.ConcurrencyUpdates
My colleague and I are participating in a huge project hosted in AccuRev. We've already created our own workspaces backed by a stream (let's call it zzz-stream) which is used by many other participants, not only by us.
The point is that we want to exchange our work between our workspaces, make some changes, exchange again, etc. BEFORE making the changes accessible to others. In other words, we don't want to propagate our changes until they are stable and tested, but we do want to be able to work on them together.
My idea was to create a new stream (yyy-stream) backed by zzz-stream, and then change our workspaces to be backed by yyy-stream. But unfortunately I don't have the rights to create streams.
My second idea was to use a workspace as the backing stream, but that doesn't work because AccuRev can't use a workspace as a backing stream.
Is there any solution for our problem?
UPD: I accepted Brad's answer as the most detailed. However, AccuRev is too heavy and sluggish to be used effectively for this, so in practice I prefer to use Git for our internal needs on top of the AccuRev workspace (see "Accurev externally, git internally").
Your idea of creating the yyy-stream is the EXACT right way to do it. The other options are decent workarounds for one-off situations, but creating the extra stream is simple and is fully leveraging AccuRev's capabilities.
That being said, I understand that your admins have stream creation locked down. They of course want control, but they should also allow for maximizing developer productivity rather than forcing workarounds like this. My guess is that stream creation is restricted to a particular group, enforced by the server-admin trigger. One common thing I have seen other large sites do is:
- allow streams to be freely created off of a list of acceptable streams (easy to do in the trigger)
- enforce naming rules on the stream creation. This is important to admins in large sites to keep things organized. Again, this is very easy to enforce via the server-admin trigger.
Bottom line, if this is a common situation, work with the admins to allow this capability as per the above. If they have any questions, they are more than welcome to contact AccuRev and we will help them out.
Your idea on using another stream for you and your peer is a good one and is commonly called a collaboration stream. If your site has stream creation locked down, you would need to work with your AccuRev administrator to make that happen.
Another option is for you and the other developer to pull the keeps from the other's workspace into your own workspace. This relies on both of you being diligent about doing keeps; you can then look at the history of the other developer's workspace to find the keep operation, right-click that transaction and select Send to Workspace. The destination workspace must be your own.
A third option (more for a situation where you are in your workspace and know exactly which file you want to grab the other user's changes from) is to bring up the version browser for the file, right-click and select History/Browse Versions. Look for the other workspace, highlight the version in that workspace, right-click and select Send to Workspace. This will check out that version into your workspace.
This is similar to the change palette suggestion, but quicker if you're looking to do this on a file-by-file basis.
Another idea is to use a different version control system (e.g. git or svn) on top of the AccuRev workspace to exchange the changes and keep our history separated from zzz-stream (similar to "Accurev externally, git internally"). Only the changed files should be added to the other VCS, not the whole project. Some merge problems can occur, though.
So here's the situation. I want to add 'old' news from our previous website into an asset publisher portlet on our new Liferay 6.1 site. The problem is that I want them to show up as if I had added them in the past.
So, I figure, how hard can it be to modify the createDate? I've since been able to directly access the MySQL database and perform updates on the article object's createDate field. However, the change doesn't seem to propagate to my Liferay deployment, regardless of clearing caches, reindexing search indices, and restarting Liferay. The web content still shows its 'original' createDate even though the database shows the value I changed it to.
Here's the query I used:
mysql> UPDATE JournalArticle SET createDate='2012-03-08 15:17:12' WHERE ArticleID = 16332;
I have since learned that it is a no-no to directly manipulate the database, as the relationship between the database and Liferay isn't as straightforward as Liferay simply performing lookups. So it looks like I might need to use the Liferay API, namely setCreateDate as seen here.
But I have absolutely no idea where and how to leverage the API. Do I need to create a dummy portlet with the sole purpose of using this API call? Or can I create a .java file somewhere on the server running my Liferay deployment and run it to leverage this method?
I only have like 15 articles I need to do this to. I can find them by referencing the ArticleID and GroupID.
Any help would be greatly appreciated. I've grepped the Liferay deployment and found setCreateDate being used heavily within .java files inside the knowledge-base-portlet, but I can't tell how to use it myself without creating a portlet.
On the other hand, if anybody knows how to get my database changes to propagate to the Liferay deployment, even though I know it's a dirty hack, that would probably be the easiest.
Thanks; I really appreciate it.
Using the Liferay API is of course the cleaner and better way, but for only 15 articles I would try to change it directly through the database.
I checked the database and it seems that Liferay stores the data in these tables: JOURNALARTICLE and ASSETENTRY.
Try to change the created date in both these tables.
Then reload the cache: Control Panel -> Server Administration -> Clear Database Cache.
You can write a hook for the application startup event. This way, whenever Liferay is started it will change the create date as you desire. Later, if you want to remove the hook, that can be done easily. See here for how to create and deploy a hook:
http://www.liferay.com/community/wiki/-/wiki/Main/Portal+Hook+Plugins
Also, changing the database directly is not recommended at all, even for a single value or article. Always use the Liferay-provided service API to make modifications.
I am using NHibernate for a project, and I am an absolute beginner with it. I am fetching some objects from a table and showing them in a form where they can be edited. If a user inserts a new object into the table from some other window, I want to show this newly inserted object in the edit window. My application uses a tabbed-window interface, so the user can have the insert window and the edit window open at the same time.
So basically what I need is a way to determine whether a newly created object exists in the database that has not already been fetched by the ISession, and if so, fetch that new object from the database. In other words, I need to synchronize my session with the database, just like the Flush method, but in the reverse direction.
Can anyone help me?
Publish/Subscription method works well for this. Check out the Publishing Events part of Ayende's sample desktop application. Basically after you've added a new item, you publish that information and other parts of your application that subscribed can update their lists accordingly.
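A minimal sketch of the idea (the EventAggregator and message types below are invented for illustration, not taken from Ayende's sample):

using System;
using System.Collections.Generic;

// A very small publish/subscribe hub: the insert window publishes a
// message after it commits a new object, and the edit window subscribes
// so it can load the new object into its own session and refresh its list.
public class ObjectCreatedMessage
{
    public object Id { get; set; }
}

public class EventAggregator
{
    private readonly List<Action<ObjectCreatedMessage>> subscribers =
        new List<Action<ObjectCreatedMessage>>();

    public void Subscribe(Action<ObjectCreatedMessage> handler)
    {
        subscribers.Add(handler);
    }

    public void Publish(ObjectCreatedMessage message)
    {
        foreach (Action<ObjectCreatedMessage> handler in subscribers)
        {
            handler(message);
        }
    }
}

The edit window would subscribe with something like aggregator.Subscribe(m => RefreshWith(editSession.Get<MyEntity>(m.Id))), and the insert window would call aggregator.Publish(new ObjectCreatedMessage { Id = newId }) right after its transaction commits (RefreshWith and MyEntity being placeholders for your own code).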
You are taking the path to NHibernate Hell.
Be sure to work out your infrastructure (i.e. defining interfaces, defining session-management patterns and a notification pattern) and isolate these non-business utilities from the rest of your code before using NHibernate to implement them.
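As a purely illustrative example of the kind of isolation meant here (these interfaces are invented, not an existing library):

using System;

// The UI and business code depend only on these abstractions; the
// NHibernate specifics (ISessionFactory, session lifetime, change
// notification) live behind them and can change without touching the
// rest of the application.
public interface IUnitOfWork : IDisposable
{
    void Commit();
}

public interface IUnitOfWorkFactory
{
    IUnitOfWork Begin();
}

public interface IEntityCreatedNotifier<TEntity>
{
    event EventHandler<EntityCreatedEventArgs<TEntity>> EntityCreated;
    void NotifyCreated(TEntity entity);
}

public class EntityCreatedEventArgs<TEntity> : EventArgs
{
    public EntityCreatedEventArgs(TEntity entity)
    {
        Entity = entity;
    }

    public TEntity Entity { get; private set; }
}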
Good luck.
Background
I have a SQL CE database that is constantly updated (every second).
I have a (web) application that allows a user to look at the data in real time. At some point the user can click a "take a snapshot" button, which opens the snapshot in a different window.
And then on that form there are "print" and "download" buttons that will either generate a page for printing or stream the data as a CSV file - but the same data snapshot has to be used, i.e. I can't go back to the DB to get the latest data for that.
Details
The SQL CE database is exposed through a WCF web service.
A snapshot consists of up to 500 records, 10 columns each. An expiration time of 2 hours on the snapshot is sufficient.
It is a low-traffic application, so I don't expect more than a few (5) connections at the same time.
Losing a snapshot is not a big deal; the user can simply generate a new one.
The database is accessed by the self-hosted WCF web service using LINQ to SQL.
The web site is ASP.NET MVC, hosted on UltiDev Cassini.
The database and the web site will most likely be on the same box when deployed. The entire app is intranet-bound.
Problem
I need to cache a snapshot of the data at the moment the user presses the "take a snapshot" button, so that I can use the same data to generate the print page or the file for download.
Solution 1:
Each time there is a need to generate a snapshot, I will create a table in the database. Since there are no temp tables in SQL CE, I will need to clean it up myself.
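For what it's worth, a rough sketch of what Solution 1 could look like (the table schema, naming convention, and two-hour rule below are my own assumptions; it uses System.Data.SqlServerCe):

using System;
using System.Collections.Generic;
using System.Data.SqlServerCe;
using System.Globalization;

// Each snapshot gets its own table with a timestamp encoded in its name,
// so a periodic cleanup can drop anything older than two hours without
// extra bookkeeping.
public static class SnapshotTables
{
    public static string Create(SqlCeConnection connection)
    {
        string name = "Snapshot_" + DateTime.UtcNow.ToString("yyyyMMddHHmmss")
                    + "_" + Guid.NewGuid().ToString("N");
        using (var create = new SqlCeCommand(
            "CREATE TABLE [" + name + "] (Col1 NVARCHAR(100), Col2 NVARCHAR(100))",
            connection))
        {
            create.ExecuteNonQuery();
        }
        return name; // remember this name for the print/download actions
    }

    public static void DropExpired(SqlCeConnection connection)
    {
        var expired = new List<string>();
        using (var select = new SqlCeCommand(
            "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME LIKE 'Snapshot_%'",
            connection))
        using (SqlCeDataReader reader = select.ExecuteReader())
        {
            while (reader.Read())
            {
                string table = reader.GetString(0);
                DateTime created;
                if (DateTime.TryParseExact(table.Split('_')[1], "yyyyMMddHHmmss",
                        CultureInfo.InvariantCulture, DateTimeStyles.None, out created)
                    && created < DateTime.UtcNow.AddHours(-2))
                {
                    expired.Add(table);
                }
            }
        }
        foreach (string table in expired)
        {
            using (var drop = new SqlCeCommand("DROP TABLE [" + table + "]", connection))
            {
                drop.ExecuteNonQuery();
            }
        }
    }
}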
Solution 2:
Cache the snapshot in memory, on either the DB server or the web server.
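And a rough sketch of Solution 2 on the web server (the SnapshotCache type and its row representation are invented; it assumes .NET 4's System.Runtime.Caching):

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

// Keeps each snapshot (the ~500 rows the user was looking at) in the web
// server's memory under a random key, so the print and CSV-download
// actions can retrieve exactly the same data later. Entries silently
// expire after two hours.
public static class SnapshotCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static string Store(IList<object[]> rows)
    {
        string key = Guid.NewGuid().ToString("N");
        Cache.Set(key, rows, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.UtcNow.AddHours(2)
        });
        return key; // pass this key to the snapshot window's URL
    }

    public static IList<object[]> Get(string key)
    {
        // Null means the snapshot expired; the user simply takes a new one.
        return Cache.Get(key) as IList<object[]>;
    }
}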
Question:
Is there anything wrong with the proposed solutions? Any other suggestions?
A consideration is the typical usage pattern. Do most snapshots eventually end up being printed, exported, or both?
If that is the case, we might as well "get it in memory" (temporarily) in the form of a non-blocking (asynchronous) select from the database to the server. In this fashion the data will "be there", or well on its way, when the user decides to use it.
If, on the other hand, many snapshots end up not being used, Solution #1 seems quite OK (maybe the table could be named after the account/user, guaranteeing a kind of self-clean-up based on the number of snapshots a user can maintain at a given time - though it seems to be just one, with even some tolerance for losing it occasionally).
500 rows by 10 columns isn't really very large at all. For the sake of simplicity in this case, I might just generate the CSV data at the same time I generate the initial snapshot page, and then place the CSV data in a hidden field in the snapshot page. The "Print" and "Download CSV" buttons would then POST the form that contains the CSV data to a Print page that generates the printable version from the posted CSV data, or a page that streams the CSV directly back to the client's browser, respectively. This way, at least, you wouldn't have any clean-up issues to deal with, and you avoid having to cache something on the server (either in the cache proper or in the database) that might well end up never being used at all.
If you cached the CSV data in a hidden field client-side, you could even handle both the printing and the CSV display completely client-side with JavaScript, although I don't know whether that's worth the trouble or not.
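A rough sketch of the controller side of the hidden-field approach described above (the action names and the CSV-building helper are invented; it assumes ASP.NET MVC 3 or later):

using System.Text;
using System.Web.Mvc;

public class SnapshotController : Controller
{
    // Renders the snapshot page; the view writes the CSV into a hidden
    // field inside a form that posts back to Print or DownloadCsv.
    public ActionResult Take()
    {
        ViewBag.CsvData = BuildCsvFromCurrentData(); // query the WCF service here
        return View();
    }

    // Streams back the CSV the page already holds, so no server-side
    // cache or temp table is needed.
    [HttpPost]
    public ActionResult DownloadCsv(string csvData)
    {
        return File(Encoding.UTF8.GetBytes(csvData), "text/csv", "snapshot.csv");
    }

    // Re-renders the posted CSV as a print-friendly page.
    [HttpPost]
    public ActionResult Print(string csvData)
    {
        ViewBag.CsvData = csvData;
        return View();
    }

    private string BuildCsvFromCurrentData()
    {
        // Placeholder: build the CSV from the ~500 rows / 10 columns snapshot.
        return "Col1,Col2\r\nvalue1,value2";
    }
}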