LINQ to SQL - Dirty Reads after Updating - WCF Service

I have a DBML file (LINQ to SQL) to connect to my database. A manager uses a generic repository (one for each entity) to talk to the DBML and do the CRUD work, and a WCF service wraps the manager and exposes it to the outside world (so my application can use it).
DB <--> IRepository<Entity, DataContext> <--> IManager(IRepo, IRepo...) <--> WCFService(IManager)
I've tested the manager and it works fine every time. The problem is the WCF service. I'll read data (e.g. GetAllLocations), update the data (e.g. change a Location name), but then at some random point later I'll read dirty data. E.g. if I've been changing a location name from "BC 1" to "BC 2" to "BC 3", sometimes I'll read it and get older values. Like after I've changed it to "BC 3" and I am expecting to read "BC 3", I get "BC 1" (which doesn't make sense, since the value before the update was "BC 2" anyway). WCF does not have caching by default, so why is this happening in my WCF service and NOT in my manager? All the WCF service does is pass values to the manager and get values back from it; it's a very basic wrapper class.
NOTE: I am using StructureMap to handle the dependency injection / IoC wiring automatically.
Problem methods: anything that reads (e.g. GetLocationById and GetAllLocations); they just don't always return the latest data from the database. I know it's not the manager, because I created a simple project to test both the manager and the WCF service independently, and only the WCF service had dirty reads after updating data (specifically the Location). I have not tested all the other entities yet.
One final note: I kept getting the "System.Data.Linq.ChangeConflictException: Row not found or changed." exception, so I changed the Location entity's properties in the DBML designer (except the PK) to Update Check: Never. The dirty reads were happening before this change anyway, and the manager works fine (and it uses the DBML), so I have no reason to believe this is causing the dirty reads.
Also, the Location entity has a trigger in the database, but I've eliminated this as the cause because I disabled it and that didn't help, and once again, the manager works fine.

There are a couple of things that could be causing this:
- The isolation level of the ambient transaction scope could be different when the manager is called through WCF.
- One difference when going over WCF is that hosting in IIS could cause requests to run concurrently.
The things that you could try (a minimal sketch of both follows) are:
- Set the isolation level of the transaction scope explicitly.
- Make sure that your connection string enables MARS (Multiple Active Result Sets).
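
A minimal sketch of both suggestions, assuming the LINQ to SQL context is called MyDataContext (the name used in the accepted answer below); the connection string, entity, and property names are illustrative:

using System;
using System.Linq;
using System.Transactions;

class IsolationExample
{
    // Illustrative connection string; MultipleActiveResultSets=True enables MARS.
    const string Conn =
        "Data Source=.;Initial Catalog=MyDb;Integrated Security=True;" +
        "MultipleActiveResultSets=True";

    static void RenameLocation(int locationId, string newName)
    {
        // Pin the isolation level explicitly instead of inheriting whatever
        // ambient transaction the WCF host happens to flow in.
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        using (var context = new MyDataContext(Conn))
        {
            var location = context.Locations.Single(l => l.Id == locationId);
            location.Name = newName;
            context.SubmitChanges();
            scope.Complete();
        }
    }
}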

Turns out this was a caching issue caused by the combination of WCF and StructureMap: I'd changed the default StructureMap lifecycle when registering my data contexts in the data registry.
For<MyDataContext>()
    //.HybridHttpOrThreadLocalScoped()  // this kept a single context alive per thread/HTTP context
    .Use(() => new MyDataContext());
The default lifecycle is per call (transient), which is what I needed: a fresh DataContext for every resolution instead of one that lives across calls.
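
For anyone wondering why the scoped context served stale values: LINQ to SQL's DataContext keeps an identity map, so once an entity is tracked, re-querying it by the same key hands back the already-tracked instance rather than fresh database values. A hedged illustration (entity and property names are assumptions, as above):

using System.Linq;

static void StaleReadDemo()
{
    using (var ctx = new MyDataContext())   // imagine this context is thread-scoped
    {
        var first = ctx.Locations.Single(l => l.Id == 1);  // hits the database

        // ...another context (another WCF call) updates the row's Name to "BC 3"...

        var again = ctx.Locations.Single(l => l.Id == 1);
        // The query executes, but the materialized row is reconciled against the
        // context's identity map: 'again' is the same tracked instance as 'first',
        // still holding the old Name. A fresh context per call avoids this.
    }
}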

Related

Entity Framework 5 entity in separate dll

I have a DLL that is a logging component that we use in many different projects. This logging component is designed to be entirely self-contained, and thus it must have its database connection string internal to it. In this project, it is completely unacceptable to require that its connection string be copied to the app.config of any project that uses it.
This has been working great for years, but now we have found that mixing its older ADO tech with new apps that use EF results in horrible performance when the logging is being done. For example, adding a single log entry when the application starts results in a > 30 second delay before the app opens.
So to combat this, I have re-written this component to use EF.
The problem is that the current EF (version 4.4, since we are targeting .NET Framework 4.0) does not seem to offer a DbContext constructor that allows you to specify the entire connection string. The code attempts to change Database.Connection.ConnectionString, but the DbContext object insists on looking in the App.config for the connection string even though we are giving it a new one.
I must get around this behavior.
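
For what it's worth, DbContext does expose a protected DbContext(string nameOrConnectionString) constructor that accepts a full connection string rather than a config-file name, so a derived context inside the component can bake the string in. A sketch under that assumption (LogContext, LogEntry, and the connection string are illustrative; a database-first EDMX model would instead need a full EntityConnection string including the metadata= section):

using System.Data.Entity;

public class LogEntry
{
    public int Id { get; set; }
    public string Message { get; set; }
}

public class LogContext : DbContext
{
    // Internal to the component; never read from the host's App.config.
    private const string Conn =
        "Data Source=LOGSERVER;Initial Catalog=Logging;Integrated Security=True";

    public LogContext() : base(Conn) { }  // a full connection string, not a name

    public DbSet<LogEntry> Entries { get; set; }
}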

Concurrency violation while updating and deleting newly added rows

I've been developing a CRUD application using DataSets in C# and SQL Server 2012. The project is basically an agenda which holds information about Pokémon (name, abilities, types, image, etc.).
For the past few months I've been facing a problem related to concurrency violations. In other words, when I try to delete or update rows that I've just added during the same execution of the program, a concurrency exception is thrown and it isn't possible to perform any other changes in the database, so I need to restart the program in order to be able to perform the changes again. (Important note: this exception only happens for new rows added through C#.)
I've been looking for a solution to this violation (without using Entity Framework or LINQ to SQL), but I couldn't find anything that I could add to the C# source code. Does anyone know how to handle this? What should I implement in my source code? Is there anything to do in SQL Server that could help?
Here is a link to my project, a backup of the database, and images of the main form and the database diagram:
http://www.mediafire.com/download.php?izkat44a0e4q8em (C# source code)
http://www.mediafire.com/download.php?rj2e118pliarae2 (Sql backup)
imageshack.us/a/img708/3823/pokmonform.png (Main Form)
imageshack.us/a/img18/9546/kantopokdexdiagram.png (Database Diagram)
I have looked at your code and it seems that you use AcceptChanges on the datatable daKanto inconsistently. In fact you use AcceptChangesDuringUpdate, which is also fine, although I prefer to call datatable.AcceptChanges() explicitly after the update; your way works too.
Anyway, I have noticed that you use AcceptChangesDuringUpdate in the Delete_Click and Update_Click methods, but do not use it in Save_Click, and I also think you should use AcceptChangesDuringFill in MainForm_Load, where you fill your datasets. A sketch of the uniform setup follows.
I cannot guarantee that it will help, but I know that uniformity of data access throughout the application reduces the risk of unexpected data consistency errors.
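
A hedged sketch of that uniform setup, reusing the daKanto adapter name from the question (everything else is illustrative):

using System.Data;
using System.Data.SqlClient;

static class KantoData
{
    // For MainForm_Load: filled rows arrive as Unchanged instead of Added.
    public static void LoadTable(SqlDataAdapter daKanto, DataTable dtKanto)
    {
        daKanto.AcceptChangesDuringFill = true;
        daKanto.Fill(dtKanto);
    }

    // For Save_Click, Update_Click, and Delete_Click alike: once a row is
    // written, mark it Unchanged so its state in the DataSet matches the DB.
    public static void SaveTable(SqlDataAdapter daKanto, DataTable dtKanto)
    {
        daKanto.AcceptChangesDuringUpdate = true;
        daKanto.Update(dtKanto);
    }
}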

Accessing a single RavenDB from different applications

I have a web project that stores objects in RavenDB. For simplicity, the classes live in the web project.
I now have a batch job, a separate application, that needs to query the same database and extract information from it.
Is there a way I can tell Raven to map the documents to the classes in the batch job project that have the same properties as those in the web project?
I could create a shared DLL with just these classes in it if that's needed, but that seems like unnecessary hassle.
As long as the structure of the classes you are deserializing into partially matches the structure of the data, it shouldn't make a difference.
The RavenDB server doesn't care at all what classes you use in the client. You certainly could share a dll, or even share a portable dll if you are targeting a different platform. But you are correct that it is not necessary.
However, you should be aware of the Raven-Clr-Type metadata value. The RavenDB client sets this when storing the original document. It is consumed back by the client to assist with deserialization, but it is not fully enforced. The logic basically is this:
- Is there Raven-Clr-Type metadata?
- If yes, do we have that type loaded in the current app domain?
- If yes, then deserialize into that type.
- If none of the above, then deserialize dynamically and cast into the requested type (basically, duck-typing).
You can review this bit of the internals in the RavenDB source code on GitHub.
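
A minimal sketch of the batch-job side, assuming an illustrative Report class whose shape (and, ideally, name, so the default collection-name convention lines up) mirrors the web project's class; the URL and database name are placeholders:

using System.Linq;
using Raven.Client.Document;

// Duplicated locally; only the shape needs to match the stored documents.
public class Report
{
    public string Id { get; set; }
    public string Title { get; set; }
}

class BatchJob
{
    static void Main()
    {
        using (var store = new DocumentStore
        {
            Url = "http://localhost:8080",   // placeholder server
            DefaultDatabase = "WebAppDb"     // placeholder database
        }.Initialize())
        using (var session = store.OpenSession())
        {
            // Deserializes into the local Report type; the stored
            // Raven-Clr-Type metadata is only a hint, not enforced.
            var reports = session.Query<Report>().ToList();
        }
    }
}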

SharePoint Content Type Event Receivers Impossible to Remove

I have a very odd situation in my SharePoint staging environment. We recently stood up a new SharePoint 2010 server (single WFE + a DB server) and attached a backed-up content database from our existing environment. We created a new web application and pointed it at the attached content database. All of our site collections, sites, lists, etc. appeared, and everything looked good.
We had deployed some custom content types to our existing environment prior to moving the database, and we wanted to upgrade those content types. Specifically, we attach event receivers to the content types (using code, not XML) and we needed to update the assembly version that those event receivers point to. So we ran our usual code (part of a feature receiver) to remove the event receivers, but to our surprise, the receivers remained.
In an attempt to remedy the situation, we wrote a console application that iterates over all content types (SPWeb.ContentTypes) in the root site of each site collection, deletes their event receivers, and then calls SPContentType.Update(true) on each content type. No errors are returned from the call to Update, but again, to our even greater surprise, SharePoint still reports that the event receivers are attached.
In a desperate last-ditch effort, we even went into the content database (after taking a snapshot -- and remember, this is staging, not production!) and manually DELETED the offending receivers from the EventReceivers table. We figured that should have at least some kind of effect. Alas, SharePoint still reports the receivers as present.
We perform these types of upgrades on content type event receivers all the time, but have never run into this issue on any other SharePoint farm. Does it sound like an environmental problem? Is it something that could have been caused by moving the content database? Any help would be appreciated, because we are completely stumped at this point.
First of all, I would never recommend changing anything directly in the database; it will surely give you trouble in the long run.
You mentioned that you tried to remove the event receivers at the Web level, but I'm not sure if you have tried removing them at the List/Library level.
Use the SPContentTypeUsage class and try deleting them at the List/Library level; a sketch follows.
http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spcontenttypeusage.aspx
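
A hedged sketch of that approach; the site URL and content type name are placeholders, and each list-level usage has its receivers removed and updated individually:

using System;
using Microsoft.SharePoint;

class RemoveListLevelReceivers
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://staging"))  // placeholder URL
        {
            SPContentType siteCt = site.RootWeb.ContentTypes["MyContentType"];  // placeholder name

            foreach (SPContentTypeUsage usage in SPContentTypeUsage.GetUsages(siteCt))
            {
                if (!usage.IsUrlToList)
                    continue;  // only list/library-level copies

                using (SPWeb web = site.OpenWeb(usage.Url, false))
                {
                    SPList list = web.GetList(usage.Url);
                    SPContentType listCt = list.ContentTypes[siteCt.Name];

                    // Delete from the end so the collection can shrink safely.
                    for (int i = listCt.EventReceivers.Count - 1; i >= 0; i--)
                        listCt.EventReceivers[i].Delete();

                    listCt.Update();
                }
            }
        }
    }
}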

NHibernate Profiler - Shows no information other than "session"?

So I am having problems getting NHibernate integrated into my MVC project. I therefore installed NHProf and initialized it in the Global.asax.cs file (NHibernateProfiler.Initialize();).
However, all I can see in NHProf is a session # and the time it took to come up. Selecting it (or doing anything else) doesn't show me any information about the connection to the database, or any information at all in any of the other windows, such as:
- Statements, Entities, Session Usage
The Session Factory Statistics window only shows start time and execution time, and that's it.
Any thoughts?
Do you have any custom log4net configuration? Just thinking that it might be overwriting NHProf's log4net listener after startup. If you refresh the page (and hence start another session*), does NHProf display another session start? Also verify that your HibernatingRhinos.Profiler.Appender.dll (or HibernatingRhinos.Profiler.Appender.v4.0.dll if you're using .NET 4) matches the version of NHProf you're running.
* I'm assuming that you're using Session-per-Request since this is a web app.
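
If a custom log4net configuration is the culprit, one way to test that hypothesis is to make sure nothing reconfigures logging after the profiler hooks in. A hedged sketch of Application_Start in Global.asax.cs, assuming log4net is configured explicitly somewhere in startup:

protected void Application_Start()
{
    // Run any custom log4net configuration first...
    log4net.Config.XmlConfigurator.Configure();

    // ...then initialize NHProf last, so the call above cannot replace
    // the appender that the profiler installs.
    HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize();

    // ...routes, session factory, and the rest of startup.
}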