I just set up a new EF6 project. In my database I have 2 tables:
- languages
- languageDescriptions
(with a relation between them)
LazyLoadingEnabled is set to false on the context (both in the .edmx and in code).
When getting data from languages:
return context.languages
gives me, on the FIRST RUN, the correct output: all language records.
But when I then run context.languageDescriptions and query context.languages again, the output also includes the descriptions.
Any ideas? Caching?
The Language class is auto-generated (by the .tt file):
Partial Public Class Language
    Public Property Lang_ID As Integer
    Public Property Lang_Name As String
    Public Property Lang_Code As String
    Public Overridable Property LanguageDescription As ICollection(Of LanguageDescription) = New HashSet(Of LanguageDescription)
End Class
Internally, Entity Framework leverages the Identity Map pattern, which "ensures that each object gets loaded only once by keeping every loaded object in a map" and "looks up objects using the map when referring to them".
When you get Languages from the context and then LanguageDescriptions, EF knows that LanguageDescription is a navigation property of Language, so even though lazy loading is turned off, each loaded Language will contain its associated LanguageDescriptions because they are already loaded.
However, some will argue that LanguageDescription is actually a value object and should not be exposed directly on your context; it should only be accessible as part of its root entity, Language.
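To see the fix-up in action, here is a rough C# sketch (the context type and set names are assumed for illustration, not taken from your model):
using (var context = new LanguagesContext())
{
    context.Configuration.LazyLoadingEnabled = false;

    // 1) Languages are materialized; each LanguageDescription collection is still empty.
    var languages = context.Languages.ToList();

    // 2) The descriptions are materialized into the same context. Relationship fix-up
    //    attaches each one to the already-tracked Language it belongs to.
    var descriptions = context.LanguageDescriptions.ToList();

    // languages[0].LanguageDescription is now populated, with no Include() and no lazy loading.
}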
Update (as per your comment):
If you want to explicitly disable auto-filling of the LanguageDescription property, you can try clearing the local cache:
context.LanguageDescription.Local.Clear()
You can also get the list of LanguageDescription entities using the AsNoTracking() method, so the returned entities are not attached to the context (and therefore not fixed up into the tracked Languages):
context.LanguageDescription.AsNoTracking();
Or project the existing Language collection into a shape that doesn't include the LanguageDescription property (note that LINQ to Entities cannot construct a mapped entity type, so project into an anonymous type or an unmapped DTO):
From Lang In context.Languages
Select New With {
    .Lang_ID = Lang.Lang_ID,
    .Lang_Name = Lang.Lang_Name,
    .Lang_Code = Lang.Lang_Code
}
I know this sounds strange... but here is what I'm trying to do, how far along I am, and why I'm doing it in the first place:
Each instance of the class is configured with the name of the entity class it will handle; the context class is also available when preparing the batch class.
Pass a generic list of objects (entities, really) to a class.
That class (which can be pre-configured for this particular entity class) does just one thing: it adds new entities to the backend database via the DbContext.
The same class can be used for any entity described in the Context class, but each instance of the class is for just one entity class.
I want to write an article on my blog showing how performance changes when you dynamically adjust the batch size for EF persistence, and how the class can continually search for the optimum batch size.
When I say "DBContext" class, I mean a class similar to this:
Public Class CarContext
    Inherits DbContext

    Public Sub New()
        MyBase.New("name=vASASysContext")
    End Sub

    Public Property Cars As DbSet(Of Car)

    Protected Overrides Sub OnModelCreating(modelBuilder As DbModelBuilder)
        modelBuilder.Configurations.Add(Of Car)(New CarConfiguration())
    End Sub
End Class
That may sound confusing. Here is a use-case (sorta):
Need: I need to add a bunch of entities to a database. Let's call them cars, for lack of an easier example. Each entity is an instance of a Car class, which is configured via Code First EF6 so it can be manipulated like any other class that is well defined in the DbContext. Just as with real-world classes, not all attributes are mapped to a database column.
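For reference, the CarConfiguration used in OnModelCreating above might look roughly like this (shown in C#; the Id and InternalNotes properties are made up for illustration, since the Car class isn't shown):
using System.Data.Entity.ModelConfiguration;

public class CarConfiguration : EntityTypeConfiguration<Car>
{
    public CarConfiguration()
    {
        ToTable("Cars");
        HasKey(c => c.Id);            // assumed key property
        Ignore(c => c.InternalNotes); // an attribute deliberately not mapped to a column
    }
}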
High Level:
- I throw all the entities into a generic list, the kind of list supported by our 'batch class'.
- I create an instance of our batch class and configure it to know that it is going to be dealing with Car entities. Perhaps I even pass in the context class that has the Public Property Cars As DbSet(Of Car) line.
- Once the batch class is ready (it may only need the DbContext instance and the name of the entity), it is given the list of entities via a function, something like:
Public Function AddToDatabase(TheList As List(Of T)) As Double
The idea is pretty simple. My batch class is set to add these cars to the database, and will add the cars in the list it was given. The same rules apply to adding the entities in batch as they do when adding via DBContext normally.
All I want to happen is that the batch class itself does not need to be customized for each entity type it would deal with. Configuration is fine, but not customization.
OK... But WHY?
Adding entities via a list is easy. The secret sauce is the batch class itself. I've already written all the logic that determines the rate at which the entities are added (something akin to 'entities per second'). It also keeps track of the best performance and occasionally varies the batch size to verify it is still optimal.
The reason the function above (called AddToDatabase() for illustrative purposes) returns a Double is that the value represents the number of entities that were added per second.
Ultimately, the batch class returns to the calling method the number of entities to put in the next batch to maintain peak performance.
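Purely to illustrate the idea (all names hypothetical; this is not the actual batch class), the feedback loop could be as simple as timing each batch and nudging the batch size toward the best observed rate, e.g. in C#:
using System;
using System.Diagnostics;

// Bare-bones sketch of the adaptive batch-size idea described above.
public class BatchSizeTuner
{
    public int BatchSize { get; private set; } = 100;

    private double _bestRate;  // best entities/second observed so far
    private int _step = 25;    // how much to vary the batch size per probe

    // Times one batch insert and adjusts BatchSize; returns entities per second.
    public double Measure(Action insertBatch, int entityCount)
    {
        var sw = Stopwatch.StartNew();
        insertBatch();
        sw.Stop();

        double rate = entityCount / sw.Elapsed.TotalSeconds;
        if (rate > _bestRate)
        {
            _bestRate = rate;
            BatchSize += _step;                          // keep probing in this direction
        }
        else
        {
            _step = -_step;                              // overshot the peak: reverse
            BatchSize = Math.Max(10, BatchSize + _step);
        }
        return rate;
    }
}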
So my question for you all is: how can I determine the entity fields, and use that to save the entities in a given list?
Can it be as simple as copying the entities? Do I even need to use reflection at all, and have the class use 'GetType' to figure out the entity class in the list (cars)?
How would you go about this?
Thank you very much in advance for reading this far, and for your thoughtful response.
[Don't read further unless you are into this kind of thing!]
The performance of a database operation isn't linear, and is dependent on several factors (memory, CPU load, DB connectivity, etc.), and the DB is not always on the same machine as the application. It may even involve web services.
Your first instinct might be to say that more entities per batch is always better, but that is probably not true in most cases. As the batch size grows, at first you see an increase in performance (entities/second), but performance may reach a maximum and then start to decrease (for a lot of reasons, not excluding environmental ones such as memory). For non-memory issues, the batch performance may simply level off, and we haven't even discussed the impact of the batch on the system itself.
So in the case of a leveling off, I don't want my batch size any larger than it needs to be to be in the neighborhood of peak performance. Also, with smaller batch sizes, the class is able to evaluate the system's performance more frequently.
Being new to Code First and EF6, I assume there must be some way to use reflection to take the given list of entities, break them apart into their attributes, and persist them via EF itself.
So far, I do it this way because I need to manually configure each parameter in the INSERT INTO...
For Each d In TheList
    s = "INSERT INTO BDTest (StartAddress, NumAddresses, LastAddress, Duration) VALUES (@StartAddress, @NumAddresses, @LastAddress, @Duration)"
    Using cmd As SqlCommand = New SqlCommand(s, conn)
        conn.Open()
        cmd.Parameters.Add("@StartAddress", Data.SqlDbType.NVarChar).Value = d.StartIP
        cmd.Parameters.Add("@NumAddresses", Data.SqlDbType.Int).Value = d.NumAddies
        cmd.Parameters.Add("@LastAddress", Data.SqlDbType.NVarChar).Value = d.LastAddie
        singleRate = CDbl(Me.TicksPerSecond / swSingle.Elapsed.Ticks)
        cmd.Parameters.Add("@Duration", Data.SqlDbType.Int).Value = singleRate
        cmd.ExecuteNonQuery()
        conn.Close()
    End Using
Next
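Just to sketch the reflection idea mentioned above (still raw ADO.NET, not EF, and it assumes every public property maps to a same-named column, which a real EF model usually won't satisfy, so treat it as illustration only):
using System;
using System.Data.SqlClient;
using System.Linq;

// Builds one parameterized INSERT for an arbitrary entity by reflecting over
// its public readable properties (assumed to match the column names).
static void InsertViaReflection(object entity, string tableName, SqlConnection conn)
{
    var props = entity.GetType().GetProperties().Where(p => p.CanRead).ToList();

    string columns = string.Join(", ", props.Select(p => p.Name));
    string values = string.Join(", ", props.Select(p => "@" + p.Name));
    string sql = "INSERT INTO " + tableName + " (" + columns + ") VALUES (" + values + ")";

    using (var cmd = new SqlCommand(sql, conn))
    {
        foreach (var p in props)
            cmd.Parameters.AddWithValue("@" + p.Name, p.GetValue(entity, null) ?? DBNull.Value);
        cmd.ExecuteNonQuery();
    }
}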
I need to steer away in this test code from using SQL, and closer toward EF6...
What are your thoughts?
TIA!
So, there are two issues I see that you should tackle. First is your question about creating a "generic" method to add a list of entities to the database.
Public Function AddToDatabase(TheList as List(Of T)) As Double
If you are using POCO entities, then I would suggest you create an abstract base class or interface for all entity classes to inherit from/implement. I'll go with IEntity.
Public Interface IEntity
End Interface
So your Car class would be:
Public Class Car
    Implements IEntity
    ' all your class properties here
End Class
That would handle the generic issue.
The second issue is one of batch inserts. A possible implementation of your method is shown below. It inserts in batches of 100; modify the parameter inputs as needed. Also replace MyDbContext with the actual type of your DbContext.
' Requires Imports System.Transactions and Imports System.Diagnostics
Public Function AddToDatabase(ByVal entities As List(Of IEntity)) As Double
    Dim sw As Stopwatch = Stopwatch.StartNew()
    Using scope As New TransactionScope
        Dim context As MyDbContext = Nothing
        Try
            context = New MyDbContext()
            context.Configuration.AutoDetectChangesEnabled = False
            Dim count = 0
            For Each entityToInsert In entities
                count += 1
                context = AddToContext(context, entityToInsert, count, 100, True)
            Next
            context.SaveChanges()
        Finally
            If context IsNot Nothing Then
                context.Dispose()
            End If
        End Try
        scope.Complete()
    End Using
    sw.Stop()
    ' Entities added per second, to match the Double return type
    Return entities.Count / sw.Elapsed.TotalSeconds
End Function
Private Function AddToContext(ByVal context As MyDbContext, ByVal entity As IEntity, ByVal count As Integer, ByVal commitCount As Integer, ByVal recreateContext As Boolean) As MyDbContext
    context.Set(entity.GetType()).Add(entity)
    If count Mod commitCount = 0 Then
        context.SaveChanges()
        If recreateContext Then
            context.Dispose()
            context = New MyDbContext()
            context.Configuration.AutoDetectChangesEnabled = False
        End If
    End If
    Return context
End Function
Also, please forgive me if this is not 100% perfect, as I mentally converted it from C# to VB.Net while typing. It should be very close.
I have been seeing some odd behaviour in an entity for which I created a partial class to override the ToString method and to do some basic property setting in a constructor when a new instance of that entity is created (for example, I might set an order date to Now).
This odd behaviour led me to look closely at the partial class, and I was surprised to see that even when a set of pre-existing records was being retrieved, the constructor was being called for each retrieved record.
Below is a very simple example of what I might have:
Partial Public Class Product
    Public Sub New()
        CostPrice = 0.0
        ListPrice = 0.0
    End Sub

    Public Overrides Function ToString() As String
        Return ProductDescription
    End Function
End Class
I have two questions that arise from this:
1) Is this normal behaviour in Entity Framework if you add a partial class with a constructor?
2) If not, then I must assume I have done something wrong, so what would be the correct way to override the constructor to do things like the example I mentioned above?
Thanks for any insights that you can give me.
This is using EF 5.0 in a VB project.
Think about the sequence of events leading to the retrieval of an entity from the database. Basically it is something like:
query the database
for each row of the query result, materialize an entity
Materialization then works as follows for each retrieved row:
create a new instance of the entity type
populate this new instance with the values of the row
So with each instance creation, the constructor is called.
I think you are mixing:
instance initialization, where you "allocate" the object, and
business initialization, where you enforce business logic.
Both may be done, at least partially, in the constructor.
New is always called when a class is instantiated, and if you do not explicitly declare a constructor then a default constructor is created by the compiler.
Unless the class is static, classes without constructors are given a public default constructor by the compiler in order to enable class instantiation.
When defining POCO classes for Entity Framework, the class must have a default constructor, and EF will always call this default constructor whether you explicitly defined it or the compiler did it for you.
If for any reason you need to do anything to the class when it is instantiated by EF, you can use the ObjectContext.ObjectMaterialized event.
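For example, a rough C# sketch of wiring that event up from a DbContext-based model (the Product type is from the question; the RetrievedAt property is made up for illustration):
using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    public ShopContext()
    {
        // Fires only when EF materializes an entity from a query result,
        // not when your own code creates a new Product.
        ((IObjectContextAdapter)this).ObjectContext.ObjectMaterialized += (sender, e) =>
        {
            var product = e.Entity as Product;
            if (product != null)
            {
                product.RetrievedAt = DateTime.Now; // hypothetical property
            }
        };
    }
}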
This is a follow-on question to my earlier question on lazy-loading properties. Since the application is an enhancement to a production application in a fairly major enterprise, and it is currently running on NHibernate 1.2, upgrading to version 3 is not going to happen, so the suggested answer of using lazy properties in 3.0 won't work for me.
To summarize the problem: I have a simple database with 2 tables. One has about a dozen simple fields, plus a one-to-many relation to the second table as a child table. Two of the fields are very large blobs (several megabytes each), and I want to, effectively, lazy load them. The table has about 10 records, and they populate a grid at start-up, but access to the large blobs is only needed for whichever row is selected.
The object structure looks something like this:
[Serializable]
[Class(Schema = "dbo", Lazy = false)]
public class DataObject
{
    [Id(-2, Name = "Identity", Column = "Id")]
    [Generator(-1, Class = "native")]
    public virtual long Identity { get; set; }

    [Property]
    public string FieldA { get; set; }

    [Property]
    public byte[] LongBlob { get; set; }

    [Property]
    public string VeryLongString { get; set; }

    [Bag(-2, Cascade = CascadeStyle.All, Lazy = false, Inverse = true)]
    [Key(-1, Column = "ParentId")]
    [OneToMany(0, ClassType = typeof(DataObjectChild))]
    public IList<DataObjectChild> ChildObjects { get; set; }
}
Currently, the table is accessed with a simple query:
objectList = (List<DataObject>) Session.CreateQuery("from DataObject").List<DataObject>();
And that takes care of everything, including loading the child objects.
What I would like is a query that does exactly the same thing, except it selects all the properties of DataObject EXCEPT the two blobs. Then I can add custom property getters that go out and load those blobs when they are accessed.
So far, I have not been successful at doing that.
Things I have tried:
a) constructing an HQL query using a SELECT list.
It looks something like this:
objectList = (List<DataObject>) Session.CreateQuery(
    "SELECT new DataObject " +
    "(obj.Identity, obj.FieldA) " +
    "from DataObject as obj")
    .List<DataObject>();
That works, though I have to add a constructor to DataObject that builds it from the selected fields, which is rather cumbersome. But then it doesn't load and populate the list of child objects, and I can't figure out how to do that easily (and don't really want to; I want to tell NHib to do it).
b) removing the [Property] attribute from the two fields in the object definition. That keeps the NHibernate.Mapping.Attributes from mapping those fields, so they don't get included in the query, but then I have no way to access them from NHib at all, including writing them out when I go to save a new or modified DataObject.
I'm thinking there has to be an easier way. Can somebody point me in the right direction?
Thanks
I think you're on the right path. However, you're going to have to do more manual work since you're using a very old version of NHibernate. I would fetch projections of your entities that contain only the columns you want to eagerly load. Then when the UI requests the large blob objects, you're going to have to write another query to get them and supply them to the UI.
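As a rough sketch of that approach against the NHibernate 1.2-era API (session and selectedId are assumed; the property names come from the mapping above):
// Grid query: only the cheap columns, no blobs.
// Each element of "rows" is an object[] { Identity, FieldA }.
IList rows = session.CreateQuery(
    "select obj.Identity, obj.FieldA from DataObject obj")
    .List();

// When a row is selected, fetch just that row's blob on demand.
byte[] blob = (byte[]) session.CreateQuery(
    "select obj.LongBlob from DataObject obj where obj.Identity = :id")
    .SetInt64("id", selectedId)
    .UniqueResult();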
Another option would be to have a second class (e.g. SmallDataObject) with the same mapping (but without the blobs) and use that for the list. When editing a list item you would load the class with the blobs using the id of the selected list item.
In any case, modifying the mapping after the creation of the SessionFactory is not possible, so you cannot get rid of the mapped properties on demand.
In my Google Web Toolkit project, I got the following error:
com.google.gwt.user.client.rpc.SerializationException: Type 'your.class.Type' was not included in the set of types which can be serialized by this SerializationPolicy or its Class object could not be loaded. For security purposes, this type will not be serialized.
What are the possible causes of this error?
GWT keeps track of a set of types which can be serialized and sent to the client. your.class.Type apparently was not on this list. Lists like this are stored in .gwt.rpc files. These lists are generated, so editing these lists is probably useless. How these lists are generated is a bit unclear, but you can try the following things:
Make sure your.class.Type implements java.io.Serializable
Make sure your.class.Type has a public no-args constructor
Make sure the members of your.class.Type do the same
Check if your program does not contain collections of a non-serializable type, e.g. ArrayList<Object>. If such a collection contains your.class.Type and is serialized, this error will occur.
Make your.class.Type implement IsSerializable. This marker interface was specifically meant for classes that should be sent to the client. This didn't work for me, but my class also implemented Serializable, so maybe both interfaces don't work well together.
Another option is to create a dummy class with your.class.Type as a member, and add a method to your RPC interface that gets and returns the dummy. This forces the GWT compiler to add the dummy class and its members to the serialization whitelist.
I'll also add that if you want to use a nested class, use a static member class.
I.e.,
public class Pojo {
    public static class Insider {
    }
}
Non-static member classes get the SerializationException in GWT 2.4.
I had the same issue in a RemoteService like this
public List<X> getX(...);
where X is an interface. The only implementation did conform to the rules, i.e. implements Serializable or IsSerializable, has a default constructor, and all its (non-transient and non-final) fields follow those rules as well.
But I kept getting that SerializationException until I changed the result type from List<X> to X[], so
public X[] getX(...);
worked. Interestingly, the only argument being a List<Y>, Y being an interface, was no problem at all...
I have run into this problem, and if you happen to be using JPA or Hibernate, it can be the result of returning the query object itself instead of creating a new object and copying your relevant fields into that new object. Check out the following, which I saw in a Google group.
@SuppressWarnings("unchecked")
public static List<Article> getForUser(User user)
{
    List<Article> articles = null;
    PersistenceManager pm = PMF.get().getPersistenceManager();
    try
    {
        Query query = pm.newQuery(Article.class);
        query.setFilter("email == emailParam");
        query.setOrdering("timeStamp desc");
        query.declareParameters("String emailParam");
        List<Article> results = (List<Article>) query.execute(user.getEmail());
        articles = new ArrayList<Article>();
        for (Article a : results)
        {
            a.getEmail();
            articles.add(a);
        }
    }
    finally
    {
        pm.close();
    }
    return articles;
}
this helped me out a lot, hopefully it points others in the right direction.
Looks like this question is very similar to "IsSerializable or not in GWT?"; see more links to related documentation there.
When your class has JDO annotations, then this fixed it for me (in addition to the points in bspoel's answer) : https://stackoverflow.com/a/4826778/1099376
I'd like to use for table storage an entity like this:
public class MyEntity
{
    public String Text { get; private set; }
    public Int32 SomeValue { get; private set; }

    public MyEntity(String text, Int32 someValue)
    {
        Text = text;
        SomeValue = someValue;
    }
}
But that's not possible, because ATS needs:
- a parameterless constructor
- all properties public and read/write
- inheriting from TableServiceEntity.
The first two are things I don't want to do. Why should I let anybody change data that should be read-only, or create objects of this kind in an inconsistent way (what are .ctors for, then?), or, even worse, alter the PartitionKey or the RowKey? Why are we still constrained by these deserialization requirements?
I don't like developing software that way. How can I use the table storage library so that I can serialize and deserialize the objects myself? I think that as long as the objects inherit from TableServiceEntity it shouldn't be a problem.
So far I have managed to save an object, but I don't know how to retrieve it:
Message m = new Message("message XXXXXXXXXXXXX");
CloudTableClient tableClient = account.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("Messages");
TableServiceContext tcontext = new TableServiceContext(account.TableEndpoint.AbsoluteUri, account.Credentials);
var list = tableClient.ListTables().ToArray();
tcontext.AddObject("Messages", m);
tcontext.SaveChanges();
Is there any way to avoid those deserialization requirements or get the raw object?
Cheers.
If you want to use the Storage Client Library, then yes, there are restrictions on what you can and can't do with your objects that you want to store. Point 1 is correct. I'd expand point 2 to say "All properties that you want to store must be public and read/write" (for integer properties you can get away with having read only properties and it won't try to save them) but you don't actually have to inherit from TableServiceEntity.
TableServiceEntity is just a very light class that has the properties PartitionKey, RowKey, Timestamp and is decorated with the DataServiceKey attribute (take a look with Reflector). All of these things you can do to a class that you create yourself and doesn't inherit from TableServiceEntity (note that the casing of these properties is important).
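As a sketch of that (DataServiceKey lives in System.Data.Services.Common; the Text property is just an illustrative payload):
using System;
using System.Data.Services.Common;

// No TableServiceEntity in sight: the class exposes the three well-known
// properties itself (note the exact casing) and declares the key explicitly.
[DataServiceKey("PartitionKey", "RowKey")]
public class Message
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }

    public string Text { get; set; } // illustrative payload property
}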
If this still doesn't give you enough control over how you build your classes, you can always ignore the Storage Client Library and just use the REST API directly. This gives you the ability to serialize and deserialize the XML any way you like. You will lose all of the nice things that come with using the library, like the ability to create queries in LINQ.
The constraints around that ADO.NET wrapper for the Table Storage are indeed somewhat painful. You can also adopt a Fat Entity approach as implemented in Lokad.Cloud. This will give you much more flexibility concerning the serialization of your entities.
Just don't use inheritance.
If you want to use your own POCOs, create your class as you want it and create a separate table entity wrapper/container class that holds the PartitionKey and RowKey and carries your class as a serialized byte array.
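A sketch of that wrapper idea (assuming the v1 storage client, where TableServiceEntity lives in Microsoft.WindowsAzure.StorageClient; the serializer choice is yours):
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using Microsoft.WindowsAzure.StorageClient;

// The table entity exposes only the keys plus a byte[] payload; the real POCO
// never has to satisfy ATS's parameterless-constructor / public-setter rules.
public class PocoWrapperEntity : TableServiceEntity
{
    public byte[] Payload { get; set; }

    public PocoWrapperEntity() { } // required by the library

    public PocoWrapperEntity(string partitionKey, string rowKey, object poco)
        : base(partitionKey, rowKey)
    {
        using (var ms = new MemoryStream())
        {
            // BinaryFormatter requires the POCO to be [Serializable];
            // any other serializer (XML, JSON, ...) works just as well.
            new BinaryFormatter().Serialize(ms, poco);
            Payload = ms.ToArray();
        }
    }
}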
You can use composition to achieve what you want.
Create your Table Entities as you need to for storage and create your POCOs as wrappers on those providing the API you want the rest of your application code to see.
You can even mix in some interfaces for better code.
How about generating the POCO wrappers at runtime using System.Reflection.Emit? http://blog.kloud.com.au/2012/09/30/a-better-dynamic-tableserviceentity/