Find Underlying Column Size Via NHibernate Metadata - nhibernate

Is there a way to use SessionFactory.GetClassMetadata(), or any other method you're aware of, to dynamically get the maximum size of a varchar column that underlies an NHibernate class' string property?
To clarify, I'm not looking to read a length attribute that's specified in the NHibernate mapping file. I want to deduce the actual database column length.

See the code below for two different ways you can get the column size for a string from NHib metadata.
Cheers,
Berryl
[Test]
public void StringLength_DefaultIs_50_v1()
{
    // via the session factory's class metadata
    _metadata = _SessionFactory.GetClassMetadata(typeof(User));
    var propertyType = _metadata.GetPropertyType("Email") as StringType;
    Assert.That(propertyType.SqlType.Length, Is.EqualTo(50));
}

[Test]
public void StringLength_DefaultIs_50_v2()
{
    // via the Configuration's class mapping
    var mapping = _Cfg.GetClassMapping(typeof(User));
    var col = mapping.Table.GetColumn(new Column("Email"));
    Assert.That(col.Length, Is.EqualTo(50));
}

When the session factory is built, the NH engine does not inspect the underlying database to retrieve details such as column sizes. For your case, either provide a "rich" mapping so that everything is available at runtime, or write a function that reads the necessary information from the DB when you need it (e.g. select * from sys.columns ..... for SQL Server).
Mind you, a rich mapping also allows the NH engine to perform some automatic checks (like verifying that a string value is not longer than the length of the (n)varchar column).
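If you take the database route, a minimal sketch of such a lookup might look like this (the connection string and the table/column names are placeholders; INFORMATION_SCHEMA.COLUMNS is queried here instead of sys.columns, but either works on SQL Server):
using System;
using System.Data.SqlClient;

public static int GetColumnMaxLength(string connectionString, string table, string column)
{
    const string sql =
        @"SELECT CHARACTER_MAXIMUM_LENGTH
          FROM INFORMATION_SCHEMA.COLUMNS
          WHERE TABLE_NAME = @table AND COLUMN_NAME = @column";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@table", table);
        cmd.Parameters.AddWithValue("@column", column);
        conn.Open();

        object result = cmd.ExecuteScalar();
        if (result == null || result == DBNull.Value)
            throw new InvalidOperationException("Column not found or not a character type");

        return (int)result; // -1 means (n)varchar(max)
    }
}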

Related

How to update two columns with same name from two tables in a join query

I am getting an error:
Property or Indexer cannot be assigned to "--" it is read only
when trying to update two columns with the same name in two tables in a join query. How do I get this to work? Thanks!
The anonymous object created in your projection (the "select new" part) is read-only, and its properties are not tracked by the data context in any way.
Instead, project the two entity instances themselves and modify those:
var updates =
    //... your existing join query here
    select new
    {
        p1 = p, // entity from the first table
        p2 = t  // entity from the second table
    };

foreach (var row in updates)
{
    row.p1.Processed = true;
    row.p2.Processed = true;
}
To improve performance you may also want to take a look at the batch update capabilities of Entity Framework Extensions (if you are using Entity Framework): https://entityframework-extensions.net/overview
Yes, that's because anonymous type properties are read-only. From the documentation:
Anonymous types provide a convenient way to encapsulate a set of read-only properties into a single object without having to explicitly define a type first.
I suggest you create a custom class (a DTO) holding the two entities you need:
public class PassengerDTO
{
    public Passenger Passenger { get; set; }
    public PassengerItinerary PassengerItinerary { get; set; }
}
Use it in your projection. You need the entity instances, not just the properties you want to modify, because when you modify the Processed property in the foreach, the proxy class that represents your entity changes the entity's status to Updated.
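For example (a minimal sketch; the context and set names are assumptions based on the entity names above):
var rows = (from p in context.Passengers
            join t in context.PassengerItineraries on p.Id equals t.PassengerId
            select new PassengerDTO
            {
                Passenger = p,
                PassengerItinerary = t
            }).ToList();

foreach (var row in rows)
{
    // tracked entity instances, so these changes are persisted on SaveChanges()
    row.Passenger.Processed = true;
    row.PassengerItinerary.Processed = true;
}
context.SaveChanges();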

How to convert existing POCO classes in C# to google Protobuf standard POCO

I have POCO classes, and I use Newtonsoft Json for serialization. Now I want to migrate to Google Protocol Buffers. Is there any way I can migrate all my classes (not manually) so that I can use Protocol Buffers for serialization and deserialization?
Do you just want it to work? The absolute simplest way to do this would be to use protobuf-net and add [ProtoContract(ImplicitFields = ImplicitFields.AllPublic)]. What this does is tell protobuf-net to make up the field numbers, which it does by taking all the public members, sorting them alphabetically, and just counting upwards. Then you can use your type with ProtoBuf.Serializer and it should behave in the way you expect.
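For example (a minimal sketch; Customer stands in for one of your own POCO classes):
using System.IO;
using ProtoBuf;

[ProtoContract(ImplicitFields = ImplicitFields.AllPublic)]
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// serialize and deserialize with protobuf-net
using (var stream = File.Create("customer.bin"))
{
    Serializer.Serialize(stream, new Customer { Id = 1, Name = "Fred" });
}
using (var stream = File.OpenRead("customer.bin"))
{
    var clone = Serializer.Deserialize<Customer>(stream);
}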
This is simple, but it isn't very robust. If you add, remove or rename members it can all get out of sync. The problem here is that the protocol buffers format doesn't include names - just field numbers, and it is much harder to guarantee numbers over time. If your type is likely to change, you probably want to define field numbers explicitly. For example:
[ProtoContract]
public class Foo
{
    [ProtoMember(1)]
    public int Id { get; set; }

    [ProtoMember(2)]
    public List<string> Names { get; } = new List<string>();
}
One other thing to watch out for is non-zero default values. By default, protobuf-net makes certain assumptions about implicit default values, so if you routinely use non-zero defaults without declaring them carefully, protobuf-net may misunderstand you. You can turn that behaviour off globally if you prefer:
RuntimeTypeModel.Default.UseImplicitZeroDefaults = false;

Serialize array of simple types into a single database field

Is it possible to configure NHibernate (specifically Fluent NHibernate) to serialize an array of simple types to a single database column? I seem to remember that this was possible, but it's been a while since I've used NHibernate.
Essentially I need to store the days of week that a person works (int[]) and would rather not have a separate table just for this purpose.
It is possible.
You need to implement an IUserType that takes care of mapping between your array and a database column (google it first; it's likely that somebody has already implemented one).
Alternatively, you can do the conversion in your entity class and map the single-field representation instead of the property. For example:
string numbers;

public int[] Numbers
{
    // Split returns strings, so parse them back to ints (requires System.Linq)
    get { return numbers.Split(',').Select(int.Parse).ToArray(); }
    set { numbers = string.Join(",", value.Select(x => x.ToString())); }
}
Yes, there's IUserType for scenarios like that. You could also use an enum with bit flags.
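For instance, a bit-flag enum for working days could look like this (a sketch; the enum itself is an assumption, but it lets a single int column hold the whole set):
[Flags]
public enum WorkDays
{
    None      = 0,
    Monday    = 1,
    Tuesday   = 2,
    Wednesday = 4,
    Thursday  = 8,
    Friday    = 16,
    Saturday  = 32,
    Sunday    = 64
}

// a Mon/Wed/Fri schedule stored as one int
var days = WorkDays.Monday | WorkDays.Wednesday | WorkDays.Friday;
bool worksWednesday = (days & WorkDays.Wednesday) != 0;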

Does CF ORM have an Active Record type Update()?

Currently I am working partly with cfwheels and its Active Record ORM (which is great), and partly raw cfml with its Hibernate ORM (which is also great).
Both work well in their respective situations, but the thing I miss most when using CF ORM is the model.update() method that is available in cfwheels, where you can just pass a form struct to the method and it will map the struct elements to the model properties and update the record. It's really good for updating and maintaining large tables. In CF ORM, it seems the only way to update a record is to set each column individually and then do a save. Is this the case?
Does CF9 ORM have an Active Record style update() (or equivalent) method which can just receive a struct of values and update the object without having to specify each one?
For example, instead of current:
member = entityLoadByPK('member',arguments.id);
member.setName(arguments.name);
member.setEmail(arguments.email);
is there a way to do something like this in CF ORM?
member = entityLoadByPK('member',arguments.id);
member.update(arguments);
Many thanks in advance
In my apps I usually create two helper functions for models which handle the task:
/*
 * Get properties as key-value structure
 * #limit Limit output to listed properties
 */
public struct function getMemento(string limit = "") {
    local.props = {};
    for (local.key in variables) {
        if (isSimpleValue(variables[local.key]) AND (arguments.limit EQ "" OR ListFind(arguments.limit, local.key))) {
            local.props[local.key] = variables[local.key];
        }
    }
    return local.props;
}

/*
 * Populate the model with given properties collection
 * #props Properties collection
 */
public void function setMemento(required struct props) {
    for (local.key in arguments.props) {
        variables[local.key] = arguments.props[local.key];
    }
}
For better security in setMemento, it is possible to check for the existence of local.key in the variables scope, but that would skip nullable properties.
So you can call myObject.setMemento(dataAsStruct); and then save it.
There's not a method exactly like the one you want, but EntityNew() does take an optional struct as a second argument, which will set the object's properties. Depending on how your code currently works, it may be clunky to use this method, and I don't know whether it has any bearing on whether a create or update is executed when you flush the ORM session.
If your ORM entities inherit from a master CFC, then you could add such a method there. Alternatively, you could write one as a function and mix it into your objects.
I'm sure you're aware, but that update() feature can be a source of security problems (known as the mass assignment problem) if used with unsanitized user input (such as the raw FORM scope).

WCF Entity Framework Concurrency

I've got a WCF service that makes calls to my Entity Framework repository classes to access data. I'm using Entity Framework 4 CTP, and am using my own POCO objects rather than the auto-generated entity objects.
The context lifetime is limited to the method call. For Select/Insert and Update methods I create the context and dispose of it in the same method returning disconnected entity objects.
I'm now trying to work out the best way to handle concurrency issues. For example this is what my update method looks like
public static Sale Update(Sale sale)
{
    using (var ctx = new DBContext())
    {
        var SaleToUpdate =
            (from t in ctx.Sales where t.ID == sale.ID select t).FirstOrDefault();
        if (SaleToUpdate == null) throw new EntityNotFoundException();

        ctx.Sales.ApplyCurrentValues(sale);
        ctx.SaveChanges();
        return sale;
    }
}
This works fine, but because I'm working in a disconnected way, no exception is thrown if the record has been modified since it was retrieved. This is going to cause concurrency issues.
What is the best way to solve this when you're using Entity Framework over WCF and are not keeping a global context?
The only method I can think of is to give my objects a version number and increment it each time a save is called. That would allow me to check that the version hasn't changed before I save. It's not the neatest solution, I know, and it would still allow the client to change their version number, which I really don't want them to be able to do.
EDIT :
Using Ladislav Mrnka's suggestion of row version fields in my entities, each of my entities now has a field called Version, mapped to a rowversion column. I then changed my Update method to look like this.
public static Sale Update(Sale sale)
{
    using (var ctx = new DBContext())
    {
        var SaleToUpdate =
            (from t in ctx.Sales where t.ID == sale.ID select t).FirstOrDefault();
        if (SaleToUpdate == null) throw new EntityNotFoundException();

        // compare the client's row version with the current one in the database
        if (!sale.Version.SequenceEqual(SaleToUpdate.Version))
            throw new OptimisticConcurrencyException("Record is out of date");

        ctx.Sales.ApplyCurrentValues(sale);
        ctx.SaveChanges();
        return sale;
    }
}
It seems to work, but please let me know if I should be doing it differently. I tried to use Entity Framework's built-in concurrency control by setting the Version field's concurrency mode to Fixed. Unfortunately that didn't work: the query that fetches the unchanged SaleToUpdate also picks up its current Version and uses that for the concurrency check, which of course passes. It feels like Entity Framework might be missing something here.
As mentioned, the best practice is to use a column of a row version type in your DB table for concurrency checking. Here is how it is implemented with Code First:
When using Code First in CTP3, you would need to use the fluent API to describe which properties need concurrency checking, but in CTP4 this can be done declaratively as part of the class definition, using data annotation attributes:
ConcurrencyCheckAttribute:
ConcurrencyCheckAttribute is used to specify that a property has a concurrency mode of "fixed" in the model. A fixed concurrency mode means that the property takes part in the entity's concurrency check during save operations. It applies to scalar properties only:
public class Sale
{
    public int SaleId { get; set; }

    [ConcurrencyCheck]
    public string SalesPersonName { get; set; }
}
Here, ConcurrencyCheck is turned on for the SalesPersonName property. However, if you decide to include a dedicated Timestamp property of type byte[] in your class, then TimestampAttribute is definitely the better choice:
TimestampAttribute:
TimestampAttribute is used to specify that a byte[] property has a concurrency mode of "fixed" in the model and that it should be treated as a timestamp column in the store model (a non-nullable byte[] in the CLR type). It applies only to scalar properties of type byte[], and only one TimestampAttribute can be present on an entity.
public class Sale
{
    public int SaleId { get; set; }

    [Timestamp]
    public byte[] Timestamp { get; set; }
}
Here, not only is the Timestamp property taken as the concurrency token, but EF Code First also learns that the property has a store type of timestamp and that it is a computed column: we never insert values into it; the value is computed by SQL Server itself.
Don't use a custom version number; use your database's built-in row version data type. A row version column is automatically modified each time the record changes; for example, MSSQL has the timestamp data type. You can map the timestamp column in EF and set it as a Fixed concurrency handler (I'm not sure how to do it with EF Code First, but I believe the fluent API has this capability). The timestamp column has to be mapped to the POCO entity as a byte array (8 bytes). When you call your Update method, you can compare the timestamp of the loaded object with the timestamp of the incoming object yourself to avoid an unnecessary call to the DB. If you don't do the check yourself, EF will handle it by adding a WHERE condition to the UPDATE statement.
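For reference, with the later Code First fluent API (DbModelBuilder, available from EF 4.1 onward; a sketch rather than the CTP syntax discussed above), the mapping could look like this:
public class SalesContext : DbContext
{
    public DbSet<Sale> Sales { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // marks Version as a SQL rowversion column used for optimistic concurrency
        modelBuilder.Entity<Sale>()
                    .Property(s => s.Version)
                    .IsRowVersion();
    }
}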
Take a look at Saving Changes and Managing Concurrency. From the article:
try
{
    // Try to save changes, which may cause a conflict.
    int num = context.SaveChanges();
    Console.WriteLine("No conflicts. " + num.ToString() + " updates saved.");
}
catch (OptimisticConcurrencyException)
{
    // Resolve the concurrency conflict by refreshing the
    // object context before re-saving changes.
    context.Refresh(RefreshMode.ClientWins, orders);

    // Save changes.
    context.SaveChanges();
    Console.WriteLine("OptimisticConcurrencyException handled and changes saved");
}