Checking for an optional header in a SignalR hub

In a SignalR hub method, Context.Headers provides an INameValueCollection with the request headers. INameValueCollection has just three members:
string this[string key] { get; }
string Get(string key);
IEnumerable<string> GetValues(string key);
Unfortunately, none of them are documented. If you want to get a header but not throw an exception if it doesn't exist, what do you use? I'm guessing Get, but it sure would be nice if the author had bothered to document these details.
One thing I like about the "old" Microsoft was that even if a bit verbose, its documentation covered nearly all the semantics. SignalR was a wonderful, rapid development, but it would be even better if it retained that old-school diligence.
Perhaps I'm missing something. Are the semantics documented somewhere? Or does someone know and care to document them here as a quick and dirty workaround?

I had the same problem, and in the end I looked it up in the GitHub source, which leads you to System.Collections.Specialized.NameValueCollection.
(There are other implementations of INameValueCollection as well, but the one linked seems to be the one used for the request.)
In short:
string this[string key] { get; }
Equivalent to Get(key): the comma-separated list of values associated with the specified key, if found; otherwise, null.
string Get(string key);
A String that contains the comma-separated list of values associated with the specified key, if found; otherwise, null.
IEnumerable<string> GetValues(string key);
A String array that contains the values associated with the specified key from the NameValueCollection, if found; otherwise, null.
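So for an optional header, both Get and the indexer are safe: they return null rather than throwing when the key is absent. A quick sketch against the underlying System.Collections.Specialized.NameValueCollection (the hypothetical X-Custom / X-Missing header names are just for illustration):

```csharp
using System;
using System.Collections.Specialized;

class HeaderDemo
{
    static void Main()
    {
        var headers = new NameValueCollection();
        headers.Add("X-Custom", "abc");

        // Present key: the comma-separated values (here just one).
        Console.WriteLine(headers.Get("X-Custom")); // abc

        // Missing key: Get, the indexer, and GetValues all return null, no exception.
        Console.WriteLine(headers.Get("X-Missing") == null);       // True
        Console.WriteLine(headers["X-Missing"] == null);           // True
        Console.WriteLine(headers.GetValues("X-Missing") == null); // True
    }
}
```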

Related

Why is my complex [FromBody] parameter null?

I am having trouble with the [FromBody] parameter of my method not binding.
example C#:
[Route("api/path")]
[HttpPost]
public void Post([FromBody] ComplexType param)
{
    // param is null
}

public class ComplexType
{
    public string name { get; set; }
}
I've checked the POST body content and content-type and it looks correct.
Why is it null despite thoroughly checking that the data being posted and the content type all match what is expected?
N.B. This is a deliberately vague question, since I was having a lot of trouble diagnosing an issue and couldn't find a suitable question and answer.
When I eventually found the problem I kicked myself for it, but I feel the need to share how I found the problem to hopefully spare others the pain.
As it happens there may well be nothing wrong with the example given.
In my case there was a problem with the definition of the complex type, I had a parameter marked as string while it should have been string[] and so the JSON parsed did not match the model.
The important part though is how I found this out:
When debugging any API method there is the magic ModelState property.
This property gives you information about any failures that occur while binding the data received to the expected parameters.
e.g. inspecting ModelState in the debugger shows the parameter (in my case one named uploaded) and, within it, the property that failed to bind correctly.
Check the definition of that property and you'll probably find an error.
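The controller and ModelState need a running host to demonstrate, but the underlying mismatch is easy to reproduce with a plain JSON deserializer. This sketch uses System.Text.Json purely as an illustration (classic Web API used Json.NET, which records the failure in ModelState instead of throwing); UploadModel is a hypothetical model with the same string-vs-array bug:

```csharp
using System;
using System.Text.Json;

// Hypothetical model: Names is declared as string, but the client posts an array.
public class UploadModel
{
    public string Names { get; set; }
}

class BindingDemo
{
    static void Main()
    {
        const string body = "{\"Names\": [\"Alice\", \"Bob\"]}";
        try
        {
            JsonSerializer.Deserialize<UploadModel>(body);
        }
        catch (JsonException ex)
        {
            // The same type mismatch that ModelState surfaces during binding.
            Console.WriteLine("Deserialization failed: " + ex.Message);
        }
    }
}
```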

HttpContext.Features vs HttpContext.Items In Asp.Net Core

What is the differences between these two Properties?
I can use HttpContext.Items instead of HttpContext.Features to share data between middlewares. The only difference I see is that with Items I ask for a key and get back an object that I have to cast myself, while Features does that cast for me.
Is there something else behind them?
The biggest difference is that HttpContext.Items is designed to store key-value pairs, while HttpContext.Features is designed to store type-instance pairs.
To be more clear, HttpContext.Items is designed to share items within the scope of the current request, while HttpContext.Features, which is an instance of IFeatureCollection, is by no means meant to be used like that.
The IFeatureCollection interface represents a collection of HTTP features, such as:
IAuthenticationFeature which stores original PathBase and original Path.
ISessionFeature which stores current Session.
IHttpConnectionFeature which stores the underlying connection.
and so on.
To help store and retrieve a Type-Instance-Pair, the interface has three important methods:
public interface IFeatureCollection : IEnumerable<KeyValuePair<Type, object>>
{
    // ...
    object this[Type key] { get; set; }
    TFeature Get<TFeature>();
    void Set<TFeature>(TFeature instance);
}
and the implementation (FeatureCollection) will simply cast the value into required type:
public class FeatureCollection : IFeatureCollection
{
    // ... get the required type of feature
    public TFeature Get<TFeature>()
    {
        return (TFeature)this[typeof(TFeature)]; // note: cast here!
    }

    public void Set<TFeature>(TFeature instance)
    {
        this[typeof(TFeature)] = instance; // note!
    }
}
This is by design. Because there's no need to store two IHttpConnectionFeature instances or two ISession instances.
While you can store some type-value pairs with FeatureCollection, you'd better not. As you see, Set<TFeature>(TFeature instance) will simply replace the old value if the same type already exists in the collection; it also means there will be a bug if you try to store two instances of the same type.
HttpContext.Items is designed to share short-lived per-request data, as you mentioned.
HttpContext.Features is designed to share various HTTP features that allow middleware to create or modify the application's hosting pipeline. It's already filled with several features from .NET, such as IHttpSendFileFeature.
You should use HttpContext.Items to store data, and HttpContext.Features to add any new HTTP features that another middleware class might need.
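The type-instance-pair pattern behind IFeatureCollection can be sketched with a plain Dictionary<Type, object> (a simplified stand-in for illustration, not the real FeatureCollection; IMyFeature/MyFeature are made-up types):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the type-instance-pair pattern used by FeatureCollection.
public class TypeKeyedCollection
{
    private readonly Dictionary<Type, object> _features = new Dictionary<Type, object>();

    public TFeature Get<TFeature>()
    {
        // The cast is safe because Set<TFeature> keys each instance by its type.
        return _features.TryGetValue(typeof(TFeature), out var value)
            ? (TFeature)value
            : default(TFeature);
    }

    public void Set<TFeature>(TFeature instance)
    {
        // Only one instance per type: setting again replaces the old one.
        _features[typeof(TFeature)] = instance;
    }
}

public interface IMyFeature { string Name { get; } }
public class MyFeature : IMyFeature { public string Name => "demo"; }

class Demo
{
    static void Main()
    {
        var features = new TypeKeyedCollection();
        features.Set<IMyFeature>(new MyFeature());
        Console.WriteLine(features.Get<IMyFeature>().Name); // demo
    }
}
```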

How to convert existing POCO classes in C# to google Protobuf standard POCO

I have POCO classes and I use Newtonsoft Json for serialization. Now I want to migrate to Google Protocol Buffers. Is there any way I can migrate all my classes (not manually) so that I can use Protocol Buffers for serialization and deserialization?
Do you just want it to work? The absolute simplest way to do this would be to use protobuf-net and add [ProtoContract(ImplicitFields = ImplicitFields.AllPublic)]. What this does is tell protobuf-net to make up the field numbers, which it does by taking all the public members, sorting them alphabetically, and just counting upwards. Then you can use your type with ProtoBuf.Serializer and it should behave in the way you expect.
This is simple, but it isn't very robust. If you add, remove or rename members it can all get out of sync. The problem here is that the protocol buffers format doesn't include names - just field numbers, and it is much harder to guarantee numbers over time. If your type is likely to change, you probably want to define field numbers explicitly. For example:
[ProtoContract]
public class Foo
{
    [ProtoMember(1)]
    public int Id { get; set; }
    [ProtoMember(2)]
    public List<string> Names { get; } = new List<string>();
}
One other thing to watch out for would be non-zero default values. By default protobuf-net assumes certain things about implicit default values. If you are routinely using non-zero default values without doing it very carefully, protobuf-net may misunderstand you. You can turn that off globally if you desire:
RuntimeTypeModel.Default.UseImplicitZeroDefaults = false;

WCF result deserializing to default values for value types in a list of key/value pairs

I have a WCF service and the result is a custom TimeSeries class defined as:
[DataContract]
public class TimeSeries
{
    [DataMember]
    public string Name { get; set; }
    [DataMember]
    public List<KeyValuePair<DateTime, double>> Data { get; set; }
}
My service method creates an array of these objects to return. Debugging the service method, I can see that an array containing one of these objects is created correctly (it has a name and 37 kv pairs of data). Using Fiddler, I can see that the object is being serialized and sent (the data is still correct in the HTTP response). However, the problem comes when I check the result object on the client: it is incorrect. Specifically, I get a TimeSeries object with the correct name and the correct number of kv pairs, but they contain the default values for each DateTime and double (i.e. 01/01/0001 12:00AM & 0.0).
My client is Silverlight v4 and I am using an automagically generated service reference. The problem appears to be related to deserialization. Anyone have any thoughts as to why it is doing this, what I am missing, or how I can fix it?
As it is stated in Serializing a list of Key/Value pairs to XML:
KeyValuePair is not serializable, because it has read-only properties
So you need your own class, just like the answer on that page says.
An alternative rather than using your own class is to use a Dictionary<DateTime,double> instead which seems to serialize and deserialize fine.
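To sanity-check the Dictionary<DateTime, double> alternative, a quick round trip through DataContractSerializer (the serializer WCF uses for data contracts) shows the values survive rather than coming back as defaults:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;

class Demo
{
    static void Main()
    {
        var data = new Dictionary<DateTime, double>
        {
            { new DateTime(2011, 1, 1), 1.5 },
            { new DateTime(2011, 1, 2), 2.5 }
        };

        var serializer = new DataContractSerializer(typeof(Dictionary<DateTime, double>));
        using (var stream = new MemoryStream())
        {
            // Serialize to XML, then read it back, as WCF would on the wire.
            serializer.WriteObject(stream, data);
            stream.Position = 0;
            var roundTripped =
                (Dictionary<DateTime, double>)serializer.ReadObject(stream);

            // Keys and values come back intact.
            Console.WriteLine(roundTripped[new DateTime(2011, 1, 2)]); // 2.5
        }
    }
}
```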

WCF Entity Framework Concurrency

I've got a WCF service that is making calls to my Entity Framework Repository classes to access data. I'm using Entity Framework 4 CTP, and am using my own POCO objects rather than the auto generated entity objects.
The context lifetime is limited to the method call. For Select/Insert and Update methods I create the context and dispose of it in the same method returning disconnected entity objects.
I'm now trying to work out the best way to handle concurrency issues. For example this is what my update method looks like
public static Sale Update(Sale sale)
{
    using (var ctx = new DBContext())
    {
        var SaleToUpdate =
            (from t in ctx.Sales where t.ID == sale.ID select t).FirstOrDefault();
        if (SaleToUpdate == null) throw new EntityNotFoundException();
        ctx.Sales.ApplyCurrentValues(sale);
        ctx.SaveChanges();
        return sale;
    }
}
This works fine, but because I'm working in a disconnected way no exception is thrown if the record has been modified since you picked it up. This is going to cause concurrency issues.
What is the best way to solve this when your using the entity framework over WCF and are not keeping a global context?
The only method I can think of is to give my objects a version number and increment it each time a save is called. This would then allow me to check that the version hasn't changed before I save. Not the neatest solution, I know, and it would still allow the client to change their version number, which I really don't want them to be able to do.
EDIT :
Using Ladislav Mrnka's suggestion of row version fields in my entities, each of my entities now has a byte[] property called Version mapped to a rowversion column. I then changed my Update method to look like this:
public static Sale Update(Sale sale)
{
    using (var ctx = new DBContext())
    {
        var SaleToUpdate =
            (from t in ctx.Sales where t.ID == sale.ID select t).FirstOrDefault();
        if (SaleToUpdate == null) throw new EntityNotFoundException();
        if (!sale.Version.SequenceEqual(SaleToUpdate.Version))
            throw new OptimisticConcurrencyException("Record is out of date");
        ctx.Sales.ApplyCurrentValues(sale);
        ctx.SaveChanges();
        return sale;
    }
}
It seems to work, but if I should be doing it differently please let me know. I tried to use Entity Framework's built-in concurrency control by setting the version field's concurrency mode to Fixed. Unfortunately this didn't work: when I did the query to get the unchanged SaleToUpdate, it picked up that row's current version and used it for the concurrency check, which obviously passes. It feels like Entity Framework might be missing something here.
As mentioned, the best practice is to use a column of row version type in your DB table for concurrency checking, but here is how it is implemented with Code First:
When using Code First in CTP3, you would need to use the fluent API to describe which properties needs concurrency checking but in CTP4 this can be done declaratively as part of the class definition using data annotation attributes:
ConcurrencyCheckAttribute:
ConcurrencyCheckAttribute is used to specify that a property has a concurrency mode of “fixed” in the model. A fixed concurrency mode means that this property is part of the concurrency check of the entity during save operations and applies to scalar properties only:
public class Sale
{
    public int SaleId { get; set; }
    [ConcurrencyCheck]
    public string SalesPersonName { get; set; }
}
Here, ConcurrencyCheck will be turned on for the SalesPersonName property. However, if you decide to include a dedicated timestamp property of type byte[] in your class, then TimestampAttribute will definitely be the better choice:
TimestampAttribute:
TimestampAttribute is used to specify that a byte[] property has a concurrency mode of “fixed” in the model and that it should be treated as a timestamp column on the store model (non-nullable byte[] in the CLR type). This attribute applies to scalar properties of type byte[] only and only one TimestampAttribute can be present on an entity.
public class Sale
{
    public int SaleId { get; set; }
    [Timestamp]
    public byte[] Timestamp { get; set; }
}
Here, not only will the Timestamp property be taken as the concurrency token, but EF Code First also learns that this property has the store type timestamp and that it is a computed column: we will not be inserting values into this property; rather, the value will be computed on SQL Server itself.
Don't use a custom version number. Use the built-in row version data type of your DB. A row version column is automatically modified each time the record changes. For example, MSSQL has the timestamp (rowversion) data type. You can use the timestamp column in EF and set it as the Fixed concurrency handler (not sure how to do it with EF Code First, but I believe the fluent API has this possibility). The timestamp column has to be mapped to the POCO entity as a byte array (8 bytes). When you call your update method, you can compare the timestamp of the loaded object with the timestamp of the incoming object yourself to avoid an unnecessary call to the DB. If you do not make the check yourself, it will be handled in EF by a where condition in the update statement.
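For completeness, the fluent-API route mentioned above did become available in later Code First releases. A sketch of the mapping (assuming a byte[] Version property on the Sale POCO and the EF 4.1+ DbModelBuilder API):

```csharp
// Sketch: marking Version as a rowversion concurrency token via the fluent API.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Sale>()
        .Property(s => s.Version)
        .IsRowVersion(); // store type rowversion + fixed concurrency mode
}
```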
Take a look at Saving Changes and Managing Concurrency
from the article:
try
{
    // Try to save changes, which may cause a conflict.
    int num = context.SaveChanges();
    Console.WriteLine("No conflicts. " +
        num.ToString() + " updates saved.");
}
catch (OptimisticConcurrencyException)
{
    // Resolve the concurrency conflict by refreshing the
    // object context before re-saving changes.
    context.Refresh(RefreshMode.ClientWins, orders);

    // Save changes.
    context.SaveChanges();
    Console.WriteLine("OptimisticConcurrencyException "
        + "handled and changes saved");
}