Alright, here goes nothing. After reading Best Practices on Service Versioning and Data Contract Versioning (http://msdn.microsoft.com/en-us/library/ms733832.aspx), I mostly understand how it's all done. I am planning to use agile versioning for my data contracts, but I can't figure out what the difference, or the better practice, is between creating a WorkRequestV2 to add new properties and just adding the new properties to WorkRequestV1. I tried it both ways and both worked, but when I create WorkRequestV2 I also have to modify the service contract to use WorkRequestV2. Why do that rather than just adding properties to WorkRequestV1? What is the difference?
The example I looked at is here (http://msdn.microsoft.com/en-us/library/ms731138.aspx): CarV1 and CarV2. Why not add HorsePower to CarV1 and avoid creating a whole new contract?
[DataContract(Name = "WorkRequest")]
public class WorkRequestV1 : IExtensibleDataObject
{
    [DataMember(Name = "workrequest", Order = 1, IsRequired = true)]
    public int workrequest { get; set; }

    [DataMember(Name = "CQ")]
    public string CrewHeadquarter { get; set; }

    [DataMember(Name = "JobCode")]
    public string JobCode { get; set; }

    [DataMember(Name = "JobType")]
    public string JobType { get; set; }

    [DataMember(Name = "Latitude")]
    public string Latitude { get; set; }

    [DataMember(Name = "Longitute")]
    public string Longitute { get; set; }

    private ExtensionDataObject theData;

    public ExtensionDataObject ExtensionData
    {
        get { return theData; }
        set { theData = value; }
    }
}
Have another read of the Data Contract Versioning page (your second link). Here is a quote from that page:
Breaking vs. Nonbreaking Changes
Changes to a data contract can be breaking or nonbreaking. When a data contract is changed in a nonbreaking way, an application using the older version of the contract can communicate with an application using the newer version, and an application using the newer version of the contract can communicate with an application using the older version. On the other hand, a breaking change prevents communication in one or both directions.
For your case, adding additional properties is a non-breaking change. You can quite safely add the properties to the existing data contract rather than create a new one, as long as you don't have strict schema validation in place and the new properties are not marked as required.
Old clients communicating with new services will continue to work; the new properties will just keep their default values. New clients communicating with old services will also work; the extra properties will simply be ignored.
But as you can see, you run into the problem of how to ensure that new clients communicate with new services and old clients with old services. If this isn't an issue, then you don't have a problem; otherwise you may need to introduce a new data contract.
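For illustration, here is a minimal sketch of what such a non-breaking addition to the existing contract might look like. The EstimatedHours member is a made-up example; it is marked IsRequired = false, and (per the MSDN versioning guidance) a new member gets an Order value higher than the existing ones:

[DataContract(Name = "WorkRequest")]
public class WorkRequestV1 : IExtensibleDataObject
{
    // Existing members stay exactly as they are...
    [DataMember(Name = "workrequest", Order = 1, IsRequired = true)]
    public int workrequest { get; set; }

    // Hypothetical new member: optional, so old clients that never send it
    // still deserialize fine, and old services simply ignore (or round-trip) it.
    [DataMember(Name = "EstimatedHours", Order = 2, IsRequired = false)]
    public int EstimatedHours { get; set; }

    public ExtensionDataObject ExtensionData { get; set; }
}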
Further reading:
MSDN Service Versioning
IBM Best practice for Web service versioning
Oracle Web services versioning
What are your WebService Versioning best practices?
I am currently building an app, and I would like to use micro services.
I use Mediatr for implementing a CQRS pattern and EventStore for event sourcing.
I have a problem with checking whether an entity already exists before creating an aggregate event and appending it to the EventStore.
For example, I have a LanguageAggregateRoot:
public class LanguageAggregateRoot
{
    public Guid Id { get; set; }
    public string Code { get; private set; }
    public string Name { get; private set; }
    public bool Enable { get; private set; }
    public string Icon { get; private set; }
}
The Code field is unique, and the user can change a language's code.
If I used Code as the event store stream id, then whenever the user sends a CreateLanguageCommand or a ChangeCodeCommand I would need to check whether the new code already exists. So I use the Id field as the stream id instead. But then I don't understand how I can validate that the Code field is unique.
As far as I've found out, you should not use query handling inside command handling.
I could have the client check for existence before sending the command to the server, but that doesn't feel right, because something or someone could send the command directly without going through my client.
How can I do that?
Thanks for your support.
It should be fine to validate your request in your command handler itself.
You can use the link below for more details:
CQRS - is it allowed to call the read side from the write side?
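As a rough illustration of that idea, here is a minimal sketch of checking code uniqueness inside the command handler before appending the event. ChangeCodeCommand, LanguageCodeChanged, ILanguageReadModel and IEventStoreRepository are hypothetical names, and MediatR's handler interface shape varies between versions:

// Sketch only: assumes a MediatR-style handler (older IRequestHandler<T>/Unit shape)
// and hypothetical read-model and event-store abstractions.
public class ChangeCodeCommandHandler : IRequestHandler<ChangeCodeCommand>
{
    private readonly ILanguageReadModel _readModel;     // read-side lookup
    private readonly IEventStoreRepository _eventStore; // wrapper around EventStore

    public ChangeCodeCommandHandler(ILanguageReadModel readModel, IEventStoreRepository eventStore)
    {
        _readModel = readModel;
        _eventStore = eventStore;
    }

    public async Task<Unit> Handle(ChangeCodeCommand command, CancellationToken cancellationToken)
    {
        // Reject the command on the write side if the new code is already taken.
        if (await _readModel.CodeExistsAsync(command.NewCode))
            throw new InvalidOperationException($"Language code '{command.NewCode}' already exists.");

        // The stream id stays the aggregate Id; only the Code value changes.
        var changed = new LanguageCodeChanged(command.LanguageId, command.NewCode);
        await _eventStore.AppendAsync(command.LanguageId, changed);
        return Unit.Value;
    }
}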
We are starting a new project in our company. We have finalized the architecture as follows.
There are 5 different projects:
1) BusinessEntities (class library), which contains data contracts such as:
[DataContract]
public class Cities
{
    /// <summary>
    /// Gets or sets City Id
    /// </summary>
    [DataMember]
    public int Id { get; set; }

    /// <summary>
    /// Gets or sets City name
    /// </summary>
    [DataMember]
    [Display(Name = "CityName", ResourceType = typeof(DisplayMessage))]
    [Required(ErrorMessageResourceName = "CityName", ErrorMessageResourceType = typeof(ErrorMessage))]
    [RegularExpression(@"[a-zA-Z ]*", ErrorMessageResourceName = "CityNameAlphabates", ErrorMessageResourceType = typeof(ErrorMessage))]
    [StringLength(50, ErrorMessageResourceName = "CityNameLength", ErrorMessageResourceType = typeof(ErrorMessage))]
    public string Name { get; set; }
}
2) Interface, which contains:
[ServiceContract]
public interface ICity : IService<CityViewModel>
{
    [OperationContract]
    Status Add(Cities entity);
}
3) DAL, which contains the implementation of the WCF service:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class City : ICity
{
    public Status Add(Cities entity)
    {
        // Insert logic goes here
    }
}
4) Webcomponent, which calls the WCF service:
public class City
{
    public static Status Add(Cities entity)
    {
        using (var service = new WcfServiceProvider<ICity>())
        {
            return service.GetProxy().Add(entity);
        }
    }
}
5) UI (an ASP.NET MVC project), which calls the web component to access the service:
City.Add(entity);
We have now settled on this structure, but the problem is: how do we use the Repository pattern for unit testing? Is it possible to use the Repository pattern with this structure, and if so, how? Or is there another pattern we should use instead?
I recommend that you read about separation of concerns. Right now you are using your business object both as a DTO and as a business object, which effectively couples your WCF service with your domain layer AND with the UI layer.
That means that versioning will be next to impossible: if you want to make any change in the UI or in the domain layer, you have to make sure the change is made in both layers, as the UI won't be able to talk to the business layer otherwise.
It's much better to have dedicated DTOs, since you can then handle versioning issues a lot more easily (such as default values or a newly introduced property).
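As a sketch of what that separation could look like (CityDto, CityService and CityBusinessLogic are made-up names for illustration; your Cities business object stays internal to the business layer):

[DataContract(Name = "City")]
public class CityDto
{
    // Wire contract only: expose just what the client needs.
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class CityService : ICityService // hypothetical contract that takes CityDto
{
    public Status Add(CityDto dto)
    {
        // Map the DTO onto the internal business entity before calling the business layer.
        var city = new Cities { Id = dto.Id, Name = dto.Name };
        return CityBusinessLogic.Add(city); // hypothetical business-layer call
    }
}

That way a change to Cities only affects the mapping code, not every consumer of the service.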
Your naming does not make sense. Your Cities class contains ONE city, right? Why don't you name it City?
[ServiceContract]
public interface ICity : IService<CityViewModel>
{
    [OperationContract]
    Status Add(Cities entity);
}
Can you explain what this service definition is? I don't see the relation between the view model and the DTO. The same naming issue applies here: the name ICity is misleading. If it's a repository, name it as such. Most of us use the name City for the object that we work with, and names like ICityService or ICityRepository to point out the access technologies.
Now to the real question:
But the problem is how to use Repository Pattern for Unit Testing?
You don't.
The only responsibility of repositories is to load and store data in the data source. You can of course mock the DbConnection and so on, but that doesn't guarantee anything at all, since the repositories are effectively coupled to the data source: even with mocks you will still get failures from incorrect SQL queries, invalid column types, incorrect table relations, etc.
Hence, if you truly want to make sure that the repositories work, you have to query a database.
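To illustrate that last point, a minimal integration-style test sketch (CityRepository, GetByName and the connection string are assumptions) that exercises the repository against a real test database instead of a mock:

[TestClass]
public class CityRepositoryTests
{
    [TestMethod]
    public void Add_then_get_roundtrips_a_city()
    {
        // Hypothetical repository pointed at a dedicated test database;
        // only a real query catches bad SQL, wrong column types or broken relations.
        var repository = new CityRepository("Server=.;Database=AppTests;Trusted_Connection=True;");

        var city = new Cities { Name = "Testville" };
        repository.Add(city);

        Assert.IsNotNull(repository.GetByName("Testville")); // assumed query method
    }
}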
I'm trying to use NHibernate, Spring and WCF together. I've got an Order object, and it contains a Customer object.
I can call a WCF method findOrder on my service, and provided the Order's Customer field does not have a DataMember annotation, the web service returns the Order I wanted. It does not contain the Customer details, though, as expected.
But when I try to include the Customer as well, the WebService fails, and looking in the WCF trace logs, I can see this error:
System.Runtime.Serialization.SerializationException:
Type 'DecoratorAopProxy_95d4cb390f7a48b28eb6d7404306a23d' with data contract name
'DecoratorAopProxy_95d4cb390f7a48b28eb6d7404306a23d:http://schemas.datacontract.org/2004/07/'
is not expected. Consider using a DataContractResolver or add any types not known statically to
the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the
list of known types passed to DataContractSerializer
Pretty sure this is because the Customer contains extra nHibernate details, but I don't understand why WCF would be happy to send the Order, but not the Customer.
Can anyone help me understand?
Order object
[DataContract]
[KnownType(typeof(Customer))]
public class Order
{
    // Standard properties
    [DataMember]
    public virtual int Id { get; set; }

    public virtual Enums.OrderStatus Status { get; set; }

    [DataMember]
    [StringLength(20, ErrorMessage = "Order name must not be more than 20 characters long")]
    public virtual string Name { get; set; }

    [DataMember]
    public virtual Customer Customer { get; set; }

    [DataContract]
    ...
}
Customer object
public class Customer
{
    public virtual int CustomerId { get; set; }

    [DataMember]
    private string name = "";

    ...
}
You should use data transfer objects (DTOs) to get your data over the wire. This is good practice anyway, as you do not want to let your domain model leak into (and out of) the boundaries of your application.
Think about it: every change in your domain model results in a change to your data contract, resulting in a new WSDL, resulting in a change on the client. In addition, you are exposing too many internals of your application to the consumer of your service.
Architectural blah blah aside: NHibernate uses proxies to enable lazy loading, and those proxies are of a different type than your serializer expects. You can disable lazy loading for your domain to get the application working, but that is, in my opinion, a bad idea:
<class name="Customer" table="tzCustomer" lazy="false" >
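A sketch of the DTO route instead (CustomerDto and its members are assumptions): the service returns a plain class, so the NHibernate proxy type never reaches the DataContractSerializer and lazy loading can stay on.

[DataContract(Name = "Customer")]
public class CustomerDto
{
    [DataMember]
    public int CustomerId { get; set; }

    [DataMember]
    public string Name { get; set; }
}

// In findOrder, copy the values out of the (possibly proxied) entity
// before returning, so only the DTO gets serialized.
private static CustomerDto ToDto(Customer customer)
{
    return new CustomerDto
    {
        CustomerId = customer.CustomerId,
        Name = customer.Name // assumes a public Name property on Customer
    };
}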
I am working on a data contract (shown below) that implements IExtensibleDataObject to make it forward compatible with version 02 of this contract, but I am worried about a possible 'accidental' denial of service caused by clients passing excessive quantities of data over the wire, which then needs to be deserialized, re-serialized and sent back.
Without turning the support off via ignoreExtensionDataObject in the config file, is there a way of protecting against such an eventuality, i.e. can you cap the quantity somehow?
[DataContract(Namespace = "http://schemas.myComany.com/sample/01")]
public class Sample : IExtensibleDataObject
{
    [DataMember]
    public Int32 sample_ID;

    private ExtensionDataObject _data;

    public virtual ExtensionDataObject ExtensionData
    {
        get { return _data; }
        set { _data = value; }
    }

    ....
}
Thanks in advance
The way to protect your service is to limit MaxReceivedMessageSize (by default it is 65,536 bytes) and the reader quotas on your binding.
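For example, a rough sketch of tightening those limits in code (the numbers are arbitrary; the same settings can be applied in the binding section of the config file):

// Cap message size and reader quotas so oversized ExtensionData payloads
// are rejected before they are fully deserialized.
var binding = new BasicHttpBinding
{
    MaxReceivedMessageSize = 131072 // bytes; pick a limit that fits your contract
};
binding.ReaderQuotas.MaxStringContentLength = 16384;
binding.ReaderQuotas.MaxArrayLength = 16384;
binding.ReaderQuotas.MaxDepth = 32;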
I have a web service that currently uses ASMX. The operations are decorated with WebMethod, and each takes a request and returns a response. I started creating a WCF app, and I am referencing the business layer so I can reuse the web methods. My question is, do I have to decorate each class with DataContract and each property of the request with DataMember?
Currently, one of the classes is decorated with SerializableAttribute, XmlTypeAttribute and XmlRootAttribute. Do I need to remove these and add DataContract, or can I add DataContract alongside them? It is a .NET 2.0 app, by the way. The class also contains a bunch of private fields and public properties; do I need to decorate these with a DataMember attribute, and is that even possible if it targets the .NET 2.0 framework?
The WCF service is currently targeting .NET Framework 4.0. A few of the methods still need to use the XmlSerializer, so does this mean I can just decorate those operations with [XmlSerializerFormat]?
Can you elaborate on not using any business objects on the service boundary? And what is a DTO?
If possible, can you give an example?
Since .NET 3.5 SP1, the DataContractSerializer does not require the use of attributes (so-called POCO support), although this gives you little control over the XML that is produced.
However, if you already have an ASMX service you want to port, then to maintain the same serialization you really want to use the XmlSerializer. You can wire this in to WCF using the [XmlSerializerFormat] attribute, which can be applied at the service contract or individual operation level.
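For example, a sketch of opting a single operation into the XmlSerializer while the rest of the contract stays on the DataContractSerializer (the contract and type names here are made up):

[ServiceContract]
public interface IOrderService
{
    // Keeps the same XML shape as the old ASMX method, because the
    // XmlSerializer honours the existing XmlType/XmlRoot/XmlElement attributes.
    [OperationContract]
    [XmlSerializerFormat]
    LegacyOrderResponse GetOrder(LegacyOrderRequest request);

    // Other operations continue to use the DataContractSerializer.
    [OperationContract]
    OrderSummary GetOrderSummary(int id);
}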
Edit: adding section on DTOs
However, putting business objects on service boundaries causes potential issues:
You may be exposing unnecessary data that is purely part of your business rules
You tightly couple your service consumers to your business layers, introducing fragility in their code and preventing you from refactoring freely
The idea of Data Transfer Objects (DTOs) is to create classes whose sole role in life is to manage the transition between the XML and object worlds. This also conforms to the Single Responsibility Principle. The DTOs only expose the necessary data and act as a buffer between business changes and the wire format. Here is an example:
[ServiceContract]
interface ICustomer
{
    [OperationContract]
    CustomerDTO GetCustomer(int id);
}

class CustomerService : ICustomer
{
    ICustomerRepository repo;

    public CustomerService(ICustomerRepository repo)
    {
        this.repo = repo;
    }

    public CustomerService()
        : this(new DBCustomerRepository())
    {
    }

    public CustomerDTO GetCustomer(int id)
    {
        Customer c = repo.GetCustomer(id);
        return new CustomerDTO
        {
            Id = c.Id,
            Name = c.Name,
            AvailableBalance = c.Balance + c.CreditLimit,
        };
    }
}

class Customer
{
    public int Id { get; private set; }
    public string Name { get; set; }
    public int Age { get; set; }
    public decimal Balance { get; set; }
    public decimal CreditLimit { get; set; }
}

[DataContract(Name = "Customer")]
class CustomerDTO
{
    [DataMember]
    public int Id { get; private set; }

    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public decimal AvailableBalance { get; set; }
}
Using DTOs allows you to expose existing business functionality via services without having to make changes to that business functionality for purely technical reasons.
The one issue people tend to baulk at with DTOs is the necessity of mapping between them and business objects. However, when you consider the advantages they bring, I think it is a small price to pay, and it is a price that can be heavily reduced by tools such as AutoMapper.
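For instance, the mapping above might be expressed with AutoMapper roughly like this (the exact configuration API varies between AutoMapper versions, and the private setter on Id may need extra configuration):

// Inside CustomerService: configure the map once, typically at startup.
// The computed member is declared explicitly so the business rule lives in one place.
static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    cfg.CreateMap<Customer, CustomerDTO>()
       .ForMember(dto => dto.AvailableBalance,
                  opt => opt.MapFrom(c => c.Balance + c.CreditLimit)))
    .CreateMapper();

public CustomerDTO GetCustomer(int id)
{
    // One line instead of hand-written property copying.
    return Mapper.Map<CustomerDTO>(repo.GetCustomer(id));
}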
WCF uses the DataContractSerializer, which is primarily based upon attributes such as DataContract, DataMember, ServiceContract and so forth, but it also supports SerializableAttribute amongst others. This document gives you all the insight you need: http://msdn.microsoft.com/en-us/library/ms731923.aspx
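As a small illustration of that SerializableAttribute support (LegacyOrderFilter is a hypothetical legacy type): when a class is marked only with [Serializable] and has no [DataContract], the DataContractSerializer serializes its fields rather than opt-in [DataMember] members.

[Serializable]
public class LegacyOrderFilter
{
    // Serialized because of [Serializable]; no [DataMember] needed.
    public string Region;
    public int MaxResults;
}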
So it might be that you don't need to refactor all your existing code, but it does call for some further investigation and testing ;)