WCF REST DataContract and ServiceContract Versioning

I've spent many hours reading about DataContract and ServiceContract versioning techniques:
Best practices for API versioning?
My takeaways from all of this are the following:
1) REST URIs need to be versioned.
[http://example.com/v1/car]
[http://example.com/v2/car]
2) Each REST resource operation that involves XML needs to contain an XML namespace:
<SampleItemCol xmlns="http://api.sample.com/2011/04/05">
  <Items>
    <SampleItem xmlns="http://api.sample.com/2011/04/01">
      <Test xmlns="http://schemas.datacontract.org/2004/07/WcfRestService2">String content</Test>
      <Id>2147483647</Id>
      <StringValue>String content</StringValue>
      <TestGuid>1627aea5-8e0a-4371-9022-9b504344e724</TestGuid>
    </SampleItem>
    <SampleItem xmlns="http://api.sample.com/2011/04/01">
      <Test xmlns="http://schemas.datacontract.org/2004/07/WcfRestService2">String content</Test>
      <Id>2147483647</Id>
      <StringValue>String content</StringValue>
      <TestGuid>1627aea5-8e0a-4371-9022-9b504344e724</TestGuid>
    </SampleItem>
  </Items>
</SampleItemCol>
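For reference, namespaces like these come from the Namespace argument of the DataContract attribute. A minimal sketch of contracts that could produce XML of this shape (the type names mirror the example above, but the code is illustrative rather than from a real project):

using System;
using System.Collections.Generic;
using System.Runtime.Serialization;

// A base type with no explicit Namespace falls back to the default
// http://schemas.datacontract.org/2004/07/<clr-namespace> form, and members
// inherited from it keep that namespace; that is likely where the Test
// element's xmlns above comes from.
[DataContract]
public class SampleItemBase
{
    [DataMember]
    public string Test { get; set; }
}

// The Namespace value is what appears as xmlns= on the wire, so it doubles
// as the version marker for the contract.
[DataContract(Name = "SampleItem", Namespace = "http://api.sample.com/2011/04/01")]
public class SampleItem : SampleItemBase
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string StringValue { get; set; }

    [DataMember]
    public Guid TestGuid { get; set; }
}

[DataContract(Name = "SampleItemCol", Namespace = "http://api.sample.com/2011/04/05")]
public class SampleItemCol
{
    [DataMember]
    public List<SampleItem> Items { get; set; }
}

Note that the collection and the item carry different namespace dates, which is exactly the situation question 3 below asks about.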
So here are my questions:
1) Assuming there are hundreds of data contracts and many ServiceContracts, what would be the best class library structure to maintain different versions and namespaces?
2) If URIs are versioned, do we even need to specify a namespace for ServiceContracts?
3) Suppose there are 50 data contracts, all with the namespace http://example.com/2011/04/01/. If 10 of these change and a new namespace, http://example.com/2011/04/05/, is created, should the other 40 be copied to the new namespace as well?
My biggest concern about REST namespaces and URI versions is the maintainability and class redundancy.
Thanks in advance for your suggestions and answers!

I went down this route with versioned service contracts and datacontracts. It was a nightmare. The worst/best part is that if you take advantage of hypermedia you really do not need to version your API at all.
If you read shonzilla's post again you will see that he is really not advocating putting versions in the URI. He shows a way to do it by using redirects, but most of his reasoning advocates against it. My previous answer to this question is here
It is also worth reading Peter Williams' post on the subject.
I use XML almost exclusively for the format of my media types and I don't use namespaces at all.
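To make the hypermedia point concrete: instead of clients assembling versioned URIs, the representation itself can carry the links to follow. A sketch of what such a response might look like (the link relations and URIs here are invented for illustration, not taken from the posts above):

<car xmlns="http://example.com/car">
  <model>Roadster</model>
  <!-- Clients follow these links rather than hard-coding /v1/ or /v2/ paths,
       so the server is free to move or version resources behind the scenes. -->
  <link rel="self" href="http://example.com/car/42"/>
  <link rel="owner" href="http://example.com/person/7"/>
</car>

In that style, it is the media type documentation, not the URI, that gets versioned.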

Related

NestJS Schema First GraphQL Serialization

I've done some research into the subject of response serialization for NestJS/GraphQL. There's some helpful information to be found here, but the documentation seems to be completely focused on a code first approach. My project happens to be taking a schema first approach, and from what I've read across a few sources, the option available for a schema-first project would be to implement interceptors for the resolvers and carry out the serialization there.
Before I run off and start writing these interceptors, my question is this: are there any better options provided by NestJS to implement serialization for a schema first approach?
If it's just transformation of values, then an interceptor is a great tool for that. Everything shown for "code-first" should work for "schema-first" in terms of the high-level ideas of the framework (interceptors, pipes, filters, etc.). In fact, once the server is running, there shouldn't be a distinguishable difference between the two approaches in how they operate. The big thing to be aware of is that you won't easily be able to take advantage of class-transformer and class-validator, because the original class definitions are created via the gql-codegen, but you can still extend those types and add on the necessary decorators if you choose.

Spring Data Rest Without HATEOAS

I really like all the boilerplate code Spring Data Rest writes for you, but I'd rather have just a "regular" REST server without all the HATEOAS stuff. The main reason is that I use Dojo Toolkit on the client side, and all of its widgets and stores are set up such that the JSON returned is just a straight array of items, without all the links and things like that. Does anyone know how to configure this with Java config so that I get all the MVC code written for me, but without all the HATEOAS stuff?
If, after reading Oliver's comment (which I agree with), you still want to remove HATEOAS from Spring Boot:
Add this above the declaration of the class containing your main method:
@SpringBootApplication(exclude = RepositoryRestMvcAutoConfiguration.class)
As pointed out by Zack in the comments, you also need to create a controller which exposes the required REST methods (findAll, save, findById, etc).
So you want REST without the things that make up REST? :) I think trying to alter (read: dumb down) a RESTful server to satisfy a poorly designed client library is a bad start to begin with. But here's the rationale for why hypermedia elements are necessary for this kind of tooling (besides the probably familiar general rationale).
Exposing domain objects to the web has always been viewed critically by most of the REST community, mostly for the reason that the boundaries of a domain object are not necessarily the boundaries you want to give your resources. However, frameworks providing scaffolding functionality (Rails, Grails, etc.) have become hugely popular in the last couple of years. So Spring Data REST is trying to address that space while at the same time being a good citizen in terms of RESTfulness.
So if you start with a plain data model in the first place (objects without too many relationships) and only want to read them, there's in fact no need for something like Spring Data REST. The Spring controller you need to write is roughly 10 lines of code on top of a Spring Data repository. When things get more challenging, the story becomes more interesting:
How do you write a client without hard-coding URIs (if it does, it isn't particularly RESTful)?
How do you handle relationships between resources? How do you let clients create them, update them etc.?
How does the client discover which query resources are available? How does it find out about the parameters to pass etc.?
If your answer to these questions is "My client doesn't need that / is not capable of doing that", then Spring Data REST is probably the wrong library to begin with. What you're building then is basically JSON over HTTP, but nothing really RESTful. This is totally fine if it serves your purpose, but shoehorning a library with clear design constraints into something arbitrarily different (albeit apparently similar) that effectively wants to ignore exactly these design aspects is the wrong approach in the first place.

WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

Hello, and thank you for reading.
I am implementing a WCF Service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema, and working my way back to code. There are a number of large Schema documents (which import yet more Schema documents) related to my implementation, provided by this specification.
I am able to generate code using xsd.exe, by supplying the "main" and "supporting" xsd files as arguments. But there are several issues, and I am wondering if this is the right approach.
there are literally hundreds of classes; the generated code file is half a megabyte in size
duplicate classes (e.g. Type and Type1, which both represent the same type)
there are classes declared as inheriting from a base class, but that base class is not generated/defined
I understand that there are limitations to the types of Schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even XmlSerializer. My question is two-fold:
Are code generation "issues" fairly common when dealing with larger, modular XSD files? Has anyone had success generating data contracts from OAGIS or HR-XML schemas?
Given the above issues, are there better approaches to this task that avoid generating code and working with concrete objects? Does it make better sense to read and compose a SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I am losing the convenience of working with .NET objects and the framework-provided (de)serialization; given these losses, would it still be advantageous to base my service on WCF? Is there some "middle ground" between working with .NET types and pure XML?
Thank you very much!
-Sasha Borodin
DFWHC.org
Sasha, if you are going to use code generation, you likely should not start with the modular schemas. When you point a code generator at the modular schemas, you'll generate a class for all the common components in the HR-XML library and a good bit of the common components in OAGIS. You don't want this. HR-XML is distributed with standalone schemas, which are a better starting point. An even better starting point would be to create a flattened package XSD containing only the types brought in by the WSDL. If you use a couple of standalone schemas, you are going to have at least some duplication among your generated code.
Well, you could try and do something like this:
convert your XSD to C# code separately, using something like the xsd.exe tool from Microsoft, or something like Xsd2Code as a Visual Studio Plugin.
once you have your C# classes, weed out any inconsistencies, duplications, and so forth
package everything up into a separate class library assembly
now, when generating your WCF service from the WSDL, either using Add Service Reference in Visual Studio or the svcutil.exe tool, reference the assembly with all the data classes. Doing so, WCF should skip re-creating the whole set of classes and use whatever is available in that data assembly
With this, you might be able to get this mess under control.
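For a rough idea of what that workflow might look like on the command line (the file, namespace, and assembly names here are placeholders, and you should verify the exact switches against your SDK version):

rem 1. Generate C# classes from the standalone/flattened schemas.
xsd.exe /classes /namespace:Example.HrXml /outputdir:Generated Main.xsd Common.xsd

rem 2. Clean up the generated classes and compile them into DataClasses.dll,
rem    then reuse that assembly when generating the service code from the WSDL.
svcutil.exe http://example.com/service?wsdl /reference:DataClasses.dll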

NHibernate classes as Data Contracts

I'm exposing some CRUD methods through WCF service, for some data objects persisted in a database through NHibernate. Is it a good approach to use NHibernate classes as data contracts, or is it better to wrap them or replace them with some other data contracts? What is your approach?
Our team just went through a good few months debating this design point, so I've got a lot of links to share ;-)
Short answer: You "should" translate from your NHibernate classes into a domain model.
Long answer: I think the answer to this is a matter of principle. If you ever want to be interoperable, you should not use Datasets as your DTOs (I love Hanselman's post on this). I'm not saying that it's never a good idea; clearly people have had success doing so. Just know that you are cutting corners and it's a risky proposition.
If you have complete control over the classes you are pushing the data into, you could build a nice domain model and just map the NHibernate data into those classes. You will more than likely have serious issues doing that, as IList<> (which a <bag> maps to) is not serializable. You'd have to write your own serializer, or use something like NetDataContractSerializer, but you lose interoperability.
You will need to weigh the amount of work involved in building some wrapper classes and the translation between them, but then you have complete flexibility in what your domain model will look like. You're then able to do things (as we have done) like code generation for your NHibernate maps and objects. Your data contracts then serve as an abstraction from your data, as they should.
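As a rough illustration of that translation layer, here is a hedged sketch (the entity and DTO names are invented for the example, not taken from the question):

using System.Linq;
using System.Runtime.Serialization;

// Hypothetical NHibernate-mapped entity; the IList<Order> member is the kind
// of thing a <bag> mapping produces, and it is what causes serialization pain.
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual System.Collections.Generic.IList<Order> Orders { get; set; }
}

public class Order
{
    public virtual int Id { get; set; }
}

// The contract actually exposed over WCF: a plain, serializable shape that is
// decoupled from the persistence model and free to evolve with the service.
[DataContract]
public class CustomerDto
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public int[] OrderIds { get; set; }
}

public static class CustomerTranslator
{
    // One-way translation from the persistence model to the wire contract.
    public static CustomerDto ToDto(Customer entity)
    {
        return new CustomerDto
        {
            Id = entity.Id,
            Name = entity.Name,
            OrderIds = entity.Orders.Select(o => o.Id).ToArray()
        };
    }
}

The translation itself is boilerplate, which is why code generation for it, as mentioned above, pays off quickly.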
P.S. You might want to take a look at ADO.NET Data Services, a RESTful way to expose your data that, at this point, seems to be the most interoperable choice.
You would not want to expose your domain model directly, but map the domain to some kind of message as it hits the process boundary. You could leverage NHibernate to do the mapping work for you. In this case you would have two mappings: one for your domain model and another for your lightweight messages.
I don't have direct experience in doing this, but I have sent Datasets across via WCF before and that works just fine. I think your biggest issue in using NHibernate classes as data objects over WCF will be a lack of interoperability (as is also the case when using Datasets). Not only does the client have to use .NET, the client must also use NHibernate. This goes against SOA principles, but if you know for sure that you won't be reusing this component then there's not a great reason not to.
It's at least worth a try.

Schema First WCF Development

It is well-known how to create a "contract first" WCF service where the first step is to define the ServiceContracts and DataContracts.
How should one approach WCF development if one has the schema first? In other words, an XSD schema has been independently developed, and the service may not deviate from the schema that is already defined. As a complication, the schema might use features that don't translate into a DataContract (the DataContract capabilities, after all, are quite minimal).
Using XDocument on the server or client side for the entire document would be fine. (Use of XDocument would be greatly preferred over anything involving the XmlSerializer, which unfortunately seems to have fallen out of favor without replacement.) It is a requirement that the metadata/WSDL properly report the actual schema per the standards; it may not report a "generic" schema such as xsd:any. (Figuring out how to deal with these WSDL requirements is the part that is giving me the most trouble.)
(Similar questions/answers here do not address XDocument or WSDL requirements.)
If you already have the XSD, the only missing link between those and a WCF interface is the WSDL. Once you have a WSDL, you can use svcutil.exe to generate WCF interfaces and classes properly annotated with the required attributes.
You can do it the hard way and write the WSDL by hand, but you might also want to consider the WSCF tool.
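For a rough sense of the end result, the generated code typically looks something like the sketch below (the names are placeholders; the real output depends entirely on your WSDL). The XmlSerializerFormat attribute keeps the wire format faithful to the original schema, so the published WSDL can report the real types rather than a generic xsd:any:

using System.ServiceModel;
using System.Xml.Serialization;

// Hypothetical shape of an svcutil-generated contract for a schema-first service.
[ServiceContract(Namespace = "http://example.com/2011/04/01")]
[XmlSerializerFormat]
public interface IDocumentService
{
    [OperationContract]
    SubmitResponse Submit(SubmitRequest request);
}

[XmlType(Namespace = "http://example.com/2011/04/01")]
public class SubmitRequest
{
    [XmlElement]
    public string DocumentId { get; set; }
}

[XmlType(Namespace = "http://example.com/2011/04/01")]
public class SubmitResponse
{
    [XmlElement]
    public bool Accepted { get; set; }
}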