I would like a nutshell summary of the limitations of OData from a query point of view. For example:
Can I do recursive queries?
What subset of LINQ features does it support?
I found the specs too long to analyze in full.
Well, when you ask about the query limitations of OData, I think you really mean the limitations of WCF Data Services. OData is the protocol; how much of its syntax and operation keywords an implementation supports is up to the OData provider, which in your case I believe is WCF Data Services, since you tagged the question WCF.
Given that, the subset of LINQ features is spelled out in this MSDN article. The list of limitations is substantial, so it's better to link to it than to repeat it here.
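To make the flavor of those limits concrete, here's a minimal client-side sketch (the service URI and Customer type are made up for illustration) of the kind of LINQ a WCF Data Services client can translate to an OData URL, and the kind it can't:

```csharp
// Minimal sketch: which LINQ operators a WCF Data Services client can
// translate to an OData URL. The service URI and Customer type are made up.
using System;
using System.Linq;
using System.Data.Services.Client;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Country { get; set; }
}

class Program
{
    static void Main()
    {
        var ctx = new DataServiceContext(new Uri("http://example.com/Northwind.svc"));

        // Where/OrderBy/Take translate to $filter, $orderby and $top.
        var query = ctx.CreateQuery<Customer>("Customers")
                       .Where(c => c.Country == "Germany")
                       .OrderBy(c => c.Name)
                       .Take(10);

        // Prints roughly:
        // /Customers?$filter=Country eq 'Germany'&$orderby=Name&$top=10
        Console.WriteLine(((DataServiceQuery<Customer>)query).RequestUri);

        // Join, GroupBy, and arbitrary method calls inside Where are among the
        // operators that do NOT translate; they throw NotSupportedException.
    }
}
```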
For your first question about recursive queries, I have to admit I'm not sure what a typical recursive LINQ query would look like, unless you define your own extension method. If you're doing something like that, your best bet may be to wrap the recursive call in a WCF Data Services custom service operation and invoke it via the URL as you would any other service method.
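A rough sketch of that service-operation approach, assuming an EF-backed service (the NorthwindEntities context, Category entity and GetCategoryTree name are all invented for illustration):

```csharp
// Hedged sketch: wrapping recursive logic in a WCF Data Services service
// operation. NorthwindEntities is your EF context (not shown); Category and
// GetCategoryTree are invented names.
using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;
using System.ServiceModel.Web;

public class Category
{
    public int Id { get; set; }
    public int? ParentId { get; set; }
    public string Name { get; set; }
}

public class NorthwindDataService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.SetServiceOperationAccessRule("GetCategoryTree", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }

    // Invoked as a URL, like any other service operation:
    // /NorthwindDataService.svc/GetCategoryTree?rootId=1
    [WebGet]
    public IQueryable<Category> GetCategoryTree(int rootId)
    {
        var result = new List<Category>();
        Collect(rootId, result);   // plain server-side recursion
        return result.AsQueryable();
    }

    private void Collect(int id, List<Category> acc)
    {
        acc.Add(CurrentDataSource.Categories.First(c => c.Id == id));
        foreach (var child in CurrentDataSource.Categories.Where(c => c.ParentId == id).ToList())
            Collect(child.Id, acc);
    }
}
```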
I hope this helps!
I am very interested in Integration Platform as a Service.
As we know, it is possible to generate an API spec from an API. Is the opposite possible?
I want to write a piece of software that automatically creates functions for calling the endpoints of an open API. To achieve that, the software should consume an API spec and generate the code. In theory this is possible, if the spec covers all endpoints of the API with all their parameters, etc., but:
this is often not the case
specs are written differently from one another
My question is: what should my software consume in order to get reliable information about the API endpoints, parameters etc.? Is there a standard for that? Is the API spec the way to go?
Look at GitHub Copilot. It can generate pretty decent facsimiles of functions from an API spec. Just a word of caution: the generated function might not be 100% accurate, so you'll still need to check it over.
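If you want something more deterministic than Copilot, spec-driven code generators do the same job mechanically. A sketch assuming the NSwag libraries (NSwag.Core plus NSwag.CodeGeneration.CSharp); the spec URL, class name and namespace are placeholders:

```csharp
// Sketch: consume an OpenAPI spec and generate C# client functions with
// NSwag. The spec URL, class name and namespace are placeholders.
using System;
using System.IO;
using System.Threading.Tasks;
using NSwag;
using NSwag.CodeGeneration.CSharp;

class GenerateClient
{
    static async Task Main()
    {
        // Parse the machine-readable spec (this works only as well as the spec itself).
        var document = await OpenApiDocument.FromUrlAsync("https://example.com/openapi.json");

        var settings = new CSharpClientGeneratorSettings
        {
            ClassName = "ExampleApiClient",
            CSharpGeneratorSettings = { Namespace = "Example.Clients" }
        };

        // Emit one method per documented endpoint, with typed parameters.
        var generator = new CSharpClientGenerator(document, settings);
        File.WriteAllText("ExampleApiClient.cs", generator.GenerateFile());
        Console.WriteLine("Done.");
    }
}
```

Of course this inherits exactly the problem you describe: if the spec is incomplete or idiosyncratic, the generated client will be too.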
In a properly designed web service there are at least four layers: presentation, application, domain, and infrastructure.
REST documentation describes only the outer surface of the presentation layer (the HTTP interface and its operations), so it is not possible to generate a complete web service from REST API documentation alone. If your application has no real logic, i.e. it is just a data structure plus CRUD, then it is possible, but in that case you'd be better off finding a database that ships with a REST API and very good access control, and your problem is mostly solved.
If you have some sort of standard, machine-readable documentation such as WADL or JSON-LD with Hydra, then you can generate a REST API skeleton for the presentation layer. I googled the topic a little; this thesis may be useful for you: https://repositorio.ul.pt/bitstream/10451/35311/1/ulfc121800_tm_Telmo_Santos.pdf
I want to set up a REST API over a legacy database, and I am looking into LINQ to SQL as the communicator between the database and the REST API. The order of operations is as follows: the client requests data from the REST API; the REST API receives the GET request and calls a LINQ to SQL method to fetch the requested object; the object is returned to the client in JSON format for client-side processing. I like LINQ to SQL because it can take a data table and spit out a C# object.
Most of the requests will be GET; however, POST and PUT will be added in the future.
Is this reasonable, or should I look to a different method? I am a new developer, so please excuse my ignorance!
The particular technologies that you use are just tools that help you achieve a goal. A means to an end. That goal is typically to solve some business problem. It is also good to do so in a robust, flexible and easily extensible way so as to provide the most possible business value in the least amount of time.
To address one aspect of your question directly: using LINQ to SQL is fine. However, any number of data access solutions would accomplish a similar result (NHibernate, Entity Framework, plain ADO.NET). They can all achieve the goal of solving the business problem.
The important thing to remember is that the specific technology you choose for an implementation is typically not nearly as important as how you go about implementing the solution. Good and bad code are independent of the technology they are written in.
In summary, the technologies that you listed in your question are all excellent technologies that can provide robust, flexible, and easily extensible solutions when used properly.
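For what it's worth, here's a minimal sketch of the GET flow the question describes, assuming ASP.NET Web API in front of a LINQ to SQL DataContext (LegacyDataContext and Product stand in for whatever your O/R designer generates):

```csharp
// Sketch: REST GET -> LINQ to SQL -> JSON. LegacyDataContext is the
// designer-generated LINQ to SQL context (not shown); Product is its entity.
using System.Linq;
using System.Web.Http;

public class ProductsController : ApiController
{
    // GET api/products/5
    public IHttpActionResult Get(int id)
    {
        using (var db = new LegacyDataContext())
        {
            var product = db.Products.SingleOrDefault(p => p.Id == id);
            if (product == null)
                return NotFound();

            return Ok(product);   // Web API serializes this to JSON by default
        }
    }
}
```

In practice you would usually project the entity onto a small DTO before returning it, so lazy-loaded relationships on the LINQ to SQL object don't surprise the serializer.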
I was looking at how you can create a WCF Data Service around an Entity Framework context and then consume it from the client as if it were an EF context as well.
Creating an OData API for StackOverflow including XML and JSON in 30 minutes
I really have just started looking at this, but I was wondering where the business logic would go. As a service, I would expect that you couldn't just freely add/delete etc. without some validation.
If I wrote an MVC app to consume this service, how would I best implement business logic? Not the simple property-level validation you can do with attributes, but more complex rules that need to check the data store first, etc.
It sounds like you need a custom data service provider (msdn link). They're quite powerful and give you full control over all of your reading/writing logic.
For example I wrote one that enforces our licensing logic in the update provider.
You can put some logic in the DataService class itself, but you are limited in what you can and can't do there. And then of course you can put some in the client above the service, but that's not ideal either.
I've only spent a few weeks with WCF Data Services but you highlight (one of) the big problems with it - lack of flexibility. It's fantastic for rapid development and banging out LOB applications, but anything with a deliberate design is very difficult to implement. I needed to include objects in my entity model simply to allow them to be exposed through the service, and I had huge headaches just trying to extend those classes with a simple property.
I'd only recommend using WCF Data Services for trivially simple applications that needed extremely fast development - a one or two day development cycle, for example. Anything else is worth doing thoroughly with regular WCF services, writing your own data layer and so on.
Depending on your specific needs, it sounds like Web API might be a good fit. Web API may never get the full range of OData support that WCF Data Services has, but it does make certain things easier (like adding business logic). I'm quite confident that Web API's initial support for OData will cover a significant number of use cases, and that support will grow over time.
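As a sketch of what that looks like with early Web API OData support (the attribute comes from the Microsoft ASP.NET Web API OData package; the repository and Product type are hypothetical):

```csharp
// Sketch: expose an IQueryable and let [Queryable] apply $filter/$orderby/
// $top from the URL. Business logic sits in ordinary action code.
using System.Linq;
using System.Web.Http;
using System.Web.Http.OData;   // Microsoft ASP.NET Web API OData package

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductRepository
{
    IQueryable<Product> QueryVisibleTo(string userName);   // hypothetical
}

public class ProductsController : ApiController
{
    private readonly IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    [Queryable]
    public IQueryable<Product> Get()
    {
        // An ordinary code path, so security trims, logging, etc. are easy to add.
        return _repository.QueryVisibleTo(User.Identity.Name);
    }
}
```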
While a custom data provider will almost certainly do about anything you want, and may well be a great solution if you have a complex architecture, I wasn't really thrilled when I attempted to save back through the client and found out I had to implement the IUpdatable interface as part of my context (I was attempting to build a repository pattern out of my context and DataService).
I'm sure it's very useful for many people, but I really only needed the functionality the Entity provider already contained and didn't have the time in my project schedule to figure out the IUpdatable piece of a custom provider. So my team, specifically Geoff, stuck with the Entity provider and used change and query interceptors to route the DataService requests through our business logic classes on the server. This provides a central point of control. We used the interceptors for security checks, running calculations and other operations on insert/update, and so on. It turned out great. You can also use service operations as another way to expose specific business logic functions to your clients.
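For anyone landing here later, a rough sketch of the interceptor approach (the Orders entity set, ServiceContext and BusinessRules are placeholders; QueryInterceptor, ChangeInterceptor and UpdateOperations are the real WCF Data Services API):

```csharp
// Sketch: routing reads and writes through server-side business logic with
// WCF Data Services interceptors. Entity and rule names are invented.
using System;
using System.Data.Services;
using System.Linq.Expressions;
using System.Web;

public class Order
{
    public string OwnerName { get; set; }
    public decimal Total { get; set; }
}

public static class BusinessRules
{
    public static void ValidateOrder(Order o) { /* checks against the data store */ }
    public static decimal CalculateTotal(Order o) { return o.Total; /* stub */ }
}

public class MyDataService : DataService<ServiceContext>   // ServiceContext not shown
{
    // Runs on every query against Orders: a central place for security trims.
    [QueryInterceptor("Orders")]
    public Expression<Func<Order, bool>> OnQueryOrders()
    {
        var user = HttpContext.Current.User.Identity.Name;
        return o => o.OwnerName == user;
    }

    // Runs on every insert/update/delete against Orders.
    [ChangeInterceptor("Orders")]
    public void OnChangeOrders(Order order, UpdateOperations operation)
    {
        if (operation == UpdateOperations.Add || operation == UpdateOperations.Change)
        {
            BusinessRules.ValidateOrder(order);
            order.Total = BusinessRules.CalculateTotal(order);
        }
    }
}
```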
I'm exposing some CRUD methods through WCF service, for some data objects persisted in a database through NHibernate. Is it a good approach to use NHibernate classes as data contracts, or is it better to wrap them or replace them with some other data contracts? What is your approach?
Our team just went through a good few months debating this design point, so I've got a lot of links to share ;-)
Short answer: You "should" translate from your NHibernate classes into a domain model.
Long answer: I think the answer to this is a matter of principle. If you ever want to be interoperable, you should not use Datasets as your DTOs (I love Hanselman's post on this). I'm not saying that it's never a good idea; clearly people have had success doing so. Just know that you are cutting corners and it's a risky proposition.
If you have complete control over the classes you are pushing the data into, you could build a nice domain model and just map the NHibernate data into those classes. If you instead try to send the NHibernate classes directly, you will more than likely have serious issues, as IList<> (which a <bag> maps to) is not serializable. You'd have to write your own serializer, or use something like NetDataContractSerializer, but then you lose interoperability.
You will need to weigh the amount of work involved in building the wrapper classes and the translation between them, but in exchange you get complete flexibility in what your domain model looks like. You're then able to do things (as we have done) like code generation for your NHibernate maps and objects. Your data contracts then serve as an abstraction over your data, as they should.
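A minimal sketch of that wrapper-and-translation layer (the Customer/Order shapes are invented; the point is the plain serializable DTO and the explicit mapping):

```csharp
// Sketch: translate NHibernate-mapped entities into plain [DataContract]
// DTOs at the service boundary. Entity shapes here are invented.
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;

// Hypothetical NHibernate-mapped entities:
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual IList<Order> Orders { get; set; }   // mapped as a <bag>
}

public class Order
{
    public virtual string Number { get; set; }
}

// The wire-level contract: no NHibernate types, no IList<> proxies.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public List<string> OrderNumbers { get; set; }
}

public static class CustomerTranslator
{
    public static CustomerDto ToDto(Customer entity)
    {
        return new CustomerDto
        {
            Id = entity.Id,
            Name = entity.Name,
            OrderNumbers = entity.Orders.Select(o => o.Number).ToList()
        };
    }
}
```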
P.S. You might want to take a look at ADO.NET Data Services, a RESTful way to expose your data which, at this point, seems to be the most interoperable choice.
You would not want to expose your domain model directly, but rather map the domain to some kind of message as it hits the process boundary. You could leverage NHibernate to do the mapping work for you. In this case you would have two mappings: one for your domain model and another for your lightweight messages.
I don't have direct experience with this, but I have sent Datasets across via WCF before and that works just fine. I think your biggest issue in using NHibernate objects as data objects over WCF will be a lack of interoperability (as is also the case when using Datasets). Not only does the client have to use .NET, the client must also use NHibernate. This goes against SOA principles, but if you know for sure that you won't be reusing this component, then there's no great reason not to.
It's at least worth a try.
Our application is interfacing with a lot of web services these days. We have our own package that someone wrote a few years back using UTL_HTTP and it generally works, but needs some hard-coding of the SOAP envelope to work with certain systems. I would like to make it more generic, but lack experience to know how many scenarios I would have to deal with. The variations are in what namespaces need to be declared and the format of the elements. We have to handle both simple calls with a few parameters and those that pass a large amount of data in an encoded string.
I know that 10g has UTL_DBWS, but there aren't many use cases for it online. Is it stable and flexible enough for general use? Documentation
I have used UTL_HTTP, which is simple and works. If you hit a challenge with your own package, you can probably find a solution in one of the many wrapper packages around UTL_HTTP on the net (Google "consuming web services from pl/sql", which will lead you to e.g. http://www.oracle-base.com/articles/9i/ConsumingWebServices9i.php).
The reason nobody is using UTL_DBWS is that it is not functional in a default database installation. You need to load a ton of Java classes into the database, but the standard instructions seem to be defective: the process spews Java errors right and left and ultimately fails. It seems very few people have been willing to take the time to track down the package dependencies in order to make this approach work.
I had this challenge and found and installed the 'SOAP API' package that Sten suggests on Oracle-Base. It provides some good envelope-creation functionality on top of UTL_HTTP.
However, there were some limitations that pertain to your question. SOAP_API assumes all requests are simple XML, i.e. only a one-level tag hierarchy.
I extended the SOAP_API package to allow the client code to insert an arbitrary extra tag. So you can open a sub-level element, continue to build the request, and remember to insert the matching closing tag.
The namespace issue was a bear for the project: different levels of the XML had different namespaces.
A nice debugging tool that I used is TCP Trace from Pocket Soap.
www.pocketsoap.com/tcptrace/
You set it up like a proxy and watch the HTTP request and response objects between client and server code.
Having said all that, we really like having a SOAP client in the database: we have full access to all our data and existing PL/SQL code, and we can easily loop through cursors and call the external app via SOAP when needed. It was a lot quicker and easier than deploying a middle tier with lots of custom Java or .NET code. Good luck, and let me know if you'd like to see my enhanced SOAP_API code.
We have also used UTL_HTTP in a manner similar to what you have described. I don't have any direct experience with UTL_DBWS, so I hope you can follow up with any information/experience you can gather.
@kogus, no, it's quite a good design for many applications. PL/SQL is a full-fledged programming language that has been used for many big applications.
Check out this older post. I have to agree with that post's #1 answer; it's hard to imagine a scenario where this could be a good design.
Can't you write a service, or standalone application, which would talk to a table in your database? Then you could implement whatever you want as a trigger on that table.