As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
I am currently developing an application which by design has a three-tier architecture. It is composed of an administration website, a WCF web service and a database. Of course, the website doesn't connect directly to the database, so all requests and responses must pass through the service. The "problem" is that there are several entities involved in this application, and for each of them I must support the basic CRUD operations and many more, which means the service currently has more than 30 methods on a single endpoint. Since I already had to increase the maximum message size, I am beginning to wonder whether having all those methods in a single service is too much. What do you think? What alternatives do I have?
I can't really give you a good answer since it kind of depends on the requirements and complexity of your application. Typically, a CRUDy service interface is an antipattern you should avoid. There shouldn't be a one-to-one mapping between your data layer and your service layer; if there is, the service layer isn't pulling its own weight. SOA is a huge topic that I'm only starting to get my head around, but my understanding is that services are supposed to encapsulate the logic of many operations, i.e. authentication, authorization, logging, scaling, transactions, etc.
http://msdn.microsoft.com/en-us/library/ms954638.aspx
Have you heard of the Repository pattern? It's a software design pattern where a particular class/assembly encapsulates the logic required for getting data into and out of the database. It's lighter weight than using full-blown services and might be all you need if what you want is a good way of decoupling your application from the database. A really neat feature of the Repository pattern is that you can expose all the methods of the repository through an interface and then write a mock implementation to test your business/UI layers independently of your database.
http://msdn.microsoft.com/en-us/library/ff649690.aspx
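As a rough sketch of that idea (the type names here are illustrative, not taken from any particular library): the business/UI layers depend only on the interface, and tests swap in the in-memory implementation.

```csharp
using System.Collections.Generic;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The contract the business/UI layers depend on.
public interface IProductRepository
{
    Product GetById(int id);
    void Add(Product product);
}

// A mock implementation for testing; a real one would talk to the database.
public class InMemoryProductRepository : IProductRepository
{
    private readonly Dictionary<int, Product> _store = new Dictionary<int, Product>();

    public Product GetById(int id)
    {
        return _store[id];
    }

    public void Add(Product product)
    {
        _store[product.Id] = product;
    }
}
```

Because the rest of the application only ever sees `IProductRepository`, swapping the in-memory version for a database-backed one is a one-line change in your composition root.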
There is no real concrete answer to this question. It is a tradeoff either way. Try to keep the Single Responsibility Principle in mind. A couple options are to:
Create an endpoint per data type, putting CRUD operations into each. That would mean maintaining more endpoint configuration, though.
Separate your methods into multiple files, but leave them in one class, using partial classes.
If all you are doing is CRUD operations for each data type, then maybe use WCF Data Services / OData instead.
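Option 2 above can be sketched like this (the type and file names are hypothetical); the compiler merges the partial declarations, so the service still exposes a single class on a single endpoint:

```csharp
// MyService.Orders.cs
public class Order
{
    public int Id { get; set; }
}

public partial class MyService
{
    public Order GetOrder(int id)
    {
        return new Order { Id = id };
    }
}

// MyService.Customers.cs
public class Customer
{
    public int Id { get; set; }
}

public partial class MyService
{
    public Customer GetCustomer(int id)
    {
        return new Customer { Id = id };
    }
}
```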
Implementing the repetitive CRUD functionality over the WCF service could be a pain but if the service is not exposed globally (so you don't have to pay a lot for the authentication/authorization) you can save a LOT of time by using one of the two: ADO.NET Data Services or WCF RIA Services.
Whether or not 30 methods on a single service is bad design is not clear-cut. It's like asking whether a class with 30 members is bad design: some people will say it definitely is; most will answer "it depends on what you are doing".
Nevertheless, using HTTP for CRUD brings some restrictions, one of which you mention: you will not be able to insert/update large batches unless you increase the message size. You can also run into serious issues with transaction handling unless you somehow handle transactions over HTTP in an explicit way.
Closed 10 years ago.
My understanding is that a fundamental tenet of OO design is that one should model a class as the union of code and data. In day-to-day development, however, I tend to separate all of my business logic into classes of their own. The 'data' ends up in tightly controlled POCOs/DTOs with basically no real code or logic. I then instantiate a business logic class and pass POCOs into each method whenever I want an operation to occur.
But this feels like two separate approaches. In fact, the latter approach seems to be at odds with the purpose of OO!
I suppose I've always kept the two things separate since business logic may operate on multiple POCOs. Plus, not forcing validation on the data in the POCOs made unit testing easier, since it seemed simpler to prepare a POCO for a test (no need to worry about internal class state, encapsulation, etc.). Now that I look back on those reasons, they seem weak.
I'm looking for a comparison/contrast of the two approaches. Specifically, why does it seem that making 'dumb' POCOs is the way to go these days? Why not just stick the data and the business logic into a single class? Are we abandoning the original goals of object oriented design?
Thanks!
Indeed, separating business logic from associated data goes against the principles of OOP, and this is what Martin Fowler refers to as an anemic domain model. Personally, I would always go with a proper domain model that puts data and behaviour together.
Specifically, why does it seem that making 'dumb' POCOs is the way to go these days?
I don't know why you thought this was so, but this is certainly not true. There are many "dumb" models out there, but there are also many well-designed domain models.
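The contrast can be sketched with a minimal, hypothetical `Account` example: in the anemic style the data object is inert and a separate service enforces the rules; in a proper domain model the object protects its own invariants.

```csharp
using System;

// Anemic style: data-only POCO plus a separate business logic class.
public class AccountDto
{
    public decimal Balance { get; set; }  // anyone can set this to anything
}

public class AccountService
{
    public void Withdraw(AccountDto account, decimal amount)
    {
        if (amount > account.Balance)
            throw new InvalidOperationException("Insufficient funds");
        account.Balance -= amount;
    }
}

// Rich domain model: data and behaviour together; the invariant
// (balance never goes negative) cannot be bypassed from outside.
public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal openingBalance)
    {
        Balance = openingBalance;
    }

    public void Withdraw(decimal amount)
    {
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds");
        Balance -= amount;
    }
}
```

Nothing stops a caller from setting `AccountDto.Balance` to a negative number behind `AccountService`'s back; with the rich `Account`, the invariant travels with the data.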
Closed 10 years ago.
I have been working on my own SSL-based, multi-process, multi-file-descriptor threaded server for a few weeks now; needless to say, it can handle a good amount of punishment. I am writing it in C++ in an object-oriented manner, and it is nearly finished, with signal handling (including atomic access) and exception/errno.h error handling in place.
The goal is to use the server to make multi-player applications and games for Android/iOS. I am actually very close to completion, but it recently occurred to me that I could just use Apache to accomplish that.
I tried doing some research but couldn't find anything, so perhaps someone can help me decide whether I should finish my server and use that, or use Apache (or something else). What are the advantages and disadvantages of Apache vs. your own server?
Thank you to those who are willing to participate in this discussion!
We would need more details about what you intend to accomplish, but I would go with Apache in any case if it matches your needs:
it is battle-tested for all kinds of cases and loads
you can benefit from all the available modules (see http://httpd.apache.org/docs/2.0/mod/)
you can benefit from regular security patches
you don't have to maintain it yourself!
Hope this helps!
You can always write your own software even when perfectly well-proven alternatives exists, but you should be conscious about what are your reasons for doing so, and what are the costs.
For instance, your reasons could be:
Existing software too slow/high latency/difficult to synchronize
Existing software not extensible for my purpose
Your needs don't overlap with the architecture imposed by the software - for instance, if you need a P2P network, then a client/server-based HTTP protocol is not your best fit
You just want to have fun exploring low-level protocols
I believe none of the above, except possibly the last, apply to your case, but you have not provided many details, so my apologies if I am wrong.
The costs could be:
Your architecture might get muddled - for instance, you can fall into the trap of having your server too busy calculating whether a gunshot hits the enemy while 10 clients are trying to initiate a TCP connection, or a buffer overflow in your persistent-storage routine taking down the whole server
You spend time on lower-level stuff when you should be dealing with your game engine
Security is hard to get right; it takes many man-years of intrusion testing and formal proofs (even if you are using OpenSSL)
Making your own protocol means making your own bugs
Your own protocol means you have to build your own debugging tools (for instance, you can't test using curl or trace using HTTP proxies)
You have to solve many of the issues that have already been solved for the existing solution. For instance caching, authentication, redirection, logging, multi-node scaling, resource allocation, proxies
With your own stuff, you can only ask yourself for help
Closed 10 years ago.
What are the pros and cons of using a REST service vs. a WCF service?
I am wondering which type to use, and I was interested to find some sort of comparison.
REST is a way of doing communication over the Internet. It is a very basic approach of picking addresses (URLs) to serve as method locations and returning standard web data (HTML, JavaScript, CSS, and so on).
WCF is a .NET library used to have two programs talk to each other using SOAP, which amounts to two very familiar programs trading class info.
Seeing as REST is an approach and WCF is a class library, a better question might be "REST vs. SOAP".
The bottom line is: if you need two apps to talk, you might want to use WCF, even if the apps are not both written in .NET. However, if you need information to be accessed by web tech (usually JavaScript access is done this way), you'll want to use REST.
Just a quick side note, though: WCF does REST well too, so you really can't go wrong there.
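For example, WCF's web HTTP programming model (System.ServiceModel.Web) can expose an operation over plain HTTP GET. The contract below is a minimal, hypothetical sketch - the service and type names are made up - and would be hosted with something like WebServiceHost:

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

public class Product
{
    public string Id { get; set; }
}

[ServiceContract]
public interface IProductService
{
    // With WebServiceHost hosting, reachable at e.g. GET http://host/base/products/42
    [OperationContract]
    [WebGet(UriTemplate = "products/{id}")]
    Product GetProduct(string id);
}

public class ProductService : IProductService
{
    public Product GetProduct(string id)
    {
        return new Product { Id = id };
    }
}
```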
You're asking a question about apples and oranges. REST is a pattern used in creating web services. I'm not an expert on it, but you can find plenty of details on Wikipedia. WCF is a Microsoft technology for creating web services (primarily using SOAP, although it's so configurable that you can do REST with it as well - see ASP.NET Web API).
Pros for WCF:
Very configurable - If you can imagine it, WCF can probably do it.
Simple to use if you're sticking to the Microsoft stack; Visual Studio does 90% of the work for you.
Cons for WCF:
Very configurable - it can be a bit of a pain to get it to do exactly what you want sometimes, especially if you're new to it.
There can be some problems communicating between different technology stacks. I've heard of Java services curling up and dying when pointed at a WCF service. From what I've heard, this is a problem with the Java libraries, not WCF, but who knows for sure.
That's all that comes to mind right now, but hopefully that gives you a decent impression of WCF.
If you are absolutely sure that HTTP is the protocol you want to use, and you want to embrace it as an "application" protocol rather than just a "transport" protocol, then consider something like ASP.NET Web API.
If you are building a service for your servers in your datacenter to talk to each other, then seriously consider WCF.
Whether to do REST is a completely different question. Will this service last for many years? Will it have many different clients? Will some of those clients be out of your control? If you answered yes, then it may be worth investigating what benefits the REST constraints can bring.
Closed 11 years ago.
This question was inspired by Jon Skeet's question here, where he asked about people's pain points with LINQ, so I hope this question isn't out of place ...
Version 4 of WCF tackled probably one of the areas where many people struggled with WCF - namely, configuration. However, judging from this tagged set of questions and other forums, there are obviously other areas that people struggle with.
I've made a bunch of blog posts and screencasts in the past trying to focus on common issues (such as duplex, sessions, etc.). I'm planning another set, but I want to focus on things that are causing people problems even with the changes in version 4.0.
Areas I see are things like
Instancing and Threading
Security
REST support
WCF and Silverlight
Large message processing / streaming
Configuration (still)
Serialization
And I'm sure there are more, so I'd like to get input, and maybe we can make sure that the product team also gets some feedback on the greatest pain points people have with WCF.
I sometimes participate both here and on MSDN, and after answering many questions my opinion is that the greatest pains people have are:
Configuration
Configuration is even more of a pain than before. Simplified configuration makes a lot of things worse, because before this simplification, if you made a mistake in the configuration you got an exception. Today you can make a typo in your service name (or forget to add the namespace) and your service will silently use another configuration.
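For example, with WCF 4's default endpoints, a misspelled service name means the named <service> block is simply never matched (the names below are illustrative):

```xml
<system.serviceModel>
  <services>
    <!-- Typo: the class is actually MyApp.OrderService, so this whole
         block is ignored and the service silently runs with the
         default endpoints and bindings instead. -->
    <service name="MyApp.OrdersService">
      <endpoint address=""
                binding="wsHttpBinding"
                contract="MyApp.IOrderService" />
    </service>
  </services>
</system.serviceModel>
```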
Security
Security is a pain; it was a pain and it will remain a pain.
Security itself is complicated, and WCF makes it even more blurred because, where programmers on other platforms use a shared vocabulary based on real WS-* standards, WCF uses its own names.
Only a subset of the security standards is implemented - one failure is the missing UserName token profile with digested password in WCF.
When hosting services in IIS, security features in services are completely shared with IIS and restrict settings for the whole site / virtual directory.
When hosting services in IIS, basic authentication is handled by IIS - you must build a custom module to handle it differently (whereas with self-hosting you can use a custom username/password validator in WCF directly - IIS should support that as well).
Bad support for generating security configuration when creating a proxy from WSDL. Currently the best WCF has is the custom binding, and a custom binding on the client side is useful only when the service is also WCF and uses a custom binding. We need better support in the security binding element so that it provides the same configuration features as its counterpart in code. The WSDL importer could then use the new binding element and create proxies for secured services, and whenever such an importer is unable to import a WSDL, we could be sure that default WCF doesn't support the security requirements expected by the service.
REST
A lot of people still don't see the difference between REST and SOAP, and the most common mistake is adding a service reference to a REST service. Another problem is that REST support was added to a unified, protocol-independent API, but REST is heavily protocol-dependent and is not message-oriented. This will hopefully be improved in the Web API.
Protocols
It looks like new protocols, or new versions of existing protocols, are not being added to WCF.
Extensibility
WCF has great extensibility unless you are trying to extend an existing feature. If you decide to extend an existing implementation, you usually can't. For example, to add the aforementioned UserName token profile with digested password, you must do it completely from scratch; you cannot extend the existing user name implementation.
Edit: the last two are my personal pains.
Closed 9 years ago.
The advantages of ORMs are pretty clear. But I noticed that some companies prefer to build their own home-made ORM. Why?
There are only two arguments that I can possibly see for ever hand-rolling your ORM (and these have happened to me in the past, which forced me to write my own):
The company refuses to use Open Source software because of liabilities they assume might creep into their application.
The company refuses to spend money on a commercial ORM.
Any other argument (like "the quality of Entity Framework is too poor for us to use it") is completely moot. No matter how bad Entity Framework (or whatever other ORM you may be referring to) is, you're not going to come close to its robustness and reliability by hand-rolling your own.
As O/R mappers are very complex pieces of software, writing your own that goes beyond the typical data-reader wrapper and pre-fab SQL query executor will take a lot of time (think 6+ months full time, at least). That's not the biggest problem. The biggest problem is that once you go with your own O/R mapper, you have to maintain it for the rest of the time the application using it is in production - which can be a long time. Make no mistake: maintaining an O/R mapper yourself is not a simple task; you have to re-invent every trick O/R mapper developers already know about and have already solved.
Last but not least: doing this yourself should not be done on a billable contract. After all, you're writing infrastructure code which is already available elsewhere.
I know I'm biased (I wrote LLBLGen Pro), but I am also one of the few people in this industry who has written a full O/R mapper framework and knows what it takes to get a decent one up and running with good performance and a great feature set.
Simply do the math: if it costs $1000 (or less) to get an O/R mapper framework license and you can get started right away on your customer's application, how many hours do you get for that $1000 to build the O/R mapper without costing the company any money? And to maintain it? There is no way you can do it for that money.
If you have an in-house database that has evolved to have a bad schema, it can be simpler to write your own ORM layer than to try to get an out-of-the-box solution to play nicely with it.
In my opinion, ORMs are specialized and purposed to solve typical problems. If you want a more generic solution (e.g. for much more complex queries) or just different functionality, you can either modify an existing solution (which, for various reasons, often isn't the best choice) or create your own.
ORMs also limit you by forcing you to use their conventions and accept their limitations.