At my previous job I enjoyed the benefits of AMQP, but I was not involved in developing the RabbitMQ subproject. At my current job I want to take charge of integrating one of the AMQP implementations (probably RabbitMQ). The issue is that I have to convince my boss to use AMQP.
I am reading "RabbitMQ in Action", and Mr. Videla writes that AMQP could improve any system, but I do not see how it can improve my project. We use only two servers making API calls between each other, so we do not have a scalability problem right now. We deal with real money flows, which means we need a success confirmation for every operation, i.e. I cannot put a task in a queue and "forget" about it. What benefits could AMQP bring in this case?
Could you please provide a couple of real-world examples for relatively small systems that do not need to scale heavily? Please omit the standard "logging" and "broadcast message" situations :)
It sounds like all you need is RPC. Rabbit is not known for RPC, but it actually does a really good job because:
You can make many messages transactional (i.e. all in one transaction)
It's platform-, language-, and wire-format-agnostic (i.e. you could send binary payloads)
Because of the broker model, you can easily add more servers to handle the procedures.
You can easily see the message flow and rate with RabbitMQ's admin UI
RabbitMQ is sort of an inversion of control at the architecture level
In RabbitMQ the message is the contract... not the procedure. This is the right way to do it.
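To make the RPC shape concrete, here is a minimal client-side sketch, assuming the official RabbitMQ.Client .NET library (6.x API); the broker address, queue name, and payload are illustrative, not prescribed by the answer above:

```csharp
using System;
using System.Collections.Concurrent;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// Minimal RPC-over-RabbitMQ client: publish a request carrying a ReplyTo
// queue and a CorrelationId, then block until the matching response arrives.
class RpcClient
{
    public static string Call(string message)
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // assumed broker location
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Exclusive, server-named queue for replies to this client.
            var replyQueue = channel.QueueDeclare().QueueName;
            var correlationId = Guid.NewGuid().ToString();

            var props = channel.CreateBasicProperties();
            props.ReplyTo = replyQueue;
            props.CorrelationId = correlationId;

            var responses = new BlockingCollection<string>();
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (model, ea) =>
            {
                // Ignore stray replies that don't belong to this request.
                if (ea.BasicProperties.CorrelationId == correlationId)
                    responses.Add(Encoding.UTF8.GetString(ea.Body.ToArray()));
            };
            channel.BasicConsume(queue: replyQueue, autoAck: true, consumer: consumer);

            // "rpc_queue" is an illustrative name; the server consumes from it.
            channel.BasicPublish(exchange: "", routingKey: "rpc_queue",
                                 basicProperties: props,
                                 body: Encoding.UTF8.GetBytes(message));

            return responses.Take(); // blocks until the server replies
        }
    }
}
```

The blocking `Take()` is the part that matters for the money-flow requirement in the question: the caller does not fire-and-forget, it waits for a correlated reply, so it gets a positive confirmation (or can time out and compensate).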
Now let's compare this to, say, SOAP:
SOAP does not give you a broker or routing, so all your servers need to know about each other. I can't tell you how annoying it is to have to go plug in IP addresses for dev, staging, and production.
SOAP does not provide transactions. You have to do that yourself.
With SOAP you have to use XML
There are more reliable RabbitMQ clients than SOAP clients. SOAP compatibility is a PITA.
With SOAP you have the message and the endpoint. In some cases this is a pro.
You don't have to use RabbitMQ to use the idea of an event bus/message bus. I personally would not build any sort of app without one, because going from pure synchronous RPC to an asynchronous event bus/message bus requires a lot of work. Better to do it right from the beginning.
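For contrast with the RPC sketch above, the event-bus style is just a publish into a fanout exchange that any number of subscribers can bind to; again a sketch with illustrative names, using the same RabbitMQ.Client library:

```csharp
using System.Text;
using RabbitMQ.Client;

// Event-bus style: fire an event into a fanout exchange; any number of
// subscribers bind their own queues to it without the publisher knowing.
class EventBusPublisher
{
    public static void Publish(string eventJson)
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // assumed broker location
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.ExchangeDeclare(exchange: "app.events", type: ExchangeType.Fanout);
            channel.BasicPublish(exchange: "app.events", routingKey: "",
                                 basicProperties: null,
                                 body: Encoding.UTF8.GetBytes(eventJson));
        }
    }
}
```

Subscribers each declare their own queue and bind it to `app.events`, so the publisher never needs to know who is listening.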
How can I best write an application that sits in front of a generic SQL database (SQL Server, MySQL, Oracle, etc.) and listens to SQL queries?
The application needs to be able to intercept (prevent passing to the SQL database) or pass (send to SQL database) the query, based on the specific query type.
Is there a way to do this generically so that it is not tied to a specific database backend?
The basic system isn't particularly easy, though neither is it incredibly difficult. You create a daemon that listens on a port (or a set of ports) for connection attempts. It accepts those connections, then establishes its own connection to the DBMS, forming a man-in-the-middle relay/interception point. The major issues are in how to configure things so that:
the clients connect to your Generic SQL Listener (GSL) instead of the DBMS's own listener, and
the GSL knows how to connect to the DBMS's listener (IP address and port number).
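To make the shape of the GSL concrete, here is a minimal sketch of the relay core in C#, assuming unencrypted traffic; the port and DBMS address are illustrative, and real interception would parse the DBMS's wire protocol at the marked point:

```csharp
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

// Skeleton of the GSL relay: accept a client, open a connection to the real
// DBMS listener, and pump bytes in both directions. Interception/logging
// hooks go where the bytes pass through.
class GenericSqlListener
{
    public static async Task RunAsync()
    {
        var listener = new TcpListener(IPAddress.Any, 15432);   // illustrative port
        listener.Start();
        while (true)
        {
            var client = await listener.AcceptTcpClientAsync();
            _ = Task.Run(() => RelayAsync(client));
        }
    }

    static async Task RelayAsync(TcpClient client)
    {
        using (client)
        using (var dbms = new TcpClient())
        {
            await dbms.ConnectAsync("db.internal", 5432);        // illustrative DBMS address
            var clientStream = client.GetStream();
            var dbmsStream = dbms.GetStream();
            // To intercept or reject queries, parse the client-to-DBMS stream
            // here instead of copying it verbatim (protocol-specific work).
            await Task.WhenAll(clientStream.CopyToAsync(dbmsStream),
                               dbmsStream.CopyToAsync(clientStream));
        }
    }
}
```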
You can still run into issues, though. Most notably, if the GSL is on the same machine as the DBMS listener, then when the GSL connects to the DBMS, it looks to the DBMS like a local connection instead of a remote connection. If the GSL is on a different machine, then it looks like all connections are coming from the machine where the GSL is running.
Additionally, if the information is being sent encrypted, then your GSL can only intercept encrypted communications. If the encryption is any good, you won't be able to log it. You may be able to handle Diffie-Hellman exchanges, but you need to know what the hell you're up to, and what the DBMS you're intercepting is up to — and you probably need to get buy-in from the clients that they'll go through your middleman system. Of course, if the 'clients' are web servers under your control, you can deal with all this.
The details of the connection tend to be straightforward enough as long as your code is simply transmitting and logging the queries. Each DBMS has its own protocol for how SQL requests are handled, and intercepting, modifying, or rejecting operations will require an understanding of each DBMS's protocol.
There are commercial products that do this sort of thing. I work for IBM and know that IBM's Guardium products include those abilities for a number of DBMS (including, I believe, all those mentioned above — if there's an issue, it is likely to be MySQL that is least supported). Handling encrypted communications is still tricky, even for systems like Guardium.
I've got a daemon which runs on Unix that is adapted to one specific DBMS. It handles much of this — but doesn't attempt to interfere with encrypted communication; it simply records what the two end parties say to each other. Contact me if you want the code — see my profile. Many parts would probably be reusable with other DBMS; other parts are completely peculiar to the specific DBMS it was designed for.
I have been working on my own SSL-based, multi-process, multi-file-descriptor, threaded server for a few weeks now; needless to say, it can handle a good amount of punishment. I am writing it in C++ in an object-oriented manner, and it is nearly finished, with signal handling (atomic access included) and exception/errno.h handling in place.
The goal is to use the server to make multi-player applications and games for Android/iOS. I am actually very close to completion, but it recently occurred to me that I could just use Apache to accomplish that.
I tried doing some research but couldn't find anything, so perhaps someone can help me decide whether I should finish my server and use that, or use Apache instead. What are the advantages and disadvantages of Apache vs. your own server?
Thank you to those who are willing to participate in this discussion!
We would need more details about what you intend to accomplish, but I would go with Apache in any case if it matches your needs:
it is battle-tested for all kinds of cases and loads
you can benefit from all the available modules (see http://httpd.apache.org/docs/2.0/mod/)
you can benefit from regular security patches
you don't have to maintain it yourself!
Hope this helps!
You can always write your own software even when perfectly well-proven alternatives exist, but you should be conscious of your reasons for doing so, and of the costs.
For instance, your reasons could be:
Existing software is too slow/high-latency/difficult to synchronize
Existing software is not extensible for your purpose
Your needs don't overlap with the architecture imposed by the software - for instance, if you need a P2P network, then a client/server-based HTTP protocol is not your best fit
You just want to have fun exploring low-level protocols
I believe none of the above except possibly the last apply to your case, but you have not provided many details, so my apologies if I am wrong.
The costs could be:
Your architecture might get muddled - for instance you can fall into the trap of having your server being too busy calculating if a gunshot hits the enemy, when 10 clients are trying to initiate a TCP connection, or a buffer overflow in your persistent storage routine takes down the whole server
You spend time on lower-level stuff when you should be dealing with your game engine
Security is hard to get right; it takes many man-years of intrusion testing and formal proofs (even if you are using OpenSSL)
Making your own protocol means making your own bugs
Your own protocol means you have to make your own debuggers (for instance you can't test using curl or trace using HTTP proxies)
You have to solve many of the issues that have already been solved for the existing solution. For instance caching, authentication, redirection, logging, multi-node scaling, resource allocation, proxies
For your own stuff you can only ask yourself for help
What are the pros and cons of using a REST service vs. a WCF service?
I am wondering which type to use, and I was interested to find some sort of comparison.
REST is a way of doing communication over the internet. It is a very basic process of picking addresses to serve as method locations and returning standard web data (JavaScript, CSS, HTML of course).
WCF is a .NET library used to have two programs talk to each other using SOAP, which consists of two very familiar programs trading class info.
Seeing as REST is a process and WCF is a class library, a better question might be "REST vs. SOAP".
The bottom line is: if you need two apps to talk, you might want to use WCF, even if the apps are not both written in .NET. However, if you need information to be accessed by web tech (usually JavaScript access is done this way), you'll want to use REST.
Just a quick side note though: WCF does REST well too, so you really can't go wrong there.
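To back up that side note, here is roughly what a self-hosted REST endpoint looks like with WCF's web programming model; the service, operation, and URI are invented for illustration, and it assumes a .NET Framework project referencing System.ServiceModel.Web:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IGreetingService
{
    // Exposed as GET http://localhost:8080/greet?name=...
    [OperationContract]
    [WebGet(UriTemplate = "greet?name={name}", ResponseFormat = WebMessageFormat.Json)]
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) => "Hello, " + name;
}

class Program
{
    static void Main()
    {
        // WebServiceHost wires up REST-style dispatch without SOAP envelopes.
        using (var host = new WebServiceHost(typeof(GreetingService),
                                             new Uri("http://localhost:8080/")))
        {
            host.Open();
            Console.WriteLine("Listening... press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```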
You're asking a question about apples and oranges. REST is a pattern used in creating web services. I'm not an expert on it, but you can find plenty of details on Wikipedia. WCF is a Microsoft technology for creating web services (primarily using SOAP, although it's so configurable that you can do REST on it as well - see ASP.NET Web API).
Pros for WCF:
Very configurable - If you can imagine it, WCF can probably do it.
Simple to use if you're sticking to the Microsoft stack. Visual Studio does 90% of the work for you.
Cons for WCF:
Very configurable - It can be a bit of a pain to get it to do exactly what you want sometimes, especially if you're new to it.
There can be some problems communicating between different technology stacks. I've heard of Java services curling up and dying when pointed at a WCF service. As far as I've heard, this is a problem with the Java libraries, not WCF, but who knows for sure.
That's all that comes to mind right now, but hopefully that gives you a decent impression of WCF.
If you are absolutely sure that HTTP is the protocol you want to use, and you want to embrace it as an "application" protocol rather than just a "transport" protocol, then something like ASP.NET Web API is a good fit.
If you are building a service for your servers in your datacenter to talk to each other, then seriously consider WCF.
Whether to do REST is a completely different question. Will this service last for many years? Will it have many different clients? Will some of those clients be out of your control? If you answered yes, then it may be worth investigating what benefits the REST constraints can bring.
I am currently developing an application which by design has a three-tier architecture. It is composed of an administration website, a WCF web service, and a database. Of course, the website doesn't connect directly to the database, so all requests and responses must pass through the service. The "problem" is that there are several entities involved in this application, and for each of them I must support the basic CRUD operations and many more, which means the service currently has more than 30 methods in a single endpoint. Since I already had to increase the maximum message size, I am beginning to ask myself whether having all those methods in a single service is too much. What do you think? What alternatives do I have?
I can't really give you a good answer, since it kind of depends on the requirements and complexity of your application. Typically a CRUDy service interface is an antipattern you should avoid. There shouldn't be a one-to-one mapping between your data layer and your service layer; if there is, then the service layer is kind of not pulling its own weight. SOA is a huge topic that I'm only starting to get my head around, but my understanding is that SOAs are supposed to encapsulate the logic of many operations, i.e. authentication, authorization, logging, scaling, transactions, etc.
http://msdn.microsoft.com/en-us/library/ms954638.aspx
Have you heard of the Repository pattern? That's a software design pattern where a particular class/assembly encapsulates the logic required for getting data into and out of the database. It's lighter weight than using full-blown services and might be all you need if what you want is a good way of decoupling your application from the database. A really neat feature of the Repository pattern is that you can put all the methods of the repository behind an interface and then make a mock implementation to test your business/UI layers independently of your database (see the sketch after the link below).
http://msdn.microsoft.com/en-us/library/ff649690.aspx
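As a sketch of that idea (the entity and method names are invented for illustration), the interface plus a mock implementation might look like:

```csharp
using System.Collections.Generic;
using System.Linq;

// The repository interface is the only thing the business/UI layers see.
public interface ICustomerRepository
{
    Customer GetById(int id);
    IEnumerable<Customer> GetAll();
    void Add(Customer customer);
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// In-memory stand-in used for testing the layers above the data access code;
// the production implementation would talk to the real database instead.
public class MockCustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _store = new List<Customer>();

    public Customer GetById(int id) => _store.FirstOrDefault(c => c.Id == id);
    public IEnumerable<Customer> GetAll() => _store;
    public void Add(Customer customer) => _store.Add(customer);
}
```

Your business/UI layers depend only on `ICustomerRepository`, so swapping the mock for a real database-backed implementation requires no changes above the data layer.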
There is no real concrete answer to this question; it is a tradeoff either way. Try to keep the Single Responsibility Principle in mind. A couple of options are to:
Create an endpoint per data type, putting CRUD operations into each. That would mean maintaining more endpoint configuration, though.
Separate your methods into multiple files, but leave them in one class, using partial classes (see the sketch after this list).
If all you are doing is CRUD operations for each data type, then maybe use WCF Data Services / OData instead.
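To illustrate the partial-class option above: the service remains a single class behind a single endpoint, but its source is split per entity. The class and entity names here are illustrative:

```csharp
// Illustrative entity types.
public class Order { public int Id { get; set; } }
public class Customer { public int Id { get; set; } }

// File 1 (e.g. AdminService.Orders.cs): order-related operations.
public partial class AdminService
{
    public Order GetOrder(int id) { /* fetch from the data layer */ return new Order { Id = id }; }
}

// File 2 (e.g. AdminService.Customers.cs): customer-related operations.
// The compiler merges all partial declarations into one class, so the
// WCF endpoint configuration stays unchanged.
public partial class AdminService
{
    public Customer GetCustomer(int id) { /* fetch from the data layer */ return new Customer { Id = id }; }
}
```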
Implementing the repetitive CRUD functionality over the WCF service could be a pain, but if the service is not exposed globally (so you don't have to pay a lot for authentication/authorization), you can save a LOT of time by using one of these two: ADO.NET Data Services or WCF RIA Services.
Whether or not 30 methods on a single service is a bad design - this is not clear. It's like asking whether or not a class with 30 members is a bad design - some people will say that it definitely is, most will comment "it depends on what you are doing".
Nevertheless, using HTTP for CRUD brings some restrictions, one of which you mention: you will not be able to insert/update large batches unless you increase the message size. You can also have some serious issues with transaction handling unless you somehow handle transactions over HTTP in an explicit way.
Why is the Mono project implementing WCF interfaces and classes "as is"?
I do not understand the point of repeating Microsoft's design. My experience says that WCF is a huge framework with an implementation based on SOAP services. There are tremendous problems with that approach; it simply does not fit the simple HTTP request-processing cycle well. Why not try to invent a better framework instead?
Update:
OK, I get it. :) I like the .NET platform, C# and I like that this platform is available on another OS, but ...
Don't you guys see that many things in the original (Microsoft) frameworks can be done better?
Look at System.ServiceModel.Channels.Message. This is one of the big pieces of the customization landscape.
Why do I see XML everywhere? How can I easily do anything with classes like this? It is feasible, but I cannot say this is good design for a general-purpose communication framework. I thought the purpose of the Mono project was not just to bring the .NET ecosystem to *nix but to make it better.
I think the whole point is to make the WCF platform available on operating systems other than Microsoft Windows. So, if you have an application developed with MS Visual Studio (Microsoft's compilers), you can deploy it on Linux or Mac OS X if you wish.
You can also use MonoDevelop and the Mono compilers if you decide to code WCF on alternative platforms.
Because not everything is suitable for a simple HTTP request-processing cycle. Because SOAP offers features REST does not. Because it hooks into a wide set of encryption, authentication, and authorization options. Because what you see as tremendous problems are things that solve problems for others.
Mono exists to allow .NET on other OSes. Mono is not about picking and choosing what to implement based on merit.