Advantages and Disadvantages of Apache vs. Your own server? [closed]

I have been working on my own SSL-based, multi-process, multi-file-descriptor, threaded server for a few weeks now; needless to say, it can handle a good amount of punishment. I am writing it in C++ in an object-oriented manner, and it is nearly finished, with signal handling (including atomic access) and exception/errno.h error handling in place.
The goal is to use the server to build multi-player applications and games for Android/iOS. I am very close to completion, but it recently occurred to me that I could just use Apache to accomplish the same thing.
I tried doing some research but couldn't find anything, so perhaps someone can help me decide whether I should finish my server and use that, or use Apache instead. What are the advantages and disadvantages of Apache vs. your own server?
Thank you to those who are willing to participate in this discussion!

We would need more details about what you intend to accomplish, but I would go with Apache in any case if it matches your needs:
it is battle-tested for all kinds of use cases and loads
you can benefit from all the available modules (see http://httpd.apache.org/docs/2.0/mod/)
you can benefit from regular security patches
you don't have to maintain it yourself!
Hope this helps!

You can always write your own software even when perfectly well-proven alternatives exist, but you should be conscious of what your reasons for doing so are, and what the costs are.
For instance, your reasons could be:
Existing software is too slow, has too high latency, or is hard to synchronize
Existing software is not extensible for your purpose
Your needs don't fit the architecture imposed by the software - for instance, if you need a P2P network, then a client/server-based HTTP protocol is not your best fit
You just want to have fun exploring low-level protocols
I believe none of the above, except possibly the last, applies to your case, but you have not provided many details, so my apologies if I am wrong.
The costs could be:
Your architecture might get muddled - for instance, you can fall into the trap of having your server too busy calculating whether a gunshot hits the enemy while 10 clients are trying to initiate TCP connections, or of a buffer overflow in your persistent-storage routine taking down the whole server
You spend time on low-level plumbing when you should be working on your game engine
Security is hard to get right; it takes many man-years of intrusion testing and formal proofs (even if you are using OpenSSL)
Making your own protocol means making your own bugs
Your own protocol means you have to build your own debugging tools - for instance, you can't test using curl or trace using HTTP proxies (see the sketch after this list)
You have to solve many of the issues that existing solutions have already solved: caching, authentication, redirection, logging, multi-node scaling, resource allocation, proxying
For your own stuff you can only ask yourself for help
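To make the debugging-tools point concrete: with HTTP you can smoke-test with curl, but a custom protocol needs a bespoke client before you can even say "ping". Here is a minimal sketch in Python, assuming a hypothetical length-prefixed wire format and a server on localhost:9000 (both are illustrative, not taken from the question):

    # Minimal smoke-test client for a hypothetical custom protocol:
    # a 4-byte big-endian length prefix followed by a UTF-8 payload.
    # With HTTP, `curl` would do this job for free.
    import socket
    import struct

    def send_message(host, port, payload):
        data = payload.encode("utf-8")
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(struct.pack(">I", len(data)) + data)
            # Read the reply header (assumes the 4 bytes arrive in one
            # read, which is fine for a smoke test), then the body.
            header = sock.recv(4)
            (length,) = struct.unpack(">I", header)
            reply = b""
            while len(reply) < length:
                chunk = sock.recv(length - len(reply))
                if not chunk:
                    raise ConnectionError("server closed connection mid-reply")
                reply += chunk
            return reply.decode("utf-8")

    if __name__ == "__main__":
        print(send_message("localhost", 9000, "ping"))

Every custom protocol ends up needing something like this, plus the server-side equivalent, plus versions of it for every debugging scenario a generic HTTP toolchain already covers.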


When is it appropriate to use AMQP? [closed]

At my previous job I enjoyed the benefits of AMQP, but I was not involved in the development of the RabbitMQ subproject. At my current job I want to take charge of integrating one of the AMQP implementations (probably RabbitMQ). The issue is that I have to convince my boss to use AMQP.
I am reading "RabbitMQ in Action", and Mr. Videla writes that AMQP could improve any system, but I do not see how it can improve my project. We use only 2 servers making API calls between each other, so we do not have a scalability problem right now. We deal with real money flows, which means we need a success confirmation for every operation, i.e. I cannot put a task in a queue and "forget" about it. What benefits could AMQP bring in this case?
Could you please provide a couple of real-world examples for relatively small systems that do not need to scale heavily? Please omit the standard "logging" and "broadcast message" situations :)
It sounds like all you need is RPC. Rabbit is not known for RPC, but it actually does a really good job, because (a minimal client sketch follows this list):
You can make many messages transactional (i.e. all in one transaction)
It is platform-, language- and protocol-format agnostic (i.e. you could send binary over it)
Because of the broker model you can easily add more servers to handle the procedures.
You can easily see the message flow and rate with RabbitMQ's admin UI
RabbitMQ is sort of an inversion of control at the architecture level
In RabbitMQ the message is the contract... not the procedure. This is the right way to do it.
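Here is what that looks like in practice: a minimal RPC client sketch using the Python pika library, following RabbitMQ's standard reply_to/correlation_id convention. The broker address and the "rpc_queue" name are assumptions for illustration:

    # Minimal RPC-over-RabbitMQ client using pika (pip install pika).
    # The "rpc_queue" name and the localhost broker are assumptions.
    import uuid
    import pika

    class RpcClient:
        def __init__(self):
            self.conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
            self.channel = self.conn.channel()
            # Exclusive, auto-named queue where replies to this client land.
            result = self.channel.queue_declare(queue="", exclusive=True)
            self.callback_queue = result.method.queue
            self.channel.basic_consume(queue=self.callback_queue,
                                       on_message_callback=self.on_response,
                                       auto_ack=True)
            self.response = None
            self.corr_id = None

        def on_response(self, ch, method, props, body):
            # Ignore stray replies from earlier, abandoned calls.
            if props.correlation_id == self.corr_id:
                self.response = body

        def call(self, payload):
            self.response = None
            self.corr_id = str(uuid.uuid4())
            self.channel.basic_publish(
                exchange="",
                routing_key="rpc_queue",
                properties=pika.BasicProperties(reply_to=self.callback_queue,
                                                correlation_id=self.corr_id),
                body=payload)
            while self.response is None:
                self.conn.process_data_events(time_limit=1)
            return self.response

    if __name__ == "__main__":
        print(RpcClient().call(b"ping"))

Notice that the client never knows which server answered: the broker routes the request to whoever is consuming "rpc_queue", which is exactly why adding more workers is trivial.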
Now let's compare this to, say, SOAP:
SOAP does not give you a broker or routing, so all your servers need to know about each other. I can't tell you how annoying it is to have to go plug in IP addresses for dev, staging and production.
SOAP does not provide transactions. You have to do that yourself.
With SOAP you have to use XML
There are more reliable RabbitMQ clients than SOAP clients. SOAP compatibility is a PITA.
With SOAP you have the message and the endpoint. In some cases this is a pro.
You don't have to use RabbitMQ to use the idea of an eventbus/messagebus. I personally would not build any sort of app without one, because going from pure synchronous RPC to an asynchronous eventbus/messagebus requires lots of work. Better to do it right from the beginning.

Tool to migrate from Embedded SQL to ODBC [closed]

I have a bunch of C code accessing databases (Oracle, DB2 and Sybase) through Embedded SQL: the base code is the same, but with three different precompilers, three sets of executables are built, one for each database/platform.
It works perfectly fine, but we now need to migrate to a solution using ODBC access.
The problem is: what tools/APIs can be used? A direct way seems to be writing a custom precompiler (or modifying an existing one) to translate all SQL statements and host-variable references into calls on an ODBC connection.
Can somebody recommend tools or APIs for that task, to keep it simple?
Or is there a simpler way, another approach?
Thank you
As is usual in such situations, there are likely no off-the-shelf answers; people's codebases always have a number of surprises in them, and that combination prevents a COTS tool from ever being economical for individual situations.
What you want is a program transformation system (PTS), with a C front end, that can be customized to parse embedded SQL. Such tools can apply source-to-source rewrite rules ("if you see this pattern, then replace it by that pattern") to solve the problem.
These tools require some pretty technical effort to configure. In your case, you'd have to adjust a C front end to handle embedded SQL; that's typically not in C parsers. (How is it that you can process this stuff in its current form?) You'll have trouble with the C preprocessor, because people do abusive things with it that really violate a parser's nested-structures view of the universe. Then you'll have to write and test the rules.
This effort is a sunk cost to be traded against the effort of doing the work by hand, or of some more ad hoc scripting (e.g., in Perl) that partially does the job, leaving you to clean up the rest. Our experience is that it is not worth the trouble below 100K SLOC, that you have no chance of manual/ad hoc remediation above 1M SLOC, and that in between your mileage will vary.
At these intermediate sizes you can agonize over the tradeoffs; that costs energy and time, too. Sometimes it's just better to bite the bullet, do it any way you can, and clean it up afterwards (a sketch of the quick-and-dirty approach follows).
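For a sense of what the quick-and-dirty end of the spectrum looks like, here is a deliberately partial rewriter sketch in Python. Everything in it is an assumption for illustration: the odbc_exec() wrapper it emits is hypothetical, and it only handles the trivial cases; anything involving host variables gets flagged for manual review:

    # Ad hoc (deliberately partial) rewriter: converts simple
    #   EXEC SQL <statement>;
    # blocks into calls to a hypothetical odbc_exec() C wrapper,
    # and marks everything it cannot handle for manual cleanup.
    import re
    import sys

    EXEC_SQL = re.compile(r"EXEC\s+SQL\s+(.*?);", re.DOTALL)

    def rewrite(match):
        stmt = " ".join(match.group(1).split())
        if ":" in stmt:
            # Host variables need real analysis -- punt to a human.
            return "/* TODO(migration): host variables */ " + match.group(0)
        return 'odbc_exec(conn, "%s");' % stmt.replace('"', '\\"')

    for path in sys.argv[1:]:
        with open(path) as f:
            source = f.read()
        with open(path + ".migrated", "w") as f:
            f.write(EXEC_SQL.sub(rewrite, source))

The percentage of statements such a script leaves as TODOs is a decent first estimate of whether your codebase falls in the "do it by hand" or the "buy/build a real tool" regime.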
Our DMS Software Reengineering Toolkit is one of these PTSs. It has a customizable C parser and preprocessor, precisely to help deal with these configuration troubles. The other PTSs mentioned in the Wikipedia article do not, I believe, have any serious C parser associated with them. (I'm the guy behind DMS.)

Generic SQL Query Listener [closed]

How can I best write an application that sits in front of a generic SQL database (SQL Server, MySQL, Oracle, etc.) and listens to SQL queries?
The application needs to be able to intercept (prevent passing to the SQL database) or pass (send to SQL database) the query, based on the specific query type.
Is there a way to do this generically so that it is not tied to a specific database backend?
The basic system isn't particularly easy, though neither is it incredibly difficult. You create a daemon that listens on a port (or a set of ports) for connection attempts. It accepts those connections, then establishes its own connection to the DBMS, forming a man-in-the-middle relay/interception point. The major issues are in how to configure things so that:
the clients connect to your Generic SQL Listener (GSL) instead of the DBMS's own listener, and
the GSL knows how to connect to the DBMS's listener (IP address and port number).
You can still run into issues, though. Most notably, if the GSL is on the same machine as the DBMS listener, then when the GSL connects to the DBMS, it looks to the DBMS like a local connection instead of a remote connection. If the GSL is on a different machine, then it looks like all connections are coming from the machine where the GSL is running.
Additionally, if the information is being sent encrypted, then your GSL can only intercept encrypted communications. If the encryption is any good, you won't be able to log it. You may be able to handle Diffie-Hellman exchanges, but you need to know what the hell you're up to, and what the DBMS you're intercepting is up to — and you probably need to get buy-in from the clients that they'll go through your middleman system. Of course, if the 'clients' are web servers under your control, you can deal with all this.
The details of the connection tend to be straightforward enough as long as your code simply transmits and logs the queries. Each DBMS has its own protocol for how SQL requests are handled, and intercepting, modifying or rejecting operations requires an understanding of each DBMS's protocol.
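To make the shape of such a daemon concrete, here is a bare-bones relay sketch in Python. It only forwards bytes and logs client-to-server traffic; the listening port and DBMS address are placeholders, and the marked comment is where a real GSL would decode the wire protocol and decide whether to pass or reject a statement:

    # Bare-bones man-in-the-middle relay: listens on LISTEN_PORT and
    # forwards both directions to the real DBMS listener, logging the
    # client->server bytes (where the SQL travels, protocol-encoded).
    import socket
    import threading

    LISTEN_PORT = 15432            # where the clients are pointed
    DBMS_ADDR = ("db-host", 5432)  # the real DBMS listener (placeholder)

    def pump(src, dst, log=False):
        while True:
            data = src.recv(4096)
            if not data:
                dst.close()
                return
            if log:
                # A real interceptor would parse the DBMS wire protocol
                # here and pass or reject the query based on its type.
                print("client->server: %r" % data[:200])
            dst.sendall(data)

    def handle(client):
        server = socket.create_connection(DBMS_ADDR)
        threading.Thread(target=pump, args=(client, server, True), daemon=True).start()
        threading.Thread(target=pump, args=(server, client), daemon=True).start()

    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", LISTEN_PORT))
    listener.listen(5)
    while True:
        conn, _ = listener.accept()
        handle(conn)

The relay itself is generic; everything database-specific lives at that one marked point, which is exactly why per-DBMS protocol knowledge is the hard part of the job.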
There are commercial products that do this sort of thing. I work for IBM and know that IBM's Guardium products include those abilities for a number of DBMS (including, I believe, all those mentioned above — if there's an issue, it is likely to be MySQL that is least supported). Handling encrypted communications is still tricky, even for systems like Guardium.
I've got a daemon which runs on Unix that is adapted to one specific DBMS. It handles much of this — but doesn't attempt to interfere with encrypted communication; it simply records what the two end parties say to each other. Contact me if you want the code — see my profile. Many parts would probably be reusable with other DBMS; other parts are completely peculiar to the specific DBMS it was designed for.

LDAP vs Relational Database [closed]

I come to you after a desperate, disappointing search online for an answer to my question:
Which one is faster: LDAP or Relational Database?
I need to setup a system with both authentication and authorization of users.
I know LDAP has the "structure" for that kind of need, but is it really faster than, say, MySQL?
For authentication and authorization purposes, in my opinion LDAP provides the best mix of performance and simplicity of installation and maintenance. LDAP as a protocol is quite small, requiring relatively little network bandwidth. The small protocol also makes encrypted transmission fairly high-performance.
LDAP is also simple: servers are easy to deploy, and modern, professional-quality LDAP servers provide impressive performance vs. a relational database, all other things (such as hardware and query type) being equal.
I agree that either could be used in your case, but generally LDAP is better for authentication and authorization because of its simplicity and lower maintenance costs. As for performance, the LDAP server with which I am testing provides about 28,000 authentications per second, vs. Postgres providing about 42% of that number on the same hardware, though it is difficult to compare apples and oranges.
Modern, professional-quality LDAP servers also provide extremely powerful and fast cryptographic hashes for secure password storage, as well as reasonably strong reversible block ciphers like AES for cases where a recoverable password is required, e.g. if the client must use SASL's DIGEST-MD5 mechanism for password-less authentication.
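For reference, the authentication path on the LDAP side usually reduces to a single bind attempt. A minimal sketch using the Python ldap3 library; the host name and DN layout are assumptions for illustration:

    # Authenticate a user by attempting an LDAP bind with their DN and
    # password (pip install ldap3). Host and DN layout are assumptions.
    from ldap3 import Server, Connection, ALL

    def authenticate(username, password):
        server = Server("ldap.example.com", use_ssl=True, get_info=ALL)
        user_dn = "uid=%s,ou=people,dc=example,dc=com" % username
        conn = Connection(server, user=user_dn, password=password)
        ok = conn.bind()   # True iff the directory accepted the credentials
        conn.unbind()
        return ok

    if __name__ == "__main__":
        print(authenticate("alice", "s3cret"))

Compare that to the relational route, where you design a users table, choose a password-hashing scheme, and write the query yourself: the directory server has already made those decisions for you.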
I agree with Al, it is impossible to say generally which is faster. It's all contextual.
I love that after this truism, Al then offers up a general opinion that LDAP is slow. :) I digress...
Joking aside, it comes down to what you're trying to do vs. what the target system is optimized to do. MySQL/MSFT SQL Server/etc. are built as general-purpose stores where you will tend to store normalized data with a variety of query patterns over it. They have all sorts of logic at many layers of the stack to help you run a variety of queries and computations over your data, and even let you hint things to the query planner when you know best.
LDAP directories tend to be optimized quite differently: for the storage of hierarchically organized objects with a specific set of query patterns over them (as specified by the LDAP RFCs). AD, for example, is fast... quite fast. It's optimized for object search and retrieval and associated operations (like auth).
Like anything, you can use either well or poorly.
Short of being in a crazy scale mode, I suspect you could use either quite successfully.
IMO this is not a real question, because it always depends on particular implementations.
I can only offer my own experience here: LDAP is slow, SQL is fast. I use MS SQL 2008, and in my case it is very fast thanks to its intelligent caching of repeated queries.
But do you need it to be exceptionally fast? An LDAP-based solution can often offer easier and better integration with other software and/or LAN infrastructure when working with users, authorization and authentication.

Is Google Maps going to get replaced? Is it true? [closed]

I have been working on an application for the past 4 months, and it depends heavily on Google Maps on the iOS platform. Recently one of my friends raised a concern: what if Apple Inc. decides to use a different map provider?
It turns out, after some searching on the internet, that Apple is going to replace Google Maps with new, more advanced 3D maps built by a company called C3 (one of the resources I found). Well, now I am worried about my already-written code.
Should I delay my development work until this new technology arrives, or just wait until Apple announces it officially?
Thanks
This is a common dilemma in programming, and there's a common solution, too. Develop your own primitives: whether you need to display overlays, show landmarks, or draw polygons and lines, do everything through stubs in your own code. If the underlying platform has to change, you then have a few well-known places to update to the new API.
Be very strict about not accessing the underlying API anywhere outside your wrapper layer, and it should be straightforward to switch to a different one later. Not free, of course, but as long as it's possible to implement the primitives you need on top of the new layer, you just need to change those and can leave the rest of your project untouched (a sketch of such a wrapper follows).
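As a sketch of what that wrapper layer can look like (in Python for brevity; all the names here are illustrative, not a real maps API):

    # Illustrative map-abstraction layer: application code talks only
    # to MapLayer; swapping providers means writing one new subclass.
    from abc import ABC, abstractmethod

    class MapLayer(ABC):
        @abstractmethod
        def show_region(self, lat, lon, zoom): ...
        @abstractmethod
        def add_marker(self, lat, lon, label): ...
        @abstractmethod
        def draw_polyline(self, points): ...

    class GoogleMapsLayer(MapLayer):
        # The ONLY place provider-specific calls are allowed to appear.
        def show_region(self, lat, lon, zoom):
            print("GMaps: center (%f, %f) zoom %d" % (lat, lon, zoom))
        def add_marker(self, lat, lon, label):
            print("GMaps: marker %r at (%f, %f)" % (label, lat, lon))
        def draw_polyline(self, points):
            print("GMaps: polyline through %d points" % len(points))

    # Application code depends only on the abstraction:
    def show_player(map_layer: MapLayer, lat, lon):
        map_layer.show_region(lat, lon, zoom=15)
        map_layer.add_marker(lat, lon, "you are here")

    show_player(GoogleMapsLayer(), 37.33, -122.03)

If Apple swaps providers, you write one new MapLayer subclass and touch nothing else.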
It's not worth losing months of having a finished project to avoid this situation.
Edit: This approach has another benefit: if you end up writing multiple primitive layers for different APIs, you may be able to let the user pick between them. You might have a (more expensive) higher-quality map layer which you charge for, and a cheap/free one which you don't, allowing people free access to a lower-quality version and letting them buy an upgrade to the better maps. Or... there are lots of possibilities. It's the same pattern some applications take with data-persistence layers, letting people run the same application on top of differing data platforms. There are lots of examples of this pattern.