Non-HTTP server - Apache

I'm writing a server that needs to serve many clients. The traffic is NOT HTTP (but rather some proprietary protocol on top of TCP). I'm not very familiar with established web servers such as IIS and Apache. Can anyone tell me if it's possible to write some sort of "extension" to run on top of one of these platforms so that I don't have to write the logic for the sockets? Or perhaps there is another way (not IIS or Apache) of doing it which is better?
My server is generally going to behave as a web service (gets request, queries db, sends response) however there is one scenario in which it stays connected to the client socket and sends updates at a given interval on that socket.
It seems reasonable that there would be a way to do this so that I'd only have to write my own logic, without the general socket-handling logic of a server. Any ideas?
Thanks!

Good question, and it's also good to look to leverage an existing web server - you get scalability and stability, effectively for free.
I've never done this myself, but it should be totally possible in IIS (I recommend v7+ for this; it makes things easier).
You can set up a new web site through the administration tool and assign it a port to listen on - this bit is pretty straightforward. You should set its Binding Type to net.tcp (this is a dropdown in the dialog to add a new website; you can't miss it).
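If you prefer the command line, something like the following appcmd commands (run from %windir%\system32\inetsrv) should achieve the same thing - the site name and port here are just examples, so adjust to taste:

    appcmd set site "MyTcpService" /+bindings.[protocol='net.tcp',bindingInformation='9000:*']
    appcmd set app "MyTcpService/" /enabledProtocols:net.tcp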
You can then use either modules or handlers to implement the rest of your custom functionality. The article Developing IIS 7.0 Modules and Handlers with the .NET Framework is a good intro to the subject. Most of the documentation out there about writing custom handlers and modules is focused on the HTTP protocol (because IIS and Apache are web servers, and the web is synonymous with HTTP), but there are some snippets floating around for TCP and/or net.tcp. Another resource that may be useful: Configure Request-Processing for a Web Server (IIS 7)
Alternatively, you may consider changing your approach and doing it as a net.tcp WCF service. With this you get the benefits of using IIS, the flexibility of choosing the protocol (it can be statically configured; it doesn't need to be compiled in), and you don't have to write handlers or modules.
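To make that alternative concrete, here is a minimal self-hosted sketch of such a net.tcp service. Every name, the port, and the duplex callback (my guess at modelling your "stay connected and push periodic updates" scenario) are illustrative assumptions, not a known-good recipe:

    using System;
    using System.ServiceModel;

    // Hypothetical contracts; the callback models the "stay connected
    // and receive updates at an interval" scenario from the question.
    [ServiceContract(CallbackContract = typeof(IUpdateCallback))]
    public interface IQueryService
    {
        [OperationContract]
        string Query(string request);   // request/response: query DB, reply

        [OperationContract(IsOneWay = true)]
        void Subscribe();               // client stays connected for pushes
    }

    public interface IUpdateCallback
    {
        [OperationContract(IsOneWay = true)]
        void OnUpdate(string update);   // server -> client push, same connection
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
    public class QueryService : IQueryService
    {
        public string Query(string request)
        {
            // ...query the database and build a response here...
            return "result for " + request;
        }

        public void Subscribe()
        {
            var callback = OperationContext.Current
                                           .GetCallbackChannel<IUpdateCallback>();
            // A real service would store the callback and push on a timer.
            callback.OnUpdate("first update");
        }
    }

    class Program
    {
        static void Main()
        {
            // Self-hosted for brevity; under IIS 7+/WAS you would configure
            // the net.tcp binding in web.config instead of in code.
            using (var host = new ServiceHost(typeof(QueryService),
                       new Uri("net.tcp://localhost:9000/query")))
            {
                host.AddServiceEndpoint(typeof(IQueryService),
                                        new NetTcpBinding(), "");
                host.Open();
                Console.WriteLine("Listening; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }

The same contracts work when hosted in IIS; only the hosting/configuration part changes.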

Related

Publicly exposing a WCF service which is behind a firewall

Environment:
Consider the following production environment setup for a web application:
End user --Internet--> web server in DMZ --Firewall--> WCF hosted on app server --> DB Server
Constraint:
Also consider that we cannot change anything from the infrastructure point of view; for example, we cannot open ports, change any firewall settings, etc.
Problem:
We want to expose the WCF, which is hosted on the app server, to external clients. We are trying to solve this as follows:
End user --Internet--> Router WCF in DMZ --Firewall--> WCF hosted on app server --> DB Server
Please note that we cannot establish a DB connection from the DMZ environment, where the WCF service needs to be hosted so that external clients can consume it. We have developed a "Router WCF" which passes all messages through to the internal WCF service and vice versa.
This solution adds the unnecessary overhead of serializing and de-serializing data. There must be a better, more proper way of doing this. We look to the community for guidance. Thank you.
For a DMZ, the literature tells you: always create an intermediate layer. This means another machine exposed to the internet will be the point of connection, and it will proxy the connection back to the WCF service.
That machine is the web server you mention: it is deliberately dumb, holds no data, and (to make a proper DMZ) has a firewall between it and all the machines it serves (the WCF host and the others) that permits only the IP:port combinations actually used on those machines.
In this scenario, Apache on the public web server with a URL-rewrite rule is usually enough (e.g., if the path is /x/y, send it to servera.internal.com:9900; if it is /x/z, send it to serverb.internal.com:9901; and so on), but there are plenty of other solutions, of course.
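As a sketch of what those rules might look like with mod_proxy (the hostnames and ports are the made-up ones from the example above):

    # On the DMZ web server; requires mod_proxy and mod_proxy_http.
    ProxyRequests Off
    ProxyPass        /x/y  http://servera.internal.com:9900/
    ProxyPassReverse /x/y  http://servera.internal.com:9900/
    ProxyPass        /x/z  http://serverb.internal.com:9901/
    ProxyPassReverse /x/z  http://serverb.internal.com:9901/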
It seems you are doing exactly this, why do you say it is not the proper solution?
DMZs can seem a bit dated as a protection mechanism (I agree), but remember the era when servers like your WCF machine had dozens of open ports, and you wanted to lower the risk posed by random ports on web-facing machines - a giant attack surface. Nowadays everything can work with a couple of open ports, so it can seem "silly" to do all of this just to forward a TCP port. But it is still valuable: for example, if the servers behind the DMZ web server have no internet access, then even when the WCF host is compromised, the attacker cannot use his own reverse shell to deploy what is nowadays called an APT (yesterday's backdoor). The attacker "won't see" his own machine from the WCF host, because only the DMZ provides the connection to the external world.

For a SaaS running on Node.js, is a web server (nginx) or Varnish necessary as a reverse proxy?

For a SaaS running on Node.js, is a web server necessary?
If yes, which one and why?
What would be the disadvantages of using just node? Its role is just to handle the CRUD requests and serve JSON back for the client to parse the data (like Gmail).
"is a web-server necessary"?
Technically, no. Practically, yes: a separate web server is typically used, and for good reason.
In this talk by Ryan Dahl in May 2010, at 37'30" he states that he recommends running node.js behind a reverse proxy or web server for "security reasons". To elaborate: hardened web servers like nginx or Apache have had their TCP stacks evolve over many years of stability and security work. Node.js is not at that same level yet. Thus, since putting node.js behind nginx is easy, doesn't have many negative consequences, and in theory increases the security of your deployment somewhat, it is a good choice. At some point in time, node.js may be deemed officially "ready for live direct Internet connections", but wait for Ryan/Joyent to make some announcement to that effect.
Secondly, binding to sub-1024 ports (like 80 and 443) requires the process to run as root. nginx and others automatically handle binding as root and then dropping privileges to a safer user account (www-data or nobody, typically). Although node.js has system-call wrappers in the process module to drop root privileges with setgid and setuid, AFAIK the node community hasn't yet seen a convention emerge for doing this, other than coding it yourself. More on this topic in this discussion.
Thirdly, web servers are good at virtual hosting, and in general there are convenient things you can do (URL rewriting and such) that would otherwise require custom coding in node.js.
Fourthly, nginx is great at serving static files. Better than node.js (at least by a little as of right now). Again as time goes forward this point may become less and less relevant, but in my mind a traditional static file web server and a web application server still have distinct roles and purposes.
"If yes, which one and why"?
nginx. Because it has great performance and is simpler to configure than Apache.
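To tie the points above together, a minimal nginx front end for a node.js app might look roughly like this - the hostname, filesystem path, and port 3000 are assumptions for illustration:

    # nginx.conf (sketch). Binds port 80 as root; workers drop to www-data.
    user www-data;
    events {}

    http {
        server {
            listen 80;
            server_name example.com;          # assumed hostname

            # Let nginx serve static files directly (point 4).
            location /static/ {
                root /var/www/myapp;          # assumed path
            }

            # Proxy everything else to node.js on a high, unprivileged port.
            location / {
                proxy_pass http://127.0.0.1:3000;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }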

Relay WCF Service

This is more of an architectural and security question than anything else. I'm trying to determine if a suggested architecture is necessary. Let me explain my configuration.
We have a standard DMZ established that essentially has two firewalls: one that's external-facing and another that connects to the internal LAN. The following describes where each application tier is currently running.
Outside the firewall:
Silverlight Application
In the DMZ:
WCF Service (Business Logic & Data Access Layer)
Inside the LAN:
Database
I'm receiving input that the architecture is not correct. Specifically, it has been suggested that because "a web server is easily hacked", we should place a relay server inside the DMZ that communicates with another WCF service inside the LAN, which will then communicate with the database. The external firewall is currently configured to only allow port 443 (HTTPS) to the WCF service. The internal firewall is configured to allow SQL connections from the WCF service in the DMZ.
Ignoring the obvious performance implications, I don't see the security benefit either. I'm going to reserve my judgement of this suggestion to avoid polluting the answers with my bias. Any input is appreciated.
Thanks,
Matt
I do think the remarks made are valid, and in such a case I would probably also try to use as many "defense-in-depth" layers as I could possibly come up with.
Plus, the amount of work to achieve this might be less than you're afraid of - if you're on .NET 4 (or can move to it).
You could use the new .NET 4 / WCF 4 routing service to do this quite easily. As an added benefit, you could expose an HTTPS endpoint to the outside world, but on the inside you could use netTcpBinding (which is a lot faster) to handle internal communications.
Check out how easy it is to set up a .NET 4 routing service:
What's new in WCF4 Routing Service - or: "Look ma: Just one service to talk to!"
Creating Routing Service using WCF 4.0, .NET Framework 4.0 and Visual Studio 2010 RC
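To give a feel for it, the routing service is almost entirely configuration. The following web.config fragment for the DMZ router is a sketch - the addresses, binding names, and table names are placeholders, not anything from your setup:

    <system.serviceModel>
      <services>
        <service name="System.ServiceModel.Routing.RoutingService">
          <!-- HTTPS endpoint exposed to the outside world. -->
          <endpoint address=""
                    binding="basicHttpBinding"
                    bindingConfiguration="secureHttp"
                    contract="System.ServiceModel.Routing.IRequestReplyRouter" />
        </service>
      </services>
      <bindings>
        <basicHttpBinding>
          <binding name="secureHttp">
            <security mode="Transport" />
          </binding>
        </basicHttpBinding>
      </bindings>
      <client>
        <!-- Faster netTcp hop to the real service inside the LAN. -->
        <endpoint name="internalService"
                  address="net.tcp://appserver.internal:9000/MyService"
                  binding="netTcpBinding"
                  contract="*" />
      </client>
      <behaviors>
        <serviceBehaviors>
          <behavior>
            <routing filterTableName="routeAll" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <routing>
        <filters>
          <filter name="matchAll" filterType="MatchAll" />
        </filters>
        <filterTables>
          <filterTable name="routeAll">
            <add filterName="matchAll" endpointName="internalService" />
          </filterTable>
        </filterTables>
      </routing>
    </system.serviceModel>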

WCF - WSDL or pre-compiled Proxy

This is a B2B scenario, with one client (at least for now).
Server environment:
WCF service, IIS6, .NET v3.5
Client environment:
a .NET 2.0/VS2005 dev shop that will be calling my WCF service.
Question: should I
(a) open up WSDL generation for the client (not desirable for security reasons),
(b) send WSDL file(s) to the client,
(c) pre-compile the proxy into a DLL (on my side) and send it to the client, or
(d) ???
Any suggestions on what would be the best practice for this scenario, any pros/cons?
Thanks in advance,
Igor
Why is a publicly available WSDL not desirable for security reasons?
I may be willing to admit that publishing an API (which is basically what you are doing with a WSDL) makes you a bit more vulnerable than not publishing it, but it would be wrong to assume that hiding the WSDL constitutes any kind of security. That is the anti-pattern known as security by obscurity, and it will be broken by any determined attacker.
The web service should be secure in itself. WCF offers many security features, but that is orthogonal to your question.
I'd prefer publishing the WSDL. If you don't want to do that, or if there is a policy in place that says that you can't do that, then send the WSDL to the client team so they can use it as they wish.
Precompiling the proxy will only enforce your coding conventions on the client team, and they may not appreciate that. For example, I often prefer my proxies to be generated with the /i switch, which makes the generated classes internal. I also like to be able to specify the .NET namespaces so that they fit the rest of my code. That would not be possible if I got a precompiled assembly (I would be able to use it anyway, but it would just annoy me).
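For instance, a client team generating its own proxy gets to choose those options itself - the URL, namespace, and output file below are hypothetical:

    svcutil http://services.example.com/MyService?wsdl /i /n:*,MyCompany.ServiceProxies /out:MyServiceProxy.cs

(A shop still on .NET 2.0, as in the question, would use wsdl.exe instead, since the proxies svcutil generates require the WCF runtime in .NET 3.0+.)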
If you don't want to actually publish the WSDL and make it available online for calling clients, then I would prefer the "send me the WSDL and XSD" approach.
That way, you still give the client calling you the ability and flexibility of creating the proxy the way they see fit.
I would only consider using a pre-compiled proxy in an assembly if the calling party was unable or unwilling to create the proxy themselves, and only if they asked me to supply that code in assembly form.
Marc
In order of preference, I would be inclined to:
1. Have the service expose the WSDL (with security enabled)
2. Send a WSDL file to the service consumer
I was going to list option 3 as sending a proxy DLL but on second thought I wouldn't even list it as an option. It seems to me that shipping your client a proxy DLL opens up a big can of worms that I would not want to deal with.
The main problem is that you end up having to support executable code that is deployed at the client site. The proxy code could be generated by svcutil, but if there is some sort of problem invoking the service, I can just see the client calling you for support and telling you that your proxy is not working. Now, their claim is probably not correct, but it's hard for you to prove that, since you don't know what they are doing on their side. For example:
Maybe they didn't install the proxy DLL?
Maybe there is some permission problem?
Maybe they don't know what they are doing (yeah, I know that never happens. :) ).
Maybe a .NET upgrade on their side affected your proxy?
You might even run into some versioning headaches when sending them new proxies.
If your customer is not that savvy, then instead of trying to help them by creating a proxy DLL, perhaps a better approach would be to put some time and effort into assisting them with the correct configuration and usage of your service.

How to put up an off-the-shelf HTTPS-to-HTTP gateway?

I have an HTTP server which is in our internal network and accessible only from inside it. I would like to put another server that would listen to an HTTPS port accessible from outside, and forward the requests to that HTTP server (and send back the responses via HTTPS). I know that there are several ways to do this with some programming involved (and I myself made a temporary solution with Tomcat and a very simple servlet I wrote), but is there a way to do the same just plugging parts already made (like Apache + modules)?
This is the sort of use-case that stunnel is designed for. There is a specific example of using stunnel to wrap an HTTP server.
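A minimal stunnel configuration for this might look like the following - the certificate path, ports, and backend host are assumptions for illustration:

    ; /etc/stunnel/stunnel.conf (sketch)
    cert = /etc/stunnel/gateway.pem       ; server certificate + key

    [https-gateway]
    accept  = 443                         ; HTTPS port exposed externally
    connect = intranet-host:80            ; internal HTTP server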
You should consider whether this is really a good idea, though. Web applications designed for use inside a corporate firewall are often fairly lax about security. Merely encrypting the connections prevents casual eavesdropping, but does not secure the site. If an attacker finds your outward facing server and starts connecting to it, they can still try to find exploitable flaws in the web service (SQL injection, cross-site scripting, etc).
With Apache, look into mod_proxy.
Apache 2.2 mod_proxy docs
Apache 2.0 mod_proxy docs
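With mod_proxy plus mod_ssl, the gateway can be as small as the following virtual host - certificate paths and the internal hostname are assumptions:

    # HTTPS-to-HTTP gateway (sketch); needs mod_ssl, mod_proxy, mod_proxy_http.
    <VirtualHost *:443>
        SSLEngine on
        SSLCertificateFile    /etc/apache2/ssl/gateway.crt
        SSLCertificateKeyFile /etc/apache2/ssl/gateway.key

        ProxyRequests Off
        ProxyPass        /  http://intranet-host/
        ProxyPassReverse /  http://intranet-host/
    </VirtualHost>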