Publicly exposing a WCF service which is behind a firewall - wcf

Environment:
Consider the following production environment setup for a web application:
End user --Internet--> web server in DMZ --Firewall--> WCF hosted on app server --> DB Server
Constraint:
Also consider that we cannot change anything from an infrastructure point of view; for example, we cannot open ports, change any firewall settings, etc.
Problem:
We want to expose the WCF service, which is hosted on the app server, to external clients. We are trying to solve this as follows:
End user --Internet--> Router WCF in DMZ --Firewall--> WCF hosted on app server --> DB Server
Please note that we cannot establish a DB connection from the DMZ, which is where a WCF service would have to be hosted for external clients to consume it. We have therefore developed a "Router WCF" service which passes all messages through to the internal WCF service and vice versa.
This solution adds the unnecessary overhead of serializing and de-serializing data at the router. There must be a better and more proper way of doing this. We are looking to the community for guidance. Thank you.

The standard DMZ literature tells you: always create an intermediate layer. That means another machine, reachable from the internet, is the point of connection, and it proxies the connection back to the WCF service.
That machine is the web server you seem to mention: it is dumb, holds no data, and (to be a proper DMZ) has a firewall between itself and all the machines it serves (the WCF host and the others) which permits only the IP:port combinations actually used by those machines.
In this scenario, Apache on the public web server with URL-rewrite/proxy rules (e.g. if the path is /x/y send it to servera.internal.com:9900, if it is /x/z send it to serverb.internal.com:9901, and so on) is usually enough, but of course there are plenty of other solutions.
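For example, a minimal sketch of such a forwarding setup on the public Apache box (the internal hostnames and ports are just the ones from my example above; the public server name, the paths and the omitted TLS configuration are placeholders):

    # Public web server in the DMZ (illustrative; TLS configuration omitted).
    # Requires mod_proxy and mod_proxy_http.
    <VirtualHost *:80>
        ServerName public.example.com

        # /x/y goes to one internal host, /x/z to another
        ProxyPass        /x/y http://servera.internal.com:9900/x/y
        ProxyPassReverse /x/y http://servera.internal.com:9900/x/y

        ProxyPass        /x/z http://serverb.internal.com:9901/x/z
        ProxyPassReverse /x/z http://serverb.internal.com:9901/x/z
    </VirtualHost>

(The same effect can be achieved with mod_rewrite and the [P] flag, or with the IIS URL Rewrite/ARR modules, if Apache is not an option.)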
It seems you are doing exactly this, so why do you say it is not the proper solution?
DMZs can seem a bit dated as a protection mechanism (I agree), but you have to remember the days when a server like your WCF machine had dozens of ports open, and you wanted to lower the risk posed by random open ports on web-facing machines, a giant attack surface. Nowadays everything can work with a couple of open ports, so it can seem "silly" to do all of this just to forward a TCP port. But it is still valuable: if, for example, the servers behind the web server in the DMZ have no internet access, then even if the WCF host is compromised, the attacker cannot use his own reverse shell to deploy what is nowadays called an APT (yesterday's backdoor). The attacker "won't see" his own machine from the WCF host, because only the DMZ provides the connection to the external world.

Related

Taking a server from development to production

I have created a service (WCF) that acts as a backend for a DB. For now it does basic operations such as INSERT, SELECT, etc. I have run it locally and now it is time to expose it to the internet and enter 'production'. Is there a best practice for doing so? Bear in mind this service will be hosted on a PC as a Windows service (not in IIS). This is the first time I am putting a Windows service into production, so I am hazy on the details, but I think this is the main idea:
On the service: Check for 'rookie' errors such as SQL injection. Set maximum message sizes marginally higher than the largest message that should be transmitted by my service. Also upgrade the self-signed X.509 certificate to one issued by a CA. (Where does one store this certificate? Locally on the PC?)
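To illustrate what I mean by the message sizes and the certificate, here is roughly the configuration I have in mind (the binding/behavior names and the certificate subject are placeholders, wsHttpBinding is only used for illustration, and I am assuming the LocalMachine\My store is the right place for the certificate):

    <!-- Sketch only: quotas kept just above the largest expected message,
         service certificate pulled from the Windows certificate store. -->
    <system.serviceModel>
      <bindings>
        <wsHttpBinding>
          <binding name="tightQuotas" maxReceivedMessageSize="65536">
            <readerQuotas maxStringContentLength="8192" maxArrayLength="16384" />
            <security mode="Message" />
          </binding>
        </wsHttpBinding>
      </bindings>
      <behaviors>
        <serviceBehaviors>
          <behavior name="certBehavior">
            <serviceCredentials>
              <serviceCertificate storeLocation="LocalMachine"
                                  storeName="My"
                                  x509FindType="FindBySubjectName"
                                  findValue="myservice.example.com" />
            </serviceCredentials>
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>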
On the PC: Fully patched software (OS etc.) and Windows Firewall with a specific set of rules that allows traffic only on the ports being used (I suppose the safest way to do this is to use the Windows tool "Allow a program or feature through Windows Firewall"?). Furthermore, an updated antivirus running.
On the network: On the router, port-forward the respective ports being used (the base address is declared as http://localhost:8080, so I guess port 80 for HTTP and 443 for HTTPS? I am using message-level security.)
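Again just to illustrate the base address / port question, something like this is what I have in mind (the service and contract names are placeholders; whatever port appears here is the one I would forward on the router):

    <!-- Sketch only: self-hosted service with an explicit base address.
         An HTTPS endpoint would need its own port and binding configuration. -->
    <services>
      <service name="MyDbService" behaviorConfiguration="certBehavior">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8080/DbService" />
          </baseAddresses>
        </host>
        <endpoint address=""
                  binding="wsHttpBinding"
                  bindingConfiguration="tightQuotas"
                  contract="IDbService" />
      </service>
    </services>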
General precautions: Full message logging on the service to analyze traffic and potential attackers. Also run a Network intrusion detection system such as Snort so that I can sleep a bit better at night.
Am I missing anything obvious? Also, should I be hosting in IIS? On Security Stack Exchange someone said that I would be vulnerable to HTTP attacks if I did not put the code behind a web server, but I have not read this anywhere else.

Hosting Slim Framework Rest API in Windows

I created an API using the Slim framework, but the services are not accessible to the public as they are limited to localhost. How can I host the services on a live server so that they can be accessed from anywhere?
Please, can someone help me?
This question requires more detail in order to answer properly.
If you are hosting your API on a Windows server, then it is likely you have configured some kind of "WAMP" stack, correct? Or maybe you are serving PHP through IIS? These are important questions, because we need to know what port you have bound your web application server to, which leads us to the next question...
Where is the server that runs your application hosted, and to what port is the application bound?
Ultimately, a public, external IP will need to be either:
a. NAT'ed to the internal IP of your web server instance, or
b. Port-forwarded to the internal IP of the server running your web application
Still, we are making a lot of assumptions here because getting a web application "accessible from anywhere" will require different work depending on your environment.
Here is the most basic example:
You are at home, running this API on your Windows workstation, and would like to be able to hit it from a remote location.
Ensure Windows firewall allows inbound traffic to the port on which your application is running (probably port 80/HTTP, maybe 443/HTTPS).
Log into your ISP's router and configure port-forwarding to ensure inbound traffic on, say, port 80, is routed to the internal IP of the workstation running the API.
That's all there is to it.
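For the firewall step, a rough command-line equivalent would be the following (run from an elevated prompt; the rule name is arbitrary and the port number is only an example, so adjust it to whatever your application actually listens on):

    netsh advfirewall firewall add rule name="Web API inbound" dir=in action=allow protocol=TCP localport=80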
Keep in mind that this also assumes that your ISP even allows you to expose your own web server to the internet on port 80 (or 443). Also, since we know nothing about your environment, this is all pure conjecture. Please provide more information if you would like a real answer.
The most traditional way to host the Slim Framework would be through Apache. Install Apache and make sure you have the proper network settings to allow inbound connections, but more information about your setup may be needed for proper guidance.
http://httpd.apache.org/docs/2.4/platform/windows.html
When Apache is installed and working, you need to set up the URL rewrite rules; information on that can be found at http://docs.slimframework.com/routing/rewrite/.
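For reference, the usual Apache rewrite rules for Slim look roughly like this in an .htaccess file next to your front controller (this assumes index.php is the front controller, as in the Slim documentation):

    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [QSA,L]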
Your question is on the verge of being off topic (it probably is), so read up on which questions can and cannot be asked here on Stack Overflow. I hope I could help.

Delphi / WCF SOAP connectivity and Virtual Machine (VMWare) settings

I've got a working WCF service and a working Delphi client. On a normal PC, they work nicely. On a VM that is "bridged" they work nicely if I log onto the domain (but not if I log on locally to the VM as administrator). If the VM is NATed, the connection attempt times out.
I would love to hear people's thoughts on what could be making such a difference to whether the client can successfully connect to the WCF service. Bear in mind I'm connecting with basicHttpBinding with no security.
The service is set up to use the System account ("interact with desktop" is NOT checked), and it starts automatically. The service URI doesn't change, the port is open, and it can be telnetted to in all scenarios.
Any ideas or pointers?
Within the VM, open Internet Explorer and verify that you can view the WSDL of the WCF service. If you can't, then your issue is connectivity and has nothing to do with your Delphi code.
Group Policies and Enterprise Security solutions that swap certificates or require certificates to be registered (we're using a UTM called CyberRoam) make a difference.
Also when Virtual Machines join a domain, their ComputerNames are added to a list maintained by the Domain Controller. When the same Virtual Machine is "moved" or "copied", its ComputerName should be changed to avoid DNS resolution issues.
I'm not claiming this as the definitive answer, however it does explain the issues I noticed in this instance.

Any known issues resolving a hostname from an IIS hosted service

Summary:
Does anybody know if there are known issues or configuration gotchas with an IIS service connecting to an Azure based service?
Scenario:
I currently have a scenario that requires me to host two web-services, one in Azure, and one on a server running IIS. The IIS hosted service (a WCF service) connects to the Azure hosted service (actually the Azure storage API) in order to fetch certain information. This information is manipulated and returned to the client.
Client -> IIS Service -> Azure Storage Service
Issue:
I'm running into issues with the IIS service connecting to the Azure service: the hostname cannot be resolved. I'm using the Azure Storage client from my code, but have also tried this using the Azure REST API calls directly, and they do not work from IIS either. I captured the requests using Fiddler (on a different machine); they match the Azure REST API calls, as expected. These requests execute properly when made outside of IIS on the host machine. It is only when they are issued by the IIS service that they fail.
In my research other people have been running into this issue when there's a firewall problem, but since I can hit the service properly from the machine, that doesn't seem to fit the bill. My hunch is that there's a configuration issue I need to sort out in IIS, but I've failed to find anything useful with my searches.
Does anyone have any information on why this might be occurring (known bugs, gotchas, etc.)? Any workarounds? From an SOA perspective, this seems fairly critical to understand.
Any assistance anyone has would be helpful. Thank you.
Sounds like a proxy configuration issue. Check how your IIS server connects to the internet. If you are using some sort of proxy to get to the internet, that connection has to be configured correctly.
Specifically, if your proxy servers are Microsoft ISA server, or Microsoft Forefront TMG, then you need to check two things:
The ISA Server client or Forefront TMG client software is installed on the server.
The account used by the IIS application pool is a domain user. ISA Server/TMG are designed to work only with user accounts, not service accounts. An alternative workaround for this limitation is the "defaultProxy" configuration in web.config (a sketch is shown below); however, it only works for HTTP/HTTPS.
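A sketch of that defaultProxy workaround (the proxy address is a placeholder for your actual ISA/TMG proxy):

    <!-- web.config of the IIS application: routes its outbound HTTP/HTTPS calls through the proxy. -->
    <configuration>
      <system.net>
        <defaultProxy enabled="true" useDefaultCredentials="true">
          <proxy proxyaddress="http://proxy.example.com:8080" bypassonlocal="true" />
        </defaultProxy>
      </system.net>
    </configuration>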
If you use a different proxy server, then other issues might be involved; for example, the proxy might require authentication.

Relay WCF Service

This is more of an architectural and security question than anything else. I'm trying to determine if a suggested architecture is necessary. Let me explain my configuration.
We have a standard DMZ established that essentially has two firewalls. One that's external facing and the other that connects to the internal LAN. The following describes where each application tier is currently running.
Outside the firewall:
Silverlight Application
In the DMZ:
WCF Service (Business Logic & Data Access Layer)
Inside the LAN:
Database
I'm receiving input that the architecture is not correct. Specifically, it has been suggested that, because "a web server is easily hacked", we should place a relay server inside the DMZ that communicates with another WCF service inside the LAN, which will then communicate with the database. The external firewall is currently configured to only allow port 443 (HTTPS) to the WCF service. The internal firewall is configured to allow SQL connections from the WCF service in the DMZ.
Ignoring the obvious performance implications, I don't see the security benefit either. I'm going to reserve my judgement of this suggestion to avoid polluting the answers with my bias. Any input is appreciated.
Thanks,
Matt
I do think the remarks made are valid, and in such a case I would probably also try to use as many "defense-in-depth" layers as I could possibly come up with.
Plus, the amount of work to achieve this might be less than you're afraid of - if you're on .NET 4 (or can move to it).
You could use the new .NET 4 / WCF 4 routing service to do this quite easily. As an added benefit, you could expose an HTTPS endpoint to the outside world but use netTcpBinding (which is a lot faster) on the inside to handle internal communications; a rough configuration sketch follows the links below.
Check out how easy it is to set up a .NET 4 routing service:
What's new in WCF4 Routing Service - or: "Look ma: Just one service to talk to!"
Creating Routing Service using WCF 4.0, .NET Framework 4.0 and Visual Studio 2010 RC
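To give an idea of what such a routing service looks like, here is a configuration sketch (the addresses, binding names and the filter table are illustrative placeholders, not a drop-in config):

    <!-- DMZ router sketch: HTTPS facing the outside, netTcpBinding towards the internal service. -->
    <system.serviceModel>
      <services>
        <service name="System.ServiceModel.Routing.RoutingService"
                 behaviorConfiguration="routingBehavior">
          <host>
            <baseAddresses>
              <add baseAddress="https://dmz.example.com/router" />
            </baseAddresses>
          </host>
          <endpoint address=""
                    binding="basicHttpBinding"
                    bindingConfiguration="httpsBinding"
                    contract="System.ServiceModel.Routing.IRequestReplyRouter" />
        </service>
      </services>
      <bindings>
        <basicHttpBinding>
          <binding name="httpsBinding">
            <security mode="Transport" />
          </binding>
        </basicHttpBinding>
      </bindings>
      <behaviors>
        <serviceBehaviors>
          <behavior name="routingBehavior">
            <routing filterTableName="internalRoutes" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <client>
        <!-- The internal endpoint the router forwards to, over netTcpBinding. -->
        <endpoint name="internalService"
                  address="net.tcp://appserver.internal.example.com:9000/MyService"
                  binding="netTcpBinding"
                  contract="*" />
      </client>
      <routing>
        <filters>
          <filter name="matchAll" filterType="MatchAll" />
        </filters>
        <filterTables>
          <filterTable name="internalRoutes">
            <add filterName="matchAll" endpointName="internalService" />
          </filterTable>
        </filterTables>
      </routing>
    </system.serviceModel>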