I have a WCF service that communicates over a network via HTTP (on a specific port, of course). Whenever this service is installed on computers with the "application" Hao123 (and also other kinds such as TermBlazer, Ad-Search, etc.), it does not work correctly, because requests made to it never even reach the service contracts.
What can I do to protect my application from this type of malware? I believe it would be unfeasible to create a "removal tool" for every kind of pest I might find on the user's machine. Not installing my service is not an option.
Related
Environment:
Consider the following production environment setup for a web application:
End user --Internet--> web server in DMZ --Firewall--> WCF hosted on app server --> DB Server
Constraint:
Also consider that we cannot change anything from the infrastructure point of view; for example, we cannot open ports, change firewall settings, etc.
Problem:
We want to expose the WCF, which is hosted on the app server, to external clients. We are trying to solve this as follows:
End user --Internet--> Router WCF in DMZ --Firewall--> WCF hosted on app server --> DB Server
Please note that we cannot establish a DB connection from the DMZ environment, where the WCF needs to be hosted so that external clients can consume it. We have developed a "Router WCF" which passes all messages through to the internal WCF and vice versa.
This solution adds unnecessary overhead from serializing and deserializing data. There must be a better, more proper way of doing this. We look to the community for guidance. Thank you.
For a DMZ, the literature tells you: always create an intermediate layer. This means another machine is the internet-facing point of connection, and it proxies the connection back to the WCF host.
That machine is the web server you seem to mention: it is "dumb", holds no data, and (to be a proper DMZ) has a firewall between it and all the machines it serves (the WCF host and the others) that permits only the IP:port combinations actually used by those machines.
In this scenario, Apache on the public web server with a URL-rewrite rule (e.g. if the path is /x/y, send it to servera.internal.com:9900; if it is /x/z, send it to serverb.internal.com:9901; etc.) is usually enough, but there are plenty of other solutions, of course.
It seems you are doing exactly this, so why do you say it is not the proper solution?
DMZs can seem a bit dated as a protection mechanism (I agree), but remember that they date from when servers like your WCF machine had dozens of open ports, and you wanted to shrink the giant attack surface of random ports on web-facing machines. Nowadays everything can work with a couple of open ports, so it can seem "silly" to do all of this just to forward a TCP port. But it is still valuable: for example, if the servers behind the web server in the DMZ have no internet access, then even when the WCF host is compromised, the attacker cannot use his own reverse shell to deploy what is nowadays called an APT (yesterday's backdoor). The attacker "won't see" his own machine from the WCF host, since only the DMZ provides the connection to the external world.
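If the intermediary has to stay in WCF (your "Router WCF"), one option worth noting: WCF 4 ships a generic pass-through router, System.ServiceModel.Routing.RoutingService, which forwards messages without knowing the typed contracts, so you avoid the hand-written serialize/deserialize hop. A minimal self-hosted sketch, with hypothetical addresses for the DMZ machine and the internal service:

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;
    using System.ServiceModel.Routing;

    class DmzRouter
    {
        static void Main()
        {
            // Host the built-in RoutingService on the DMZ machine.
            var host = new ServiceHost(typeof(RoutingService),
                new Uri("http://dmz-host:8000/router"));            // hypothetical address

            host.AddServiceEndpoint(typeof(IRequestReplyRouter),
                new BasicHttpBinding(), string.Empty);

            // Client endpoint pointing at the internal service behind the firewall.
            var internalService = new ServiceEndpoint(
                ContractDescription.GetContract(typeof(IRequestReplyRouter)),
                new BasicHttpBinding(),
                new EndpointAddress("http://appserver.internal:9000/service")); // hypothetical

            // Forward every incoming message to the internal endpoint.
            var config = new RoutingConfiguration();
            config.FilterTable.Add(new MatchAllMessageFilter(),
                new List<ServiceEndpoint> { internalService });
            host.Description.Behaviors.Add(new RoutingBehavior(config));

            host.Open();
            Console.WriteLine("Router listening; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }

Because the router works on raw messages, it does not need your service contracts deployed in the DMZ at all.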
I have a slightly complicated setup that, of course, works fine in XP, but chokes on Windows 7. It may seem like madness, but it made sense at the time!
I have a WPF application that starts up and then launches another application that communicates with an external device. After launching, it establishes communication with the new process using WCF (hosted by the new process) via a named pipe (net.pipe). This seems to work fine on either OS.
I wanted to make some of the functionality of the WPF application externally available to a command line program, so I set up another WCF service, this time hosted by the WPF application and again exposed it via named pipes. Again, this seems to work.
Next, I wanted to make the functionality of the WPF application available via the web. Now, it's important that the WPF application be runnable from a regular user account, so I thought the best way to make this work on Windows 7 would be to create a windows service that would provide the web service part and have it communicate back to the WPF application via the same named pipe that works fine for the command line. I implemented this and it runs fine on XP, but it chokes on Windows 7. The problem seems to be with trying to establish the named pipe connection between the windows service and the WPF application.
If I run the WPF app as an administrator, it works fine. So the problem seems to be that the account the Windows service runs under can't communicate via named pipes with a WCF service hosted in a regular user account. Is there a way to make this work? A WCF service running in a regular user account can apparently communicate over named pipes with another app running in the same account, but it seems it can't do the same thing with a different account.
Oddly, the reverse seems to work. The windows service does, in fact, also expose a service with a named pipe binding (it's used as an activation function since the service is running all the time). I can connect from the WPF app to this service without any problems.
My knowledge of security is somewhat limited. Can anybody shine a light on what's going on?
This question has been asked several times previously on SO. For example, see Connecting via named pipe from windows service to desktop app
The problem is that your user session applications don't possess the SeCreateGlobalPrivilege security privilege necessary to allow them to create objects in the global kernel namespace visible to other sessions, but only in the local namespace which is only visible within the session. Services, on the other hand, which run with this privilege by default, can do so.
It is not the named pipe object itself which is constrained to the local namespace in this way, but another named kernel object, a shared memory section, on which the WCF named pipe binding relies in order to publish to its clients the actual name of the pipe, which is a GUID which changes each time the service is started.
You can get round this constraint by reversing the roles - make the windows service application the WCF Service, to which your user session apps connect. The windows service has no problem publishing its service to your session. And connecting things up this way round makes more sense because the windows service is always running, whereas your session and its apps come and go as you log in and out. You'll want to define the service with a duplex contract, so that once the connection is established, the essential flow of communication over the WCF service can still happen in the same direction you originally intended.
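To make the role reversal concrete, here is a minimal sketch of such a duplex contract (all names here are hypothetical): the Windows service hosts it, the WPF app connects and registers a callback, and from then on the service can push calls back to the app in the direction you originally intended.

    using System;
    using System.ServiceModel;

    // Hosted by the Windows service; the user-session apps are the clients.
    [ServiceContract(CallbackContract = typeof(IAppCallback))]
    public interface IBridge
    {
        [OperationContract]
        void Register();   // the WPF app calls this once after connecting
    }

    // Implemented by the WPF app; the service calls back through it.
    public interface IAppCallback
    {
        [OperationContract(IsOneWay = true)]
        void DoWork(string request);
    }

    public static class BridgeClient
    {
        // Called from the WPF app to open the duplex channel.
        public static IBridge Connect(IAppCallback callback)
        {
            var factory = new DuplexChannelFactory<IBridge>(
                new InstanceContext(callback),
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/Bridge")); // hypothetical
            IBridge proxy = factory.CreateChannel();
            proxy.Register();
            return proxy;
        }
    }

On the service side, the Register implementation would grab OperationContext.Current.GetCallbackChannel<IAppCallback>() and keep it for later calls back into the session.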
The applications (WPF/Console) are creating locally scoped named pipes (this happens by default when they are unable to create globally scoped pipes). My guess is that they can communicate with each other because they can see each other's named pipes, since they run under the same account.
The windows service has higher privileges and can therefore create a globally scoped named pipe for the client applications to see.
You can check out a discussion on Christian Weyer's Blog.
I have an app with a self-hosted WCF service.
My WCF service gets published under the URI "net.tcp://localhost:8004/DocumentService". When I run the service on a remote machine and try to discover the service with the new .NET 4 class DiscoveryClient, the found services all have the URI "net.tcp://localhost:8004/DocumentService" too, without any information about the actual machine where the service is hosted.
Obviously this is useless if I want to access the service on the remote machine. But I can't find any reference to the actual remote machine (IP address or server name) in the arguments passed to FindProgressChanged.
Is there a way to get the information about the remote machine or do I have to publish my service with the machine name of the remote machine? Or is DiscoveryClient just broken?
I hope this makes sense.
I spent a lot of time investigating this problem. Building base addresses in code was not acceptable for me, as it implies hardcoding the transport scheme and port (the latter, of course, can be stored in a separate config section, but then why not just use the existing section?). I wanted the ability to simply configure the base address in config as usual. It turns out that a base address like <add baseAddress="net.tcp://*:8731/"/> works perfectly. I think the same is true for programmatic configuration.
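For what it's worth, once the service announces a reachable base address, a plain client-side probe should show it. A minimal sketch, with IDocumentService standing in for the actual contract:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Discovery;

    // Stub standing in for the real service contract (hypothetical).
    [ServiceContract]
    public interface IDocumentService { }

    class DiscoveryProbe
    {
        static void Main()
        {
            // Probe the local network over UDP multicast for services
            // implementing IDocumentService.
            var client = new DiscoveryClient(new UdpDiscoveryEndpoint());
            FindResponse response = client.Find(new FindCriteria(typeof(IDocumentService)));

            foreach (EndpointDiscoveryMetadata endpoint in response.Endpoints)
            {
                // With a wildcard base address on the service side, this
                // should print the host's real name instead of "localhost".
                Console.WriteLine(endpoint.Address.Uri);
            }
            client.Close();
        }
    }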
Similar to the Visual Studio development web server (Cassini) limitation that it only serves on localhost, I have a WCF Service implementation that is only needed on localhost.
I wouldn't mind other machines having access, except that the Windows Firewall prompts to allow the program to listen on the externally-facing NIC. Since this is only needed internally, I would rather restrict the WCF Server-side configuration so that it doesn't trip the firewall detector.
Is binding.HostNameComparisonMode = HostNameComparisonMode.Exact the right solution? I don't see how this is enough.
====
Like Cassini, this Service implementation is a stand-in for something else which DOES require network communication. The client can be configured to connect to the real server or the fake implementation running on localhost.
I think that you are approaching it the wrong way. You should be using the named pipe binding, which should support whatever message exchange pattern you are using (it supports request-response, as well as the same concurrency and session state modes that WS supports).
From the section of MSDN titled "Choosing a Transport" (emphasis mine):
When to Use the Named Pipe Transport
A named pipe is an object in the Windows operating system kernel, such as a section of shared memory that processes can use for communication. A named pipe has a name, and can be used for one-way or duplex communication between processes on a single machine.
When communication is required between different WCF applications on a single computer, and you want to prevent any communication from another machine, then use the named pipes transport. An additional restriction is that processes running from Windows Remote Desktop may be restricted to the same Windows Remote Desktop session unless they have elevated privileges.
This satisfies your exact requirements and should be no more than a configuration change.
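A minimal self-hosted sketch of that change (service and contract names are hypothetical); since net.pipe never opens a network socket, there is nothing for the Windows Firewall to prompt about:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IStubService
    {
        [OperationContract]
        string Ping(string message);
    }

    public class StubService : IStubService
    {
        public string Ping(string message) { return "pong: " + message; }
    }

    class LocalOnlyHost
    {
        static void Main()
        {
            // Named pipes are visible only on the local machine, so there is
            // no externally-facing listener for the firewall to flag.
            var host = new ServiceHost(typeof(StubService),
                new Uri("net.pipe://localhost/StubService"));
            host.AddServiceEndpoint(typeof(IStubService),
                new NetNamedPipeBinding(), string.Empty);
            host.Open();
            Console.WriteLine("Listening on net.pipe://localhost/StubService");
            Console.ReadLine();
            host.Close();
        }
    }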
It depends on how you are hosting it. If you are in IIS7 or WAS, then WCF uses IIS's mode of matching. Otherwise, if you use HostNameComparisonMode.Exact, then yes, the host name will always be a critical factor in matching. If the host name does not match, dispatch will generally fail.
It should be noted that Exact is not 100% perfectly exact: it still allows some variation in the host name. If you have both a NetBIOS host name and a full DNS name, matching will still occur, as WCF treats the two as one and the same.
System.ServiceModel.BasicHttpBinding.HostNameComparisonMode
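For reference, a fragment showing the mode being set programmatically on a binding (self-hosted HTTP assumed):

    using System.ServiceModel;

    static class BindingSetup
    {
        public static BasicHttpBinding CreateExactBinding()
        {
            // Exact: the incoming request's host name must match the
            // endpoint's host name (NetBIOS and full DNS names still unify).
            return new BasicHttpBinding
            {
                HostNameComparisonMode = HostNameComparisonMode.Exact
            };
        }
    }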
Kindly help me in architecting a solution which is required for my ongoing project.
I have developed some WCF services hosted as Windows services, and they are working fine so far. Now I have been asked to develop a master WCF service that is intelligent enough to monitor all the other WCF services for possible corruption/errors and can repair and restart them.
Thanks in advance.
As we have written a custom host, and it took us years to make it a real application server, I will share some of the challenges we had. Creating a custom host that manages WCF services as an NT service is a very challenging task if you want to manage all the details and treat the NT service as a real service host. The challenges range from managing multiple AppDomains (one for each service), managing the statuses of the services, startup times, and deployments from the IDE, to the worst of all: activation. Have you considered how to implement that? Without it, all of your services will be active and in memory at all times. IIS and AppFabric do this very well and, trust me, it is not easy to implement. The other challenging part was a UI to manage this host, and of course a UI that can manage multiple hosts (NT services running on different boxes). Do you need a discovery proxy implementation? And finally, what if you want to manage services running in your custom host, IIS, and AppFabric all in the same way?
Think about this before doing such an implementation, because the scope may creep on you as you go.
I do something similar here.
Create a Dictionary<key, ApplicationDomain> collection in your main program.
Key: something unique for each application domain, such as a Guid or a System.Type.
The ApplicationDomain class exposes an internal property to access your AppDomain proxy (the one that inherits MarshalByRefObject).
Load your WCF host into the main program, so you have access to that collection.
Every time your service is accessed, take that key, reach your proxy, and do whatever you need within your service host.
Key point: your master service must have access to all your service hosts (a rough sketch follows below).
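A rough sketch of that recipe, assuming a ServiceHostProxy that derives from MarshalByRefObject and opens the real ServiceHost inside its own AppDomain (all names hypothetical):

    using System;
    using System.Collections.Generic;

    // Lives inside each child AppDomain; wraps the real ServiceHost.
    public class ServiceHostProxy : MarshalByRefObject
    {
        public void Start() { /* create and Open() the ServiceHost here */ }
        public void Stop()  { /* Close() or Abort() the ServiceHost here */ }
    }

    public class MasterHost
    {
        private readonly Dictionary<Guid, AppDomain> domains =
            new Dictionary<Guid, AppDomain>();
        private readonly Dictionary<Guid, ServiceHostProxy> proxies =
            new Dictionary<Guid, ServiceHostProxy>();

        public Guid Load()
        {
            Guid key = Guid.NewGuid();
            // One AppDomain per service, so a faulted service can be torn
            // down and restarted without taking the master host with it.
            AppDomain domain = AppDomain.CreateDomain("svc-" + key);
            var proxy = (ServiceHostProxy)domain.CreateInstanceAndUnwrap(
                typeof(ServiceHostProxy).Assembly.FullName,
                typeof(ServiceHostProxy).FullName);
            domains[key] = domain;
            proxies[key] = proxy;
            proxy.Start();
            return key;
        }

        public Guid Restart(Guid key)
        {
            // "Repair" in the simplest sense: drop the whole AppDomain and
            // stand the service up again in a fresh one.
            proxies[key].Stop();
            AppDomain.Unload(domains[key]);
            domains.Remove(key);
            proxies.Remove(key);
            return Load();
        }
    }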