NServiceBus and one-way communication - nservicebus

I would like to use NServiceBus in my application. Unfortunately, there are some security limitations in my infrastructure. I have the following scenario:
I have a web server, a proxy server and an application server. The main problem is that communication from the web server to the proxy server, and from the proxy server to the application server, is blocked by a firewall. Communication in the reverse direction is allowed, so my application server can reach the proxy server and my proxy server can reach the web server. Is there any way to support this scenario with NServiceBus (e.g. with a Gateway that periodically checks the proxy queue and the web server queue), or do I have to write my own solution?

Related

Fiddler Running on Windows Server 2008

I have an ARM embedded processor that talks to a .NET WCF (SOAP) web service application. The ARM device is remotely located and the web service is hosted on a WS2k8 cloud server. I am having some protocol issues with the ARM code and would like to run Fiddler on my WS2k8 machine to observe the SOAP exchange between the embedded device and the web service application. I installed Fiddler Web Debugger v4.4.8 on the server, but it does not capture any HTTP requests. I know the ARM device is talking to my web services, as it responds to several good SOAP exchanges. Does anyone know how to set up Fiddler to work in the configuration I have described?
Best Regards,
Steve Mansfield
Fiddler is a proxy; it only captures requests that are sent to it. If your ARM device supports a proxy, point its proxy settings at the Fiddler endpoint on the Windows Server, port 8888. Also tick "Allow remote computers to connect" in the Tools > Fiddler Options > Connections tab, and restart Fiddler.
If the client doesn't support configuring a proxy, you need to Use Fiddler as a Reverse Proxy.
The way I fixed it was to configure the client application to use a proxy (http://127.0.0.1:8888). The calls are now redirected to Fiddler, and Fiddler calls the services, so I can see the traffic. Hope it helps someone.
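If the calling application is a .NET client you control, the fix described above can also be done in code by setting the process-wide default proxy. A minimal sketch, assuming Fiddler is listening on its default port 8888 on the same machine:

Imports System.Net

Module FiddlerProxySketch
    Sub Main()
        ' Route all outgoing HTTP calls made by this process through Fiddler.
        ' Use the server's address instead of 127.0.0.1 if Fiddler runs on another
        ' machine and remote connections are allowed in Fiddler's options.
        WebRequest.DefaultWebProxy = New WebProxy("http://127.0.0.1:8888", False)

        ' Calls made through WebRequest, or through a WCF basicHttpBinding client
        ' created after this point (UseDefaultWebProxy is True by default),
        ' will now show up in Fiddler.
    End Sub
End Module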

How an HTTPS request is handled by an IIS server

I am trying to understand the workings of HTTP, IIS and SQL Server.
I have an IIS 7 server in my environment which interacts with a SQL database server.
The architecture is
Apache ----> IIS ----> SQL Server
Apache is a reverse proxy server which forwards the HTTP requests to the IIS server, and from there the IIS server interacts with the SQL database, connecting through different application pools for different applications.
My query is: if a request has been forwarded by the Apache server and has reached IIS, and after that the network between Apache and IIS starts dropping packets:
Will that have any effect on the performance of IIS and the database server?
Will there be any long-running queries in the IIS worker process? My concern is what happens after the queries have executed successfully in the database server: since the network between IIS and Apache is broken, how can the results be forwarded on to Apache and further to the end user?
Will these queries keep holding resources until their results are forwarded from IIS to Apache? They have completed their work successfully, but because of the network issue the responses cannot be forwarded further. Or are such requests stacked up somewhere by IIS to free up resources for the upcoming requests?
Once a request has reached IIS, it will carry out the required actions and format the reply. It will try to send the reply to the requestor, but if the link has gone it will be unable to. It will then abandon the request, and the resources it is holding for that request will be freed.
To get the data it had for you, Apache has to repeat the request.

Why aren't HTTP Headers from Oracle Access Manager passing through to WebSphere from IHS?

I have an IBM HTTP Server set up as a reverse proxy for a WebSphere application server. We use Oracle Access Manager for user authentication. There is also an Oracle WebGate running on the IHS server to intercept the requests and check them against the Oracle policy.
I can see the authentication going through and Oracle passes back the value needed in an HTTP Header, OAM_REMOTE_USER. The problem is, at some point in the process, that header is not passed on to the WebSphere Application Server.
The Oracle WebGate is monitoring port 8443, but I am not sure whether that refers to the web server or the app server, since both are on the same physical machine and have the same server name. If I just create a virtual host on the web server for 8443 and do not create the port on the app server, the headers go through correctly. The problem with this is that I have to use PreserveProxyHeader for the request to go through the WebGate 8443 port, so after authentication it comes back looking for my application on port 8443, which does not exist on the web server. The only way it can find my application on port 8443 is if I also add that port, which contains the application, on the app server.
I guess the main thing I am struggling to understand is whether I need to define the port WebGate monitors on both the HTTP server and the app server, or only on the HTTP server side. It seems like no matter what I do, at some point the request gets redirected from the HTTP server to the app server and any OAM HTTP headers that were there are stripped out. I've managed to stop them from being dropped by removing the 8443 port from the app server, but then my app can no longer be mapped to it.
This is WebSphere App Server 8.0 and IBM HTTP Server 8.0.0.5.
In the administrative console, click Servers > Server Types > Web servers > web_server_name > Plug-in properties > Request routing. Disable "Remove special headers". Regenerate your plugin configuration XML, and redistribute it.

WCF callbacks with load-balanced servers

I have a chat application developed using a WCF callback contract. It uses the netTcp binding for the client-server communication.
The client is a Windows Forms application that will run on the client machine (an XP or Windows 8 machine).
The WCF service is hosted as a Windows service on the server machine. I maintain a client session list in the service; it stores the details of each client connected to the server, and the list is a static variable.
The workflow is: whenever a client connects to the server using the Connect operation, the client's details are added to the client session list; this list is used by the server to send messages back to the client whenever needed.
Everything works fine in a single-server environment. Now I want to know how to handle this in a load-balancing scenario: I have two server machines, and only one server is active at a time. If server 1 fails, server 2 becomes active. In this scenario, how can I share my client sessions between the two servers and keep things working as usual without disturbing my clients?
One option is to use a session state store provider, which will provide the session state to both instances of the server service.
As MSDN states (http://msdn.microsoft.com/en-us/library/z414bbk9(v=vs.100).aspx): for web farm configurations, it can be stored out of process using either the ASP.NET State service or a Microsoft SQL Server database.
The ASP.NET State service is quite well documented: http://msdn.microsoft.com/en-us/library/ms178581(v=vs.100).aspx
As for the database solution... well... you have to analyse the added overhead due to database access.
Also, if you are hosting the service using IIS, you could consider using Out-of-Process session state (http://technet.microsoft.com/en-us/library/cc754032%28v=ws.10%29.aspx).
These are just some ideas. You can look into other web farm synchronization techniques made available for Microsoft technologies.
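Not from the answer above, just a rough sketch of the out-of-process idea applied to the client session list: the details each client supplies on Connect could be written to a shared SQL Server table that the standby server reads when it becomes active. The ClientSessions table name and its columns are assumptions for illustration, and note that the netTcp callback channels themselves cannot be persisted, so clients would still have to reconnect to the newly active server.

Imports System.Collections.Generic
Imports System.Data.SqlClient

Public Class SharedClientSessionStore
    Private ReadOnly _connectionString As String

    Public Sub New(connectionString As String)
        _connectionString = connectionString
    End Sub

    ' Record (or refresh) a client's details when it calls the Connect operation.
    Public Sub SaveSession(clientId As String, clientEndpoint As String)
        Using conn As New SqlConnection(_connectionString)
            conn.Open()
            Dim sql As String =
                "IF EXISTS (SELECT 1 FROM ClientSessions WHERE ClientId = @id) " &
                "  UPDATE ClientSessions SET Endpoint = @ep, LastSeen = GETUTCDATE() WHERE ClientId = @id " &
                "ELSE " &
                "  INSERT INTO ClientSessions (ClientId, Endpoint, LastSeen) VALUES (@id, @ep, GETUTCDATE())"
            Using cmd As New SqlCommand(sql, conn)
                cmd.Parameters.AddWithValue("@id", clientId)
                cmd.Parameters.AddWithValue("@ep", clientEndpoint)
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub

    ' Load all known client details, e.g. when the standby server becomes active.
    Public Function LoadSessions() As Dictionary(Of String, String)
        Dim sessions As New Dictionary(Of String, String)()
        Using conn As New SqlConnection(_connectionString)
            conn.Open()
            Using cmd As New SqlCommand("SELECT ClientId, Endpoint FROM ClientSessions", conn)
                Using reader As SqlDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        sessions.Add(reader.GetString(0), reader.GetString(1))
                    End While
                End Using
            End Using
        End Using
        Return sessions
    End Function
End Class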

wsDualHttpBinding ClientBaseAddress & firewalls

I'm planning on using wsDualHttpBinding for a WCF service with callbacks. The clients will be Windows Forms applications communicating with the service over the internet. Obviously I have no control over the firewall on the client side, so I'm wondering what the proper way is to set the ClientBaseAddress on the client side.
Right now, in my initial testing, I'm running the service and the client on the same PC and I am setting the binding as follows:
Dim binding As System.ServiceModel.WSDualHttpBinding = Struct.Endpoint.Binding
binding.ClientBaseAddress = New Uri("http://localhost:6667")
But I have a feeling this won't work when deployed over the internet, because "localhost" won't translate to the machine address (never mind NAT translation) and that port might be blocked by the client's firewall.
What is the proper way to handle the base address for callbacks to a remote client?
Someone told me that if I do not specify ClientBaseAddress, the WCF infrastructure creates a default client base address on port 80, which is used for the incoming connections from the service. Since port 80 is usually open in firewalls, things should just work.
So tell me: when the WinForms WCF client app runs, how can I open my custom port such as 6667, and what library or approach should I use so that the response gets from the client-side router to the PC without the firewall blocking anything? Please discuss this issue with a real-life scenario: how do people handle this kind of situation in practice? Thanks.
The proper way is to use TCP transport instead of HTTP transport. Duplex communication over HTTP requires two HTTP connections - one opened from the client to the server (that's OK) and a second opened from the server to the client. This can work only in scenarios where you have full control over both ends. There are simply too many complications which cannot be avoided just by guessing what address to use, for example:
A local Windows or third-party firewall has to be configured.
Permission for the application to listen - listening on HTTP is not allowed by default unless UAC is turned off or the application is running as admin. You must allow listening on the port through netsh or httpcfg (Windows XP and 2003), which again requires admin permissions.
The port can already be in use by another application. In the case of 80 it can be used by any local web server - for example IIS.
Private networks and network devices - if your client machine is behind NAT, port forwarding must be configured, but what if you have two machines running your application on the same private network? You cannot forward the same incoming port to two machines.
All these issues can mostly be avoided only when you have control over the whole infrastructure. That is the reason why HTTP duplex communication is useful mostly for intranet scenarios, and why, for example, Silverlight offers another implementation where the second connection is not created and the Silverlight client instead polls the server continuously to check whether any callback is available.
TCP transport requires only a single connection from the client to the server because the TCP protocol is natively duplex, so the server can call back to the client through the same connection. When you deploy a public service you usually have control over the infrastructure on the server side, so you can make the necessary configuration changes to make it work.
I think this also answers your previous question.
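Not part of the original answer, but a minimal sketch of what the TCP approach might look like on the client side. The contract names (IChatService, IChatCallback), the callback handler class and the net.tcp address are made up for illustration:

Imports System
Imports System.ServiceModel

<ServiceContract(CallbackContract:=GetType(IChatCallback))>
Public Interface IChatService
    <OperationContract()>
    Sub Connect(userName As String)
End Interface

Public Interface IChatCallback
    <OperationContract(IsOneWay:=True)>
    Sub ReceiveMessage(message As String)
End Interface

Public Class ChatCallbackHandler
    Implements IChatCallback

    Public Sub ReceiveMessage(message As String) Implements IChatCallback.ReceiveMessage
        Console.WriteLine("Server says: " & message)
    End Sub
End Class

Module DuplexTcpClientSketch
    Sub Main()
        ' NetTcpBinding is duplex over the single client-initiated connection,
        ' so no ClientBaseAddress and no inbound port on the client are needed.
        Dim binding As New NetTcpBinding()
        Dim context As New InstanceContext(New ChatCallbackHandler())
        Dim factory As New DuplexChannelFactory(Of IChatService)(
            context, binding,
            New EndpointAddress("net.tcp://server.example.com:9000/ChatService"))
        Dim proxy As IChatService = factory.CreateChannel()
        proxy.Connect("steve")   ' the service can now call back over the same connection
    End Sub
End Module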