We have developed a cloud-based POS system running at more than 75 outlets, and all transaction data needs to be posted every 5 minutes to the relevant database at our head office (each POS system connects to its own local database). For this we use a Windows application that connects directly to our head-office database through a VPN. But recently one of our clients raised a concern that our head-office database is exposed: anyone who sneaks onto the network (inside the VPN, obviously) could see all CRUD operations, and by that route could do anything to the head-office database.
So we have decided to go for a WCF solution with encrypted JSON calls. If we use a web service, can we eliminate this issue completely? Is it best practice? Please advise.
There are several aspects of security to consider in your situation. If your current network topology doesn't limit the VPN clients' visibility to just your database server, then yes, I agree that publishing a web service endpoint over HTTPS would improve security by blocking clients' access to other servers on your internal network. However, the web service solution introduces some other considerations. Will you use firewall rules to limit which clients can access the web service? How are you authenticating clients, and how are you protecting those credentials from unauthorized users?
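To make that concrete, here is a minimal sketch of what such an endpoint could look like as a self-hosted WCF service with transport (TLS) security. All names and addresses are hypothetical, and the HTTPS certificate binding on the host is assumed to be configured separately:

    using System;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class TransactionBatch
    {
        [DataMember] public string OutletId { get; set; }
        [DataMember] public string TransactionsJson { get; set; }
    }

    [ServiceContract]
    public interface ITransactionUpload
    {
        [OperationContract]
        void PostBatch(TransactionBatch batch);
    }

    public class TransactionUpload : ITransactionUpload
    {
        public void PostBatch(TransactionBatch batch)
        {
            // Validate the batch and write it to the head-office database here.
            // Outlets never hold SQL credentials, so a compromised outlet can
            // only call this one narrow operation.
        }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(TransactionUpload),
                new Uri("https://headoffice.example.com/pos")); // hypothetical address
            host.AddServiceEndpoint(typeof(ITransactionUpload),
                new WSHttpBinding(SecurityMode.Transport), ""); // TLS on the wire
            host.Open();
            Console.WriteLine("Listening. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }

With transport security in place you would typically also require client credentials (a certificate or a username per outlet) so that only known outlets can post batches.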
I have a Web App (Azure App Service) and I have an Azure SQL Database that this Web App talks to. I have two questions regarding communication between the two.
When connecting from the Web App to the database (using the connection string), does the communication go out to the internet and then back into Azure, or does Azure know to keep the traffic local within Azure?
I have been looking into VNet service endpoints as a possible way to improve the speed of communication between the two. It is said that when connecting to a SQL Database from a VM on a VNet with service endpoints enabled, Azure knows to keep the traffic internal to the Azure network rather than going out to the internet. Is the same true for Azure App Services?
Is it possible to keep traffic between an App Service and SQL Database internal to Azure? If so, how do I go about doing this? Any guidance on this is greatly appreciated.
1. Azure knows to keep it local on the "Azure backbone" (per the Azure docs); it doesn't go out to the public internet.
2. Yes.
3. Yes, it is already internal to the "Azure backbone".
Having said that... networks are really complicated.
As I understand it, the main benefit of a VNet is that you can define your own network and add things to it like firewalls, security groups, subnets, and peering between networks. It also helps when setting up a hybrid network, i.e. connecting Azure resources to an on-premises network: when you can set up the same kinds of structures as on-premises, it's easier to make it 'transparently' part of the on-premises network. Lastly (rereading the docs), you can remove any incoming public-IP firewall rules; those addresses are on the "Azure backbone", but they are also "public internet" addresses.
There may be a performance improvement if the App Service and Azure SQL are on the same V-Net.
Azure SQL service endpoints are a bit mysterious. They "connect" to the VNet, but you still connect to a public address; they don't actually take up a local IP address.
Depending on what you are really doing, you might want to look into a private endpoint, which actually assigns a private IP to your Azure SQL.
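One way to see the difference from inside your app is to check what the database host name actually resolves to; a small sketch, with a hypothetical server name:

    using System;
    using System.Net;

    class Program
    {
        static void Main()
        {
            // With only a service endpoint, the public name still resolves to a
            // public IP (traffic is routed over the Azure backbone). With a
            // private endpoint plus the privatelink DNS zone, it resolves to a
            // private address in your VNet's range.
            foreach (var addr in Dns.GetHostAddresses("myserver.database.windows.net"))
                Console.WriteLine(addr);
        }
    }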
Yes, communication between an Azure App Service and an Azure SQL Database stays local to the Azure network (the "Azure backbone") and does not go out to the public internet.
I have a SaaS web application running on Azure as a Web App that uses on-premises resources of the customer. To access their servers I use Hybrid Connections: customers link their server to my application using the Hybrid Connection Manager and an endpoint connection string.
Recently Microsoft updated Hybrid Connections, and now there is a limit on how many connections a Web App can use at the same time. This is quite a problem for my SaaS architecture. You also need to log in to the Hybrid Connection Manager with your Azure credentials (which is not something you can hand over to your customers).
I was wondering if there is an alternative to Hybrid Connections, or a way to still have as many connections as you want (without using the old BizTalk-service-based connection)?
In short: no, Azure Web Apps now have the hybrid connection limits.
According to this article, the number of hybrid connections usable in the plan depends on the service plan tier.
The only hybrid connections you can now make are the new hybrid connections. They are only available in the Basic, Standard, Premium, and Isolated pricing SKUs, with limits tied to the pricing plan:
Basic: 5
Standard: 25
Premium: 200
Isolated: 200
While there is a limit on the number of hybrid connection endpoints that can be used in an App Service plan, each hybrid connection can be used across any number of apps in that plan. That is to say, if I used 1 hybrid connection in 5 separate apps in my App Service plan, that still counts as just 1 hybrid connection.
Environment:
Consider the following production environment setup for a web application:
End user --Internet--> web server in DMZ --Firewall--> WCF hosted on app server --> DB Server
Constraint:
Also consider that we cannot change anything from the infrastructure point of view, for example opening ports or changing any firewall settings.
Problem:
We want to expose the WCF, which is hosted on the app server, to external clients. We are trying to solve this as follows:
End user --Internet--> Router WCF in DMZ --Firewall--> WCF hosted on app server --> DB Server
Please note that we cannot establish a database connection from the DMZ, where the WCF service needs to be hosted so that external clients can consume it. We have developed a "Router WCF" that passes all messages through to the internal WCF service and vice versa.
This solution adds the unnecessary overhead of serializing and deserializing data. There must be a better, proper way of doing this. We look forward to the community's guidance. Thank you.
For a DMZ, the literature tells you: always create an intermediate layer. This means another machine faces the internet as the point of connection and proxies the traffic back to the WCF service.
That machine is the web server you mention: it is deliberately dumb, holds no data, and (to be a proper DMZ) has a firewall between it and all the machines it serves (the WCF host and the others) that permits only the IP:port combinations actually used on those machines.
In this scenario, Apache on the public web server with URL-rewrite rules (e.g. if the path is /x/y, send it to servera.internal.com:9900; if it is /x/z, send it to serverb.internal.com:9901, and so on) is usually enough, but there are plenty of other solutions, of course.
It seems you are doing exactly this, so why do you say it is not the proper solution?
DMZs can seem a bit dated as a protection mechanism (I agree), but think back to when servers like your WCF machine had dozens of ports open and you wanted to lower the risk of random ports on web-facing machines: a giant attack surface. Nowadays everything can work with a couple of open ports, so it can seem "silly" to do all of this just to forward a TCP port. But it is still valuable because, for example, if the servers behind the DMZ web server have no internet access, then even when the WCF host is compromised, the attacker cannot use his own reverse shell to deploy what is nowadays called an APT (yesterday's backdoor). The attacker "won't see" his own machine from the WCF host, since only the DMZ machine has a connection to the external world.
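As an aside on the serialization-overhead complaint: WCF ships a generic pass-through router, System.ServiceModel.Routing.RoutingService, which forwards messages without a service-specific contract, so you don't have to maintain hand-written forwarding code. A rough sketch of hosting it in the DMZ, with all addresses hypothetical:

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;
    using System.ServiceModel.Routing;

    class Program
    {
        static void Main()
        {
            // Router host in the DMZ.
            var host = new ServiceHost(typeof(RoutingService),
                new Uri("https://dmz.example.com/router"));
            host.AddServiceEndpoint(typeof(IRequestReplyRouter),
                new WSHttpBinding(SecurityMode.Transport), "");

            // Every message is forwarded to the internal WCF service
            // behind the firewall.
            var internalEndpoint = new ServiceEndpoint(
                ContractDescription.GetContract(typeof(IRequestReplyRouter)),
                new WSHttpBinding(),
                new EndpointAddress("http://appserver.internal/Service.svc"));

            var config = new RoutingConfiguration();
            config.FilterTable.Add(new MatchAllMessageFilter(),
                new List<ServiceEndpoint> { internalEndpoint });
            host.Description.Behaviors.Add(new RoutingBehavior(config));

            host.Open();
            Console.ReadLine();
            host.Close();
        }
    }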
I have a chat application developed using a WCF callback contract. It uses netTcpBinding for client-server communication.
The client is a Windows Forms application that runs on the client machine (XP or Windows 8).
The WCF service is hosted as a Windows service on the server machine. I maintain a client session list in the service that stores the details of each client connected to the server; this list is a static variable.
The workflow is: whenever a client connects to the server using the Connect operation, the client's details are added to the client session list, and the server uses this list to send messages back to clients whenever needed.
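For reference, a sketch of roughly what that service side looks like; the contract and names are hypothetical stand-ins for the question's code:

    using System.Collections.Concurrent;
    using System.ServiceModel;

    public interface IChatCallback
    {
        [OperationContract(IsOneWay = true)]
        void ReceiveMessage(string from, string text);
    }

    [ServiceContract(CallbackContract = typeof(IChatCallback))]
    public interface IChatService
    {
        [OperationContract]
        void Connect(string clientId);
    }

    public class ChatService : IChatService
    {
        // The static client session list the question describes:
        // one callback channel per connected client.
        static readonly ConcurrentDictionary<string, IChatCallback> Sessions =
            new ConcurrentDictionary<string, IChatCallback>();

        public void Connect(string clientId)
        {
            // Capture this client's callback channel so the server can
            // push messages to it later.
            Sessions[clientId] =
                OperationContext.Current.GetCallbackChannel<IChatCallback>();
        }
    }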
Everything works fine in a single-server environment. Now I want to know how to handle this in a load-balancing scenario: I have two server machines, and only one is active at a time; if Server 1 fails, Server 2 becomes active. In this scenario, how can I share my client sessions between the two servers and keep things working as usual without disturbing my clients?
One option is to use a session state store provider, which will provide the session state to both instances of the server service.
As MSDN states (http://msdn.microsoft.com/en-us/library/z414bbk9(v=vs.100).aspx): "for Web farm configurations, it can be stored out of process using either the ASP.NET State service or a Microsoft SQL Server database."
The ASP.NET state service is quite well documented: http://msdn.microsoft.com/en-us/library/ms178581(v=vs.100).aspx
As for the database solution... well... you have to analyse the added overhead due to database access.
Also, if you are hosting the service in IIS, you could consider using out-of-process session state (http://technet.microsoft.com/en-us/library/cc754032%28v=ws.10%29.aspx).
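Note that an IIS-hosted WCF service only sees ASP.NET session state when ASP.NET compatibility mode is enabled (aspNetCompatibilityEnabled="true" in web.config plus the attribute below), and compatibility mode applies only to HTTP-based endpoints, so a netTcp duplex service would need a different state-sharing approach. A minimal sketch, reusing the hypothetical ChatService from above:

    using System.ServiceModel.Activation;

    // Opts the service into the ASP.NET pipeline so HttpContext and session
    // state (including an out-of-process store) are available to operations.
    [AspNetCompatibilityRequirements(
        RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
    public class ChatService : IChatService
    {
        public void Connect(string clientId) { /* as before */ }
    }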
These are just some ideas. You can look into other web farm synchronization techniques made available for Microsoft technologies.
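One caveat worth adding: live netTcp callback channels cannot be serialized into any session store, so what you can realistically share between the two servers is client metadata; the clients themselves must reconnect when the active server changes so the new server can rebuild its session list. A sketch of the client-side handling, reusing the hypothetical contract above:

    using System;
    using System.ServiceModel;

    class ChatCallback : IChatCallback
    {
        public void ReceiveMessage(string from, string text) =>
            Console.WriteLine(from + ": " + text);
    }

    class ChatClient
    {
        DuplexChannelFactory<IChatService> factory;
        IChatService proxy;

        public void Connect()
        {
            if (factory != null) factory.Abort(); // drop any faulted channel
            factory = new DuplexChannelFactory<IChatService>(
                new InstanceContext(new ChatCallback()),
                new NetTcpBinding(),
                // Load-balancer address, hypothetical:
                new EndpointAddress("net.tcp://chat.example.com:9000/chat"));
            proxy = factory.CreateChannel();
            // On failover the channel faults; reconnect so the newly active
            // server re-adds this client to its session list.
            ((ICommunicationObject)proxy).Faulted += (s, e) => Connect();
            proxy.Connect("client-42"); // hypothetical client id
        }
    }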
Summary:
Does anybody know of known issues or configuration gotchas with an IIS-hosted service connecting to an Azure-based service?
Scenario:
I currently have a scenario that requires me to host two web-services, one in Azure, and one on a server running IIS. The IIS hosted service (a WCF service) connects to the Azure hosted service (actually the Azure storage API) in order to fetch certain information. This information is manipulated and returned to the client.
Client -> IIS Service -> Azure Storage Service
Issue:
I'm running into issues with the IIS service connecting to the Azure service: the hostname cannot be resolved. I'm using the Azure Storage client from my code, but I have also tried the raw Azure REST API calls, and they do not work from IIS either. I captured the requests using Fiddler (on a different machine); they match the Azure REST API calls, as expected. These requests execute properly when made outside of IIS on the host machine. It is only when they are issued by the IIS service that they fail.
In my research, other people ran into this issue when there was a firewall problem, but since I can hit the service properly from the machine, that doesn't seem to fit the bill. My hunch is that there's a configuration issue I need to sort out in IIS, but my searches have failed to find anything useful.
Does anyone have any information on why this might be occurring (known bugs, gotchas, etc.)? Any workarounds? From an SOA perspective, this seems fairly critical to understand. Any assistance would be helpful. Thank you.
This sounds like a proxy configuration issue. Check how your IIS server is connected to the internet; if you are using some sort of proxy to get to the internet, that connection has to be configured correctly.
Specifically, if your proxy server is Microsoft ISA Server or Microsoft Forefront TMG, then you need to check two things:
The ISA Server client or Forefront TMG client software is installed on the server.
The account used by the IIS application pool is a domain user. ISA Server/TMG are designed to work only with user accounts, not service accounts. An alternative workaround for this limitation is the "defaultProxy" configuration in web.config; however, it only works for HTTP/HTTPS.
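For reference, the code-level equivalent of that web.config "defaultProxy" setting looks roughly like this (proxy address hypothetical), and it likewise only covers HTTP/HTTPS traffic:

    using System.Net;

    static class ProxyBootstrap
    {
        // Call once at startup (e.g. Application_Start) to route outbound
        // HTTP/HTTPS through the proxy using the app pool's own identity.
        public static void Configure()
        {
            WebRequest.DefaultWebProxy = new WebProxy("http://proxy.corp.local:8080")
            {
                Credentials = CredentialCache.DefaultNetworkCredentials
            };
        }
    }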
If you use a different proxy server, then other issues might be involved; for example, the proxy might require authentication.