JasperServer strictly requires session affinity, which is not possible with Microsoft Azure load-balanced virtual machines - apache

I have deployed JasperServer with my web application in an environment of two clustered Tomcat instances sharing the same database, running on Microsoft Azure load-balanced virtual machines. The problem is that JasperServer strictly requires session affinity, which is not possible with Azure load-balanced virtual machines.
1) If you have any other solution/suggestion that is suitable for my environment, please guide me.
2) Comparing Azure load-balanced virtual machines with Apache httpd load balancing, which is better suited to my environment, and why?
Environment:
1) JasperServer 5.5 Commercial edition with session replication.
2) Two clustered apache-tomcat-6.0.36 instances sharing the same database (MySQL 5.5).
3) Linux machines - Ubuntu 13.10 Server on Azure load-balanced virtual machines.
Thanks in advance for reading and answering my question. Every comment/idea is highly appreciated.

Look at publishing a VM using the new (in preview) reserved public IP, which avoids the cloud service / load balancing setup. This VM could run your own custom load-balancing setup that would allow session affinity (Kemp also offers its load balancer in Azure - http://kemptechnologies.com/au/solutions/loadmaster-azure). You could create a couple of these VMs and then use Azure Traffic Manager to front-end the setup.
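If you go the custom load-balancer VM route with Apache httpd (the option raised in the question), a minimal sketch of sticky sessions via mod_proxy_balancer might look like the following. The hostnames, route names and the /jasperserver context path are placeholders, and the route= values are assumed to match the jvmRoute attributes in each Tomcat's server.xml - adjust for your own setup.

    # httpd.conf sketch on the load-balancer VM (Apache 2.4); hostnames, routes and
    # the /jasperserver context path are placeholders
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so

    <Proxy "balancer://jaspercluster">
        # route= must match the jvmRoute attribute of each Tomcat <Engine> in server.xml
        BalancerMember "http://tomcat1.internal:8080" route=tomcat1
        BalancerMember "http://tomcat2.internal:8080" route=tomcat2
        # pin each client to the Tomcat that issued its JSESSIONID (session affinity)
        ProxySet stickysession=JSESSIONID|jsessionid
    </Proxy>

    ProxyPass        "/jasperserver" "balancer://jaspercluster/jasperserver"
    ProxyPassReverse "/jasperserver" "balancer://jaspercluster/jasperserver"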

Please tell me what steps need to be followed to set up the Kemp load balancer as a replacement for Azure load balancing. Thanks, Vasanth N

Related

Will there be support for establishing a private connection to Azure AKS?

My client is currently evaluating AKS, which seems really promising. Our current platform is based on Azure VMs we provision ourselves. We would like to create private communication between our existing platform and the managed AKS cluster, but so far that does not seem to be supported.
Some example use cases for us are:
- Proxying incoming HTTP traffic via our main entry point, a Varnish server, to the new AKS environment so we don't have to change URLs
- Accessing non-publicly exposed APIs from the AKS environment
Right now the AKS cluster is in a different subscription and resource group than the other parts of our platform. The main reason we can't connect, though, seems to be that it's not possible to specify which private IP range should be used when creating an AKS cluster.
Is there support planned for this or is there a reliable workaround?
Thanks for the inquiry. There is a workaround for the stated case: the use of ACS Engine. "ACS Engine, for Azure Container Service Engine, is a CLI tool that helps to generate Azure Resource Manager templates to deploy Docker enabled clusters on Microsoft Azure. It works with all the orchestrators supported by ACS: Docker Swarm, Mesosphere DC/OS and Kubernetes."
Using this solution will allow you to integrate an Azure Container Service cluster into an existing virtual network. More details and a step-by-step guide can be found here: https://blogs.msdn.microsoft.com/jcorioland/2017/01/10/how-to-integrate-a-new-azure-container-service-cluster-into-an-existing-virtual-network-using-acs-engine/
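For reference, a heavily abbreviated sketch of an ACS Engine cluster definition that places masters and agents into an existing subnet could look like the following. All IDs, names, IPs and credentials are placeholders, and the profiles are cut down to the fields relevant to VNet integration; the linked post has the full walkthrough.

    {
      "apiVersion": "vlabs",
      "properties": {
        "orchestratorProfile": { "orchestratorType": "Kubernetes" },
        "masterProfile": {
          "count": 1,
          "dnsPrefix": "myk8s",
          "vmSize": "Standard_D2_v2",
          "vnetSubnetId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>",
          "firstConsecutiveStaticIP": "10.1.0.5"
        },
        "agentPoolProfiles": [
          {
            "name": "agentpool1",
            "count": 2,
            "vmSize": "Standard_D2_v2",
            "vnetSubnetId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"
          }
        ],
        "linuxProfile": {
          "adminUsername": "azureuser",
          "ssh": { "publicKeys": [ { "keyData": "<ssh-public-key>" } ] }
        },
        "servicePrincipalProfile": { "clientId": "<app-id>", "secret": "<password>" }
      }
    }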

Azure Container Services Port Load Balancer

While trying to port my application, which runs locally on Docker Swarm, to Azure Container Service, I am stuck on the load balancer part of Azure.
Locally I have a container instance of HAProxy running on the Swarm master and multiple web containers running.
The web containers just expose their ports; they are not mapped to the machines on which they are running.
The HAProxy container has its port mapped on the master and internally talks to my web containers for load balancing.
This gives me the leverage to run any number of containers with a limited number of workers in Docker Swarm.
In Azure Container Service I see that the Azure load balancer will only talk to ports that are mapped, which means that I can either run only 1 container per agent or keep an internal load balancer in my containers, which implies that users will go through 2 load balancers before hitting my application.
Not an ideal scenario when my application uses sticky sessions.
So apparently Microsoft's statement "everything works the same in Azure containers" goes for a toss?
What solutions are available, or am I doing something wrong here?
Regards,
Harneet
The solution in ACS is almost identical. Use HAProxy and have the Azure LB talk to that. The only difference is that you will not be running the proxy on the master, you will have Swarm deploy it to an agent for you.
You shouldn't really be running workloads on your masters. What would you do if, for example, there were a DDoS attack and you couldn't reach your masters? Having Swarm deploy the proxy for you also means that Swarm can monitor the health of the proxy.
You could, if you really wanted to, run the proxy on the master as you do now. The solution would be the same: have the Azure LB provide a public connection to the proxy, just as you currently do.
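To make that concrete, a minimal haproxy.cfg sketch for the Swarm-deployed proxy, using cookie-based stickiness in front of the web containers, might look like this (the backend addresses, ports and the cookie name are placeholders, not anything Azure-specific):

    # haproxy.cfg sketch; addresses, ports and cookie name are placeholders
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend public
        bind *:80                      # the port the Azure LB forwards to
        default_backend webapp

    backend webapp
        balance roundrobin
        # insert a cookie so each client keeps hitting the same web container
        cookie SERVERID insert indirect nocache
        server web1 10.0.0.4:8080 cookie web1 check
        server web2 10.0.0.5:8080 cookie web2 check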

Integrate Azure Resource Manager (RM) VM with classic networking

I have a classic network setup in Azure, complete with VMs, VNets and a site-to-site VPN.
I need to introduce an RM VM and integrate it with this network. Are there any special considerations I need to make to ensure that the RM VM can integrate with the classic network?
Thanks
The only thing you have to do is create a VNet-to-VNet connection between the ASM (Azure Service Manager, i.e. classic) network and the ARM network. You can do this by creating a gateway on each side and connecting them. The only consideration is to use non-overlapping subnets - the same consideration you have when creating a VPN between on-premises and Azure.
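As a rough illustration only (the resource names, address ranges, IPs and shared key below are placeholders, and both gateways are assumed to already exist), the ARM side of such a connection can be wired up with Azure PowerShell roughly as follows, with the matching shared key set on the classic side via the service-management cmdlets:

    # ARM side: describe the classic VNet as a "local network" and connect to it
    $classic = New-AzureRmLocalNetworkGateway -Name classic-vnet -ResourceGroupName my-rg `
        -Location westeurope -GatewayIpAddress "<classic-gateway-public-ip>" -AddressPrefix "10.1.0.0/16"
    $armGw = Get-AzureRmVirtualNetworkGateway -Name arm-gw -ResourceGroupName my-rg
    New-AzureRmVirtualNetworkGatewayConnection -Name arm-to-classic -ResourceGroupName my-rg `
        -Location westeurope -VirtualNetworkGateway1 $armGw -LocalNetworkGateway2 $classic `
        -ConnectionType IPsec -SharedKey "<shared-key>"

    # Classic side: set the same shared key on its gateway (service-management module)
    Set-AzureVNetGatewayKey -VNetName classic-vnet -LocalNetworkSiteName arm-vnet-site -SharedKey "<shared-key>"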

WebLogic AS: application deployed on a cluster with two managed servers

I'm on WebLogic AS 10.3.5 and I have two managed servers in a cluster, so I have two URLs: one for the first managed server, the other for the second.
I will deploy my application on the cluster, so will I be able to reach it at both URLs? Will the application be deployed and running on both servers?
How does it work? Can you give me some references, please?
Is it clear?
Thanks a lot!
First, you shouldn't be using WebLogic 10.3.5 anymore, since it has reached the last stage of support, called Sustaining Support. Consider upgrading at least to WebLogic 10.3.6.
Regarding your question, I believe you are talking about a web application and how to access it. First you need to read about Load Balancing in a Cluster. For the web part (JSPs and Servlets) you basically have two options: set up a web server (like Apache HTTP Server) to make use of the WebLogic plug-in, which is then connected to the WebLogic cluster. The other, easier option is to simply use an LBR (load-balancing router hardware).
These are the "software" solutions you have for load balancing your web application in a clustered WebLogic:
WebLogic Server supports the following web servers and associated proxy plug-ins:
WebLogic Server with the HttpClusterServlet
Netscape Enterprise Server with the Netscape (proxy) plug-in
Apache with the Apache Server (proxy) plug-in
Microsoft Internet Information Server with the Microsoft-IIS (proxy) plug-in
You can read more about these options on the Configure Proxy Plug-ins documentation page for WebLogic 10.3.6.
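For the plug-in option, a minimal sketch of the Apache httpd side is shown below; the module filename depends on your Apache version, and the context path and managed-server host:port values are placeholders for your own two managed servers.

    # httpd.conf sketch for the WebLogic proxy plug-in
    LoadModule weblogic_module modules/mod_wl_22.so

    <Location /myapp>
        SetHandler weblogic-handler
        # list both managed servers; the plug-in load-balances and fails over between them
        WebLogicCluster managed1.example.com:7001,managed2.example.com:7003
    </Location>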

Memcached in a trusted shared environment?

We are a university IT organization that hosts all of the university's websites on several shared servers on our server room floor. We have several VMs, each running its own instance of Apache as the web server for the sites on that VM.
If we were going to setup a memcached server, is it feasible to use it as a shared instance?
If shared by several servers, or even multiple web apps running on the same server, what's the best way to keep each app's cache stores separate? Prefix the key?
Would each VM require its own instance of memcached, or could we setup 1 memcached server and allow our multiple VMs to read/write to it?
We wrote bucket engine specifically to allow for a large number of memcached virtual instances running under a single process.
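If you do share a single plain memcached instance instead, the key-prefix approach suggested in the question is the usual way to keep each app's entries separate. Below is a minimal Python sketch of that idea; the pymemcache client, host name and prefixes are my own illustrative choices, not anything prescribed by memcached.

    # one shared memcached instance; each app namespaces its own keys with a prefix
    from pymemcache.client.base import Client

    MEMCACHED_HOST = ("memcached.internal.example.edu", 11211)  # placeholder host

    class PrefixedCache:
        """Thin wrapper that prepends an app-specific prefix to every key."""

        def __init__(self, prefix):
            self.prefix = prefix
            self.client = Client(MEMCACHED_HOST)

        def set(self, key, value, expire=300):
            return self.client.set("%s:%s" % (self.prefix, key), value, expire=expire)

        def get(self, key):
            return self.client.get("%s:%s" % (self.prefix, key))

    # each app gets its own logical namespace on the shared instance
    registrar_cache = PrefixedCache("registrar")
    library_cache = PrefixedCache("library")
    registrar_cache.set("homepage", b"<html>...</html>")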