Verify load balancing on Azure Container Service

I am using the Azure Container Service with the Kubernetes orchestrator and have an app deployed on a cluster with 3 nodes. It has 5 replicas. How can I verify load balancing in action? E.g., I want to be able to see that every time I hit the external IP I am routed to a (possibly) different node. Thanks.

The simplest solution is to connect (over SSH, for example) to the 3 nodes and run a packet capture such as tcpdump (or WinDump on Windows) there. If everything is working properly, you will be able to see the incoming requests arriving on every node.
Also here is Microsoft documentation for testing a load balancer:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/tutorial-load-balancer#test-load-balancer
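Another way to see the distribution, if you can add a small test endpoint to the app, is to hit the external IP repeatedly from a client and tally which replica answered. Below is a minimal sketch in C#; it assumes the app exposes an endpoint that simply echoes its pod hostname, and the IP and path are illustrative placeholders.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class LoadBalanceCheck
{
    static async Task Main()
    {
        // Illustrative values: replace with your service's external IP and test endpoint.
        const string url = "http://203.0.113.10/hostname";
        var counts = new Dictionary<string, int>();

        using (var client = new HttpClient())
        {
            // Close the connection after each request so HTTP keep-alive does not
            // pin every request to the same backend.
            client.DefaultRequestHeaders.ConnectionClose = true;

            for (int i = 0; i < 50; i++)
            {
                // Assumes the app returns its pod hostname in the response body.
                string replica = (await client.GetStringAsync(url)).Trim();
                counts[replica] = counts.TryGetValue(replica, out int n) ? n + 1 : 1;
            }
        }

        foreach (var kv in counts)
            Console.WriteLine($"{kv.Key}: {kv.Value} responses");
    }
}

If the tally shows several different hostnames, traffic is being spread across the replicas; the connection-close header matters for the same keep-alive reason described in the next answer.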
The default load balancer available to your Windows Azure Web and Worker roles is a software load balancer and not very configurable; however, it does distribute requests in a round-robin fashion. If you want to test this behavior, this is what you need to do:
1. Create two (or more) instances of your service with RDP access enabled so you can RDP into both instances.
2. RDP into both instances and run NETMON or any other network monitoring tool on them.
3. Access your Windows Azure web application from your desktop. Keep in mind that once a network connection is made from your desktop, it stays alive based on your network settings (60 seconds by default), so you need to wait until that default timeout has passed before accessing the application again.
4. When you access your Windows Azure web application again, you can verify that the second request went to the next instance. Be sure to let the connection timeout pass, otherwise your request will keep being handled by the same instance.
Note: If you don't want to use RDP, you can also create a test ASP.NET page with some code that shows which specific instance served the page. The best way to do this is to read the instance ID, as below:
string instanceId = RoleEnvironment.CurrentRoleInstance.Id;
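For example, a small test action that returns the instance ID could look like this (a sketch only; the controller and route names are illustrative, and RoleEnvironment comes from Microsoft.WindowsAzure.ServiceRuntime):

using System.Web.Mvc;
using Microsoft.WindowsAzure.ServiceRuntime;

// Hypothetical test controller: GET /Instance/WhoAmI returns which role instance served the request.
public class InstanceController : Controller
{
    public ContentResult WhoAmI()
    {
        string instanceId = RoleEnvironment.CurrentRoleInstance.Id;
        return Content("Served by instance: " + instanceId, "text/plain");
    }
}

Refreshing that page (after the connection timeout described above) should show the instance ID changing as the load balancer rotates between instances.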
If you want more control over Windows Azure load balancing, I would suggest using Windows Azure Traffic Manager, which will help you route traffic to your site via round-robin, performance, or failover-based scenarios. More info on using Traffic Manager is in this article.

Related

Gridgain console load balance

I have a GridGain three-node cluster and am also running the GridGain Web Console agent and Web Console on all three nodes. It is all hosted on Windows Server.
I would like to load balance my Web Console. The problem is I don't know how to share the user registration database, which it stores in a work directory. Can I use an external database to store all that information so that my cluster uses the same database?
There is a problem with the Web Console Agent as well. How do I share the tokens stored in default.properties?
There is no definitive guide on how to create a cluster of Web Consoles for high availability.
Can someone please guide me on how I can form a cluster for the Web Console, sharing its user store and tokens?
Thanks
If you are looking for multi-cluster support, take a look at the documentation:
https://www.gridgain.com/docs/web-console/latest/multi-cluster-support
If you are looking for agent fault tolerance: just start several agents. The first agent will process all messages; the others will be in hot-standby mode.
If you are looking for connection fault tolerance between the agent and the cluster (if the cluster node that the agent uses as its connection point fails, the Web Console will lose its connection to the cluster), just specify several node addresses as a comma-separated list for the "node-uri" parameter (in default.properties or as a command-line argument).
For example:
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080
Hope this helps.

How to make SSE (server-sent events) work in multiple server instance environments

I have a question on how to make SSE work in multiple server environments.
In the UI, there are two steps:
1. Open the event stream:
   source = new EventSource('http://localhost:3000/stream');
   source.addEventListener('open', function(e) {
       $("#state").text("Connected");
   }, false);
2. The user in the UI can post to an API to update data.
After the user posts to the API, the server sends an event to the UI to update it.
In a single-server environment this works perfectly fine, no problem at all.
But in a multi-server environment it won't work. For example, I have two server instances, and the UI subscribed to server 1, so server 1 is holding the connection, but the data update comes from server 2. When the data changes, there is no SSE connection on server 2. In this scenario, how can server 2 send SSE to the UI?
To make SSE work in multiple server environments, do we need to adopt some storage solution to save the connection information so that any server instance can send SSE accurately to the UI?
Let me clarify this more:
Yes, both server 1 and server 2 are behind a load balancer; they do not have to have the same URL. The UI is a pure front-end application and can even be a mobile app. So, if the UI sends an EventSource request to the load balancer and it lands on server 1, then only that instance can use the connection to send events back to the UI, right? But if we have multiple instances of server 1, that means any server 1 instance other than the current one cannot send events back to the UI.
I believe this is a limitation of SSE unless the connection can be shared among all the instances. But how?
Thanks
If you have two servers, with different URLs, make one SSE connection (from each client) to each server.
Be aware of CORS restrictions, i.e. the same origin policy. (It works identically to xhr2 CORS, so fairly easy to google; my book also covers it in detail, chapter 9.)
If you have two servers behind a load balancer, which is presenting a single URL to the clients, then you just have to make sure the load balancer is configured correctly. I.e. to always pass through that socket to the correct server. If a back-end server dies, and needs replacing, the load balancer should close the SSE socket; the client will then auto-reconnect, and get a new back-end server.
The multiple servers behind a load balancer should either have their own data push connections to a master data source, or should all be polling the master data source.
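The question doesn't say which server stack is in use, but one common way to implement the master-data-source idea is a shared publish/subscribe channel: every instance subscribes, and whichever instance holds a given SSE connection forwards the message to its own clients. Below is a rough sketch in ASP.NET Core with Redis pub/sub via StackExchange.Redis; the channel name, routes, and Redis address are illustrative assumptions, not part of the original question.

using System.Threading.Channels;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using StackExchange.Redis;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Shared channel: every instance subscribes, regardless of which one holds the SSE socket.
var redis = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var subscriber = redis.GetSubscriber();

app.MapGet("/stream", async context =>
{
    context.Response.ContentType = "text/event-stream";
    context.Response.Headers["Cache-Control"] = "no-cache";

    // Per-connection queue fed by the Redis subscription.
    var queue = Channel.CreateUnbounded<string>();
    await subscriber.SubscribeAsync("updates", (_, message) => queue.Writer.TryWrite(message.ToString()));

    // A real implementation should also unsubscribe when the client disconnects.
    await foreach (var msg in queue.Reader.ReadAllAsync(context.RequestAborted))
    {
        await context.Response.WriteAsync($"data: {msg}\n\n");
        await context.Response.Body.FlushAsync();
    }
});

// Any instance that handles the POST publishes to the shared channel, so every
// instance (including this one) can push the update out over its own SSE connections.
app.MapPost("/api/update", async context =>
{
    await subscriber.PublishAsync("updates", "data changed");
    context.Response.StatusCode = 202;
});

app.Run();

The same pattern works with any broker (Redis, a message queue, or a database the instances poll); the point is only that the instance receiving the update and the instance holding the SSE socket communicate through something shared rather than through in-process state.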

Azure Virtual Machines not holding Logged in User Session

I am developing an MVC4 application. We have hosted our application on the Windows Azure IaaS model. Right now we have configured 2 virtual machines and everything is working well. But we have an issue with maintaining the user login.
If I log in on virtual machine 1, the session is not carried over when the next request is served by virtual machine 2. We have placed the two virtual machines behind a load balancer.
Should I look into caching solutions? Any input will be greatly helpful.
Thanks,
Jaswanth
You're hitting two completely separate VMs (yes, load balanced, but separate). This means you need to store any session data external to the VMs (or you need to sync the session content and keep it identical on both VMs).
Azure doesn't do anything to sync session data for you. That's on you, to build it into your app's architecture. You mentioned caching, which is certainly a viable solution (which you pick, though, is up to you). There are other solutions too such as database-based session storage. Again, that's up to you.
But bottom line: if you're going to scale an app beyond a single server (a VM in this case) in a load-balanced way, you cannot store session data on a specific VM.
Use a durable session state store (like Redis or SQL Server, etc.) or put your state in a cookie and read/write it on each request. If the cookie includes sensitive content, encrypt it.
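As one concrete illustration of the cookie option, here is a sketch (class, cookie, and purpose names are all illustrative) using MachineKey.Protect, which requires .NET 4.5; for this to work across load-balanced VMs, every VM must share the same <machineKey> in web.config so each instance can decrypt what another one wrote.

using System;
using System.Text;
using System.Web;
using System.Web.Security;

// Illustrative helper: keeps small per-user state in an encrypted cookie instead of
// in-memory session, so any VM behind the load balancer can read it.
public static class CookieState
{
    private const string Purpose = "UserState";   // label passed to Protect/Unprotect
    private const string CookieName = "AppState"; // illustrative cookie name

    public static void Write(HttpResponseBase response, string value)
    {
        byte[] protectedBytes = MachineKey.Protect(Encoding.UTF8.GetBytes(value), Purpose);
        response.Cookies.Add(new HttpCookie(CookieName, Convert.ToBase64String(protectedBytes))
        {
            HttpOnly = true,
            Secure = true
        });
    }

    public static string Read(HttpRequestBase request)
    {
        HttpCookie cookie = request.Cookies[CookieName];
        if (cookie == null) return null;
        // Unprotect throws a CryptographicException if the payload was tampered with.
        byte[] bytes = MachineKey.Unprotect(Convert.FromBase64String(cookie.Value), Purpose);
        return Encoding.UTF8.GetString(bytes);
    }
}

A distributed session provider (SQL Server or Redis session state) achieves the same goal on the server side; the cookie approach simply avoids any shared server-side store, at the cost of cookie size limits.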

How can I view the UI of Elastic Load Balancer 2.1.0?

Hi, I have just downloaded Elastic Load Balancer 2.1.0 from WSO2. It is running in a terminal on Linux (Ubuntu), but it is not showing the management console URL. If it does not show the URL, where can I get the UI of Elastic Load Balancer?
I have multiple ESB servers with the same configuration. If my a1 server goes down, will the load shift to my a2 server? Is that what Elastic Load Balancer is for? Can you explain what exactly it is used for?
No, there is no UI component for ELB. Everything has to be done through configuring physical files.
Elastic Load Balancer 2.1.0 is based on Hazelcast-dependent clustering. This has two parts: one is load balancing and the other is elasticity. Load balancing is simply distributing the workload among a number of endpoints configured in a static or dynamic manner. Elasticity is simply scaling, i.e. monitoring the load on worker nodes and starting or terminating nodes based on need in an IaaS environment.
It not only handles a node going down; depending on the load it can also spawn new nodes, and if the load is low it can kill unwanted instances in an IaaS environment.

How can I test an application using both a worker role AND a VM role in the Azure emulator?

I've looked but can't see an answer to this one:
I have an application that passes Azure messages between a VM role and a worker role. Before I load this into Azure I'd like to test that both work correctly by using the Azure emulator.
Does anyone know if the Azure emulator will accept messages that originate from the VM role and will it allow me to send messages to the VM? Is there a workaround or solution to this?
Both the emulator and the VM will be running on the same host server in my case.
The queues are accessed as HTTP endpoints, so you need to ensure that both components you want to test can access the queue.
If you want to test your application using the storage emulator (an HTTP endpoint provisioned on your local machine, normally http://127.0.0.1:10001/ for queues) then you will need to ensure that the VM role can get to that address.
I would recommend testing with the real storage service. There are differences between the emulator and the actual service, so it's better to test the real deal (you can always create a test queue).
In this case the endpoint will be on the internet (i.e. http://myaccount.queue.core.windows.net/).
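To make switching between the emulator and the real service painless, drive it from the connection string. Here is a sketch using the Microsoft.WindowsAzure.Storage client library (adjust to whatever storage client version your roles use; the account and queue names are illustrative):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueSmokeTest
{
    static void Main()
    {
        // Local storage emulator...
        string connectionString = "UseDevelopmentStorage=true";
        // ...or the real service, reachable from both the VM role and the worker role:
        // string connectionString = "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...";

        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudQueueClient client = account.CreateCloudQueueClient();

        CloudQueue queue = client.GetQueueReference("test-queue");
        queue.CreateIfNotExists();

        // One role adds a message...
        queue.AddMessage(new CloudQueueMessage("hello from the worker role"));

        // ...and the other can read it back.
        CloudQueueMessage received = queue.GetMessage();
        System.Console.WriteLine(received != null ? received.AsString : "(no message)");
    }
}

Running both roles against the same connection string (emulator or real account) is enough to verify the message flow end to end; the only extra step with the emulator is making sure the VM role can actually reach the 127.0.0.1 endpoint, as noted above.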