I use the load balancing feature in DigitalOcean as shown below. Can the load balancing algorithm be changed?
Picture here
Not anymore.
There used to be an algorithm dropdown to switch between round robin and least connections, but it is no longer there.
I read that the maximum number of domain name servers is 3.
Why are we given 4 each by AWS and GCP?
Is the priority automatically assigned?
Does it go in a round-robin manner, or will it use the 2nd one only if the first one breaks?
Let's say I want to reduce load balancer and SSL provisioning downtime. My current domain's nameservers are from AWS.
Example:
ns-2048.awsdns-64.com
ns-2049.awsdns-65.net
ns-2050.awsdns-66.org
ns-2051.awsdns-67.co.uk
I want to migrate to Google Cloud. The main issue is Load balancer and SSL provisioning time.
If I were to add Google Cloud's nameservers, like the example below:
ns-2048.awsdns-64.com
ns-2049.awsdns-65.net
ns-cloud-a1.googledomains.com
ns-cloud-a2.googledomains.com
Will this allow Google Cloud to provision SSL without downtime to the live website?
I read that the maximum number of domain name servers is 3.
Which is false. There is no hard maximum; more importantly, the practical maximum depends on the names themselves and on whether the names can be compressed.
The root zone managed to reach 13 nameservers after renaming them all, under the constraint of fitting the response in a 512-byte UDP packet.
2 is standard and often the minimum or the only allowed value; 4 is often used for more reliability, as are higher values (look at TLDs).
Is the priority automatically assigned?
There is no "priority": DNS records form a set, not a list, so there is no inherent order. By default, DNS does not work in a failover fashion but in a load-balancing fashion, with roughly equal partitioning on average.
Does it go in a round-robin manner, or will it use the 2nd one only if the first one breaks?
Round robin, with failover when one server fails, provided the client is prepared to retry queries (which recursive nameservers should do, though it is less clear for a generic application consuming DNS records).
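One way to see the retry behavior is a loop that queries each listed nameserver in turn and stops at the first one that answers, the way a retrying client effectively behaves. This is only a sketch: the hostnames are placeholders, and it assumes `dig` is installed and the network is reachable.

```shell
# Query each nameserver in order; take the first that responds.
# ns1/ns2.example.com and www.example.com are placeholders.
for ns in ns1.example.com ns2.example.com; do
  if dig +short +time=2 +tries=1 "@$ns" www.example.com A; then
    break   # first responsive server wins; a stub resolver rotates this order
  fi
done
```

Running the same query repeatedly against a name with multiple A records will typically show the record order rotating, which is the round-robin effect described above.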
I am running managed instance groups whose overall CPU usage is always below 30%, but if I check instances individually, I find that some are running above 70% and others as low as 15%.
Keep in mind that managed instance groups don't look at individual instances when deciding whether a machine should be removed from the pool. GCP's MIGs keep a running average of the last 10 minutes of activity across all instances in the group and use that metric for scaling decisions. You can find more details here.
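For reference, a group-average-CPU autoscaler is configured roughly like this with gcloud; the group name, zone, and numbers below are placeholders, not values from the question:

```shell
# Sketch: autoscale a MIG on the group-wide average CPU, targeting 60%.
# "my-mig", the zone, and the replica bounds are assumptions.
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone us-central1-a \
  --min-num-replicas 2 --max-num-replicas 10 \
  --target-cpu-utilization 0.6 \
  --cool-down-period 90
```

Note the target applies to the average across the group, which is why an individual hot or idle instance does not, by itself, trigger a scaling decision.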
Identifying instances with lower CPU usage than the group average doesn't seem like the right goal here; instead I would suggest focusing on why some machines sit at 15% usage while others sit at 70%. How is work distributed to your instances? Are you using the correct load balancing strategy for your workload?
Maybe your applications have specific endpoints that cause large amounts of CPU usage while the majority are basic CRUD operations; having one machine generating a report and showing higher usage is fine. If all instances render HTML pages from templates and return the results, one machine performing much less work than the others is a distribution issue. Maybe you're using a requests-per-second (RPS) balancing mode when you want a CPU-utilization one.
In your use case, the best option is to create an alerting policy that notifies you when an instance goes over the desired CPU usage. Once you receive the notification, you can manually delete the VM instance. Because it is part of the managed instance group, the VM instance will automatically be recreated.
I have attached an article on how to create an alerting policy here.
There is no metric within Stackdriver that will call the GCE API to delete a VM instance.
There is currently no such automation in place. It shouldn't be too difficult to implement it yourself, though. You can write a small script that runs on all your machines (started from cron or similar) and monitors CPU usage. If it decides usage is too low, the instance can delete itself from the MIG (you can use e.g. gcloud compute instance-groups managed delete-instances --instances ).
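A minimal sketch of such a cron-driven check might look like the following. The MIG name, zone, and threshold are assumptions, and the gcloud command is only echoed here so the script is safe to run; remove the echo to make it act for real.

```shell
#!/bin/sh
# Self-removal check, intended to run periodically from cron.
# MIG_NAME, ZONE, and THRESHOLD are placeholders for your own values.
MIG_NAME="my-mig"
ZONE="us-central1-a"
THRESHOLD=20   # rough "CPU percent" cutoff

# Approximate CPU% from the 1-minute load average (assumes a 1-vCPU machine;
# falls back to 100 on systems without /proc/loadavg so nothing is deleted).
LOAD=$(awk '{print int($1 * 100)}' /proc/loadavg 2>/dev/null || echo 100)

if [ "$LOAD" -lt "$THRESHOLD" ]; then
  # Dry run: print the deletion command instead of executing it.
  echo gcloud compute instance-groups managed delete-instances "$MIG_NAME" \
    --zone "$ZONE" --instances "$(hostname)"
fi
```

A real deployment would also want hysteresis (e.g. require several consecutive low readings) so a momentary lull doesn't shrink the group.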
I use VMs in Compute Engine on Google Cloud Platform. When I create an instance, Google tells me it will cost one price (around $5), but in reality it charges me more.
In the detailed billing, I found out that the instance costs about $2 per two weeks, plus $5 more for load balancing.
I know what load balancing is (only in general), but where is it used if I only run one VM at a time? Do I really need it? How can I avoid it?
As you mentioned, GCE load balancers are useful for distributing load across a set of instances. They also provide some other advantages, like autoscaling.
If you are working with only one VM, you can certainly have the VM connected to the internet through an external IP. The only thing to keep in mind is that external IPs can change if they are defined as ephemeral. To avoid having to reconfigure your applications for a new IP, you can reserve a static IP instead.
Be careful with static IPs, though: do not reserve one unless you will be using it. I read somewhere that a reserved but unused static IP address is charged something like a cent, or a fraction of a cent, per hour.
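The reserve-and-attach flow can be sketched with gcloud as below; the address name, VM name, region, and zone are all placeholders, and the access-config name assumes the default one GCE creates:

```shell
# Reserve a static external IP in a region (placeholder names throughout).
gcloud compute addresses create my-static-ip --region us-central1

# Swap the VM's ephemeral external IP for the reserved one:
# remove the current access config, then re-add it with the static address.
gcloud compute instances delete-access-config my-vm \
  --zone us-central1-a --access-config-name "external-nat"
gcloud compute instances add-access-config my-vm \
  --zone us-central1-a --access-config-name "external-nat" \
  --address "$(gcloud compute addresses describe my-static-ip \
      --region us-central1 --format='value(address)')"
```

Once attached, the address is in use and no longer accrues the unused-reservation charge mentioned above.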
We're thinking about moving to Elastic Load Balancer on Amazon. However, it turns out that since we use more than one domain name, we would have to rename some of our applications to fit within a single ELB. Another issue is that we currently use free level-one certificates, whereas moving to ELB would require moving up to level 2, although that's not a huge deal. We also don't have a lot of volume at this point, and don't really need load balancing for traffic alleviation. Finally, in the case of a failure of an Amazon instance, which seems to be quite rare (we have not experienced one in several years), we can quickly be up and running again by creating another instance and restoring.
On the other hand, from everything I've read about it, people are generally happy with it and recommend it, due to the ease of setup and the value it brings.
Given the above, is it worth it?
since we use more than one domain name, we would have to rename some of our applications to limit to a single ELB
What makes you say this? There's nothing preventing you from launching multiple ELBs if you really want to. And if your application already manages multiple domains properly, then there's no reason a single ELB can't handle that either. We currently have one ELB fronting an application on a bunch of EC2 instances that 11 different domains all point to.
Another issue is we currently use free level one certificates, whereas moving to ELB would require moving up to level 2, although that's not a huge deal.
Not sure what you mean by "level one" and "level 2". If you're using a self-signed SSL certificate, then you'll need to switch to a certificate signed by a third-party Certificate Authority, which will indeed cost you some money. Amazon supports all manner of certificates, including simple certs, EV certs, SAN certs, etc. You'll find more information on ELB and SSL certs in the AWS documentation.
Also, in the case of a failure of an amazon instance, which seems to be quite rare (have not experienced in several years), we can quickly be up and running by creating another instance and restoring.
Consider yourself lucky. We've had Amazon instances fail from time to time, and we also regularly get notifications from Amazon that instances need to be rebooted in order to migrate them off of faulty/old hardware.
If you really don't care about being down for a while and feel like you don't need the capacity that a load balancer and multiple appservers provides then there's no reason for you to move to using an ELB. However if you want the reliability of multiple appservers then moving to an ELB is indeed a good idea.
And if you anticipate your traffic level growing then you might want to consider using Amazon's Auto Scaling tools. Using Auto Scaling you basically tell Amazon the minimum number of application servers you want running behind an ELB, and some parameters to indicate when they should automatically launch additional instances if/when load increases.
Our Amazon account rep actually recommended to us that if we had even a single instance that we wanted to minimize downtime of (like a monitoring server, etc) that we should create an Auto Scaling group with a limit of exactly 1 instance in it. That way if the instance ever does die for any reason whatsoever, Amazon will automatically spin up a new replacement instance.
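That "ASG of exactly 1" setup can be sketched with the AWS CLI; the group name, launch template ID, and subnet below are placeholders:

```shell
# Sketch: a self-healing single instance via an Auto Scaling group pinned
# to exactly one instance. All identifiers here are placeholders.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name single-monitor \
  --launch-template LaunchTemplateId=lt-0123456789abcdef0 \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --vpc-zone-identifier subnet-0123456789abcdef0
```

With min, max, and desired all set to 1, the ASG never scales, but it does replace the instance automatically if it fails its health checks.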
I agree with Bruce; I just wanted to add my 5 cents about Auto Scaling groups (ASG) and "Amazon will automatically spin up a new replacement instance."
This is a really cool way to get a robust hosting solution, but it will take some work to create a CloudFormation template, plus a bash auto-install script called from the template, to install all the server software and deploy your app code.
So if you have 2 instances and an ASG with Min/Max = 2, then if an instance crashes, the ASG will recreate it automatically, with all software installed, code deployed, and ready to go.
Also, if you need to handle periodic traffic spikes automatically, you can change the ASG to (Min=2, Max=5) and create 2 CloudWatch alarms:
1. CPU usage above 90% for 5 or 10 minutes
2. CPU usage below 30% for 5 or 10 minutes
Then assign alarm 1 to a policy that scales up by 1 additional instance, and alarm 2 to a policy that destroys any additional instance created by alarm 1.
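The scale-up half of that setup looks roughly like this with the AWS CLI; the group name, thresholds, and the policy ARN placeholder are assumptions, and the scale-down half mirrors it with `--scaling-adjustment -1` and a low-CPU alarm:

```shell
# Sketch: a step toward the two-alarm setup described above.
# "my-asg" and the numbers are placeholders.

# Scale-up policy: add 1 instance when triggered.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name scale-up \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1

# Alarm 1: average CPU >= 90% for two 5-minute periods fires the policy.
# <scale-up-policy-ARN> is returned by the command above.
aws cloudwatch put-metric-alarm \
  --alarm-name cpu-high --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 90 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions <scale-up-policy-ARN>
```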
I need to perform a load test using LoadRunner to simulate load generated from an external network (my home network) against servers hosted by an organization in the same region.
The application under test is a web site (not a heavy one) that users log into to get personal information.
I am very concerned that my home network bandwidth won't be enough to generate the following load:
I need to simulate 250 concurrent web users performing about 30,000 transactions in an hour.
My home network specs and statistics:
Download: 75M (7.5 MByte/s)
Upload: 3.5M (350 KByte/s)
From your experience, would this be enough to generate the desired load? If not, what can be done to simulate load from an external network?
One load generator is never enough from a process perspective. Consider at least three: two for primary load and one for a control set. So right off the bat you are likely to have issues.
As mentioned previously, go to the cloud: Amazon, Azure, GoDaddy, Rackspace, 1&1, etc. all have virtual machines that you can use as performance-testing hosts running load generator software. More locations is better, as this minimizes the influence of one host network over another if you are looking for representative experiences. Odds are your site will be on one backbone and some of your load generators may have to peer over from another backbone. This is not bad, as it provides a more realistic view of your end users' experiences from different locations.
Check your end-user agreement for your home connection. Unless you have a business-class agreement, such traffic may appear to be a DDoS event, setting off alarms at your service provider. Don't be surprised if you find yourself suddenly cut off from the internet without warning. I have seen this happen before to people attempting to generate load from their homes against a site.
As you can see in the comments, the amount of load you can generate is affected not only by the network bandwidth but also by the script itself and the load generator (LG) machine's specifications. What I mean is that there is no definitive answer to your question without taking all the parameters into account.
What you should do is create an account with one of the popular cloud providers (Amazon, Azure, HP) and create a machine with the exact specifications you need, based on the parameters as you know them. Most of these services allow you to increase the machine size and bandwidth if needed, for some extra pay.
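As a rough sanity check on the raw bandwidth side of the question, here is the stated load worked out in shell arithmetic. The per-transaction sizes (50 KB response, 2 KB request) are pure assumptions for illustration; substitute measurements from your own script:

```shell
# Back-of-envelope check of 250 users / 30,000 transactions per hour.
USERS=250
TX_PER_HOUR=30000

TX_PER_SEC=$(( TX_PER_HOUR / 3600 ))          # ~8 transactions/second overall
PACING=$(( 3600 * USERS / TX_PER_HOUR ))      # seconds between transactions per user

# Assumed averages: 50 KB response, 2 KB request (placeholders).
DOWN_KBPS=$(( TX_PER_HOUR * 50 / 3600 ))      # downstream KB/s needed
UP_KBPS=$(( TX_PER_HOUR * 2 / 3600 ))         # upstream KB/s needed

echo "tx/s=$TX_PER_SEC pacing=${PACING}s down=${DOWN_KBPS}KB/s up=${UP_KBPS}KB/s"
```

Under these assumptions the downstream need (~416 KB/s) fits the stated 7.5 MB/s easily, and the upstream (~16 KB/s) fits 350 KB/s, so the script weight and LG machine specs, not raw bandwidth, are the more likely bottleneck.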
Good luck!