H/W requirements for Confluent single node

What are the hardware requirements for Single node Confluent installation?
I checked their official site, but it has specifications for multi-node: https://docs.confluent.io/platform/current/installation/system-requirements.html

It's unclear exactly what you're trying to run. ZooKeeper and Kafka alone have been made to run with limited resources on a Raspberry Pi, and can definitely run on any modern laptop or desktop.
If you're running a single node, it's not considered "production grade". Still, with at least 5 services (ZooKeeper, broker, Schema Registry, REST Proxy, ksqlDB) at 2 GB max heap each, that's 10 GB of RAM plus overhead for the OS, so call it 16 GB of memory to be conservative.
If you also want to (reasonably) run Control Center, it's suggested to give it 6 GB, pushing the requirement to at least 24 GB on a single node once you also account for your own Kafka client applications.
Of course, you can opt out of certain services and tune each JVM to how you want...
As far as disk space goes, it really depends on how much data you plan on having. 500 GB would be a good starting point, but note that a single disk wouldn't be fault tolerant.
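To make the memory and disk arithmetic concrete, here's a rough sizing sketch; the heap sizes, OS headroom, and throughput/retention figures are illustrative assumptions, not Confluent's numbers:

```python
# Back-of-the-envelope sizing for a single-node Confluent stack. Heap sizes,
# OS overhead, and retention figures are assumptions for illustration, not
# official Confluent recommendations.
jvm_heaps_gb = {
    "zookeeper": 2,
    "kafka_broker": 2,
    "schema_registry": 2,
    "rest_proxy": 2,
    "ksqldb": 2,
    "control_center": 6,   # optional, the heaviest service
}
os_and_page_cache_gb = 4   # assumed headroom for the OS and Kafka's page cache
client_apps_gb = 4         # assumed budget for your own Kafka client applications

total_ram = sum(jvm_heaps_gb.values()) + os_and_page_cache_gb + client_apps_gb
print(f"RAM: ~{total_ram} GB")   # 24 GB, in line with the figure above

# Disk: a retention-driven estimate (assumed workload figures).
ingest_mb_per_sec = 1      # average producer throughput
retention_days = 3
disk_gb = ingest_mb_per_sec * 86_400 * retention_days / 1024
print(f"Log segments alone: ~{disk_gb:.0f} GB, so 500 GB leaves reasonable headroom")
```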

Related

Azure VM disk attachment number is too low. Can this limit be increased?

Based on this blog post https://blogs.technet.microsoft.com/uspartner_ts2team/2015/08/26/azure-vm-drive-attachment-limits/ there is a limit on the number of disks you can attach, following the model of number of CPUs x 2. Is there a technical reason why this limit is in place? If you use Kubernetes you may not be able to schedule a pod, because the scheduler is not aware of this limit.
This was proposed as a workaround https://github.com/khenidak/dysk but I'm wondering why this very low limit exists in the first place.
The number of data disks is directly tied to the size of the VM. For example, if you go here https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sizes you will see that each VM size, as it increases in resources, can handle more data disks.
This restriction is mainly built around performance. If you had a virtual machine with only 2 CPU cores and, say, 10 data disks, you would likely run into performance issues, as the CPU power and RAM needed to service all those data disks at once could cause your VM to tap out.
The simple solution is to use larger VM sizes if you need more disks. Or, depending on how much space you need, Azure supports data disks of up to 4 TB.
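If you want to sanity-check the "about two disks per vCPU" observation from the question, the real per-size cap is published as MaxDataDiskCount (for example via `az vm list-sizes --location <region> --output table`); the snippet below only illustrates the rough heuristic, not actual Azure values:

```python
# Rough illustration of the question's observation that the data-disk cap
# scales with vCPU count (about 2 disks per vCPU on smaller sizes). These are
# not real Azure numbers; check MaxDataDiskCount for your target VM size.
def approx_disk_cap(vcpus: int) -> int:
    return 2 * vcpus

for vcpus in (2, 4, 8, 16):
    print(f"{vcpus:>2} vCPUs -> roughly {approx_disk_cap(vcpus)} data disks before you must size up")
```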

Resource usage of a static web server

I came across this question in a blog post. It was asked by Mozilla in their internship interview. (Blog Post)
You are running a HTTP server (nginx, Apache, etc) that is configured to serve static files off the local filesystem of your modern, multi-core server connected to a gigabit network. A handful of clients start requesting the same 4kb static file as fast as they can. What system resource do you think will be exhausted first?
a. CPU
b. Disk / I/O
c. Memory
d. Network
e. Other
In my opinion, none of these would be exhausted on a modern machine with Nginx/Apache. Won't the web server cache such a small file and just keep serving it? Also, for repeated requests it can easily send a Not-Modified response.
In the case of Apache, I'd guess CPU would be exhausted first, since it handles multiple clients by spawning threads, but for a "handful" of clients that won't matter.
I wanted to know what others have to say about this question.
It really depends. 4 KB is that magical size that fits into practically all caches and buffers at their default settings, so it is easy (and fast) to pass around. Memory is not a limiting factor here, as web servers operate on file handles, not entire files. In this case I would assume they keep it right in memory, but that would be one file per worker instance, which usually comes down to 4 KB * (num_cores + 1) at most, which is not really an issue.
One could argue that either memory or disk speed is an issue. But the former is negligible when mechanisms like sendfile are properly configured, enabling a zero-copy approach, and the latter amortizes over time once a copy of the file has been loaded into memory.
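For illustration, here is a minimal sketch of that zero-copy path in Python using os.sendfile on Linux; the file name and port are arbitrary, and real servers like nginx do the same thing via the sendfile(2) syscall with far more care:

```python
import os
import socket

# Minimal sketch: hand a static file to a client without copying it through
# user space. The kernel moves bytes from the page cache to the socket via
# sendfile(2); nginx/Apache use the same mechanism when sendfile is enabled.
def serve_one_request(path="static-4k.bin", port=8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn, open(path, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            conn.sendall(f"HTTP/1.0 200 OK\r\nContent-Length: {size}\r\n\r\n".encode())
            sent = 0
            while sent < size:
                # Linux-only in CPython; the offset advances as chunks are sent.
                sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)
```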
Lastly, there's the interface and the CPU(s). Overall, CPU time tends to be a lot cheaper than network time, so I would expect the NIC to be the bottleneck long before the CPU - if at all.
The question is a bit unspecific on the location of the clients. If they are connected to the same GbE network, they could indeed have the power to saturate your NIC with their requests. If not, some intermediary could become the limiting factor.
Now let us assume those clients are on our network and we have a single-homed 10 GbE NIC here, connected via 8 lanes (which is fairly standard IMHO): PCIe 3.0 x8 is specified at 7,877 MB/s. A Core i7 3770 has a bus speed of 5 GT/s, which translates to roughly 8 GB/s over 8 lanes. Assuming no other I/O-intensive workload, this CPU could easily saturate the NIC.
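To put rough numbers on that reasoning (the response size, header overhead, and the per-core figure in the comments are assumptions, not measurements):

```python
# How many responses per second it takes to fill the pipe with a 4 KiB file.
FILE_SIZE = 4 * 1024      # bytes
HTTP_OVERHEAD = 300       # assumed bytes of headers per response

def saturating_rate(link_gbit_per_s):
    bytes_per_sec = link_gbit_per_s * 1e9 / 8
    return bytes_per_sec / (FILE_SIZE + HTTP_OVERHEAD)

for link in (1, 10):
    print(f"{link:>2} GbE is full at ~{saturating_rate(link):,.0f} responses/s")
# ~28,000/s for 1 GbE and ~284,000/s for 10 GbE. A single modern core serving
# out of the page cache comfortably exceeds the former (assumed order of
# magnitude), and the PCIe 3.0 x8 slot (~7,877 MB/s) is far above the
# ~1,250 MB/s a 10 GbE link can carry, so the NIC fills up first.
```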
So in summary: Network/NIC saturation before CPU saturation before anything else.

What would be a good hardware configuration for a Redis dedicated server?

I am planning to configure Redis in Master/Slave configuration.
I have three machines (8 GB RAM, 8 cores) and am planning to use one master and two slaves.
What would be the recommended hardware configuration for these machines?
Redis is not CPU intensive and is single-threaded, so you should get at least 2 cores per server (one for Redis, one for backups, maybe one more for basic housekeeping on the server); more than that is not really relevant.
Get as much RAM as you can, as it defines the size of your store. Also, making a dump consumes RAM, so your true usable space is less than you might think. Monitor your RAM usage to prevent surprises.
As for the type of RAM: if it fails, Redis fails, and sometimes silently (consistency broken). If you need to be careful with your data, always use ECC RAM; it is expensive, but perhaps less expensive than corrupted data being served out of Redis and causing unknown effects. Redis has no checks against hardware errors from RAM, and even though such errors are quite rare (less likely than a broken hard drive), they do happen.
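A quick sketch of the RAM headroom point; the dataset size, copy-on-write fraction, and overhead figures are assumptions, not Redis-documented constants:

```python
# RDB dumps (BGSAVE) fork the process; pages touched during the dump are
# duplicated copy-on-write, so plan for extra memory beyond the dataset itself.
dataset_gb = 5.0          # expected keyspace size (assumed)
cow_during_dump = 0.30    # fraction of pages rewritten while the dump runs (assumed)
overhead_gb = 1.0         # OS, client output buffers, replication backlog (assumed)

needed_gb = dataset_gb * (1 + cow_during_dump) + overhead_gb
print(f"~{needed_gb:.1f} GB of an 8 GB box is spoken for; watch used_memory in INFO")
```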

Hadoop cluster. 2 Fast, 4 Medium, 8 slower machines?

We're going to purchase some new hardware to use just for a Hadoop cluster, and we're stuck on what we should buy. Say we have a budget of $5k: should we buy two super nice machines at $2500 each, four at around $1200 each, or eight at around $600 each? Will Hadoop work better with more, slower machines or with fewer, much faster machines? Or is it, as with most things, "it depends"? :-)
You're generally better off with Hadoop getting a few extra machines that are less beefy. You almost never see datanodes with more than 16GB ram and dual quad-core CPUs, and often they are smaller than that.
You always have to run one as the namenode (master), and generally you don't also run a datanode (worker/slave) on the same box, although you could since your cluster is small. Assuming you don't, though, getting 2 machines will leave you only 1 worker node, which somewhat defeats the purpose. (Not entirely, because you can still run 4-8 jobs in parallel on the slave, but still.)
At the same time, you don't want to have a cluster of 1000 486s. If your budget is $5k, I would strike a balance and do 4 $1200 machines. Those will provide a decent baseline in terms of individual performance, you'll have 3 datanodes to distribute work to, and you'll have room to grow your cluster if you need.
Things to keep in mind: you'll want to run multiple map or reduce tasks per datanode, and that means multiple JVMs running simultaneously. I would try to get at least 4GB, and preferably 8GB ram. CPU is less important as most MR jobs are IO bound. You could likely get a machine like this for your $1200 price target, so that's my vote.
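Here's how the three options compare on paper; the prices come from the question, while the core counts and RAM per price point are assumptions for illustration:

```python
# Compare cluster shapes under a $5k budget: one box is reserved for the
# namenode, and each worker's cores bound how many map/reduce tasks it runs.
options = [
    {"price": 2500, "cores": 8, "ram_gb": 32},   # assumed specs per price point
    {"price": 1200, "cores": 4, "ram_gb": 8},
    {"price": 600,  "cores": 2, "ram_gb": 4},
]
budget = 5000

for o in options:
    boxes = budget // o["price"]
    workers = max(boxes - 1, 0)        # minus the namenode
    slots = workers * o["cores"]       # rough ceiling on parallel tasks
    print(f"${o['price']}/box: {boxes} boxes -> {workers} datanodes, ~{slots} task slots")
# The $1200 tier lands on 4 boxes: 3 datanodes, a decent per-node baseline,
# and headroom to grow, which is the balance argued for above.
```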
In a nutshell, you want to max out the number of processor cores and disks. You can sacrifice reliability and quality, but don't get the cheapest hardware out there, as you will have too many reliability problems.
We went with Dell servers with 2 quad-core CPUs, so 8 cores per box, and 16 GB of memory per box, which is 2 GB per core, a bit low since you need memory both for your tasks and for disk buffering. We used 5 x 500 GB hard drives, and I wish we'd gone for terabyte or larger drives instead.
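As a quick check of those per-box ratios (the rule-of-thumb targets in the comments are assumptions, not Hadoop requirements):

```python
# Per-node ratios for the box described above: 8 cores, 16 GB RAM, 5 x 500 GB drives.
cores, ram_gb, drives, drive_gb = 8, 16, 5, 500

print(f"RAM per core:    {ram_gb / cores:.1f} GB  (2-4 GB per task slot is more comfortable)")
print(f"Drives per core: {drives / cores:.2f}     (closer to 1.0 keeps a spindle busy per task)")
print(f"Raw capacity:    {drives * drive_gb / 1000:.1f} TB per node before HDFS replication")
```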
For drives, my opinion is to buy more cheap, slow, unreliable, high-capacity drives as opposed to more expensive, faster, smaller, reliable drives. If you're having problems with disk throughput, more memory will help with buffering.
This is probably a beefier configuration than you're looking at, but maxing out cores and drives rather than buying more boxes is generally a good choice: lower power costs, easier to administer, and faster for some operations.
More drives means more simultaneous disk throughput per core, so having as many drives as cores is a good thing. Benchmarking seems to indicate that RAID configurations are slower than a JBOD configuration (just mounting the drives and having Hadoop spread load across them), and JBOD is also more reliable.
LAST! Be sure to get ECC memory. Hadoop pushes terabytes of data through memory, and some users have found that non-ECC memory configurations can occasionally introduce single bit errors in terabyte-sized datasets. Debugging these errors is a nightmare.
I recommend having a look at this presentation: http://www.cloudera.com/hadoop-training-thinking-at-scale
It describes the various pros and cons.
I think the answer also depends on your expectations of cluster growth and the networking technology you are using. If you are OK with 1 Gb Ethernet, then the type of machine is less significant. At the same time, if you want 10 Gb Ethernet, you should opt for a smaller number of better machines to reduce the cost of networking.
Another reference: http://hadoopilluminated.com/hadoop_book/Hardware_Software.html (disclaimer: I am a co-author of this free Hadoop book).

Hardware requirements for a Virtual Server

We have decided to go with a virtualization solution for a few of our development servers. I have an idea of what the hardware specs would be like if we bought separate physical servers, but I have no idea how to consolidate that information into the specification for a generalized virtual server.
I know intuitively that the specs are not additive - I shouldn't just add up all the RAM requirements from each machine to get the RAM required for the virtual server. I can't really treat them as parallel systems either because no matter how good the virtualization software is, it can't abstract away two servers trying to peg the CPU at the same time.
So my question is - is there a standard method to estimating the hardware requirements for a virtualized system given hardware requirement estimations for the underlying virtual machines? Is there a +C constant for VMWare/MS Virtual Server overhead (and if so, what is C?)?
P.S. I promise to move this over to serverfault once it goes into beta (Promise kept)
Yes: add 25% additional resources to manage the VMs. So if I need 4 servers, each equivalent to a single-core 2 GHz machine with 2 GB of RAM, I will need 10 GHz of processing power plus 10 GB of RAM. This will allow all systems to redline and still be OK.
In the real world this will never happen, though; your servers will not all be running at full load all the time. You can get a feel for usage by profiling your current servers to determine their exact requirements, and then adding an additional 25% in resources.
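The same arithmetic as a sketch, with the per-VM figures from the example above and the 25% overhead factor as the rule of thumb being illustrated:

```python
# Size a host so that every guest can redline at once, plus 25% for the
# hypervisor and management overhead (the rule of thumb from above).
guests = [{"ghz": 2.0, "ram_gb": 2.0}] * 4   # four single-core 2 GHz / 2 GB VMs
overhead = 0.25

host_ghz = sum(g["ghz"] for g in guests) * (1 + overhead)
host_ram = sum(g["ram_gb"] for g in guests) * (1 + overhead)
print(f"Host: ~{host_ghz:.0f} GHz of CPU, ~{host_ram:.0f} GB RAM")   # 10 GHz, 10 GB
```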
Check out this software for profiling utilization http://confluence.atlassian.com/display/JIRA/Profiling+Memory+and+CPU+usage+with+YourKit
The requirements are in fact additive. You should add up the memory requirements for each VM, and the disk requirements, and have at least one processor core per VM. Then add on whatever you need for the host system.
VMs can share a CPU, to some extent, if you have really low performance requirements, but they cannot share disk space or memory.
The answers above are far too high; the second one (1 core per VM) is closer. You can either 1) plan ahead and probably over-purchase, or 2) add capacity just-in-time. Do you have some reason you must know well ahead of time (yearly budget? your chosen host platform doesn't cluster hosts, so you can't add later?)?
Unless you have an incredibly simple usage profile, it will be hard to predict in advance and you'll over-purchase. The answer above (+25%) would be several times more than you need with modern server virtualization software (VMware, Xen, etc.) that manages resources smartly; it's accurate only for desktop products like VPC. I chose to rough it out on a napkin and profile my first environment (set of machines) on the host. I'm happy.
Examples of things that will confound your estimation:
Disk space: Some systems (Lab Manager) use only the difference in space from the base template. 10 deployed machines with 10 GB drives use about 10 GB (template) + 200 MB.
Disk space: You'll then find you don't like the deltas in specific scenarios.
CPU / Memory: This is a dev shop, so you'll have erratic load. Smart hosts don't reserve memory and CPU.
CPU / Memory: But then you'll want to do perf testing, and will want to reserve CPU cycles (not all hosts can do that).
We all virtualize for different reasons. Many of the guests in our environment don't have much work; we want them there to see how something behaves with a cluster of 3 servers of type X. Or we have a bundle of weird client desktops waiting around, being used one at a time by a tester. They rarely consume many host resources.
So, if you are using something that doesn't do delta disks, disk space might be somewhat calculable. With Lab Manager (delta disks), disk space is really hard to predict.
Memory and processor usage: you'll have to profile or over-purchase heavily. I have many more guest CPUs than host CPUs and don't have perf problems, but that's because of the choppy usage in our QA environments.
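To illustrate why delta disks make the storage estimate so different from a naive sum, here is a quick calculation based on the Lab Manager example above; the delta size is whatever the guests happen to write, so the 200 MB is just the observed figure from that example:

```python
# Linked clones (delta disks) share the base template; only changes consume
# new space. Full clones would each carry the whole 10 GB image.
template_gb = 10
guests = 10
observed_deltas_gb = 0.2                              # total growth across the guests (example above)

full_clones_gb = guests * template_gb                 # 100 GB if every VM had its own copy
linked_clones_gb = template_gb + observed_deltas_gb   # ~10.2 GB with deltas

print(f"Full clones: {full_clones_gb} GB, linked clones: {linked_clones_gb:.1f} GB")
print("Deltas grow with guest churn, so profile rather than predict.")
```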