We're going to purchase some new hardware to use just for a Hadoop cluster and we're stuck on what we should purchase. Say we have a budget of $5k: should we buy two super nice machines at $2,500 each, four at around $1,200 each, or eight at around $600 each? Will Hadoop work better with more, slower machines or with fewer, much faster machines? Or, as with most things, is the answer "it depends"? :-)
With Hadoop you're generally better off getting a few extra machines that are less beefy. You almost never see datanodes with more than 16GB of RAM and dual quad-core CPUs, and often they are smaller than that.
You always have to run one as the namenode (master), and generally you don't also run a datanode (worker/slave) on the same box, although you could since your cluster is small. Assuming you don't, though, getting 2 machines will leave you only 1 worker node, which somewhat defeats the purpose. (Not entirely, because you can still run 4-8 jobs in parallel on the slave, but still.)
At the same time, you don't want to have a cluster of 1000 486s. If your budget is $5k, I would strike a balance and go with four $1,200 machines. Those will provide a decent baseline in terms of individual performance, you'll have 3 datanodes to distribute work to, and you'll have room to grow your cluster if you need to.
Things to keep in mind: you'll want to run multiple map or reduce tasks per datanode, and that means multiple JVMs running simultaneously. I would try to get at least 4GB, and preferably 8GB ram. CPU is less important as most MR jobs are IO bound. You could likely get a machine like this for your $1200 price target, so that's my vote.
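As a rough, back-of-the-envelope illustration of why that RAM figure matters, here is a small sketch; the per-task heap and OS/daemon overhead numbers are assumptions for illustration, not Hadoop defaults.

```python
# Rough estimate of how many concurrent map/reduce task JVMs fit on one datanode.
# All figures are illustrative assumptions, not Hadoop defaults.
node_ram_gb = 8.0     # per-node RAM (the "preferably 8GB" above)
os_daemons_gb = 2.0   # assumed OS + DataNode/TaskTracker daemon overhead
task_heap_gb = 1.0    # assumed heap per task JVM

task_slots = int((node_ram_gb - os_daemons_gb) / task_heap_gb)
print(f"~{task_slots} concurrent task JVMs fit in {node_ram_gb:.0f}GB")  # ~6
```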
In a nutshell, you want to max out the number of processor cores and disks. You can sacrifice reliability and quality, but don't get the cheapest hardware out there, as you will have too many reliability problems.
We went with dual-CPU, quad-core Dell servers, so 8 cores per box. 16GB of memory per box, which is 2GB per core -- a bit low, since you need memory both for your tasks and for disk buffering. 5x500GB hard drives, and I wish we'd gone with terabyte or larger drives instead.
For drives, my opinion is to buy more cheap, slow, unreliable, high-capacity drives as opposed to more expensive, faster, smaller, reliable drives. If you're having problems with disk throughput, more memory will help with buffering.
This is probably a beefier configuration than you're looking at, but maxing out cores and drives rather than buying more boxes is generally a good choice: lower power costs, easier administration, and faster for some operations.
More drives means more simultaneous disk throughput per core, so having as many drives as cores is a good thing. Benchmarking seems to indicate that RAID configurations are slower than a JBOD configuration (just mounting the drives and letting Hadoop spread load across them), and that JBOD is also more reliable.
Lastly, be sure to get ECC memory. Hadoop pushes terabytes of data through memory, and some users have found that non-ECC configurations can occasionally introduce single-bit errors in terabyte-sized datasets. Debugging those errors is a nightmare.
I recommend having a look at this presentation: http://www.cloudera.com/hadoop-training-thinking-at-scale
It describes the various pros and cons.
I think the answer also depends on Your expectations of the cluster grow and networking technology You are using. If you are ok with 1GB ethernet - then type of machines is less significant. In the same time - if you want 10GBit ethernet - you should opt to smaller number of better machines to reduce the cost of networking.
Another reference: http://hadoopilluminated.com/hadoop_book/Hardware_Software.html
(Disclaimer: I am a co-author of this free Hadoop book.)
Based on this blog post https://blogs.technet.microsoft.com/uspartner_ts2team/2015/08/26/azure-vm-drive-attachment-limits/ there is a limit on disk attachment that follows the model of number of CPUs x 2. Is there a technical reason why this limit is in place? If you use Kubernetes you may not be able to schedule a pod, because the scheduler is not aware of this limit.
This was proposed as a workaround https://github.com/khenidak/dysk but I'm wondering why this very low limit exists in the first place.
The number of data disks is directly tied to the size of the VM. For example, if you go here https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sizes you will see that as VM sizes increase in resources, they can handle more data disks.
This constraint is mainly built around performance. If you had a virtual machine with only 2 CPU cores and, say, 10 data disks, you would likely run into performance issues, as the CPU power and RAM needed to reach out to all those data disks at once could cause your VM to tap out.
The simple solution is to use larger VM sizes if you need more disks. Or, depending on how much space you need, Azure supports data disks of up to 4TB.
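If you'd rather check the per-size limit programmatically than read the sizes page, a minimal sketch with the Azure Python SDK might look like this (assuming the azure-identity and azure-mgmt-compute packages are installed; the subscription ID and region are placeholders):

```python
# List VM sizes in a region with their core count and maximum data disk count.
# The subscription ID and location below are placeholders to replace with your own.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for size in client.virtual_machine_sizes.list(location="eastus"):
    print(f"{size.name}: {size.number_of_cores} cores, "
          f"max {size.max_data_disk_count} data disks")
```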
I'm thinking of starting a cluster of servers that will run Z3 exclusively to solve SMT formulas.
Is there any way to cluster several servers so as to combine their computational power and solve SMT formulas in a distributed fashion?
What are the recommended hardware characteristics of a system that will be running Z3, to make it as fast as possible?
Thank you!!
SAT/SMT solvers are usually very heavy on memory due to low cache hit rates. Therefore you can't run many processes on one CPU, otherwise they soon start degrading each other's performance (i.e., running one process per core is not a good idea if you want to benchmark).
I can't give any specific recommendation, but I would choose CPUs that have fewer cores (say 4) and high memory bandwidth. These days CPUs have a fixed TDP, and the fewer the cores, the more powerful each one is -- and the less contention there is for memory.
Also, you want to stick with little-endian architectures. At the moment, Z3 doesn't play well with big-endian architectures (such as many ARMs, MIPS, SPARC, etc.). Moreover, from what I've seen, 64 bits usually helps.
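For a batch of independent formulas, one common low-tech approach is to farm out one Z3 process per formula on each server, capping concurrency below the core count per the contention point above. A minimal per-server sketch (the formulas directory, worker count, and 600-second timeout are assumptions for illustration; the z3 binary is assumed to be on the PATH):

```python
# Run Z3 over a batch of independent .smt2 files, deliberately using fewer
# workers than cores to limit memory-bandwidth contention.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

WORKERS = 2  # fewer than the number of physical cores, per the advice above

def solve(path: Path) -> str:
    # -T:600 gives each formula a 600-second hard timeout
    result = subprocess.run(["z3", "-T:600", str(path)],
                            capture_output=True, text=True)
    return f"{path.name}: {result.stdout.strip()}"

formulas = sorted(Path("formulas").glob("*.smt2"))
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    for line in pool.map(solve, formulas):
        print(line)
```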
I am planning to configure Redis in a master/slave configuration.
I have three machines (8GB RAM, 8 cores each) and plan to use one master and two slaves.
What would be the recommended hardware configuration for these machines?
Redis is not CPU intensive, so you should get at least 2 cores per server (one for Redis, one for backups, maybe one more for basic housekeeping on the server); more is not really relevant, since Redis is single-threaded.
Get as much RAM as you can, as it defines the size of your store. Also, making a dump consumes RAM, so your true usable size is less than you might think. Monitor your RAM usage to avoid surprises.
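A minimal monitoring sketch with the redis-py client (host, port, and the headroom threshold are arbitrary choices for illustration):

```python
# Check Redis memory usage against the machine's RAM so that the extra memory
# consumed while dumping doesn't surprise you. Values below are illustrative.
import redis

TOTAL_RAM_BYTES = 8 * 1024**3   # the 8GB machines mentioned in the question
HEADROOM_RATIO = 0.5            # leave room for the dump overhead noted above

r = redis.Redis(host="localhost", port=6379)
info = r.info("memory")

print(f"used_memory: {info['used_memory_human']} "
      f"(peak {info['used_memory_peak_human']})")
if info["used_memory"] > HEADROOM_RATIO * TOTAL_RAM_BYTES:
    print("warning: little headroom left; a dump could exhaust RAM")
```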
As for the type of RAM: if RAM fails, Redis fails, and sometimes silently (consistency broken). If you need to be careful with your data, always use ECC RAM; it is expensive, but perhaps less expensive than corrupted data in RAM being served by Redis with unknown effects. Redis has no built-in checks against hardware RAM errors, and even though such errors are quite rare (less likely than a failed hard drive), they do happen.
I work with SQL Server 2005 and wonder: if not CPU, disk, or network, what are users waiting for when SQL Server is working? The strange thing is that System Monitor shows the 4 processors at an average of 5% and the disk (capable of 50MB/s writes) working at about 5-8MB/s, yet the executions (inserts and selects) take up to 10 minutes. I'd be happy to install additional hardware, but I don't see which device is the bottleneck, or how to measure its capacity and current workload.
Any advice would be appreciated.
Thanks
Additional info: RAM is constantly at about 70% capacity, and I am running Windows XP.
Check your disk read and write 'wait' times. A heavily loaded database may issue a lot of read and write requests for very small pieces of data, which saturates the IO.
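One way to see those wait times in SQL Server 2005 is the sys.dm_io_virtual_file_stats dynamic management function, which accumulates IO stall milliseconds per database file. A minimal sketch via pyodbc (the connection string is a placeholder; you could equally run the query from Management Studio):

```python
# Show cumulative IO stall time per database file (SQL Server 2005 and later).
# The connection string is a placeholder; adjust it for your server and auth.
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes")

query = """
SELECT DB_NAME(database_id) AS db, file_id,
       num_of_reads, io_stall_read_ms,
       num_of_writes, io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
ORDER BY io_stall_read_ms + io_stall_write_ms DESC;
"""

for row in conn.cursor().execute(query):
    print(f"{row.db} file {row.file_id}: {row.io_stall_read_ms} ms read stall, "
          f"{row.io_stall_write_ms} ms write stall")
```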
As others mention, disks are rarely the bottleneck in terms of bandwidth, but rather in the number of IO operations they can perform per second - commonly called IOPS.
The IOPS capabilities of your disks will vary according to disk type, cache and the RAID setup you have.
Another thing you may run into is locking. If you have a lot of concurrent access to the same data, especially inside large transactions, you may see other transactions being blocked - causing no network, CPU, or disk usage while they are blocked, just wasted time.
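To check whether blocking is where the time is going, you can look at which sessions are waiting on others. A small sketch against SQL Server 2005's sys.dm_exec_requests (the connection string is again a placeholder):

```python
# List requests that are currently blocked and what they are waiting on.
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes")

query = """
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
"""

for row in conn.cursor().execute(query):
    print(f"session {row.session_id} blocked by {row.blocking_session_id} "
          f"({row.wait_type}, {row.wait_time} ms)")
```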
Probably the disks. If you are seeking all over the place, the throughput (MB/s) will be low even though the disks are running as fast as they can.
Generic advice: try increasing SQL Server's cache, tune your queries, and check that the appropriate indexes exist and are used (and that you don't have too many).
We have decided to go with a virtualization solution for a few of our development servers. I have an idea of what the hardware specs would be like if we bought separate physical servers, but I have no idea how to consolidate that information into the specification for a generalized virtual server.
I know intuitively that the specs are not additive - I shouldn't just add up all the RAM requirements from each machine to get the RAM required for the virtual server. I can't really treat them as parallel systems either because no matter how good the virtualization software is, it can't abstract away two servers trying to peg the CPU at the same time.
So my question is - is there a standard method to estimating the hardware requirements for a virtualized system given hardware requirement estimations for the underlying virtual machines? Is there a +C constant for VMWare/MS Virtual Server overhead (and if so, what is C?)?
P.S. I promise to move this over to serverfault once it goes into beta (Promise kept)
Yes: add 25% additional resources to manage the VMs. So if I need 4 servers, each equivalent to a single-core 2GHz machine with 2GB of RAM, I will need 10GHz of processing power plus 10GB of RAM. That will allow all systems to redline and still be OK.
In the real world this will never happen, though; your servers will not all be under full load all the time. You can get a feel for usage by profiling your current servers to determine their actual requirements and then adding an additional 25% in resources.
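A tiny worked version of that rule of thumb (the guest figures mirror the example above; the 25% factor is the overhead suggested there):

```python
# The sizing rule above as arithmetic: sum the guest requirements, add 25% overhead.
guests = [{"cpu_ghz": 2.0, "ram_gb": 2.0}] * 4  # four single-core 2GHz / 2GB guests
OVERHEAD = 0.25                                 # overhead factor from the answer above

cpu_ghz = sum(g["cpu_ghz"] for g in guests) * (1 + OVERHEAD)
ram_gb = sum(g["ram_gb"] for g in guests) * (1 + OVERHEAD)
print(f"host budget: {cpu_ghz:.0f} GHz CPU, {ram_gb:.0f} GB RAM")  # 10 GHz, 10 GB
```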
Check out this software for profiling utilization http://confluence.atlassian.com/display/JIRA/Profiling+Memory+and+CPU+usage+with+YourKit
The requirements are in fact additive. You should add up the memory requirements for each VM, and the disk requirements, and have at least one processor core per VM. Then add on whatever you need for the host system.
VMs can share a CPU, to some extent, if you have really low performance requirements, but they cannot share disk space or memory.
The answers above are far too high; the second one (1 core per VM) is closer. You can either 1) plan ahead and probably over-purchase, or 2) add capacity just in time. Do you have some reason that you must know well ahead of time (yearly budget? your chosen host platform doesn't cluster hosts, so you can't add later)?
Unless you have an incredibly simple usage profile, it will be hard to predict up front and you'll over-purchase. The answer above (+25%) would be several times more than you need with modern server virtualization software (VMware, Xen, etc.) that manages resources smartly; it's accurate only for desktop products like VPC. I chose to rough it out on a napkin and profile my first environment (set of machines) on the host. I'm happy.
Examples of things that will confound your estimation:
- Disk space: some systems (Lab Manager) use only the difference in space from the base template, so 10 deployed machines with 10GB drives use about 10GB (template) + 200MB.
- Disk space: you'll then find you don't like the deltas in specific scenarios.
- CPU / memory: this is a dev shop, so you'll have erratic load. Smart hosts don't reserve memory and CPU.
- CPU / memory: but then you'll want to do perf testing and want to reserve CPU cycles (not all hosts can do that).
We all virtualize for different reasons. Many of the guests in our environment don't have much work; we want them there to see how something behaves with a cluster of 3 servers of type X. Or we have a bundle of weird client desktops waiting around, being used one at a time by a tester. They rarely consume many host resources.
So, if you are using something that doesn't do delta disks, disk space might be somewhat calculable. If you're using Lab Manager (delta disks), disk space is really hard to predict.
Memory and processor usage: you'll have to profile or over-purchase heavily. I have many more guest CPUs than host CPUs and don't have perf problems -- but that's because of the choppy usage in our QA environments.