CloudStack NFS Primary Storage showing 49.22 GB

How do I increase the amount of Primary Storage available in CloudStack?
I followed the Quickstart guide for CloudStack 4.2. I added the /primary and /secondary NFS exports and have the CloudStack management and agent services running. All the primary storage pools show as 49.22 GB no matter what. I'm running this on CentOS 6.4 on dual-core Xeon machines, with one master and several hosts. All hosts and the master have 1 TB HDDs, which I would like to utilize fully.
How can I increase the size of the primary storage?
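One thing worth checking first (a diagnostic sketch; /export/primary is the Quickstart default path and may differ on your server): CloudStack reports the capacity of the filesystem that backs the NFS export, so a constant 49.22 GB usually means the export lives on the ~50 GB root logical volume that a default CentOS install creates, not on the 1 TB disk. On the NFS server:

    df -h /export/primary      # which filesystem backs the export, and how big is it?
    cat /etc/exports           # confirm which directories are actually exported
    lsblk                      # is the 1 TB disk partitioned and mounted anywhere?

If the 1 TB disk turns out to be unused, the usual fix is to create a filesystem on it, mount it at the export path (or move the export there), re-export it, and let CloudStack pick up the new capacity; the exact steps depend on your disk layout.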

Related

Apache Tomcat Crashes In Google Compute Engine f1-micro

I am running Apache Guacamole on a Google Cloud Compute Engine f1-micro with CentOS 7 because it is free.
Guacamole runs fine for some time (an hour or so), then unexpectedly crashes. I get the ERR_CONNECTION_REFUSED error in Chrome, and when running htop I can see that all of the Tomcat processes have stopped. To get it running again I just have to restart Tomcat.
In the Compute Engine console I have a message saying: Instance 'guac' is overutilized. Consider switching to the machine type: g1-small (1 vCPU, 1.7 GB memory).
I have tried limiting the memory allocation to Tomcat, but that didn't seem to work.
Any suggestions?
I think the ERR_CONNECTION_REFUSED is likely due to the VM instance running short on resources: to keep the OS up, the process manager (on Linux, the kernel's OOM killer) shuts down some processes. SSH can be one of those processes, and once you reboot the VM, everything resumes operation in full.
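A quick way to confirm that theory (a diagnostic sketch; the log file path assumes CentOS 7, which this instance runs) is to look for traces of the kernel's OOM killer after a crash:

    dmesg | grep -i "out of memory"
    grep -i "killed process" /var/log/messages
    journalctl -k | grep -i oom

If Tomcat's java process shows up there, the instance simply does not have enough memory for the workload.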
As for the over-utilization notification recommending g1-small (1 vCPU, 1.7 GB memory), please note that f1-micro is a shared-core micro machine type with 0.2 vCPU and 0.60 GB of memory, backed by a shared physical core, and is only suitable for running smaller, non-resource-intensive applications.
Depending on your Tomcat configuration, also note that:
Connecting to a database is an intensive process.
When Tomcat is deployed from the Google Cloud Marketplace, the default VM setting is 1 vCPU + 3.75 GB memory (n1-standard-1), so upgrading at least to the recommended machine type g1-small (1 vCPU, 1.7 GB memory) should be suitable in your case.
Why was the g1-small machine type recommended? Compute Engine uses the same CPU utilization numbers reported on the Compute Engine dashboard to determine what recommendations to make. These numbers are based on the average utilization of your instances over 60-second intervals, so they do not capture short CPU usage spikes.
So, applications with short usage spikes might need to run on a larger machine type than the one recommended by Google, to accommodate these spikes.
In summary, my suggestion would be to upgrade as recommended. Also note that the rightsizing recommendations warn when a VM is underutilized or overutilized; in this case it is recommending that you increase your VM size due to overutilization. Keep in mind that this is only a recommendation based on the available data.
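If you do stay on a small machine type for a while, one option (a sketch; the path assumes a standard Tomcat layout and the heap values are only illustrative) is to cap Tomcat's JVM heap explicitly so that Tomcat, guacd, and the OS together fit in the instance's RAM:

    # $CATALINA_HOME/bin/setenv.sh (create the file if it does not exist)
    export CATALINA_OPTS="-Xms128m -Xmx384m"

Even with a tight heap, 0.60 GB total is very little for Guacamole plus Tomcat, so the resize recommendation above remains the more reliable fix.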

How many virtual machines can be created on a machine with 16 GB RAM?

I want to create a number of virtual machines with the Ubuntu operating system. I want to install Hadoop, Spark, YARN, Cassandra, MongoDB and other big data tools on each one of them. So, how many virtual machines can be created on a single machine with 16 GB RAM?
(Additional hardware details were provided in an attached screenshot, not reproduced here.)
Considering your software requirements, a maximum of 2 virtual machines can be created, and that is only if you are not using Cloudera Manager services.
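As a rough memory budget behind that estimate (the per-VM figure is an assumption; the real requirement depends on data volume and which services run concurrently):

    16 GB total
    - ~2 GB reserved for the host OS and hypervisor overhead
    = ~14 GB left for guests
    / ~6-7 GB per VM for a Hadoop/Spark/YARN/Cassandra/MongoDB stack
    = about 2 VMs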

Galera cluster into Google Cloud Platform

We have a galera cluster with 3 nodes, on 3 different physical machines but all located in the same datacenter.
From what I understood, the reason they deployed this in the past was to increase availability and reliability.
Each node is installed on a VM with 12 cores and 4 GB RAM.
We are asked to migrate this to Google Cloud Platform in order to get rid of the ops tasks.
I could create 3 Compute Engine instances and deploy the Galera cluster there, but I have difficulty seeing the added value compared to Cloud SQL instances with replication and backups. I am not very familiar with scaling heavy-load systems.
The database hosted on these nodes is fairly critical and should offer maximum availability and reliability.
What strategy should I adopt in order to migrate this architecture to GCP?

I want to learn about virtualization

As a complete beginner, I only know how to create VMs and install an OS on them using Oracle VirtualBox. All the VMs created depend on the hardware resources (CPU, RAM, etc.) of a single machine; if that machine goes down, the VMs go down. I need to know how VMs can be created by taking resources from different physical machines (manually or dynamically) so that no VM fails.
For example: there are 4 physical machines with 8 cores and 16 GB RAM each. Now, I want to create three VMs, each with 8 cores and 16 GB RAM, drawing resources from the different physical machines. If one physical machine goes down, no VM should go down.
You can look up clustering solutions (e.g. VMware clusters, or Hyper-V failover clusters). In this model, if a physical host goes down, then the virtualization platform will power up the VMs on other hosts.
If you're looking for zero downtime, then VMware has something called Fault Tolerance in which a shadow copy of a VM is running on a different host and is continuously synchronized with the primary copy. If the primary host goes down, the shadow copy can take over with zero downtime (e.g. you don't have to boot from the shadow copy because it's already running). This feature, while cool, has a lot of real-world limitations in how it inter-operates with other features of VMware. For example, as of vSphere 6.0, you cannot do various kinds of migrations for such VMs, etc. I believe it also requires a more expensive license.
These solutions generally require some shared resources between the physical hosts (most notably storage). Otherwise they will not work (or at the very least, performance will greatly suffer).

Installation of OpenStack on virtual machines (multi-node architecture)

Can I install OpenStack on 3 different virtual machines with the configurations listed below:
Controller Node: 1 processor, 4 GB memory, and 10 GB storage
Network Node: 1 processor, 4 GB memory, and 20 GB storage
Compute Node: 1 processor, 4 GB memory, and 30 GB storage
I want to know whether a physical machine with a virtualization-enabled processor is essential for an OpenStack deployment, or whether one can proceed with virtual machines only. I am asking because almost all the documents I have read suggest using physical nodes.
I also want to know what difference it will make if I install on virtual machines (assuming that is possible), and, if it is not possible, why OpenStack cannot be installed on virtual machines.
Please bear in mind that I don't want to install DevStack.
I guess one can install the controller and the Neutron (network) node on VMs.
However, for the compute node you require a physical machine.
A simple configuration could be (as suggested in the OpenStack docs):
Controller Node: 1-2 CPUs, 8 GB RAM, 100 GB storage, 1 NIC
Neutron Node: 1-2 CPUs, 2 GB RAM, 50 GB storage, 3 NICs
Compute Node: 2-4 CPUs, 8 GB RAM, 100+ GB storage, 2 NICs
However, I guess (though I am unsure) that if the compute node has CPU virtualization enabled (i.e. nested virtualization exposed to it), then the compute node could also be a VM; a quick way to check this is sketched at the end of this post.
Could someone specify the implications of running these nodes on VMs compared to physical nodes?
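For the nested-virtualization point above, a quick check (a sketch; the second command assumes an Intel host using the kvm_intel module) is:

    egrep -c '(vmx|svm)' /proc/cpuinfo            # non-zero: CPU exposes VT-x or AMD-V
    cat /sys/module/kvm_intel/parameters/nested   # Y or 1: nested virtualization is enabled on a KVM host

If the compute node is a VM without these flags, nova-compute can still fall back to plain QEMU software emulation, but instance performance will be far worse than on a physical host; the controller and network nodes do not need the extensions at all.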