oVirt Multipathing MPIO Fibre Channel how to?

I have a question regarding oVirt and multipathing.
I have a cluster with 4 hosts and a storage system (Dell EMC) connected via Fibre Channel. At the moment there is a SAN switch between the hosts and the storage system, but I want to attach the hosts to the storage system directly, via two Fibre Channel paths per host.
That means I need multipathing. The hosts run CentOS 7 Minimal, and multipath is installed and active. Do I need to change the multipath.conf file, or does CentOS recognize the two paths automatically? Is it active/passive, or active/active with load balancing? The oVirt documentation explains very little about this, and mostly covers iSCSI.
I am new to this topic so bear with me please. :)

Why not set up a second SAN switch and configure a second fabric, instead of scrapping the existing one? A SAN with redundant fabrics (a so-called dual-fabric configuration) is preferable to direct attachment for reasons of scalability, flexibility, manageability, etc. Multipathing must be configured on the hosts either way.
What is the model of your Dell EMC storage? Most modern storage systems that can run in FC SAN environments are active/active, or at least support Asymmetric Logical Unit Access (ALUA). So yes, again, multipathing is on the list of best practices.
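As for multipath.conf itself: device-mapper-multipath will usually detect both paths on its own, but an explicit device stanza matching your array is safer. Below is a rough, unverified sketch; treat the vendor/product strings and timeout values as assumptions to check against Dell EMC's host connectivity guide for your model. Note also that oVirt's VDSM manages /etc/multipath.conf and may overwrite local edits unless the file carries a "# VDSM PRIVATE" tag.

    # /etc/multipath.conf -- illustrative sketch only, values unverified
    # VDSM PRIVATE
    defaults {
        user_friendly_names no
        find_multipaths     yes
    }
    devices {
        device {
            vendor               "DellEMC"   # assumption -- match your array
            product              "Unity"     # hypothetical model string
            path_grouping_policy group_by_prio
            prio                 alua
            path_checker         tur
            failback             immediate
            no_path_retry        12
        }
    }

After systemctl reload multipathd, multipath -ll should list both paths; whether you get active/active load balancing or an active (optimized) group plus a standby group depends on how the array reports ALUA states.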
And obviously, this isn't a complete answer, because I know nothing about the oVirt virtualization platform, but I have too few reputation points to post a comment.

Related

NFS setup for file transfer to SAP PI middleware?

I am trying to set up a new middleware architecture using SAP PI/PO. The problem is to determine the right mechanism for pulling files from other servers (Linux/Windows, etc.).
Broadly, two different approaches are under review: a managed file transfer (MFT) tool like Dazel, versus NFS mounts. In the NFS approach, all the boundary application machines act as servers and the middleware machine is the client. In the MFT approach, an agent installed on the boundary servers pushes files to the middleware. We are trying to determine the advantages and disadvantages of each approach.
NFS advantages:
Ease of development; no need for an additional managed file transfer tool
NFS disadvantages:
Does this approach create tight coupling between the middleware and the boundary applications?
How easy will it be to maintain 50+ NFS mount points?
How does NFS behave if a boundary machine goes down or hangs?
We want to build resilient middleware that is not impacted by an issue at one boundary application.
My 2 cents on NFS, based on my non-administrator experience (I'm a developer / PI system responsible):
We had NFS mounts on AIX which were based on Samba.
Basis told us that Samba could expose additional security risks.
We had problems getting the users on Windows and AIX straight, resulting in non-working mounts (probably our own inability to manage the users correctly, nothing inherent to the system).
I (from an integration point of view) haven't had problems with tight coupling. Could be that I was just a lucky sod, but normally PI would be polling the respective mounts; if a mount is erroneous at the time the polling happens, that's just one missed poll, which will be retried at the next poll interval.
One feature an MFT will undoubtedly give you that NFS can't is an edge file platform where third parties can drop files (SFTP, FTPS).
Bottom line would be:
You could manage with NFS when external-facing file services are not needed.
You need some organisational set of rules to track which users get which shares, etc.
You might want to look into the security aspects of enabling such mounts (if things like Samba are involved).
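To make the mount behaviour concrete, here is a minimal sketch; the host names and paths are invented, not from any real setup. The soft/timeo options are what matter for the "boundary machine hangs" question: a soft mount returns an I/O error to the poller once the retries are exhausted, where a hard mount would block indefinitely.

    # On a boundary server: export the transfer directory to the PI host
    # (/etc/exports, hypothetical names)
    /srv/pi-outbound  pi-host.example.com(ro,sync,root_squash)

    # On the PI middleware host: mount it
    mount -t nfs -o soft,timeo=30,retrans=3 \
        boundary01.example.com:/srv/pi-outbound /mnt/boundary01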

How to set up an HPC cluster on one server

I'm working on an application that needs to be tested in an HPC cluster.
I'm thinking about using xCAT as a resource manager.
I don't have many hardware resources; I have one HP desktop and a MacBook laptop.
The question: is it possible to set up a virtual cluster (using VirtualBox or KVM) on one hardware resource?
Thanks.
The short answer here is yes, depending on how much memory and disk you have available on your one machine. I've done this numerous times on a MacBook Pro with 8 GB of RAM.
The long answer is that there is absolutely nothing magical about an HPC cluster. All you need to test basic parallel applications in a simulated cluster environment are two or more VMs which meet these criteria:
Same OS, as identical as possible.
Passwordless authentication (SSH key-based auth).
Same software stack in the same location on all nodes (see #4, or use rsync).
At least one shared filesystem, e.g. an NFS-mounted $HOME.
Shared network with name resolution configured (correct /etc/hosts on all nodes).
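As a concrete illustration of items 2 and 5, assuming three VMs named node1 through node3 on a VirtualBox host-only network (all names and addresses below are made up):

    # On node1: generate a key and push it to the other nodes
    ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
    ssh-copy-id user@node2
    ssh-copy-id user@node3

    # On every node: identical name resolution
    cat >> /etc/hosts <<'EOF'
    192.168.56.101 node1
    192.168.56.102 node2
    192.168.56.103 node3
    EOF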
None of this requires job schedulers, provisioning tools, or any complex networking. You can find many NFS setup how-tos to help get one node set up to share $HOME with the others; this might be the most complicated part. VirtualBox does a good job of setting up local networking.
On top of this you can layer a job scheduler like SLURM (highly recommended), provisioning tools like Warewulf or xCAT, parallel filesystems across the VMs (BeeGFS is easy to set up and a great introduction), etc. I have had a full-featured stateless cluster simulated on my MacBook Pro a number of times using tools from this list and VirtualBox VMs. It's a great way to learn about setting up an HPC cluster.
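If you do add SLURM, a minimal slurm.conf fragment might look like the sketch below; the node names and sizes are assumptions, and a real file needs more settings (see the SLURM documentation):

    ClusterName=testcluster
    SlurmctldHost=node1        # ControlMachine on older SLURM releases
    NodeName=node[2-3] CPUs=1 State=UNKNOWN
    PartitionName=debug Nodes=node[2-3] Default=YES MaxTime=INFINITE State=UP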

To virtualize or not to virtualize a bare metal server for a kubernetes deployment

I'd like to deploy Kubernetes on a large physical server (24 cores), and I'm uncertain about a number of things.
What are the pros and cons of creating virtual machines for the k8s cluster versus running on bare metal?
I have the following considerations:
Creating VMs will allow for workload isolation. New VMs for experiments can be created and assigned to devs.
On the other hand, with k8s running on bare metal, a new namespace can be created for each developer for experimentation, and they can run their code in it. After all, their code should be running in Docker containers.
Security:
Having VMs would limit the amount of access given to future maintainers, limiting the damage that could be done. On the other hand, the primary task for any future maintainers would be adding/deleting nodes, and they would require bare-metal access to do that.
Authentication:
At the moment, devs would only touch the server when their code runs through the CI pipeline and their deployments are rolled out. But what about viewing logs? Could we set up tiered kubectl authentication to allow devs to access only the namespaces that have been assigned to them? (I believe this should be possible with the k8s namespace authorization plugin.)
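For what it's worth, this kind of namespace-scoped access is what RBAC roles express; here is a hypothetical sketch with made-up names (RBAC is one way to do this and may differ from the namespace authorization plugin mentioned above):

    # Scope a dev to read-only access (incl. logs) in their own namespace
    kubectl create namespace dev-alice
    kubectl create role log-reader \
        --verb=get --verb=list \
        --resource=pods --resource=pods/log \
        -n dev-alice
    kubectl create rolebinding alice-log-reader \
        --role=log-reader --user=alice -n dev-alice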
A number of VMs already exist on the server. Would this be an issue?
24 cores and doubts.... That is a lot of cores for a single server.
For Kubernetes, however, this is not relevant:
Kubernetes can use servers of different sizes and utilize them to the maximum. However, if you combine the master server processes and the node/worker processes on a single server, you might create unwanted resource issues. You can manage those with namespaces, as you already mention.
What we do is use continuous integration with namespaces in a single dev/qa Kubernetes environment: each change gets its own namespace (so we run many, many namespaces), and we run full environment deployments in those namespaces. A bunch of shell scripts are used to manage this. This works with a large server like yours as well as with smaller (or virtual) boxes. The benefit of virtualization for you could mainly be in splitting the large box into smaller ones, so that you can also use it for purposes other than just Kubernetes (Kubernetes only runs Linux container workloads: no MS Windows, no desktops, no kernel modules for VPN purposes, etc.).
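The namespace-per-change flow boils down to something like this sketch ($BUILD_ID and the manifest path are placeholders):

    kubectl create namespace ci-${BUILD_ID}
    kubectl apply -n ci-${BUILD_ID} -f deploy/
    # ... run the test suite against services in ci-${BUILD_ID} ...
    kubectl delete namespace ci-${BUILD_ID}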
I would separate dev and prod in the form of different VMs. I once had a webapp inside Docker which used too many threads, so the Docker daemon on the host crashed. Luckily it was limited to one host. You can protect against this by setting limits, but it's a risk: one mistake in dev could bring down prod as well.
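"Setting limits" here means per-container resource requests and limits; a minimal sketch (names and values are placeholders):

    # Cap a runaway workload before it can starve the node
    kubectl set resources deployment webapp \
        --requests=cpu=500m,memory=256Mi \
        --limits=cpu=1,memory=512Mi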
I think the answer is "it depends!", which is not really an answer. Personally, I would split up the machine using VMs and deploy that way. You get better flexibility as to how much of the server's resources you carve out, and you can easily create new environments and destroy them just as easily.
Even if these VMs are really big, I think it's still easier to manage, also given that you have existing VMs on the machine.
That said, there's no technical reason you can't run a single-node server, but you may run into problems with downtime during upgrades (if that's an issue); likewise, if that server needs to be patched or rebooted, your entire cluster is down.
I would look at your environment's needs for HA and uptime, as well as how you are going to deploy VMs (if you go that route), and decide what works best for you.

REDIS server requirements for BlueDomino hosting

I have a Python / Redis service running on my desk that I want to move to my Blue-Domino-hosted site. I've got Python available on the server, but not Redis. They don't give me root access to my Debian VM, so I can't fetch, extract, and install it myself from a Unix prompt.
Their tech support might do the install for me, but they need me to point them to server requirements, which I don't see on the REDIS download page.
I could probably FTP binaries to the site if they were available, but that's dicey.
Has anyone dealt with this?
Installing Redis from source is actually quite easy. It doesn't have any dependencies, so just download the tarball, unpack it, and follow the install instructions. I'm always afraid of doing that sort of stuff, but with Redis it really was a breeze. If you don't dare to do it, their tech support should be able to.
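For reference, the whole build is only a few commands; the version number below is just an example, and no root is needed to compile:

    wget http://download.redis.io/releases/redis-2.8.19.tar.gz
    tar xzf redis-2.8.19.tar.gz
    cd redis-2.8.19
    make
    # the binaries end up in src/
    src/redis-server --version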
If it is an Intel/AMD server, you can compile Redis somewhere else (a 32-bit version, for example) and upload it as a binary, then start it from Python. I did this myself a couple of weeks ago.
For the port, you will need to use something above 1024; I don't recommend using the default port. Remember to change the log level too. Daemonizing works fine as non-root as well.
Some servers block all external ports, so you will not be able to connect to Redis from outside, but that is a problem only if you connect from a different machine. From the same machine it should be OK, since the connection is internal.
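Put together, a minimal redis.conf for running unprivileged might look like this (the port value is arbitrary):

    port 6380          # non-default and above 1024, so no root needed
    loglevel notice
    daemonize yes
    bind 127.0.0.1     # local-only; fine when Python runs on the same box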
However, I am unsure how the hosting administrator will react when he sees the process running :) I personally would kill it immediately.
There is another option as well: check out a service like Redis4you.com. But their free account is small; you will probably need to spend some money for more RAM.
Is your hosting provider looking for a minimum set of system requirements for running Redis? This is indeed not listed on the Redis website, probably because there aren't many exotic requirements. It also depends a lot on your use case. Basically, what you need to run Redis is:
Operating system: Unix-like; Linux is recommended (one reason to favor Linux I've heard of is the performance of its TCP/IP stack).
Tools: GCC, make, (git).
Memory: lots (no, seriously: this depends on your use case, but because Redis keeps everything in memory, you need at least more RAM than the size of your dataset).
Disk: disk access for making snapshots.
The problem seems to be dealing with something non-traditional in my BlueDomino hosting. Since this project is a new venture, I think the best course for me is to rent a small Linux VM from Rackspace and forget about the BD hosting.

Productizing around EC2

I would like to be able to give someone a "bundle" of software that they can host anywhere. Is there a way to do this so that the person is charged by Amazon for the time they used, but does not have to deal with setting up an EC2 account and installing an image?
Also, it doesn't have to be EC2. What I am looking for is a way for people to host their own cloud service.
I'll split your question into two parts:
Not requiring your customer to set up an account on EC2 (or whatever)
Not having to install an image
For the first: I don't think you'll find a solution where the person who is to be charged by the hosting provider can be charged without setting up an account!
For the second, I think you are mainly concerned about the annoyance of your customer having to do a full image install and then configure your software package into that image... VM/VPS setup can be a chore.
If this is the case, you can simplify things a lot by pre-preparing a full VM that has an OS installed as well as your fully configured software and support packages. Then simply give your "someone"/customer the entire VM as one .iso file, and they can host it at http://www.elastichosts.com quite easily. ElasticHosts lets you set up completely arbitrary VMs, unlike most (all?) other VPS providers, which currently limit you to a selection of OS images into which you then need to install your software.
I don't think it can get any easier, unless I completely misunderstand your question.
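As an aside: if the customer is willing to create an AWS account after all, you can at least spare them the image install by sharing a prebuilt AMI with their account. A hypothetical sketch with placeholder IDs:

    # Grant launch permission on your prebuilt AMI to the customer's account
    aws ec2 modify-image-attribute \
        --image-id ami-0123456789abcdef0 \
        --launch-permission "Add=[{UserId=111122223333}]"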