I am trying to set up a new middleware architecture using SAP PI/PO. The problem is to determine the right mechanism for pulling files from other servers (Linux, Windows, etc.).
Broadly, two different approaches are being reviewed: using a managed file transfer (MFT) tool like Dazel vs. using NFS mounts. In the NFS approach, all the boundary application machines act as servers and the middleware machine is the client. In the MFT approach, an agent is installed on the boundary servers and pushes files to the middleware. We are trying to determine the advantages and disadvantages of each approach.
NFS advantages:
Ease of development; no need for an additional managed file transfer tool
NFS disadvantages:
We are trying to understand whether this approach creates any tight coupling between the middleware and the boundary applications
How easy will it be to maintain 50+ NFS mount points?
How does NFS behave if a boundary machine goes down or hangs? (See the example mount options below.)
We want to build a resilient middleware that is not impacted by an issue at one boundary application.
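For reference, the kind of client-side mount the middleware host would have to manage looks roughly like this; host names, export paths, and option values are placeholders, and the soft/timeo options are the usual compromise for read-only polling, trading a hang on a dead server for an I/O error that the next poll can recover from:

    # /etc/fstab on the middleware host - one line per boundary application (illustrative)
    app01.example.com:/export/outbound  /mnt/app01  nfs  ro,soft,timeo=30,retrans=2,noexec  0  0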
My two cents on NFS, based on my non-administrator experience (I'm a developer / PI system responsible).
We had NFS mounts on AIX which were based on Samba.
Basis told us that Samba could expose additional security risks.
We had problems getting the users on Windows and AIX straight, resulting in non-working mounts (probably our own inability to manage the users correctly, nothing inherent to the system).
I (from an integration point of view) haven't had problems with tight coupling. Could be that I was just a lucky sod, but normally PI would be polling the respective mounts. If they are erroneous at the time the polling happens, that's just one missed poll, which will be retried at the next poll interval.
One feature an MFT will undoubtedly give you that NFS can't is an edge file platform where third parties can drop files (SFTP, FTPS).
Bottom line would be:
You could manage with NFS when external-facing file services are not needed
You need some organisational set of rules to know which users access which shares, etc.
You might want to look into the security aspects of enabling such mounts (if things like Samba are involved)
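If plain NFS (without Samba) is an option, most of the access control lives in the export on each boundary server. A minimal sketch with made-up host and path names, restricting the share to the PI host, read-only, with root squashed:

    # /etc/exports on a boundary application server
    /export/outbound  pi-host.example.com(ro,root_squash,sync)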
We want to start working with Liferay, but the server is too heavy and the developers' computers don't have enough RAM. We want to centralize the server instance.
In other words, we want to build a development server where all developers can connect and develop directly in their web browser, compile, view the result, and push the code to a Git repository.
I found some good cloud IDEs like Eclipse Che and a good Maven archetype for Liferay projects, so I can build the project with Maven. But now I want to know whether it is possible to configure Liferay so that every developer can work without troubling the others, and if so, how.
The developers can share the same database and use different ports. Maybe the server could generate temporary URLs like some online cloud editors do.
I found this post, Liferay With Multiple Server Instances, but I don't think it is the best way because it creates one server per project. I think that is too heavy.
If necessary, we have Kubernetes in our IS.
Liferay's tomcat bundle, by default, is configured to take a maximum of 2.5G for the process, but it can run with far less - the default only recently was bumped up, because many people never change the default and then wonder why production systems run out of memory. For 1 concurrent user (the sole developer) on a machine, I guess that the previous default of 1G heap space is enough. Are you saying that that's too much for your developers' machines?
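If RAM really is the constraint, the heap is easy to dial down in the bundle's Tomcat startup script. A minimal sketch, assuming the bundled Tomcat and that roughly 1G is enough for a single-developer instance (paths and values are illustrative):

    # tomcat-<version>/bin/setenv.sh in the Liferay bundle - shrink the heap for a one-user dev box
    CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m -Dfile.encoding=UTF-8"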
Having many developers on a shared server poses one problem: yes, you may deploy different code from different machines, but how about setting a breakpoint? Can you connect with multiple debuggers? If something fails, how do you know whose recent deployment caused the failure?
Sharing a server is an integration technique, not a development technique. If your developers don't have enough memory available for running their own Liferay server next to their IDE, it's a lot cheaper to upgrade their machines than to slow them down while everybody is accessing the same server and they can't properly debug. You pay for the memory once, but for your waiting developers by the hour.
Is it possible to share one server? Sure it is.
Is it possible to share one server without troubling each other? I doubt it.
When you say you think it's too heavy: what are you basing that assumption on? What does the actual developer machine look like, and what keeps you from investing in the extra memory?
It's trivial to share some infrastructure - i.e. have all of them connect to the same database server (and give everyone their own schema). But just the extra effort and setup might require you to pay the developers by the hour as much as you'd otherwise pay for a couple of memory chips.
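The shared-database part really is trivial. A minimal sketch, assuming MySQL and made-up developer/schema names; each developer then points their own instance at their schema through portal-ext.properties:

    -- one schema and account per developer (run on the shared MySQL server)
    CREATE DATABASE lportal_alice CHARACTER SET utf8;
    CREATE USER 'alice'@'%' IDENTIFIED BY 'changeme';
    GRANT ALL PRIVILEGES ON lportal_alice.* TO 'alice'@'%';

and in that developer's portal-ext.properties:

    jdbc.default.url=jdbc:mysql://db-host/lportal_alice?useUnicode=true&characterEncoding=UTF-8
    jdbc.default.username=alice
    jdbc.default.password=changeme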
And yet another option is: Run Liferay on a remote server, but keep 1 instance per developer. This way you don't need the local memory, but can have the memory in the cloud. Calculate if you pay more for remote cloud machines than for local memory - that decision is up to you.
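Since you mention having Kubernetes (or at least Docker hosts) available: one remote instance per developer can be as simple as running the liferay/portal image from Docker Hub once per person, each mapped to its own port. A rough sketch with made-up names; pick the image tag that matches the Liferay version you target:

    docker run -d --name liferay-alice -p 8081:8080 liferay/portal
    docker run -d --name liferay-bob   -p 8082:8080 liferay/portal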
I have a question regarding oVirt and multipathing.
I have a cluster with 4 hosts and a storage system (Dell EMC) connected via Fibre Channel. At the moment I have a SAN switch between the hosts and the storage system, but I want to attach the hosts and the storage system directly via two Fibre Channel paths per host.
Therefore, I need multipathing. The hosts run CentOS 7 Minimal, and multipath is installed and active. Do I need to change the multipath.conf file, or does CentOS recognize the two paths automatically? Is it active/passive or active/active with load balancing? The oVirt documentation explains only very little and is mostly about iSCSI.
I am new to this topic so bear with me please. :)
Why don't you want to set up another SAN switch and configure a second fabric instead of scrapping the existing one? Having a SAN with redundant fabrics (a so-called dual-fabric configuration) is preferable to direct attachment because of scalability, flexibility, manageability, etc. Multipathing must be configured on the hosts as well.
What is the model of your Dell EMC storage? Most modern storage systems that are able to run in FC SAN environments are active/active or at least support Asymmetric Logical Unit Access (ALUA). So yes, again, multipathing is on the list of best practices.
And obviously, this is not a complete answer because I know nothing about the oVirt virtualization platform, but I have too few reputation points to post a comment.
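On the CentOS 7 side of the question, device-mapper-multipath usually detects both paths by itself once it is enabled, and an ALUA-capable array ends up active/active with priority-based path groups. A hedged sketch of where I would start; the exact device section should come from Dell EMC's host connectivity guide for your array model, and note that oVirt's VDSM manages /etc/multipath.conf, so check the oVirt documentation on how to preserve local changes before editing it:

    # enable multipathd with a default /etc/multipath.conf, then verify both paths per LUN
    mpathconf --enable --with_multipathd y
    multipath -ll

    # /etc/multipath.conf - illustrative settings for an ALUA-capable array
    defaults {
        user_friendly_names yes
        find_multipaths yes
    }
    devices {
        device {
            vendor               "DellEMC"
            product              "<your model>"
            path_grouping_policy group_by_prio
            prio                 alua
            path_selector        "round-robin 0"
            failback             immediate
        }
    }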
I'd like to deploy kubernetes on a large physical server (24 cores) and I'm uncertain as to a number of things.
What are the pros and cons of creating virtual machines for the k8s cluster versus running it on bare metal?
I have the following considerations:
Creating VMs will allow for workload isolation. New VMs for experiments can be created and assigned to devs.
On the other hand, with k8s running on bare metal, a new namespace can be created for each developer for experimentation, and they can run their code in it. After all, their code should be running in Docker containers.
Security:
Having VMs would limit the amount of access given to future maintainers, limiting the damage that could be done. On the other hand, the primary task for any future maintainers would be adding/deleting nodes, and they would require bare-metal access to do that.
Authentication:
At the moment, devs would only touch the server when their code runs through the CI pipeline and their deployments are rolled out. But what about viewing logs? Could we set up tiered kubectl authentication to allow devs to access only whatever namespaces have been assigned to them (I believe this should be possible with the k8s namespace authorization plugin)?
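For reference, this is roughly the per-namespace access I have in mind, expressed with RBAC (which seems to be the current mechanism rather than the older namespace authorization plugin); namespace, role, and user names are made up:

    kubectl create namespace dev-alice
    kubectl create role pod-log-reader --verb=get,list,watch --resource=pods,pods/log -n dev-alice
    kubectl create rolebinding alice-pod-log-reader --role=pod-log-reader --user=alice -n dev-alice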
A number of VMs already exist on the server. Would this be an issue?
128 cores and doubts.... That is a lot of cores for a single server.
For Kubernetes, however, this is not relevant:
Kubernetes can use different-sized servers and utilize them to the maximum. However, if you combine the master processes and the node/worker processes on a single server, you might create unwanted resource issues. You can manage those with namespaces, as you already mention.
What we do is use continuous integration with namespaces in a single dev/QA Kubernetes environment, in which each change gets its own namespace (so we run many, many namespaces) and we run full environment deployments in those namespaces. A bunch of shell scripts are used to manage this. This works both with a large server like yours and with smaller (or virtual) boxes. The benefit of virtualization for you could mainly be in splitting the large box into smaller ones so that you can also use it for purposes other than Kubernetes (which runs most workloads, but not MS Windows, desktops, kernel modules for VPN purposes, etc.).
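A stripped-down version of those shell scripts, assuming the change id comes from CI and the manifests live under k8s/ (both made-up names):

    #!/bin/sh
    # create an isolated namespace for this change and deploy the full environment into it
    NS="ci-${CHANGE_ID}"
    kubectl create namespace "$NS"
    kubectl apply -n "$NS" -f k8s/
    # ...run the tests against the deployment in $NS...
    # tear everything down again by deleting the namespace
    kubectl delete namespace "$NS"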
I would separate dev and prod in the form of different VMs. I once had a web app inside Docker that used too many threads, so the Docker daemon on the host crashed; luckily it was limited to one host. You can protect against this by setting limits, but it's a risk: one mistake in dev could bring down prod as well.
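A sketch of such a guard rail in Kubernetes terms, assuming a per-team namespace (the name is made up): a LimitRange gives every container in the namespace default requests and limits, so one runaway app cannot starve the node:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: dev-defaults
      namespace: dev-team
    spec:
      limits:
      - type: Container
        default:            # applied as the limit when a container sets none
          cpu: "1"
          memory: 512Mi
        defaultRequest:     # applied as the request when a container sets none
          cpu: 250m
          memory: 256Mi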
I think the answer is "it depends!", which is not really an answer. Personally, I would split up the machine using VMs and deploy that way. You've got better flexibility as to how much of the server's resources you carve out, and you can easily create new environments and then destroy them just as easily.
Even if these VMs are really big, I think they're still easier to manage, especially given that you already have existing VMs on the machine.
That said, there's no technical reason you can't run a single-node server, but you may run into problems with downtime during upgrades (if that's an issue), and if that server needs to be patched or rebooted, your entire cluster is down.
I would look at your environment's needs for HA and uptime, as well as how you are going to deploy VMs (if you go that route), and decide what works best for you.
I am curious about relaying messages from one app on one machine to another machine. I have a shared network storage readily available to me. My thought is that I want to run an app on a single machine that runs intranet uploads. I cannot control anything about the domain or the shared network storage other than creating files/folders.
I want this app, on its own machine, to be able to report somehow to a completely different application installed on completely different user machines (so that in case of errors, say, users could intervene) and, at some point, across platforms (VB.NET/Access/etc.).
The first thing that hit me was to stream-write the upload app's status to a text file and then have a timer in my users' end app that monitors the file the upload app writes to.
However, before implementing this, I am wondering whether I am reinventing the wheel and there's a better way to do it. I am seeking simple solutions, and eventually I would like to integrate this into VBA/Access. What does SO think fits the bill? What is the downside of streaming a "log"?
You're reinventing the wheel. This is Message Queuing. There are many existing solutions to do this, including MSMQ (built into Windows) and RabbitMQ. There are also cloud based services like Azure AppFabric and Amazon Simple Queue Service.
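To give a feel for it, with MSMQ (installable as a Windows feature) a producer/consumer pair is only a few lines. A rough PowerShell sketch with a made-up private queue name; the same System.Messaging calls are available from VB.NET, and remote queues would use a FormatName path instead:

    Add-Type -AssemblyName System.Messaging
    $path = '.\Private$\uploadstatus'
    if (-not [System.Messaging.MessageQueue]::Exists($path)) {
        [System.Messaging.MessageQueue]::Create($path) | Out-Null
    }
    $q = New-Object System.Messaging.MessageQueue $path
    $q.Formatter.TargetTypeNames = [string[]]@('System.String')

    # sender side (the upload app)
    $q.Send('upload finished: report accepted', 'status')

    # receiver side (the user-facing app) blocks until a message arrives
    $msg = $q.Receive()
    $msg.Body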
Currently, I am writing an application that utilizes WMI to scan all the computers on our Active Directory network.
I'm interested in testing the program against all flavors of Windows machines in a testing environment.
Is there a way to simulate this environment in VMware or something?
Any ideas?
VMware works well and can host many virtual computers on a single physical computer. You can also put the virtual computers on your Active Directory network.
If your goal is to set up a separate large network for testing that has its own AD server, you can look into Amazon EC2. The advantage here is that once you set up your servers, you can turn them on and off as needed and only pay for the time actually used ($0.12 per hour).
http://aws.amazon.com/
You can use network simulation: http://en.wikipedia.org/wiki/Network_simulation
A good GPL tool is http://www.nsnam.org/
You have two options.
You probably have it right: with VMware this is easy; try looking for cloning tools. If you plan on simply copying and pasting the image, you will run into several problems (repeated computer GUIDs, repeated network computer names, etc.).
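If you go the cloning route on Windows guests, the usual way around the duplicated-identity problem is to generalize the image before cloning it; a hedged one-liner, run inside the guest, which then shuts down ready to be used as a template:

    C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown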
You can also "mock" the WMI response by wrapping the WMI methods you want to call behind an interface and mocking that interface, using Rhino Mocks or NMock if you are working in .NET (which I assume you are).