Infinispan reports Degraded without any errors - infinispan

I have an Infinispan setup which works well in most of my environments, but recently one of them started to report that it is Degraded on the management console.
There are no errors in the log files, and both hosts seem to be connected to each other. The domain controller reports both hosts as connected.
I see some warnings in the log file, mostly related to kernel memory limits.
I am out of ideas on how to troubleshoot this. Can someone shed some light on it?
My environment is two hosts in one domain, running in Docker containers on Ubuntu.

Related

Liferay Cloud IDE, multiple developers working on the same Liferay server

We want to start working with Liferay, but the server is too heavy and the developers' computers don't have enough RAM. We want to centralize the server instance.
In other words, we want to build a development server where all developers can connect and develop directly in their web browser, compile, view the result and push the code to a Git repository.
I found some good cloud IDEs like Eclipse Che and a good Maven archetype for Liferay projects, so I can build the project with Maven. But now I want to know if it is possible to configure Liferay so that every developer can work without troubling the others. And if possible, how?
The developers can share the same database and can use different ports. Maybe the server can generate temporary URLs like some online cloud editors do.
I found this post, Liferay With Multiple Server Instances, but I don't think that is the best way because it creates one server per project. I think that is too heavy.
If necessary, we have Kubernetes in our IS.
Liferay's Tomcat bundle is configured by default to take a maximum of 2.5 GB for the process, but it can run with far less; the default was only bumped up recently, because many people never change it and then wonder why production systems run out of memory. For one concurrent user (the sole developer) on a machine, I'd guess that the previous default of 1 GB of heap space is enough. Are you saying that that is too much for your developers' machines?
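For a local developer bundle, the heap can be trimmed in the bundle's Tomcat setenv.sh. A rough sketch follows; the file location and existing flags vary by bundle version, so treat the values below as assumptions rather than Liferay's defaults:
```
# In the Liferay bundle, edit tomcat-<version>/bin/setenv.sh (path varies by bundle).
# Illustrative values only: a 1 GB heap for a single developer.
CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m -XX:MaxMetaspaceSize=384m"
export CATALINA_OPTS
```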
Having many developers on a shared server poses one problem: yes, you may deploy different code from different machines, but how about setting a breakpoint? Can you connect with multiple debuggers? If something fails, how do you know whose recent deployment caused the failure?
Sharing a server is an integration technique, not a development technique. If your developers don't have enough memory available for running their own Liferay server next to their IDE, it's a lot cheaper to upgrade their machines than to slow them down when everybody is accessing the same server and they can't properly debug. You pay for the memory once, but you pay your waiting developers by the hour.
Is it possible to share one server? Sure it is.
Is it possible to share one server without troubling each other? I doubt it.
When you say you think it's too heavy: what are you basing that assumption on? What does the actual developer machine look like, and what keeps you from investing in the extra memory?
It's trivial to share some infrastructure, i.e. have all of them connect to the same database server (and give everyone their own schema). But even that extra effort and setup might cost as much in developer hours as you'd otherwise pay for a couple of memory chips.
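If you do go the shared-database route, a rough sketch of what that looks like (MySQL is just an example here, and the schema and user names are placeholders):
```
# One schema per developer on a shared MySQL server; names are placeholders.
mysql -u root -p <<'SQL'
CREATE DATABASE lportal_alice CHARACTER SET utf8mb4;
CREATE USER 'alice'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON lportal_alice.* TO 'alice'@'%';
SQL
```
Each developer would then point their own portal-ext.properties (jdbc.default.url, jdbc.default.username, jdbc.default.password) at their schema.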
And yet another option: run Liferay on a remote server, but keep one instance per developer. This way you don't need the local memory, but can have the memory in the cloud. Calculate whether you'd pay more for remote cloud machines than for local memory; that decision is up to you.
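For illustration, one container per developer on a shared remote box could look like this; it uses the liferay/portal image from Docker Hub, and the tag, container names, ports and memory limits are all placeholders:
```
# One Liferay container per developer, each on its own port; values are illustrative.
docker run -d --name liferay-alice -p 8081:8080 -m 2g liferay/portal:latest
docker run -d --name liferay-bob   -p 8082:8080 -m 2g liferay/portal:latest
```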

To virtualize or not to virtualize a bare-metal server for a Kubernetes deployment

I'd like to deploy Kubernetes on a large physical server (24 cores), and I'm uncertain about a number of things.
What are the pros and cons of creating virtual machines for the k8s cluster, as opposed to running it on bare metal?
I have the following considerations:
Creating VMs will allow for workload isolation. New VMs for experiments can be created and assigned to devs.
On the other hand, with k8s running on bare metal, a new namespace can be created for each developer for experimentation, and they can run their code in it. After all, their code should be running in Docker containers anyway.
Security:
Having VMs would limit the amount of access given to future maintainers, limiting the amount of damage that could be done. On the other hand, the primary task for any future maintainers would be adding/deleting nodes, and they would require bare-metal access to do that.
Authentication:
At the moment devs would only touch the server when their code runs through the CI pipeline and the resulting deployments are rolled out. But what about viewing logs? Could we set up tiered kubectl authentication to allow devs to access only whatever namespaces have been assigned to them? (I believe this should be possible with the k8s namespace authorization plugin.)
A number of VMs already exist on the server. Would this be an issue?
128 cores and doubts.... That is a lot of cores for a single server.
For Kubernetes, however, this is not relevant:
Kubernetes can use servers of different sizes and utilize them to the maximum. However, if you combine the master processes and the node/worker processes on a single server, you might create unwanted resource issues. You can manage those with namespaces, as you already mention.
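To make that concrete, here is a minimal sketch of a namespace per developer with a quota and namespace-scoped kubectl access; the names and limits are assumptions, and it relies on the built-in "edit" ClusterRole:
```
# Namespace per developer, capped by a resource quota.
kubectl create namespace dev-alice
kubectl create quota dev-alice-quota -n dev-alice --hard=cpu=4,memory=8Gi,pods=20

# Scope alice's kubectl access to her namespace only.
kubectl create rolebinding alice-edit \
  --clusterrole=edit \
  --user=alice \
  --namespace=dev-alice
```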
What we do is use continuous integration with namespaces in a single dev/QA Kubernetes environment, in which each change gets its own namespace (so we run many, many namespaces) and we run full environment deployments in those namespaces. A bunch of shell scripts are used to manage this. This works with a large server like yours just as well as with smaller (or virtual) boxes. The benefit of virtualization for you could mainly be in splitting the large box into smaller ones, so that you can also use it for other purposes than just Kubernetes (things Kubernetes can't host: MS Windows, desktops, kernel modules for VPN purposes, etc.).
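A stripped-down sketch of what such a namespace-per-change script can look like; the change identifier, manifest path and timeout below are assumptions, not our actual scripts:
```
#!/bin/sh
# Deploy a full environment for one change into its own namespace.
set -e
CHANGE_ID="$1"              # e.g. a branch name or build number (assumption)
NS="ci-${CHANGE_ID}"

kubectl create namespace "$NS" 2>/dev/null || true
kubectl apply -n "$NS" -f deploy/    # this change's full set of manifests (assumption)
kubectl wait --for=condition=Available deployment --all -n "$NS" --timeout=300s

# Tearing the environment down later is a single call:
# kubectl delete namespace "$NS"
```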
I would separate dev and prod in the form of different VMs. I once had a webapp inside Docker that used too many threads, so the Docker daemon on the host crashed. Luckily it was limited to one host. You can protect against this by setting limits, but it's a risk: one mistake in dev could bring down prod as well.
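Limits like that can be set when the container is started; a hedged example, with the image name and numbers as placeholders to tune for your workload:
```
# Cap memory, CPU and the number of processes/threads so one runaway
# container cannot take the whole host down with it.
docker run -d \
  --name webapp \
  --memory=2g \
  --cpus=2 \
  --pids-limit=512 \
  example/webapp:latest
```
Under Kubernetes the same idea is expressed as resource requests and limits on the pod spec.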
I think the answer is "it depends!", which is not really an answer. Personally, I would split up the machine using VMs and deploy that way. You get better flexibility as to how much of the server's resources you carve out, and you can easily create new environments and destroy them just as easily.
Even if these VMs are really big, I think it's still easier to manage, also given that you have existing VMs on the machine.
That said, there's no technical reason you can't run a single-node server, but you may run into problems with downtime during upgrades (if that's an issue), and if that server needs to be patched or rebooted, then your entire cluster is down.
I would look at your environment's needs for HA and uptime, as well as how you are going to deploy VMs (if you go that route), and decide what works best for you.

Serving static files slow only for Windows devices

We are seeing some strange behavior on our OpenStack-based servers. There is an nginx load balancer in front of two Apache web servers. The setup is automated with Ansible and worked fine on our previous cloud.
After moving to another cloud, all static files are served extremely slowly. Slow means 35 seconds for a 140 kB JS file. What makes it really strange is that only Windows devices have this problem.
Downloading that static file in a Windows VM on a Mac gives normal speed (100 ms). Downloading it in a Linux VM on a Windows host is very slow (35 s). So it depends on the real host system.
We have no idea where to start searching. Every tip is welcome.

Instability on Worklight Server

I'm using WebSphere Liberty profile v8.5.5.0 and Worklight 6.2.
The full version of my WL and runtime is:
Server version: 6.2.0.00.20140922-2259
Project WAR version: 6.2.0.00.20140922-2259
I've noticed that sometimes I have trouble getting into the worklightconsole; the server takes far too long to answer and most of the time it just gives me a timeout.
Regarding the JVM heap, it is at 60-70% of the total heap, most likely 1.5 GB or something like that.
In the FFDC logs I sometimes get an error along the lines of:
FFDC Incident has been created: "javax.naming.ServiceUnavailableException: ldap.example.com:389; socket closed; remaining name 'o=example' com.ibm.ws.wim.adapter.ldap.LdapConnection 1670" at ffdc.log
I have my LDAP connected to this WebSphere via VPN, and I know that WebSphere has historically had trouble dealing with LDAP.
However, I don't see any more errors in the logs; the machine eventually recovers and is able to work correctly, but for some time it is 'down'.
If I enable tracing, the verbosity overwhelms the machine and I can't even start the worklightconsole, nor continue to work with Worklight, e.g. calling an adapter from an application.
One more thing: it seems that this happens more frequently after updates to existing application versions or adapters. Does this ring a bell with anyone?
If I ask for a restart when the machine is sluggish, stopping WebSphere takes quite some time, but it eventually stops normally, and when I start it again everything is fine right off the bat.
Before asking for a PMR, I would like to know if there is something else I could do to troubleshoot this problem.
Thanks in advance.
My initial "smell" of the problem is that sometimes your VPN connection with LDAP is very slow or your LDAP server is taking too long to respond.
My suggestion is that you try using WAIT(wait.ibm.com), it's a non-invasive easy to use diagnostic tool, to further investigate. If you find out the call to LDAP is getting hang then I suggest you try tuning Liberty LDAP cache, this should help.

Apache hangs/times out when backing up website with gzip or zip?

I'm running some websites on a dedicated Ubuntu web server. If I remember correctly, it has 8 cores, 16 GB of memory, and runs 64-bit Ubuntu. Content and files are delivered quickly to web browsers. Everything seems like a dream... until I run gzip or zip to back up an 8.6 GB website.
When running gzip or zip, Apache stops delivering content. Internal server error messages are returned until the compression process is complete. During the process, I can log in via ssh without delays and run the top command. I can see that the zip process is taking about 50% CPU (I'm guessing that's 50% of a single CPU, not all 8?).
At first I thought this could be a log issue, with Apache logs growing out of control and not wanting to be messed with. The log files are under 5 MB though, and are rotated when they hit 5 MB. Another current thought is that Apache only wants to run on one CPU and lets any other process take the lead. I'm not sure where to look to address that yet.
Any thoughts on how to troubleshoot this issue? Taking all my sites offline while backups occur is not an option, and I can't seem to reproduce this issue on my local machines (granted, they have different hardware and configuration). My hope is that this question is not too vague. I'm happy to provide additional details as needed.
Thanks for your brains in advance!
I'd suggest running your backup script under the "ionice" command. It will help prevent starving httpd of I/O.
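For example, the nightly archive could be started with idle I/O priority and the lowest CPU priority; the paths below are placeholders for your actual site and backup location:
```
# Idle I/O class (-c3) plus lowest CPU priority keeps httpd responsive
# while the archive is being written.
ionice -c3 nice -n 19 tar -czf /backups/site-$(date +%F).tar.gz /var/www/mysite
```
Writing the archive to a different disk (or streaming it off the host entirely) also reduces contention with the document root Apache is serving from.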