Error 'CRASH REPORT Process' on RabbitMQ every 10 minutes

I am seeing this error on RabbitMQ every 10 minutes. Please help me investigate this problem.
Error message:
2021-09-09 13:25:30.084 [error] <0.14464.32> CRASH REPORT Process <0.14464.32> with 0 neighbours crashed with reason: no function clause matching rabbit_mgmt_wm_node:find_type(rabbit@controller1, []) line 79
2021-09-09 13:25:30.085 [error] <0.14457.32> Ranch listener rabbit_web_dispatch_sup_15672, connection process <0.14457.32>, stream 1 had its request process <0.14464.32> exit with reason function_clause and stacktrace [{rabbit_mgmt_wm_no

I had the same issue with Zabbix monitoring the RabbitMQ server every minute, which generated a crash error at the same frequency.
The URL used by Zabbix for monitoring contained a domain part in the node name, i.e. rabbit@my_host.zzz.aws, instead of the actual node name as displayed by the console: rabbit@my_host. This explains why rabbit_mgmt_wm_node:find_type failed and crashed.
This was verified using curl as shown below:
curl -v -u user:passwd 'http://127.0.0.1:15672/api/nodes/rabbit@my_host?memory=true'
which returned a valid response, HTTP/1.1 200 OK, when the node name matched and the crash/error when it did not.
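To double-check the exact node name the API expects, you can list the nodes the management plugin knows about. A quick sketch, assuming the same credentials and default management port as above:
curl -s -u user:passwd 'http://127.0.0.1:15672/api/nodes' | grep -o '"name":"[^"]*"'
Alternatively, rabbitmqctl cluster_status prints the node names exactly as RabbitMQ itself sees them.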
Please refer to this thread:
https://groups.google.com/g/rabbitmq-users/c/N0EgrLn55XQ

failed retrieving file from mirror.erickochen.nl

When I run pacman -Syu to update, it first shows no errors and I update everything as usual. But when I run pacman -Syu again afterwards, it shows the following. What is the reason, and is there a solution?
:: Synchronizing package databases...
core is up to date
extra is up to date
community is up to date
error: failed retrieving file 'core.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5241 ms: Connection timed out
error: failed retrieving file 'extra.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5202 ms: Connection timed out
error: failed retrieving file 'community.db' from mirror.erickochen.nl : Failed to connect to mirror.erickochen.nl port 443 after 5202 ms: Connection timed out
warning: too many errors from mirror.erickochen.nl, skipping for the remainder of this transaction
:: Starting full system upgrade...
there is nothing to do
Sometimes mirrors go offline. It's recommended to have multiple mirrors so you don't have a single point of failure, and to keep your mirror list updated. Using reflector is recommended, since it also finds fast candidates based on your location; see the sketch below.
For the time being, edit /etc/pacman.d/mirrorlist and uncomment a couple of other mirrors, then try updating again.
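With reflector installed, a minimal sketch (run as root; the protocol filter and mirror count are just illustrative values):
reflector --protocol https --latest 20 --sort rate --save /etc/pacman.d/mirrorlist
pacman -Syu
This overwrites the mirrorlist with the 20 most recently synced HTTPS mirrors ranked by download speed, so back up the old file first if you want to keep your manual edits.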

Apache/PHP7.3 running in Docker randomly drops connection with empty response

I have found several similar questions:
APACHE, PHP Server return randomly empty response
https://serverfault.com/questions/66662/apache-gives-empty-reply
and others
However, these do not seem to help me find the cause. I can replicate the behaviour by reloading a specific page ~20 times.
I am running the current apache2 (= 2.4.38-3+deb10u4). I tried disabling opcache and removing MaxRequestsPerChild, with no effect.
Apache log does not show any error. The request is not even logged.
Setting USE_ZEND_ALLOC=0 seems to have no effect and the problem persists.
I tried installing mod_forensic, which shows that the request came in, but no error or finished request is then logged.
The container is running in Kubernetes, and I cannot replicate the issue locally when running directly with Docker, which is why I think this might be caused by some memory setting. However, I couldn't find what might be causing it, as there is no error message at all.
Can you think of any reason why this might be happening?
Edit1:
I tried setting the log level to trace:
https://gist.github.com/knyttl/861e8a0fe5651408df37cd5c3874946b
The request is handled and then you can see:
[Tue Oct 20 08:37:55.825454 2020] [core:trace4] [pid 1] mpm_common.c(536): mpm child 388 (gen 2/slot 4) exited
With no error and no response.
Edit2:
I updated to php7.4 and the issue persists.
I finally found it:
The process is silently killed by the OOM killer of the host machine:
[4019392.626796] Memory cgroup out of memory: Kill process 4178127 (apache2) score 1137 or sacrifice child
[4019392.636520] Killed process 4178127 (apache2) total-vm:143960kB, anon-rss:22856kB, file-rss:10472kB, shmem-rss:28228kB
This is never logged within the container so it was hard to find.
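If the pod runs with Kubernetes memory limits, the kill is often visible outside the container even though the container's own logs stay silent. A quick check (the pod name is a placeholder, and the dmesg command must be run on the node hosting the pod):
kubectl describe pod my-apache-pod | grep -A 3 'Last State'
dmesg | grep -i 'memory cgroup out of memory'
kubectl describe shows Reason: OOMKilled (exit code 137) when the container's main process was killed; when only an Apache child worker is killed, as in the log above, the evidence may show up only in the node's kernel log. The usual fix is to raise resources.limits.memory for the container or to reduce Apache/PHP memory usage (e.g. a lower MaxRequestWorkers).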
Finally solved by adding to /etc/apache2/envvars:
export USE_ZEND_ALLOC=0
https://serverfault.com/a/66759

Issue with Open Shift Origin Mongo DB service

I have installed OpenShift Origin V3 on AWS EC2 (Fedora 19) using oo-install. The setup is one broker + one node.
I was making some modifications to the security groups to make them more restrictive, and it ended up causing some issues in the mongo service.
1. service mongod does not start up and the status shows failed.
The /var/log/mongodb/mongodb.log says
Thu Mar 6 11:24:08.189 [initandlisten] ERROR: listen(): bind() failed errno:99 Cannot assign requested address for socket: :27017
Thu Mar 6 11:24:08.189 [initandlisten] now exiting
Running oo-accept-broker -v says
FAIL: error logging into mongo db: MOPED: Retrying connection to primary for replica set :27017">]>: MOPED: Retrying connection to primary for replica set :27017">]>/MOPED: --username Retrying, exit code: 1
Any pointers on how to resolve this will be greatly appreciated.
Thanks
Shabna
I would first try rolling back your changes to the security groups, then reapply the changes one by one and see which one causes the issue. Then post that specific change here and see if anyone can comment on how it affects mongodb.
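For the bind error itself, errno 99 ("Cannot assign requested address") usually means mongod is configured to bind to an IP address that no longer exists on the host. A quick sketch of how to compare the two, assuming the default Fedora config path /etc/mongodb.conf (adjust if yours differs):
grep -i 'bind_ip' /etc/mongodb.conf
ip addr show
If the bind_ip value is not among the addresses listed by ip addr (for example because the instance's private IP changed), update it to an address that actually exists on the host and restart mongod.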

WSO2 Samples failing

I'm working on setting up and understanding WSO2 ESB, and I was going through the samples and setup.
I was looking at this sample, but any of the first four that I tried failed:
http://docs.wso2.org/wiki/display/ESB451/Sample+3%3A+Local+Registry+Entry+Definitions%2C+Reusable+Endpoints+and+Sequences
So, I start the ESB (the Management Console is running fine), and that works. I can build the SimpleStockQuoteService and start the sample Axis2 server. I can open the WSDL from the browser, so that piece looks fine.
When I run the client code from the command line:
ant stockquote -Daddurl=http://localhost:9000/services/SimpleStockQuoteService -Dtrpurl=http://localhost:8280/
It gets to the Axis2 server (I can see it in its logs: "Mon Mar 11 16:53:37 CET 2013 samples.services.SimpleStockQuoteService :: Generating quote for : IBM"), and it gets to the ESB (I can see it in the log, too), but when it tries to forward or pass the message on, the connection is suddenly dropped. This is what I see in the log:
[2013-03-11 16:53:37,701] INFO - LogMediator Text = Sending quote request, version = 0.1, direction = incoming
[2013-03-11 16:53:37,830] ERROR - SourceHandler I/O error: A létező kapcsolatot a távoli állomás kényszerítetten bezárta
java.io.IOException: A létező kapcsolatot a távoli állomás kényszerítetten bezárta
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:25)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
at sun.nio.ch.IOUtil.read(IOUtil.java:206)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
at org.apache.http.impl.nio.reactor.SessionInputBufferImpl.fill(SessionInputBufferImpl.java:93)
at org.apache.http.impl.nio.codecs.AbstractMessageParser.fillBuffer(AbstractMessageParser.java:113)
at org.apache.http.impl.nio.DefaultNHttpServerConnection.consumeInput(DefaultNHttpServerConnection.java:150)
at org.apache.http.impl.nio.DefaultServerIOEventDispatch.inputReady(DefaultServerIOEventDispatch.java:154)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:158)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:340)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:318)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:278)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:542)
at java.lang.Thread.run(Thread.java:619)
"A l├®tezo kapcsolatot a t├ívoli ├íllom├ís k├®nyszer├¡tetten bez├írta
java.io.IOException: A l├®tezo kapcsolatot a t├ívoli ├íllom├ís k├®nyszer├¡tetten bez├írta" This piece is having bad encoding and is in hungarian, it means something like :"Connection was closed forcefully by remote host"
I don't really know what is going wrong... Any ideas?
I'm on Windows 7. I've downloaded the latest WSO2 ESB (wso2esb-4.6.0.zip)
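One way to narrow this down (a sketch based on my reading of the sample client's options, so verify against the samples setup guide) is to call the Axis2 server directly, bypassing the ESB, by dropping -Dtrpurl:
ant stockquote -Daddurl=http://localhost:9000/services/SimpleStockQuoteService
If that direct call succeeds, the client and the backend service are fine and the problem is in the ESB's forwarding/response path; if it fails the same way, the problem is between the client and the Axis2 server.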

Apache, mod_ssl "request failed: error reading the headers" for a specific user

Currently we have an Apache 2.2.3 server with mod_ssl 2.2.3 running Django, with users authenticating by using an x509 certificate.
So far the system is running perfectly, except for a single user who, when trying to upload a file, receives a 400 Bad Request error. The contents of the ssl_error_log regarding this operation are:
[<date>] [error] [client <client ip>] request failed: error reading the headers, referer: <referrer url>
The contents of the ssl_access_log are:
<client ip> - - [<date>] "POST <target page> HTTP/1.1" 400 321
Also, the user's browser is Firefox as far as I know.
I am completely unable to reproduce this bug and so far none of the other users have experienced it. Could you point out some reasons for this to happen?
I've experienced connectivity that cuts off the upstream after a certain amount of bytes is sent. That amount was pretty low: enough to request some simple pages, but not to deal with AJAX requests, much less to upload files. As far as I recall, this connectivity problem occurred only when tethering (from a specific Android phone; I didn't even test other phones).
So if the upstream gets interrupted and the upload stalls, it makes sense that Apache would return this error, according to this post: "Apache waits a time equal to the Timeout directive (defaults to 5 minutes if not defined) for a response from the client. It is likely Apache is waiting for the CRLF that indicates the end of the headers, yet it is never received."
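For reference, the directive mentioned in that quote is Apache's core Timeout setting; a minimal sketch of where it lives (300 seconds is just an illustrative value, not a recommendation):
# httpd.conf (or an included configuration file)
# Seconds Apache waits for certain I/O events, including receiving
# the remainder of a request from the client.
Timeout 300
Lowering it makes stalled uploads fail faster, but it does not fix the underlying connectivity problem on the client's side.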