Pacemaker resource stopping and starting during state change - heartbeat

I'm observing a resource being stopped and started while changing its state from managed to unmanaged and back:
mysql[20932]: 2012/09/01_11:17:03 INFO: MySQL started
Is this normal, or do I need to look into any specific config on my cluster? I'm running heartbeat 3.0.3 and pacemaker 1.0.11.

Changing from managed to unmanaged shouldn't start or stop resources. But have a look at the changelog of pacemaker 1.0.12: https://github.com/ClusterLabs/pacemaker-1.0/blob/master/ChangeLog
"High: PE: Ensure role is preserved for unmanaged resources"
So maybe you hit a bug. I recommend upgrading to 1.0.12, maybe even to the current 1.1.x version of pacemaker.
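For reference, the usual way to flip a resource's managed mode is via its is-managed meta attribute; a minimal sketch with crm_resource, assuming the resource id is mysql as in your log line:
crm_resource --meta -r mysql -p is-managed -v false   # switch to unmanaged
crm_mon -1 | grep mysql                               # role should stay unchanged, no stop/start
crm_resource --meta -r mysql -p is-managed -v true    # switch back to managed
If the resource restarts around these calls on 1.0.11 but not after upgrading, that points to the changelog entry above.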

testcontainers-python hanging while showing "waiting to be ready...", then fails

I'm running my unit test code for neo4j.
My environment:
Ubuntu 20.04 LTS server
1 GB memory
1 CPU
Here is what is displayed in the console:
====================================== test session starts ======================================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: ~/morsvq, configfile: pytest.ini
plugins: mock-3.8.2
collected 2 items
---------------------------------------- live log setup -----------------------------------------
INFO testcontainers.core.container:container.py:52 Pulling image neo4j:latest
INFO testcontainers.core.container:container.py:63 Container started: ad7963ed01
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
ERROR neo4j:__init__.py:571 Failed to read from defunct connection IPv4Address(('localhost', 49153)) (IPv4Address(('127.0.0.1', 49153)))
The same code runs successfully on a faster virtual machine with 8 GB memory, so the code itself shouldn't be faulty. My suspicion is that it has something to do with my configuration, which makes it consume too much memory.
I've checked the official website's documentation, but it doesn't mention a memory problem. Has anyone encountered a similar problem? How can I fix this?
Disclaimer: I am a maintainer of tc-java, so I have only some basic experience with tc-python. However, some facts and constraints are universal across Testcontainers language implementations.
As you already wrote, the code runs fine on a more powerful machine, while it fails on an extremely limited machine. 1 GB of RAM is not much; I would expect it is generally not enough to successfully start a Neo4j Docker container without memory swapping. Swapping would make the startup and interactions very slow, hence the startup timeout triggers.
For further debugging, you can run the Neo4j container directly using the Docker CLI in your environment and see how it behaves.
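For example, a minimal sketch: the container name and the reduced heap/page-cache values below are my assumptions to fit a 1 GB host, not official sizing guidance, so double-check the variable names against the Neo4j image docs:
# run Neo4j with a small memory footprint, then watch logs and actual usage
docker run --rm -d --name neo4j-test \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=none \
  -e NEO4J_dbms_memory_heap_max__size=256M \
  -e NEO4J_dbms_memory_pagecache_size=128M \
  neo4j:latest
docker logs -f neo4j-test   # look for "Started." or errors during startup
docker stats neo4j-test     # check how much memory the container really uses
If startup is very slow or the container is OOM-killed here, the testcontainers timeout is just a symptom of the undersized host.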

Airflow Broken DAG error - The version of cryptography does not match the loaded shared object

Airflow 1.10.12. I'm seeing this error in the UI:
Broken DAG: [/home/airflow/dags/something.py] The version of cryptography does not match the loaded shared object. This can happen if you have multiple copies of cryptography installed in your Python path. Please try creating a new virtual environment to resolve this issue. Loaded python version: 2.9.2, shared object version: b'2.9'
The DAGs compile on the machine with no errors, but these messages appear for almost all of the DAGs.
I have also recreated the virtualenv multiple times, but the error persists.
Anyone seen this before?
It turns out that a Celery host had a scheduler running that was inserting the errors into the database. Stopping the extra scheduler made the messages go away.
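If you are debugging the same symptom, a quick sanity check is to look for scheduler processes on every host in the cluster, Celery workers included; a minimal sketch:
# run on each Airflow host; only the intended host should show a scheduler
ps -ef | grep -v grep | grep "airflow scheduler"
Any extra scheduler found this way is a candidate for writing stale import errors into the metadata database.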

difference between hybris 5.5.1.1 vs 5.5.1.2?

Can anyone tell me what new features were added in the hybris 5.5.1.2 release compared to the 5.5.1.1 release? Can anyone throw some light on it?
There are usually no features added in minor releases (anything from the third qualifier on); those releases usually only contain critical security fixes and fixes for blocker/critical issues.
You can find more details for 5.5.1.2 here (scroll down a bit and you'll see it):
https://wiki.hybris.com/display/downloads/Archived+5.5.1+Release
If you have a hybris jira account you can look at the patch ticket: https://jira.hybris.com/browse/PATCH-2070
The issues mentioned in there are:
ECP-388: Running Initialization or Update from hAC causing SQLException on weblogic
ECP-537: Cronjobs in cluster: allow to set nodeGroup/nodeID without recreating the trigger
ECP-540: synchronousOM extension dump in 5.5.1.1 during build process
ECP-552: Datahub 5.5.1 -mysql 5.6 (linux) Could not load extension for raw type RawHybrisCustomer
Hope that helps!

mule-deploy.properties overwritten when I choose Run As "Mule Application" Anypoint Studio July 2014 Release Build Id: 201407311443

A strange event is happening in a Mule project. I have the application XML, JPC.xml. It normally appears in mule-deploy.properties as follows:
redeployment.enabled=true
encoding=UTF-8
config.resources=JPC.xml
domain=default
When I choose Run As > Mule Application, a build kicks off in the background prior to the deploy and run. During that time mule-deploy.properties becomes:
redeployment.enabled=true
encoding=UTF-8
config.resources=
domain=default
And when the application runs it says it is missing the mule-config.xml.
What is erasing it?
I think I may have found the root of this issue.
It seems to be a bug related to jdk 1.7.0_45 having to do with XML parsing; see: What's causing these ParseError exceptions when reading off an AWS SQS queue in my Storm cluster
I noticed several errors logged in eclipse/anypoint as:
!ENTRY org.mule.tooling.core 4 0 2014-11-19 14:16:41.081
!MESSAGE Error opening resource measurement_scheduler.xml
!STACK 0
javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1]
Message: JAXP00010001: The parser has encountered more than "64000" entity expansions in this document; this is the limit imposed by the JDK.
I also noticed that after restarting Anypoint I would be able to build with Maven successfully, and my mule-deploy.properties file would again have content. Until... at some point, after several edits to things in Anypoint, I would again get a mvn build that wiped out the contents of mule-deploy.properties.
I further noticed that once this problem started to happen in one project in Anypoint, it would ALSO start happening in ANY project I built in Anypoint... until a restart of Anypoint.
It seems this bug in jdk 1.7.0_45 mistakenly applies the XML parser's limit to all opened files cumulatively, instead of per file. I suspect this causes Anypoint to not finish parsing all of the XML docs that make up my app, so it cannot re-create the mule-deploy.properties... leaving it blank.
Upgrading to a newer JDK would fix this.
Another way to work around it is to override this limit for the XML parser by adding the following to ${JAVA_HOME}/jre/lib/jaxp.properties:
jdk.xml.entityExpansionLimit=0
jdk.xml.maxGeneralEntitySizeLimit=0
I am not certain that both limits need to be set to work around this; possibly only entityExpansionLimit is needed.
After making this change I am now happily able to use Anypoint again. Beware that this work-around possibly opens you up to a denial-of-service attack through the XML parser if the same JRE is used for other, less-trusted processes.
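For convenience, the override can be appended from a shell; a minimal sketch, assuming $JAVA_HOME points at the JDK that Anypoint actually runs on:
# append the parser limit overrides to the JRE's jaxp.properties
cat >> "${JAVA_HOME}/jre/lib/jaxp.properties" <<'EOF'
jdk.xml.entityExpansionLimit=0
jdk.xml.maxGeneralEntitySizeLimit=0
EOF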

Weblogic forces recompile of EJBs when migrating from 9.2.1 to 9.2.3

I have a few EJBs compiled with Weblogic's EJBC, compliant with Weblogic 9.2.1.
Our customer uses Weblogic 9.2.3.
During server start Weblogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 min. The next server start takes exactly the same time, meaning Weblogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs to 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling Weblogic to leave those EJB jars as they are and avoid the re-compilation during server start?
2. Can I tell Weblogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround is a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid using this solution in production.
I FIXED this problem by spending a great deal of time researching and decompiling some classes. I encountered this when migrating from Weblogic 8 to 10; by then you might have understood the pain of dealing with Oracle Weblogic tech support. Unfortunately they did not have a server configuration setting to disable this.
You need to do 2 things.
Step 1. If you open the EJB jar files you can see:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You see these hashcodes for each of your EJB names. Make these hashcodes zero, pack the jar file, and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger the recompilation during server boot-up. I automated this process of setting the hashcodes to zero.
These hashcodes remain the same for each EJB even if you execute appc more than once. If you add a new EJB class or delete a class, those entries are added to this marker file.
Note 1:
How do you get this file? If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX you will see this file for each EJB. Weblogic sets the hashcodes to zero after it recompiles.
Note 2:
When you generate EJBs using appc, generate them to an exploded directory using -output C:\myejb instead of C:\myejb.jar. This way you can play around with the marker file.
Step 2. You also need a PATCH from Weblogic. When you install the patch you see a message like "PATCH CRXXXXXX installed successfully. Eliminate EJB recompilation for appc". I don't remember the patch number, but you can request it from Weblogic.
You need both steps to fix the problem; the patch fixes only part of it.
Good luck!!
cheers
raj
The marker file in the EJBs is WL_GENERATED.
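To illustrate Step 1 above, a rough sketch of how the zeroing could be automated; the assumptions that WL_GENERATED sits at the jar root and that only purely numeric values are hashcodes are mine, so test on a copy first:
#!/bin/sh
# unpack an EJB jar, zero the numeric hashcodes in its WL_GENERATED marker, repack
JAR=$(readlink -f "$1")
TMP=$(mktemp -d)
unzip -q "$JAR" -d "$TMP"
# zero only purely numeric values, leaving e.g. the WLS_RELEASE_BUILD_VERSION line intact
sed -i 's/=[0-9][0-9]*$/=0/' "$TMP/WL_GENERATED"
(cd "$TMP" && zip -qr "$JAR" .)    # update the entries inside the original jar
rm -rf "$TMP"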
Just to update with the solution we went with: eventually we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we don't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts: the first iterates over all the EJB jars, copies them to a temp dir, and then re-compiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :)).
This solution reduced compilation time from 70 min to 15 min. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time here (55 min x number of envs per drop x number of drops).
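As a rough sketch of what those two scripts might look like (the file names, the source path, and the unbounded parallelism are my assumptions; $CLASSPATH must already contain the Weblogic libraries):
recompile-one.sh:
#!/bin/ksh
# recompile a single EJB jar in place with weblogic.appc
java -Drecompiler=yes -cp $CLASSPATH weblogic.appc "$1" || echo "appc failed for $1" >&2
recompile-all.sh:
#!/bin/ksh
# copy all EJB jars to a temp dir, then recompile them in parallel
mkdir -p /tmp/ejb-recompile
cp ejbs/*.jar /tmp/ejb-recompile
for jar in /tmp/ejb-recompile/*.jar; do
    ./recompile-one.sh "$jar" &
done
wait    # block until every background appc run has finished
In a real setup you would cap the number of concurrent appc runs to match the machine's CPU count rather than launching one per jar.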