difference between hybris 5.5.1.1 vs 5.5.1.2? - e-commerce

Can anyone tell me what new features were added in the hybris 5.5.1.2 release compared to the 5.5.1.1 release? Can anyone throw some light on it...

There are usually no features added in minor releases (anything from the third qualifier on); usually those releases only contain critical security fixes and blocker/critical issues.
You can find more details for 5.5.1.2 here (scroll down a bit and you'll see it):
https://wiki.hybris.com/display/downloads/Archived+5.5.1+Release
If you have a hybris jira account you can look at the patch ticket: https://jira.hybris.com/browse/PATCH-2070
The issues mentioned in there are:
ECP-388: Running Initialization or Update from hAC causing SQLException on weblogic
ECP-537: Cronjobs in cluster: allow to set nodeGroup/nodeID without recreating the trigger
ECP-540: synchronousOM extension dump in 5.5.1.1 during build process
ECP-552: Datahub 5.5.1 -mysql 5.6 (linux) Could not load extension for raw type RawHybrisCustomer
Hope that helps!

Related

Removing "TooLongFrameException" restrictions (http)

I am using Selenium with browsermob-proxy, ultimately powered by netty-all, to access a site (outside my control) which offers up enormous headers as part of its authentication process. The proxy fails with a netty error:
io.netty.handler.codec.TooLongFrameException: HTTP header is larger than 16384 bytes., version: HTTP/1.1
I need to remove all such limits from the netty-all jar that my browsermob-proxy depends on; scalability, performance and memory conservation are not relevant in this use case.
Having cloned the repo, I changed:
DEFAULT_MAX_FRAME_SIZE in WebSocket00FrameDecoder (io.netty.handler.codec.http.websocketx)
HttpObjectDecoder default constructor in io.netty.handler.codec.http
to Integer.MAX_VALUE where appropriate.
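For reference, the change to the HttpObjectDecoder default constructor was along these lines (a sketch; the exact parameter list and the default values 4096/8192/8192 vary by netty version, so check your checkout):

```diff
 protected HttpObjectDecoder() {
-    this(4096, 8192, 8192, true);
+    // maxInitialLineLength, maxHeaderSize, maxChunkSize effectively unbounded
+    this(Integer.MAX_VALUE, Integer.MAX_VALUE, Integer.MAX_VALUE, true);
 }
```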
However, even with these new settings it keeps throwing "HTTP header is larger than 16384 bytes" in use.
Where else could this 16384 limit be coming from?
How does one remove it while retaining full functionality (at an acceptable cost to efficiency/memory usage, etc.)?
Arrived at a solution. It's far from elegant, but it works; my use case tolerates inefficiency and faults, so use with care.
I won't pollute this answer with Maven shenanigans as they are not strictly relevant. However, please note that netty-all by default pulls all of its components from the Maven repo. To change netty-all internals you will need to produce a jar of the required component (handler.codec.http in this case), then change pom.xml to pull in your modified jar. There are several methods to do this; the only one that worked for me was using mvn install to place the jar in the local .m2 repo:
mvn install:install-file -Dfile=netty-codec-http-4.1.25.Final-SNAPSHOT.jar -DgroupId=io.netty -DartifactId=netty-codec-http -Dversion=4.1.25.Final-SNAPSHOT -Dpackaging=jar
Then build netty-all to get the final jar, which you then use in your own project instead of the original.
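The overall rebuild flow was roughly the following (a sketch; the module paths match the netty source tree layout, and the version/snapshot names are from my checkout, so adjust them to yours):

```shell
# Build the modified codec-http module (tests were disabled/commented out)
cd codec-http
mvn -DskipTests package

# Install the resulting jar into the local .m2 repo under the version
# that netty-all's pom expects
mvn install:install-file -Dfile=target/netty-codec-http-4.1.25.Final-SNAPSHOT.jar \
    -DgroupId=io.netty -DartifactId=netty-codec-http \
    -Dversion=4.1.25.Final-SNAPSHOT -Dpackaging=jar

# Finally build the aggregate netty-all jar and use it in your project
cd ../all
mvn -DskipTests package
```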
Files modified to remove size limits from http operation:
all/pom.xml
codec-http/pom.xml
codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java
codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocket00FrameDecoder.java
codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java
codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java
Aside from setting various size restrictions to Integer.MAX_VALUE, I commented out relevant tests to ensure that Maven "package" command succeeds in producing the jar.
The git diff of the changes is available here:
https://gist.github.com/granite-zero/723fa55ae628494ff9b833dde1973a00
You could apply it as a patch to netty commit 04fac00c8c98ed26c5a75887c8e7e53b1e1b68d0.

Alfresco Upgrade v4.1.2 to v4.2.2.12 - Lucene full index rebuild impossible

I upgraded Alfresco from v4.1.2 to v4.2.2.12. First I upgraded from v4.1.2 to v4.1.9.4 (to apply the last patch) and then from v4.1.9.4 to v4.2.2.12. Everything went well in the logs and I didn't notice any exceptions.
But when I wanted to do a FULL Lucene index rebuild I ran into an issue. When I delete the lucene-indexes folder, increase logging for Lucene, set index.recovery.mode=FULL and restart Alfresco, all I see in the logs related to the index rebuild is the following:
11:39:29,170 DEBUG [org.alfresco.repo.node.index.FullIndexRecoveryComponent] [http-bio-443-exec-17] Performing index recovery for type: FULL
11:39:39,953 INFO [org.alfresco.repo.node.index.FullIndexRecoveryComponent] [http-bio-443-exec-17] Index recovery started: 268'330 transactions.
11:39:43,978 INFO [org.alfresco.repo.management.subsystems.ChildApplicationContextFactory] [indexTrackerThread2] Starting 'Transformers' subsystem, ID: [Transformers, default]
11:39:44,383 INFO [org.alfresco.repo.management.subsystems.ChildApplicationContextFactory] [indexTrackerThread2] Startup of 'Transformers' subsystem, ID: [Transformers, default] complete
I left Alfresco for 12h to do the re-index, but even after 12h not even 10% of the Lucene indexes was done. The content store is 177GB in size, and on the test server the re-index took 2h at most.
Does anybody have an idea why this is happening and how to fix this issue?
Thanks in advance...
You may also refer to this Alfresco support article: How to disable or completely remove Solr, and enable Lucene.
I had the same issue with 'some' instances.
BTW: you've got Enterprise for **** sake, so this is typically an issue Alfresco Support can answer.
Instead of starting with indexes, try starting with the search indexing subsystem set to NOINDEX instead of Lucene or Solr.
If this doesn't work, then copy your 4.1 indexes to the 4.2 Alfresco and set the index recovery mode to NONE.
When Alfresco eventually starts, open the Enterprise admin console and set the index recovery mode to FULL and save. You'll see that Alfresco will do a full reindex in runtime and it won't have any problems.
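Putting the steps above together, the relevant alfresco-global.properties settings look roughly like this (a sketch; the property names are the standard Alfresco 4.x ones, but verify them against your version's documentation):

```properties
# Step 1: start with indexing disabled entirely
index.subsystem.name=noindex

# Step 2 (if that doesn't help): run on the copied 4.1 indexes with no
# recovery, then switch recovery to FULL from the Enterprise admin console
#index.subsystem.name=lucene
#index.recovery.mode=NONE
```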
I've had a similar issue; apparently it was caused by a PDFBox bug that has been fixed in the 4.2.3 Enterprise release.

Mule EE jars command populate_m2_repo needs %MULE_HOME% env variable set in windows

When running populate_m2_repo C:\Users\me\.m2\repository I get asked for %MULE_HOME% to be set. Typically I have more than one standalone version available for testing on my computer, so it's inconvenient to keep changing this variable every time a new standalone comes out.
Can someone explain why populate_m2_repo needs %MULE_HOME% to be set?
thanks
Ever since Mule 3.1.0 it has been recommended not to set %MULE_HOME% system-wide, since the mule.bat script sets it for you when it is not available.
That said, populate_m2_repo needs it to know where to find the artifacts that need to be installed.
HTH
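A common workaround is to set %MULE_HOME% only for the current shell session rather than system-wide, so it never clashes with your other standalone versions (a sketch; the paths here are just examples):

```bat
rem Point MULE_HOME at whichever standalone this run should use;
rem "set" only affects the current cmd session, not the system
set MULE_HOME=C:\mule\mule-standalone-3.4.0
populate_m2_repo C:\Users\me\.m2\repository
```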

Pacemaker resource stopping and starting during state change

I'm observing resources being stopped and started while changing state from managed to unmanaged and back.
mysql[20932]: 2012/09/01_11:17:03 INFO: MySQL started
Is this normal, or do I need to look into any specific config on my cluster? Running heartbeat 3.0.3 and pacemaker 1.0.11.
Changing from managed to unmanaged shouldn't start or stop resources. But have a look at the changelog of pacemaker 1.0.12: https://github.com/ClusterLabs/pacemaker-1.0/blob/master/ChangeLog
"High: PE: Ensure role is preserved for unmanaged resources"
So maybe you hit a bug. I recommend upgrading to 1.0.12, maybe even to the current 1.1.x version of pacemaker.
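For completeness, the managed/unmanaged switch itself is done with the crm shell on pacemaker 1.0.x, along these lines (the resource name is an example):

```shell
# Take the resource out of cluster management; its role should be preserved
crm resource unmanage mysql
# ... maintenance work ...
# Hand control back to the cluster
crm resource manage mysql
```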

Weblogic forces recompile of EJBs when migrating from 9.2.1 to 9.2.3

I have a few EJBs compiled with Weblogic's EJBC, compliant with Weblogic 9.2.1.
Our customer uses Weblogic 9.2.3.
During server start Weblogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 min. The next server start takes exactly the same time, meaning Weblogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs to 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling Weblogic to leave those EJB jars as they are and avoid the re-compilation during server start?
2. Can I tell Weblogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround was to write a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid this solution being used in production.
I FIXED this problem by spending a great deal of time researching and decompiling some classes. I encountered this when migrating from Weblogic 8 to 10.
By this time you might have understood the pain of dealing with Oracle Weblogic tech support;
unfortunately they did not have a server configuration setting to disable this.
You need to do 2 things.
Step 1. If you open the EJB jar files you can see:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You see these hashcodes for each of your EJB names. Make these hashcodes zero,
pack the jar file and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger the recompilation
during server boot-up. I automated this process of making these hashcodes zero.
These hashcodes remain the same for each EJB even if you execute appc more than once;
if you add a new EJB class or delete a class, those entries are added to this marker file.
Note 1:
How to get this file?
If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX
you will see this file for each EJB. Weblogic makes the hashcodes zero after it recompiles.
Note 2:
When you generate EJBs using appc, generate them to an exploded directory using -output C:\myejb
instead of C:\myejb.jar. This way you can play around with the marker file.
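As a concrete illustration of the automation mentioned in Step 1, zeroing the hashcode entries in an extracted marker file can be done with a small sed filter (a sketch under my own assumptions; it only rewrites entries whose value is a bare integer, so a line like WLS_RELEASE_BUILD_VERSION_24=10.0.0.0 is left untouched, and you still have to unpack and repack the jar around it):

```shell
# Print the marker file with every "name=<digits>" entry changed to "name=0";
# values containing dots (version strings) do not match and are kept as-is.
zero_marker_hashcodes() {
  sed 's/=[0-9][0-9]*$/=0/' "$1"
}
```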
Step 2.
You also need a PATCH from Weblogic. When you install the patch you see a message like this:
"PATCH CRXXXXXX installed successfully. Eliminate EJB recompilation for appc".
I don't remember the patch number, but you can request it from Weblogic.
You need to use both steps to fix the problem; the patch fixes only part of it.
Good luck!!
cheers
raj
The marker file in the EJBs is WL_GENERATED.
Just to update on the solution we went with: eventually we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we don't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts: the first iterates over all the EJB jars, copies them to a temp dir and then re-compiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :))
This solution reduced compilation time from 70 min to 15 min. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time here (55 min x number of envs per drop x number of drops).
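The fan-out pattern of the first script can be sketched like this (function and worker names are hypothetical; in our case the worker was the second script wrapping java -Drecompiler=yes -cp $CLASSPATH weblogic.appc):

```shell
# Run a worker command once per jar in a directory, in parallel,
# and wait for all the background jobs to finish.
precompile_all() {
  dir="$1"
  worker="$2"
  for jar in "$dir"/*.jar; do
    "$worker" "$jar" &
  done
  wait
}
```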