Cannot find GemFire Region using Spring Data GemFire

I was able to run the spring-gemfire-examples-master/spring-cache project successfully. However, when I try to connect to my local locator, it tells me that region1 could not be found in the GemFire cache, even though I can see a connection has been set up in Pulse.
My steps:
Open a command window, start locator1, start server1, and create region1.
Change spring-cache...cache-context.xml in the sample folder as shown below.
Change cacheManager accordingly.
Run the sample with gradlew -q run-spring-cache.
I am a newbie to GemFire.
<util:properties id="gemfire-props">
    <prop key="log-level">warning</prop>
</util:properties>
<gfe:client-cache id="client-cache" pool-name="my-pool"/>
<gfe:pool id="my-pool" subscription-enabled="true">
    <gfe:locator host="localhost" port="10334"/>
</gfe:pool>
<gfe:lookup-region id="Region1" name="Region1" cache-ref="client-cache"/>

In this case, you need to use
<gfe:client-region id="Region1" name="Region1" cache-ref="client-cache"/>
lookup-region is associated with cache peers; client-region is used with client caches. Also, modify Main.java accordingly to just load cache-context.xml.
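For reference, here is a minimal sketch of what Main.java could look like after that change. The com.gemstone.gemfire package and the String key/value types are assumptions based on the older GemFire API used in the examples project; adjust them to match your sample:
import org.springframework.context.support.ClassPathXmlApplicationContext;
import com.gemstone.gemfire.cache.Region;

public class Main {
    public static void main(String[] args) {
        // Load only the client cache context
        ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("cache-context.xml");
        @SuppressWarnings("unchecked")
        Region<String, String> region = context.getBean("Region1", Region.class);
        // The put goes to the server through the pool; the get reads it back
        region.put("hello", "world");
        System.out.println(region.get("hello"));
        context.close();
    }
}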

What is the use of custom-artifact in Spinnaker? It always gives the error "Custom references are passed on to cloud platforms to handle or process" (500)

I am trying to use a custom-artifact account in Spinnaker.
I have a pipeline where I want to pull an HTTP file (a deployment manifest) as an artifact and use it in a deployment.
I use custom-artifact and put the URL (https://raw.githubusercontent.com/sdputurn/flask-k8s-inspector/master/Deployment.yaml) in the reference field.
I have tried running this pipeline multiple times, but it always fails with the error ("Internal Server Error", "message": "Custom references are passed on to cloud platforms to handle or process", "status": 500).
I saw some tutorials where they just use a custom artifact and put an HTTP URL to get files for the Deploy stage.
Steps to reproduce:
1. Create a new pipeline.
2. In the Configuration stage, add an artifact, choose "custom-artifact", and set the reference to https://raw.githubusercontent.com/sdputurn/flask-k8s-inspector/master/Deployment.yaml.
3. Check "use default artifact" and fill in the same details.
4. Add a Deploy stage and use the artifact from the Configuration stage.
5. Run the pipeline.
Spinnaker version: 1.16.1
As of Spinnaker 1.17.1, custom-artifact is deprecated. If possible, use an embedded artifact instead: produce the artifact in one execution and use it in another.
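If you go the embedded route, note that an embedded/base64 artifact carries the file contents base64-encoded in its reference field. A minimal sketch of producing that string from the manifest (the class name and local file path are just illustrations):
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class EncodeManifest {
    public static void main(String[] args) throws Exception {
        // Base64-encode the manifest so it can be pasted into the "reference"
        // field of an embedded/base64 artifact in the pipeline configuration.
        byte[] manifest = Files.readAllBytes(Paths.get("Deployment.yaml"));
        System.out.println(Base64.getEncoder().encodeToString(manifest));
    }
}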

How do I kill a YARN container to test failure scenarios

I'm building an application on AWS EMR using YARN (and Dask), Hadoop version 2.7.3-amzn-1. I'm trying to test various failure scenarios and want to simulate a container failure. I can't seem to find an easy way to kill a YARN container - only the whole application. Is there a command-line utility for this?
[root@node1 lillcol]# yarn container -help
20/04/24 15:04:14 INFO client.AHSProxy: Connecting to Application History server at node1/127.0.0.1:10200
usage: container
 -help                                     Displays help for all commands.
 -list <Application Attempt ID>            List containers for application
                                           attempt.
 -signal <container ID [signal command]>   Signal the container. The
                                           available signal commands are
                                           [OUTPUT_THREAD_DUMP,
                                           GRACEFUL_SHUTDOWN,
                                           FORCEFUL_SHUTDOWN] Default
                                           command is OUTPUT_THREAD_DUMP.
 -status <Container ID>                    Prints the status of the
                                           container.
You can achieve this with the command yarn container -signal [container-ID] GRACEFUL_SHUTDOWN.
I've tried it and it works; I hope this is helpful.
YARN has no CLI or REST API that kills a container.
The simplest way to create a container failure is to log in to a NodeManager host and kill the process (which would be a container) spawned by the NodeManager.
It seems this is exposed in the API starting from version 2.8.0:
https://hadoop.apache.org/docs/r2.8.0/api/org/apache/hadoop/yarn/client/api/YarnClient.html#signalToContainer(org.apache.hadoop.yarn.api.records.ContainerId,%20org.apache.hadoop.yarn.api.records.SignalContainerCommand)
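For the programmatic route on 2.8.0 and later, a minimal sketch using that API (the container ID shown in the comment is just an illustration; pass one obtained from yarn container -list):
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.SignalContainerCommand;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class SignalContainer {
    public static void main(String[] args) throws Exception {
        // Expects a container ID such as container_1550156488785_0002_01_000003
        ContainerId containerId = ContainerId.fromString(args[0]);
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration()); // picks up yarn-site.xml from the classpath
        yarnClient.start();
        try {
            // FORCEFUL_SHUTDOWN simulates a hard container failure;
            // GRACEFUL_SHUTDOWN and OUTPUT_THREAD_DUMP are also available
            yarnClient.signalToContainer(containerId, SignalContainerCommand.FORCEFUL_SHUTDOWN);
        } finally {
            yarnClient.stop();
        }
    }
}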

yarn usercache dir not resolved properly when running an example application

I am using Hadoop 3.2.0 and trying to run a simple application in a Docker container. I have made the required configuration changes in both yarn-site.xml and container-executor.cfg to select the LinuxContainerExecutor and the Docker runtime.
I use the distributed shell example from one of the Hortonworks blogs: https://hortonworks.com/blog/trying-containerized-applications-apache-hadoop-yarn-3-1/
The problem I face is that when the application is submitted to YARN, it fails due to a directory creation issue with the error below:
2019-02-14 20:51:16,450 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null, appDiagnostics=Application application_1550156488785_0002 failed 2 times due to AM Container for appattempt_1550156488785_0002_000002 exited with exitCode: -1000 Failing this attempt.Diagnostics:
[2019-02-14 20:51:16.282]Application application_1550156488785_0002 initialization failed (exitCode=20) with output:
main : command provided 0
main : user is myuser
main : requested yarn user is myuser
Failed to create directory /data/yarn/local/nmPrivate/container_1550156488785_0002_02_000001.tokens/usercache/myuser - Not a directory
I have configured yarn.nodemanager.local-dirs in yarn-site.xml and I can see it reflected in the YARN web UI at localhost:8088/conf:
<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/yarn/local</value>
    <final>false</final>
    <source>yarn-site.xml</source>
</property>
I do not understand why it is trying to create the usercache dir inside the nmPrivate directory.
Note: I have verified myuser's permissions on the directories and have also tried clearing the directories manually as suggested in a related post, but to no avail. I do not see any additional information about the container launch failure in any other logs.
How do I debug why the usercache dir is not resolved properly?
I would really appreciate any help on this.
I realized that this was all down to the users the services were started as and their permissions on the directories the services work with.
After making the required changes, I am able to run the examples and other applications seamlessly.
Thanks to the Hadoop user community for the direction. Adding the link here for more details:
http://mail-archives.apache.org/mod_mbox/hadoop-user/201902.mbox/browser

Increase memory allocated to application deployed to payara micro

I am running my application from a Payara Micro uber JAR and would like to increase the memory allocated to the application. How can I do this at the point of creating the uber JAR?
There are a couple of ways you can do this. The first way I'll mention is the preferred way:
1. Use asadmin commands
The latest edition of Payara Micro introduces an option called --postbootcommandfile which allows you to run asadmin commands against Payara Micro. Your file should include something like this:
delete-jvm-options -Xmx512m
create-jvm-options -Xmx1g
create-jvm-options -Xms1g
You will need to make sure you delete the existing options before applying new ones.
You can then use the file like this:
java -jar payara-micro.jar --postbootcommandfile myCommands.txt --deploy myApp.war --outputuberjar myPayaraMicroApp.jar
Your settings should now persist in the resulting Uber JAR.
2. Supply a custom domain.xml
The alternative to this would be modifying a domain.xml of your own and overriding the in-built domain.xml with your own.
You can use the --rootdir option to get Payara Micro to output its configuration to a directory so you can make changes there. This process is outlined in this blog:
http://blog.payara.fish/working-with-external-configuration-files-in-payara-micro
If you already have a custom domain.xml to hand, you can use the --domainconfig property to supply it, as follows:
java -jar payara-micro.jar --domainconfig myCustomDomain.xml --deploy myApp.war --outputuberjar myPayaraMicroApp.jar
After following either of these methods, you can simply start the resulting JAR and all the settings and configuration will be applied:
java -jar myPayaraMicroApp.jar
The Payara Micro uber JAR is a plain JAR and doesn't start a new JVM the way Payara Server does. Therefore there's no way to modify JVM memory settings from within the JAR, because the JVM has already started by then. Although it's possible to add JVM settings to the Payara Micro configuration, they are ignored and not applied; those configuration values are only used by Payara Server.
With Payara Micro uber JAR, you need to specify the JVM options on the command line, like this:
java -Xmx1g -Xms1g -jar myPayaraMicroApp.jar
If you need to specify JVM arguments in the uber JAR, you need to use a solution like capsule.io to wrap the JAR into a launcher JAR that would spawn a separate JVM for Payara Micro and pass the arguments to it.

Weblogic forces recompile of EJBs when migrating from 9.2.1 to 9.2.3

I have a few EJBs compiled with WebLogic's EJBC, compliant with WebLogic 9.2.1.
Our customer uses Weblogic 9.2.3.
During server start Weblogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 min. The next server start takes exactly the same time, meaning Weblogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs to 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling Weblogic to leave those EJB jars as they are and avoid the re-compilation during server start?
2. Can I tell Weblogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround was to write a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid this solution being used in production.
I FIXED this problem after spending a great deal of time researching and decompiling some classes. I encountered it when migrating from WebLogic 8 to 10; by then you might have understood the pain of dealing with Oracle WebLogic tech support. Unfortunately they did not have a server configuration setting to disable this.
You need to do two things.
Step 1. If you open the EJB jar files you can see entries like:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You will see these hashcodes for each of your EJB names. Set these hashcodes to zero, repack the jar file, and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger the recompilation during server boot-up. I automated this process of setting the hashcodes to zero. The hashcodes remain the same for each EJB even if you execute appc more than once; if you add a new EJB class or delete one, the corresponding entries are added to this marker file.
Note 1: How do you get this file? If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX you will see this file for each EJB. WebLogic sets the hashcodes to zero after it recompiles.
Note 2: When you generate EJBs using appc, generate them into an exploded directory using -output C:\myejb instead of C:\myejb.jar. This way you can play around with the marker file.
Step 2. You also need a patch from WebLogic. When you install the patch you see a message like "PATCH CRXXXXXX installed successfully. Eliminate EJB recompilation for appc". I don't remember the patch number, but you can request it from WebLogic.
You need both steps to fix the problem; the patch fixes only part of it.
Good luck!!
cheers
raj
The marker file in the EJBs is WL_GENERATED.
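For anyone who wants to automate the zeroing step, here is a minimal sketch. It assumes WL_GENERATED is a plain-text entry of name=hashcode lines at the root of each EJB jar, as shown above, and it only zeroes values that look like numeric hashcodes; treat it as an illustration rather than a supported tool:
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Collections;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class ZeroWlGenerated {

    private static final String MARKER = "WL_GENERATED";

    public static void main(String[] args) throws Exception {
        Path original = Paths.get(args[0]);
        Path patched = Paths.get(args[0] + ".patched");
        try (JarFile in = new JarFile(original.toFile());
             JarOutputStream out = new JarOutputStream(Files.newOutputStream(patched))) {
            for (JarEntry entry : Collections.list(in.entries())) {
                out.putNextEntry(new JarEntry(entry.getName()));
                try (InputStream is = in.getInputStream(entry)) {
                    if (MARKER.equals(entry.getName())) {
                        // Rewrite "name=hashcode" lines as "name=0"; leave
                        // non-numeric values (e.g. the release version entry) untouched.
                        BufferedReader reader =
                                new BufferedReader(new InputStreamReader(is, StandardCharsets.UTF_8));
                        StringBuilder zeroed = new StringBuilder();
                        String line;
                        while ((line = reader.readLine()) != null) {
                            int eq = line.lastIndexOf('=');
                            String value = eq >= 0 ? line.substring(eq + 1) : "";
                            boolean numericHash = !value.isEmpty() && value.matches("\\d+");
                            zeroed.append(numericHash ? line.substring(0, eq) + "=0" : line).append('\n');
                        }
                        out.write(zeroed.toString().getBytes(StandardCharsets.UTF_8));
                    } else {
                        // Copy every other entry unchanged
                        byte[] buffer = new byte[8192];
                        int read;
                        while ((read = is.read(buffer)) != -1) {
                            out.write(buffer, 0, read);
                        }
                    }
                }
                out.closeEntry();
            }
        }
        // Replace the original jar with the patched copy
        Files.move(patched, original, StandardCopyOption.REPLACE_EXISTING);
    }
}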
Just to update on the solution we went with - eventually we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we didn't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts - the first iterates over all the EJB jars, copies them to a temp dir, and then recompiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :))
This solution reduced compilation time from 70 min to 15 min. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time here (55 min x number of envs per drop x number of drops).