WSO2 Hive analyser script result set - hive

I am using WSO2 ESB 4.5.1 and WSO2 BAM 2.0.0. In my Hive script I am attempting to get a single value and assign it to a variable so I can later use it in SQL statements. I can use a variable with hiveconf, but I'm not sure how to assign a single value from the result set to it.
Any ideas?
Thanks.

You can extend AbstractHiveAnalyzer and write your own class which executes the query and sets the Hive conf value, similar to this summarizer. Here you can see that the execute() method should be implemented; it will be called by BAM. In it you can run your preferred query and assign the Hive conf with setProperty("your_hive_conf", yourResult.string());.
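A minimal sketch of such a class, assuming the BAM 2.0.0 analyzer API; the package, conf name, and helper method here are hypothetical:

package your.package.name;

// NOTE: the exact package of AbstractHiveAnalyzer may differ in your BAM version.
import org.wso2.carbon.analytics.hive.extension.AbstractHiveAnalyzer;

public class HiveAnalyzerImpl extends AbstractHiveAnalyzer {

    @Override
    public void execute() {
        // Run the query that produces the single value you need;
        // fetchSingleValue() is a hypothetical helper (e.g. a JDBC call
        // against your datasource).
        String result = fetchSingleValue();
        // Expose the value so later Hive statements can read it as
        // ${hiveconf:your_hive_conf}.
        setProperty("your_hive_conf", result);
    }

    private String fetchSingleValue() {
        // Placeholder: execute your query and return its single result.
        return "42";
    }
}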
You can build your Java application as a typical .jar file or as an OSGi bundle. If you have packaged it as just a .jar file, place the jar in $BAM_HOME/repository/components/lib. If you have packaged it as an OSGi bundle, place the file in the $BAM_HOME/repository/components/dropins folder. Then restart the BAM server.
Finally, in the Hive script that you add in BAM, you should include your extended class as 'class your.package.name.HiveAnalyzerImpl;', so that BAM will run the execute() method you implemented and your Hive conf will be set. The value you set for the Hive conf can then be used in the Hive script as ${hiveconf:your_hive_conf}.
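For example, the Hive script registered in BAM might look like this (the table name is illustrative; the class statement and hiveconf reference are as above):

class your.package.name.HiveAnalyzerImpl;
SELECT * FROM my_summary_table WHERE id = ${hiveconf:your_hive_conf};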
Hope this helps.

Adding SSL parameters to pepper_box config in jmeter

I'm trying to test a Kafka stream in JMeter using the Pepper-Box config, but each time I try adding Java Request parameters it reverts to the default parameters without saving the ones I have added. I have tried the recommendation given here of adding an underscore (so _ssl.enabled), but the params still disappear. Any recommendations? Using JMeter 5.3 and Pepper-Box 1.0.
I believe you need to put your SSL properties into the PepperBoxKafkaSampler directly; there are pre-populated placeholders which you can change, and those changes persist.
The same behaviour applies to Java Request Defaults.
It might be that your installation got corrupted somehow, or that there is a conflict with another JMeter plugin; check the jmeter.log file for any suspicious entries.
In the meantime you may find the Apache Kafka - How to Load Test with JMeter article useful.
I had the same issue. I got around it by cloning the Pepper-Box repository (https://github.com/GSLabDev/pepper-box) and making changes to the PepperBoxKafkaSampler.java file, updating the setupTest() method with my props. You can also add the parameters using the .addArgument() method (used in PepperBoxKafkaSampler.java) to make them available in JMeter, as in the sketch below.
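A hedged sketch of the kind of change described, assuming the sampler follows JMeter's AbstractJavaSamplerClient structure; the parameter names and values are illustrative:

// In PepperBoxKafkaSampler.java. Relevant imports (already present in the
// sampler): org.apache.jmeter.config.Arguments,
// org.apache.jmeter.protocol.java.sampler.JavaSamplerContext, java.util.Properties.

@Override
public Arguments getDefaultParameters() {
    Arguments defaultParameters = new Arguments();
    // ... the sampler's existing arguments ...
    // Expose the SSL settings as editable Java Request parameters:
    defaultParameters.addArgument("ssl.truststore.location", "/path/to/truststore.jks");
    defaultParameters.addArgument("ssl.truststore.password", "changeit");
    return defaultParameters;
}

@Override
public void setupTest(JavaSamplerContext context) {
    Properties props = new Properties();
    // ... the sampler's existing producer properties ...
    // Pass the SSL parameters through to the Kafka producer config:
    props.put("security.protocol", "SSL");
    props.put("ssl.truststore.location", context.getParameter("ssl.truststore.location"));
    props.put("ssl.truststore.password", context.getParameter("ssl.truststore.password"));
    // ... construct the KafkaProducer with props ...
}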
Rebuild the repo with Maven (mvn clean install) and replace the old Pepper-Box jar in jmeter/lib/ext with your newly built jar.

Need to prevent dw-buffer-122.tmp creation (in the machine's Temp folder) while running a Mule application?

While running a Mule application which contains a DataWeave component, a temp file like dw-buffer-3756726.tmp is created. In addition, a folder with the same name as the application zip package running on the server is created inside the Temp folder.
But I don't want these temp files for DataWeave or the Mule application.
Can anyone suggest how I can avoid this?
That is a known issue, fixed in bugfix releases of Mule Runtime. If you are using any release prior to 3.8.5 you should upgrade; otherwise, wait for 3.9.0 to be released.

DbUnit Tests with SilkCentral

I'm trying to execute DbUnit tests with SilkCentral on a remote virtual machine that works as an execution server. The AllTests.class file is at \\p6621va\ucd\ucdmain_TEST\bin\es\bde\aps\ucdmain\ias\tests\AllTests.class and it contains the test suites.
I need to specify the classpath for AllTests.class; I've done it like this, according to the documentation:
It returns the following error:
How can I specify the Classpath?
Thanks in advance.
You need to specify the class with its package, and the classpath without it:
Classpath: \\p6621va\ucd\ucdmain_TEST\bin
Test class: es.bde.aps.ucdmain.ias.tests.AllTests

How to add JAR for Hive custom UDF so it is available permanently on the HDInsight cluster?

I have created a custom UDF in Hive; it's tested on the Hive command line and works fine. Now that I have the jar file for the UDF, what do I need to do so that users will be able to create a temporary function pointing to it? Ideally, from the Hive command prompt I would do this:
hive> add jar myudf.jar;
Added [myudf.jar] to class path
Added resources: [myudf.jar]
hive> create temporary function foo as 'mypackage.CustomUDF';
After this I am able to use the function properly.
But I don't want to add the jar each and every time I want to execute the function. I should be able to run this function when:
- executing a Hive query against the HDInsight cluster from Visual Studio
- executing a Hive query from the command line through SSH (Linux) or RDP/cmd (Windows)
- executing a Hive query from the Ambari Hive view (Linux)
- executing a Hive query from the HDInsight Query Console Hive Editor (Windows cluster)
So, no matter how the query is executed, the JAR should already be available and added to the path. What's the process to ensure this for Linux as well as Windows clusters?
Maybe you could add the jar in the .hiverc file present in the Hive etc/conf directory. This file is loaded every time Hive starts, so from then on you won't need to add the jar separately for each session.
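A minimal sketch of what that .hiverc would contain, reusing the names from the question (the jar path is illustrative, and the file's exact location varies by cluster, e.g. /etc/hive/conf/.hiverc):

add jar /path/to/myudf.jar;
create temporary function foo as 'mypackage.CustomUDF';

Since .hiverc runs at the start of every Hive session, foo should be available regardless of which client (Visual Studio, SSH, Ambari, or the Query Console) opened the session.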

Weblogic forces recompile of EJBs when migrating from 9.2.1 to 9.2.3

I have a few EJBs compiled with WebLogic's EJBC, compliant with WebLogic 9.2.1.
Our customer uses WebLogic 9.2.3.
During server start Weblogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 min. The next server start takes exactly the same time, meaning Weblogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs to 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling Weblogic to leave those EJB jars as they are and avoid the re-compilation during server start?
2. Can I tell Weblogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround was to write a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid this solution being used in production.
I FIXED this problem by spending a great deal of time researching and decompiling some classes. I encountered it when migrating from WebLogic 8 to 10; by this time you might have understood the pain of dealing with Oracle WebLogic tech support. Unfortunately they did not have a server configuration setting to disable this.
You need to do two things.
Step 1. If you open the EJB jar files you can see:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You see these hashcodes for each of your EJB names. Make these hashcodes zero, repack the jar file, and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger the recompilation during server boot-up. I automated this process of setting the hashcodes to zero (see the sketch after the notes below). The hashcodes remain the same for each EJB even if you execute appc more than once; if you add a new EJB class or delete a class, those entries are added to this marker file.
Note 1: how do you get this file? If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX you will see this file for each EJB. WebLogic sets the hashcodes to zero after it recompiles.
Note 2: when you generate EJBs using appc, generate them to an exploded directory using -output C:\myejb instead of C:\myejb.jar. This way you can play around with the marker file.
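A hedged sketch of how that automation might look (the follow-up below identifies the marker file as WL_GENERATED; the jar name is illustrative):

jar xf MyEJBs.jar WL_GENERATED               # extract the marker file
# Zero every name=hashcode entry; the version line contains dots,
# so this pattern leaves it untouched:
sed 's/=[0-9]\{1,\}$/=0/' WL_GENERATED > WL_GENERATED.tmp
mv WL_GENERATED.tmp WL_GENERATED
jar uf MyEJBs.jar WL_GENERATED               # write it back into the jar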
Step 2. You also need a patch from WebLogic. When you install the patch you see a message like "PATCH CRXXXXXX installed successfully. Eliminates EJB recompilation for appc". I don't remember the patch number, but you can request it from WebLogic.
You need both steps to fix the problem; the patch fixes only part of it.
Good luck!
cheers
raj
The marker file in the EJBs is WL_GENERATED.
Just to update on the solution we went with: eventually we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we didn't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts: the first iterates over all the EJB jars, copies them to a temp dir and then recompiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :)).
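A rough sketch of the two scripts (names and paths here are illustrative, not the actual scripts):

#!/bin/ksh
# recompile_all.ksh -- copy the EJB jars aside, then recompile in parallel
TMPDIR=/tmp/ejb_recompile
mkdir -p "$TMPDIR"
cp /path/to/ear_contents/*.jar "$TMPDIR"
for jar in "$TMPDIR"/*.jar; do
    ./recompile_one.ksh "$jar" &    # several instances run at once
done
wait                                # block until all recompiles finish

#!/bin/ksh
# recompile_one.ksh -- recompile a single EJB jar
java -Drecompiler=yes -cp "$CLASSPATH" weblogic.appc "$1" \
    || print "appc failed for $1" >&2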
This solution reduced compilation time from 70 min to 15 min. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time (55 min x number of envs per drop x number of drops).