External script (R) not working - sql

When I try to run an external script (an R script) from the Kognitio console, I get the error message below.
Error:external script vfork child: No such file or directory
Can someone please help me understand what it means?

This will be because you have not replicated the script environment to all the DB nodes that are eligible to run the script.
Chapter 10 of the Kognitio Guide (downloadable from http://www.kognitio.com/forums/viewtopic.php?t=3) explains in section 10.2 how the script environment must be installed identically, in the same location, on all nodes that will be used in processing, and section 10.6 explains how you can constrain this to a subset of nodes if for some reason you do not want the script environment on all nodes (e.g. if it has an expensive per-node licence).
You can use the wxsync tool to synchronise files across all nodes, or a remote deployment tool, such as HP's RDP, to ensure that the script environment is installed identically on all nodes.
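As a quick sanity check before re-running the script, you can verify that the interpreter exists at the same path on every node. This is a minimal sketch; the node names and the R path are placeholders for your installation:
for node in node1 node2 node3; do
  # check that the R interpreter is present and executable on each node
  ssh "$node" 'test -x /usr/local/R/bin/Rscript && echo "$(hostname): OK" || echo "$(hostname): MISSING"'
done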

Related

Pentaho 5.4.0 Community Edition Remote Execution

I am using Pentaho Community Edition 5.4.0. Let me explain my requirement simply:
1) I have my jobs and transformations on my local Windows machine and I would like to execute them on my client's machine, so I installed the same Pentaho Community Edition 5.4.0 there. For remote execution I have heard about the carte.bat service. I searched for the installation procedure and the configuration settings for remote execution, but I did not get a clear idea of it. Please give me a clear step-by-step procedure for how to run my jobs remotely on the client machine.
2) Is it possible to schedule those jobs and transformations in Pentaho Community Edition 5.4.0? If so, please explain how.
Thanks and Regards
Dhamodharan.
Install Jenkins:
https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins
At least read up on what variables are available in Jenkins; it is pretty handy to know them.
Download PDI (Kettle) from http://pentaho.com and unzip it into any suitable directory.
Configure the executables and PDI variables as described here:
How to configure Database connection for production environment in Pentaho data integration Kettle transformation
Start Jenkins and log in to the admin panel. Create a new job;
in the Build section add an "Execute shell" step and put these lines in its text area:
cd $WORKSPACE
kitchen.sh -file=main.kjb
Done.
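A slightly more defensive version of that shell step, assuming Kitchen's convention of returning a non-zero exit code when the job fails (Jenkins marks the build as failed when the step exits non-zero):
cd $WORKSPACE
kitchen.sh -file=main.kjb -level=Basic
status=$?
if [ $status -ne 0 ]; then
  # surface the Kettle exit code in the build log before failing the build
  echo "Kettle job failed with exit code $status"
  exit $status
fi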
There are a lot of Jenkins plugins.
You can add post-build actions:
notify by email
archive and publish results
... and so on
Jenkins is worth using if it already exists in your infrastructure for other purposes; otherwise Carte will be enough.
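Since the question asks about Carte specifically: starting a Carte server is a one-liner. The host and port below are examples; the default credentials (cluster/cluster) live in the pwd/kettle.pwd file of the PDI installation:
Carte.bat 127.0.0.1 8081
You can then verify it is running at http://127.0.0.1:8081/kettle/status and register it in Spoon as a slave server to execute jobs and transformations remotely.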
Configure the variables in .bashrc and .bash_profile (the user should be the same one Jenkins runs as):
#.bashrc
export KETTLE_HOME=/opt/R1/data-integration
export KETTLE_JNDI_ROOT=$KETTLE_HOME/simple-jndi
export PATH=$PATH:$KETTLE_HOME
To force .bashrc to be evaluated on SSH login, add this to .bash_profile:
#.bash_profile
if [ -f ~/.bashrc ]; then
  . ~/.bashrc
fi
Then reload it:
source ~/.bashrc
After that, restart Jenkins (not from the admin panel).

How to run scripts automatically after deployment in AWS using EB CLI?

I am trying to run a Django server on AWS. My Django app depends on some mathematical Python libraries like numpy, scipy, sklearn, etc. However, there is an issue because of which I need to do the following after every deployment:
sudo nano /etc/httpd/conf.d/wsgi.conf
---------------------------------------
add this line in the file
WSGIApplicationGroup %{GLOBAL}
---------------------------------------
sudo /etc/init.d/httpd reload
Basically I need "WSGIApplicationGroup %{GLOBAL}" in my wsgi.conf file, otherwise I get a 504. I am using a custom AMI built on top of Amazon Linux 2014, and I am using the EB CLI for deployment. However, whenever I deploy, wsgi.conf is reset and no longer contains the line I added previously, so I have to manually SSH into the EC2 instance and redo this myself. That adds overhead to every deployment, and it is also not feasible once we scale up (cloning or creating instances resets it as well). So is there a way for this to be done automatically after every deployment?
The content of wsgi.conf is fixed, so I could easily write a script to create it; the issue is how to trigger that script automatically.
PS: I am new to AWS.
You need to use the AWS Elastic Beanstalk feature called .ebextensions: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
In your case you can't use the files or commands sections, because:
The commands are processed in alphabetical order by name, and they run
before the application and web server are set up and the application
version file is extracted.
You need to use the container_commands section:
They run after the application and web server have been set up and the
application version file has been extracted, but before the
application version is deployed.
Example .ebextensions/01wsgi.config (not tested :-)):
container_commands:
  apache_reload:
    command: |
      echo "WSGIApplicationGroup %{GLOBAL}" >> /etc/httpd/conf.d/wsgi.conf
      /etc/init.d/httpd reload
Feel free to tweak my example as you want; for example, you can copy a prepared wsgi.conf file somewhere and then replace the original in the container_commands section.
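One tweak worth considering, untested like the original: adding a guard makes the append idempotent, so the directive is never duplicated even if the command runs more than once on the same instance:
container_commands:
  apache_reload:
    command: |
      # only append the directive if it is not already present
      grep -qF 'WSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/wsgi.conf || echo 'WSGIApplicationGroup %{GLOBAL}' >> /etc/httpd/conf.d/wsgi.conf
      /etc/init.d/httpd reload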

SSIS Execute Process Task Can't Find executable

I am using the 7-Zip standalone .exe to unzip a file, using the Execute Process Task. I have tested this over and over again on multiple machines and I know it works (at least in debug mode / Visual Studio). I have uploaded the package to the server and created a job that calls said package from the package store. The package is not able to find the .exe no matter where I put it.
My first thought was to put the .exe on the C:\ drive, which failed. I have also failed in my attempts to place the .exe in a network location that the account the package runs under has full control over.
Basically, has anybody else had issues getting the Execute Process Task to find an executable when the package is uploaded to the server?
The error message is
Can't find 7za.exe in directory C:\7zip
I'll risk a downvote for being wrong, but I believe you have a permissions issue.
You say it runs fine on other servers from BIDS; try it without BIDS. Call it from a command line on a box where it works:
dtexec.exe /file C:\HereComesTheUnzipper.dtsx
If that works, then repeat the step on the troublesome server. RDC into the box and try again:
dtexec.exe /ser localhost /sq HereComesTheUnzipper
If that still works, then you are looking at an issue with the job. What account is the SQL Agent service running as? Is the SSIS job step running under a particular set of credentials? If so, is it a SQL Server login (which wouldn't map to anything on the physical box)? Regardless of what your answers are, the resolution will be to ensure the account has access to:
7z.exe
whatever scratch area 7zip may use while unpacking files (I assume %temp%)
the output folder (C:\bin\7z.exe -e e:\data\MyThing.7z)
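A quick way to test the permissions theory (the account name and path are placeholders): RDC into the server and launch the executable under the job's credentials; if 7-Zip prints its usage banner, the account can at least reach the file.
runas /user:YOURDOMAIN\SqlAgentAccount "C:\7zip\7za.exe"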

Weblogic forces recompile of EJBs when migrating from 9.2.1 to 9.2.3

I have a few EJBs compiled with WebLogic's ejbc, compliant with WebLogic 9.2.1.
Our customer uses WebLogic 9.2.3.
During server start WebLogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 minutes. The next server start takes exactly the same time, meaning WebLogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs for 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling WebLogic to leave those EJB jars as they are and avoid the recompilation during server start?
2. Can I tell WebLogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround is a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid using this solution in production.
I FIXED this problem after spending a great deal of time researching and decompiling some classes. I encountered it when migrating from WebLogic 8 to 10.
By this time you might have understood the pain of dealing with Oracle WebLogic tech support; unfortunately they did not have a server configuration setting to disable this.
You need to do two things.
Step 1. If you open the EJB jar files you can see entries like these:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
These are hashcodes for each of your EJB names. Set the hashcodes to zero, repack the jar file, and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger the recompilation during server boot-up. I automated this process of setting the hashcodes to zero (see the sketch after Note 2).
The hashcodes remain the same for each EJB even if you execute appc more than once; if you add a new EJB class or delete a class, the entries in this marker file change accordingly.
Note 1: where do you find this file?
If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX you will see this file for each EJB. WebLogic sets the hashcodes to zero after it recompiles.
Note 2: when you generate EJBs using appc, generate them into an exploded directory using -output C:\myejb instead of C:\myejb.jar. This way you can play around with the marker file.
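A minimal sketch of the automation mentioned above, assuming the jars sit in the current directory and that the marker file is WL_GENERATED at the jar root (as noted below). The sed pattern only rewrites entries whose value is all digits, so the WLS_RELEASE_BUILD_VERSION line is left alone:
for jar in *.jar; do
  # pull the marker file out of the jar, zero every hashcode, and put it back
  unzip -o "$jar" WL_GENERATED
  sed 's/=[0-9][0-9]*$/=0/' WL_GENERATED > WL_GENERATED.tmp && mv WL_GENERATED.tmp WL_GENERATED
  zip "$jar" WL_GENERATED
done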
Step 2. You also need a patch from WebLogic. When you install the patch you see a message like:
"PATCH CRXXXXXX installed successfully. Eliminate EJB recompilation for appc."
I don't remember the patch number, but you can request it from WebLogic support.
You need both steps to fix the problem; the patch fixes only part of it.
Good luck!
cheers
raj
The marker file in the EJBs is WL_GENERATED.
Just to update on the solution we eventually went with: we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we don't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts: the first iterates over all the EJB jars, copies them to a temp dir, and then recompiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :)).
This solution reduced compilation time from 70 minutes to 15 minutes. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time here (55 min x number of environments per drop x number of drops).
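A condensed sketch of that approach, with the temp dir and jar locations as placeholders and without the error handling the real scripts have:
mkdir -p /tmp/ejb-recompile && cp ejbs/*.jar /tmp/ejb-recompile
for jar in /tmp/ejb-recompile/*.jar; do
  # each appc run is backgrounded so the jars recompile in parallel
  java -Drecompiler=yes -cp "$CLASSPATH" weblogic.appc "$jar" &
done
wait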

Debugging Solaris OS crash

I have access to a remote Solaris machine which crashes occasionally, and I have to ask someone with physical access to boot it up again, which works successfully. I would like to know which tools/files I should look at to find out the cause of the crash, so that I can make the necessary configuration changes and avoid it in the future.
Which tools you can use will depend on what version of Solaris you have running and what the actual problem is. The first thing to do is check the system console (which it sounds like you don't have access to) and the /var/adm/messages file. This file is updated with system messages, and the newest appear at the end.
Next, you can look for a system core file. If a core file was created, it will be in /var/crash/hostname, where "hostname" is the name of the machine.
If you have an actual core file in the /var/crash/hostname directory, this set of commands will give you a good string to search Google with:
# cd /var/crash/hostname
Replace "hostname" with the hostname of your machine.
# mdb -k unix.0 vmcore.0
If you have multiple core files, select the most recent version.
> ::status
This should give you a panic message; cut and paste it into Google and see what you can find.
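If you prefer a one-liner, mdb reads commands from standard input, so something like this should print the same panic status without an interactive session (same core file names as above):
echo ::status | mdb -k unix.0 vmcore.0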
For more on core file analysis, read this:
http://cuddletech.com/blog/pivot/entry.php?id=965