How to execute a script on compute nodes using NRPE? - scripting

I want to execute a script on selected compute nodes using NRPE and get the result back as output. I should be able to choose the compute nodes where the script is to be executed and supply the command to run; the result of the execution is then returned as output.
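A minimal sketch of that setup, assuming each compute node already runs the NRPE agent; the script path, command name, node names and check_nrpe location below are placeholders:

# On each compute node, expose the script as an NRPE command in nrpe.cfg
# (location varies, e.g. /etc/nagios/nrpe.cfg), then restart the nrpe service:
#   command[run_myscript]=/usr/local/bin/myscript.sh

# From the monitoring host, run the command on the selected nodes
# and print each node's result:
for node in compute01 compute02 compute03; do
    echo "=== $node ==="
    /usr/lib/nagios/plugins/check_nrpe -H "$node" -c run_myscript
done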

Related

How can I pass the result of a shell script to a variable in a job?

I have a Pentaho job that uses a shell script step to process some data.
But I found that if I want to use the result of the script, I have to write it to a file and then read that file to assign variables.
Is there an easier way to use the result of a script step in the following steps?
This is the Script content.
Here is the whole process.
In Pentaho you cannot create a variable and use it in the same place.
Basically you just need to create one ktr and one job:
the first ktr is in charge of performing some task and saving the result with the Set Variables step (root-job-level scope)
variables created in the first ktr are also available at job level
If you want to use the variable in another ktr:
the second ktr should use the Get Variables step at the beginning to retrieve the variable created in the previous transformation
the transformations should be executed sequentially using a job
In your case, you should run the shell script in the first ktr, turn the result into a variable and save it with Set Variables. The job that invokes the ktrs is then able to use the variable created in the previous ktr.
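As a minimal sketch of the shell side, assuming the script simply prints the value it computes to stdout so that a transformation step such as 'Execute a process' can capture it as a field and pass it to Set Variables (script name and computation are made up for illustration):

#!/bin/bash
# Hypothetical script: compute one value and print only that value to stdout,
# so the first ktr can capture it and hand it to the Set Variables step.
row_count=$(wc -l < /data/input.csv)
echo "$row_count"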

How do I repeatedly run a Hive query using each line of a multi-line input as the parameter?

Using Hue, I've got a Hive query that takes an input (e.g. an ID number) and returns a record based on it. I need to handle multiple numbers to look up in one go (in serial or parallel) and collate the results (i.e. list the records for each, one after the other), so the input might be:
1234567890
45345353
32423422
1323122
etc...
I've got access to Hue (which I'm supposed to use), Hive, Oozie and Beeline. How do I:
1.) extract the number from each line
2.) repeatedly call my HiveQL query passing in each number in turn
3.) supply the total output to the user in one go
I don't know Python if that's relevant but could attempt a shell script.
I'm guessing one way might be to get the multi-line user input via Oozie (can it prompt a user for input?), then pass that to a shell script which extracts the number from each line and uses beeline to repeatedly run my Hive query with the next number as the parameter?
Thanks
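A minimal sketch of that shell-plus-beeline approach, assuming the numbers sit one per line in ids.txt, the query lives in lookup.hql and refers to the value as ${hivevar:id}, and the JDBC URL is a placeholder:

#!/bin/bash
# Run the same HiveQL file once per ID and collect all output in one file.
while read -r id; do
    [ -z "$id" ] && continue    # skip blank lines
    beeline -u "jdbc:hive2://hiveserver:10000/default" \
            --hivevar id="$id" \
            -f lookup.hql
done < ids.txt > all_results.txt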

Is there a way to pass multiple values of the same variable into a Hive job in Hue?

I have a Hive query in Hue with one input variable, a string (for example a date like '20160117').
I'd like to execute this Hive query in Hue and pass it multiple values for that single variable.
Is it possible? If yes, how would you guys do it?
Oozie runs Directed Acyclic Graphs (DAGs), and acyclic comes down to no loops, ever. But of course there are workarounds.
So, if you must run the same HQL script exactly N times with a different parameter value...
either copy/paste the Hive Action N times, in a chain, with a different param value (quick and dirty)
or build a Sub-Workflow with just the Hive action and call it N times, in a chain, with a different param value
On the other hand, if you must adapt dynamically the number and the value of executions, then you must work out the "loop" logic outside of Oozie proper...
for instance, start with a Shell action that creates an empty HQL file, adds N queries to it in a loop, then uploads the file to HDFS; next, a Hive action executes the HQL script as-is (quick and dirty, but not ideal for exception handling); see the sketch below
or develop a Java program that connects to HiveServer2 via JDBC, submits a PreparedStatement with 1 bind variable, and executes the statement N times in a loop with different values of the variable.
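A minimal sketch of the Shell-action variant mentioned above, with the values, table, column and HDFS path as placeholders:

#!/bin/bash
# Generate one query per value, then upload the HQL file to HDFS
# so a downstream Hive action can execute it as-is.
VALUES="20160117 20160118 20160119"
HQL=/tmp/generated_queries.hql
> "$HQL"
for v in $VALUES; do
    echo "SELECT * FROM my_table WHERE my_date = '$v';" >> "$HQL"
done
hdfs dfs -put -f "$HQL" /user/me/workflows/generated_queries.hql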
And maybe, someday, Hive will support some kind of procedural language similar to PL/SQL, T-SQL, PgSQL etc. and you will be able to pass a comma-separated list of values and process it inside of Hive.

STORE output to a single CSV?

Currently, when I STORE into HDFS, it creates many part files.
Is there any way to store out to a single CSV file?
You can do this in a few ways:
To set the number of reducers for all Pig operations, you can use the default_parallel property - but this means every single step will use a single reducer, decreasing throughput:
set default_parallel 1;
Prior to calling STORE, if the last operation executed is one of COGROUP, CROSS, DISTINCT, GROUP, JOIN (inner), JOIN (outer), or ORDER BY, then you can use the PARALLEL 1 keyword to denote the use of a single reducer to complete that command:
GROUP a BY grp PARALLEL 1;
See Pig Cookbook - Parallel Features for more information
You can also use Hadoop's getmerge command to merge all those part-* files.
This is only possible if you run your Pig scripts from the Pig shell (and not from Java).
This has an advantage over the proposed solution: you can still use several reducers to process your data, so your job may run faster, especially if each reducer outputs little data.
grunt> fs -getmerge <Pig output file> <local file>
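For reference, the same getmerge can also be invoked with the Hadoop client from an ordinary shell (paths are placeholders):

hadoop fs -getmerge /user/me/pig_output /tmp/result.csv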

Is it possible to execute pentaho step in sequence?

I have a Pentaho transformation which consists of, for example, 10 steps. I want to start this job for N input parameters, but not in parallel; each job execution should start only after the previous transformation has fully completed (the process is done in a transaction and committed or rolled back). Is this possible with Pentaho?
You can add the 'Block this step until steps finish' step from the Flow category to your transformation. Or you can combine the 'Wait for SQL' component from Utility with a loop in your job.
Regards
Mateusz
Maybe you should do it using jobs instead of transformations. Jobs only run in sequence, while transformations run in parallel. (Strictly speaking, a transformation has an initialization phase that runs in parallel, and then the flow runs sequentially.)
If you can't use jobs, you can always do what Mateusz said.