What is the difference between quit and exit in hive

Is there any difference between 'quit' and 'exit' when I exit hive?

From the Hive documentation, it seems that quit and exit perform exactly the same function of exiting the Hive CLI.
Further, according to the source code, the two commands are functionally identical:
if (cmd_trimmed.toLowerCase().equals("quit") || cmd_trimmed.toLowerCase().equals("exit")) {
  // if we have come this far - either the previous commands
  // are all successful or this is command line. in either case
  // this counts as a successful run
  ss.close();
  System.exit(0);
} else if (...

Related

How to return an exit code in UniData?

Is there a way to return an exit code to the operating system in UniBasic or ECL?
For example, I want to capture a user-programmable exit code that can be retrieved with Bash's $? variable.
From what I see, the UniBasic STOP and ABORT commands only return execution to the UniData system level (e.g. udt) rather than exiting to the shell.
I am using Rocket UniData version 7.3.7.

When beeline partially executes a list of commands, how do I get the exit status?

I'm passing a file named "some.sql", containing multiple queries, to beeline with -f. If one of those queries fails, does beeline return 0 or some non-zero value? I would like to capture and handle this situation.
The return code will be non-zero if at least one of the queries in the file fails. Beeline will not execute the remaining queries in the script after the failed one, if there are any. It is better to have one query per file.
A sample bash script:
#!/bin/bash
beeline -u "$url" -f queries.sql
rc=$?
if [ $rc -ne 0 ]
then
    echo "return code is $rc. One or more queries in the file failed"
else
    echo "return code is $rc. All queries executed successfully"
fi
You can also add printf statements after each query in the queries file to track which queries executed successfully.
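For instance, a hypothetical queries.sql with a simple marker statement after each real query (the table and queries here are made up for illustration):
-- queries.sql (hypothetical)
CREATE TABLE IF NOT EXISTS demo (id INT);
SELECT 'query 1 finished' AS progress;
INSERT INTO TABLE demo VALUES (1);
SELECT 'query 2 finished' AS progress;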

How to see if a process by name is running in Tcl

I want to use pidof on a process given by name in Tcl. I have used [exec pidof $proc_name], but it always returns an error: child process exited abnormally.
I read somewhere that exec always treats a non-zero return code as an error, even though pidof returns the process id number. Does anyone know of a workaround? Thanks in advance!
The reason I want pidof is to see whether that process is running; if not, I will restart it.
The problem is that pidof does strange things with exit codes:
Exit Status
0   At least one program was found with the requested name.
1   No program was found with the requested name.
This interacts badly with exec which treats a non-zero exit code as indicating that it should tell the rest of Tcl that there was an error.
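You can see this from a shell (sshd is just an example of a process that is likely to exist):
# exit status 0: at least one matching process was found
pidof sshd; echo "status: $?"
# exit status 1: no match; this is the status that makes exec throw
pidof no_such_process; echo "status: $?"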
The simplest way of dealing with this is a little extra shell script wrapper. Let's hide it inside a procedure for convenience:
proc pidof {name} {
    exec /bin/bash -c "pidof '$name'; exit \$(( \$? - 1 ))"
}
All that does is subtract 1 from the exit code before it hits back into Tcl.
(You could also fix this using the techniques described in the exec manual but I think it's simpler to fix on the bash side this time.)
I ran into this too, and the above approach caused some issues in the old Linux environment I run in (no bash, and exit code handling was a bit different with BusyBox).
My solution, which should work anywhere, is similar to what a few others suggested:
proc pidof {name} {
    catch {exec -ignorestderr -- pidof $name} pid
    if {[string is entier -strict $pid]} {
        return $pid
    }
}
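With this wrapper in place, the original goal (restart the process when it is not running) becomes a short sketch like the following, where myd and its path are placeholders:
if {[pidof myd] eq ""} {
    # not running, so restart it in the background
    exec /usr/local/bin/myd &
}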

System command executes but is immediately backgrounded

I'm attempting to use the TProcess unit to execute ssh to connect to one of my servers and present me with the shell. It's a rewrite of a tool I had in Ruby, as Ruby's execution time is very slow. When I run my Process.Execute function, I am presented with the shell, but it is immediately backgrounded. Running pgrep ssh reveals that it is running, but I have no access to it whatsoever; using fg does not bring it back. The code is as follows for this segment:
if HasOption('c', 'connect') then begin
  TempFile := GetRecord(GetOptionValue('c', 'connect'));
  AProcess := TProcess.Create(nil);
  AProcess.Executable := '/usr/bin/ssh';
  AProcess.Parameters.Add('-p');
  AProcess.Parameters.Add(TempFile.Port);
  AProcess.Parameters.Add('-ntt');
  AProcess.Parameters.Add(TempFile.Username + '@' + TempFile.Address);
  AProcess.Options := [];
  AProcess.ShowWindow := swoShow;
  AProcess.InheritHandles := False;
  AProcess.Execute;
  AProcess.Free;
  Terminate;
  Exit;
end;
TempFile is a variable of type TProfile, a record containing information about the server. The cataloging system and retrieval work fine, but pulling up the shell does not.
...
AProcess.ShowWindow:= swoShow;
AProcess.InheritHandles:= False;
AProcess.Execute;
AProcess.Free;
...
You're starting the process but not waiting for it to exit. This is from the documentation on Execute:
Execute actually executes the program as specified in CommandLine, applying as much as of the specified options as supported on the current platform.
If the poWaitOnExit option is specified in Options, then the call will only return when the program has finished executing (or if an error occured). If this option is not given, the call returns immediatly[sic], but the WaitOnExit call can be used to wait for it to close, or the Running call can be used to check whether it is still running.
You should set the poWaitOnExit option in options before calling Execute, so that Execute will block until the process exits. Or else call AProcess.WaitOnExit to explicitly wait for the process to exit.
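A minimal sketch of that fix against the question's code (untested; only the Options assignment changes):
AProcess.Options := [poWaitOnExit];  // Execute now blocks until ssh exits
AProcess.Execute;
AProcess.Free;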

cron script to act as a queue OR a queue for cron?

I'm betting that someone has already solved this, and maybe I'm using the wrong search terms for Google to tell me the answer, but here is my situation.
I have a script that I want to run, but only when scheduled and only one at a time (the script cannot run concurrently with itself).
The sticky part: say I have a table called "myhappyschedule" which holds the data I need and the scheduled times. The table can hold multiple scheduled times, even for the same moment, and each one should trigger the script. So essentially I need a queue of each time the script fires, and each run must wait for the previous one to finish (sometimes the script takes a minute to execute, sometimes many minutes).
What I'm thinking of doing is making a script that checks myhappyschedule every 5 minutes, gathers up the rows that are due, and puts them into a queue where another script executes each 'job' or occurrence in order. All of which sounds messy.
To make this harder: I'm allowing users to schedule things in myhappyschedule, not edit crontab.
What can be done about this? File locks and scripts calling scripts?
add a column exec_status to myhappytable (maybe also time_started and time_finished, see pseudocode)
run the following cron script every x minutes
Pseudocode of the cron script:
[create/check pid lock (optional, but see "A potential pitfall" below)]
get number of rows from myhappytable where (exec_status == executing_now)
if it is > 0, exit
begin loop
    get one row from myhappytable
        where (exec_status == not_yet_run) and (scheduled_time <= now)
        order by scheduled_time asc
    if no such row, exit
    set row exec_status to executing_now (maybe set time_started to now)
    execute whatever command the row contains
    set row exec_status to completed
        (maybe also store the command output/return as well, set time_finished to now)
end loop
[delete pid lock file (complementary to the starting pid lock check)]
This way, the script first checks if none of the commands is running, then runs first not-yet run command, until there are no more commands to be run at the given moment. Also, you can see what command is executing by querying the database.
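A minimal runnable sketch of that pseudocode in bash, assuming sqlite3 and a myhappytable with id, command, scheduled_time and exec_status columns (the database file and schema are assumptions, not part of the original answer):
#!/bin/bash
db=schedule.db
# bail out if some command is already marked as running
running=$(sqlite3 "$db" "SELECT COUNT(*) FROM myhappytable WHERE exec_status = 'executing_now';")
[ "$running" -gt 0 ] && exit 0
while :; do
    # oldest due row that has not yet run
    row=$(sqlite3 "$db" "SELECT id, command FROM myhappytable WHERE exec_status = 'not_yet_run' AND scheduled_time <= datetime('now') ORDER BY scheduled_time LIMIT 1;")
    [ -z "$row" ] && break
    id=${row%%|*}    # sqlite3 separates columns with '|' by default
    cmd=${row#*|}
    sqlite3 "$db" "UPDATE myhappytable SET exec_status = 'executing_now', time_started = datetime('now') WHERE id = $id;"
    eval "$cmd"      # run the stored command
    sqlite3 "$db" "UPDATE myhappytable SET exec_status = 'completed', time_finished = datetime('now') WHERE id = $id;"
done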
A potential pitfall: if the cron script is killed, a scheduled task will remain in the "executing_now" state. That's what the pid lock at the beginning and end is for: to see whether the cron script terminated properly. Pseudocode of the create/check pid lock:
if exists pidlockfile then
    check if process id given in file exists
    if not exists then
        update myhappytable set exec_status = error_cronscript_died_while_executing_this
            where exec_status == executing_now
        delete pidlockfile
    else (previous instance still running)
        exit
    endif
endif
create pidlockfile containing cron script process id
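In bash, that check might look like this sketch (the lock path and the error status value are assumptions):
lock=/var/run/myhappy.pid
if [ -f "$lock" ]; then
    if kill -0 "$(cat "$lock")" 2>/dev/null; then
        exit 0  # previous instance still running
    fi
    # previous run died; flag the stuck row for inspection
    sqlite3 schedule.db "UPDATE myhappytable SET exec_status = 'error_cronscript_died_while_executing_this' WHERE exec_status = 'executing_now';"
    rm -f "$lock"
fi
echo $$ > "$lock"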
You can use the at(1) command inside your script to schedule its next run. Before it exits, it can check myhappyschedule for the next run time. You don't need cron at all, really.
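A sketch of that idea, assuming the schedule lives in sqlite3 and GNU date is available (both assumptions):
# at the end of the script, re-queue it for the next scheduled time
next_run=$(sqlite3 schedule.db "SELECT MIN(scheduled_time) FROM myhappyschedule WHERE scheduled_time > datetime('now');")
if [ -n "$next_run" ]; then
    echo "$0" | at -t "$(date -d "$next_run" +%Y%m%d%H%M)"
fi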
I came across this question while researching a solution to the queuing problem. For the benefit of anyone else searching, here is my solution.
Combine this with a cron that starts jobs as they are scheduled (even if they are scheduled to run at the same time) and that solves the problem you described as well.
Problem
At most one instance of the script should be running.
We want to queue up requests to process them as fast as possible.
i.e. we need a pipeline to the script.
Solution:
Create a pipeline to any script. Done using a small bash script (further down).
The script can be called as
./pipeline "<any command and arguments go here>"
Example:
./pipeline sleep 10 &
./pipeline shabugabu &
./pipeline single_instance_script some arguments &
./pipeline single_instance_script some other_arguments &
./pipeline "single_instance_script some yet_other_arguments > output.txt" &
...etc.
The script creates a new named pipe for each command, so the calls above will create the named pipes sleep.pipe, shabugabu.pipe, and single_instance_script.pipe.
In this case the initial call will start a reader and run single_instance_script with some arguments. Once that call completes, the reader will grab the next request off the pipe and execute it with some other_arguments, complete, grab the next, etc...
This script will block the requesting processes, so call it as a background job (with & at the end) or as a detached process with at (at now <<< "./pipeline some_script").
#!/bin/bash -Eue

# Using command name as the pipeline name
pipeline=$(basename $(expr "$1" : '\(^[^[:space:]]*\)')).pipe
is_reader=false

function _pipeline_cleanup {
    if $is_reader; then
        rm -f $pipeline
    fi
    rm -f $pipeline.lock
    exit
}
trap _pipeline_cleanup INT TERM EXIT

# Dispatch/initialization section, critical
lockfile $pipeline.lock
if [[ -p $pipeline ]]
then
    echo "$*" > $pipeline
    exit
fi

is_reader=true
mkfifo $pipeline
echo "$*" > $pipeline &
rm -f $pipeline.lock

# Reader section
while read command < $pipeline
do
    echo "$(date) - Executing $command"
    ($command) &> /dev/null
done