BeanShellInterpreter: Error invoking bsh method: eval Sourced file: inline evaluation of: ``import org.apache.jmeter.services.FileServer; String path=FileServer.getFileSer . . . '' : Attempt to access property on undefined variable or class name
2022-01-27 19:50:51,923 WARN o.a.j.e.BeanShellPostProcessor: Problem in BeanShell script: org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval Sourced file: inline evaluation of: ``import org.apache.jmeter.services.FileServer; String path=FileServer.getFileSer . . . '' : Attempt to access property on undefined variable or class name
JMeter BeanShell code (BeanShell PostProcessor):
import org.apache.jmeter.services.FileServer;
String path=FileServer.getFileServer().getBaseDir();
var1= vars.get("userid");
var2= vars.get("username");
var3= vars.get("userfullname");
var4= ${exam_id};
f = new FileOutputStream("C://apache-jmeter-5.4.1/apache-jmeter-5.4.1/bin/Script4.csv",true);
p = new PrintStream(f);
this.interpreter.setOut(p);
p.println(var1+ "," +var2 + "," +var3 + "," +var4);
f.close();
Change this line:
var4= ${exam_id};
to this one:
var4= vars.get("exam_id");
Since JMeter 3.1 it's recommended to use JSR223 Test Elements and the Groovy language for scripting, so it's worth considering migrating to Groovy.
If you run this code with more than 1 thread (virtual user) you will face a race condition resulting in data corruption, as multiple threads will be writing into the same file at once. JMeter provides the sample_variables property for writing variable values to the .jtl results file; if you need to store the data in a separate file, go for the Flexible File Writer plugin.
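For illustration, here is a minimal JSR223 PostProcessor equivalent in Groovy (a sketch only; it reuses the Script4.csv name from the question and is still subject to the multi-threading caveat above):
import org.apache.jmeter.services.FileServer

def dir = FileServer.getFileServer().getBaseDir()
def var1 = vars.get("userid")
def var2 = vars.get("username")
def var3 = vars.get("userfullname")
def var4 = vars.get("exam_id")   // vars.get(), not ${exam_id}

// append one CSV line; still unsafe with more than 1 thread
new File(dir, "Script4.csv").withWriterAppend { w ->
    w.append([var1, var2, var3, var4].join(","))
    w.append(System.lineSeparator())
}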
I have a shell script which calls some SQL like so
sqlplus system/$password@$instance @./oracle/mysqlfile.sql $var1 $var2 $var3
Then in mysqlfile.sql, I define properties like this:
DEFINE var1=&1
DEFINE var2=&2
DEFINE var3=&3
Later in the file, I call another SQL script:
-- I wish to wrap this in an if statement - pseudo-code:
-- if (var3 = 'true') then run the following
@./oracle/myOthersqlfile.sql &&varA &&varB
I am not sure how to implement this, though; any suggestions appreciated.
You could (ab)use substitution variables:
set termout off
column var3_path new_value var3_path
select case
when '&var3' = 'true' then './oracle/myOthersqlfile.sql &&varA &&varB'
else '/dev/null'
end as var3_path
from dual;
set termout on
@&var3_path
The query between the set termout commands - which just hide the output of the query - uses a case expression to pick either your real file path or a dummy file; I've used /dev/null, but you could have a 'no-op' file of your own that does nothing, if that's clearer. The query gives the result of that case expression the alias var3_path. The new_value line before it turns that into a substitution variable. The @ then runs whatever file the variable points to.
So if var3 is 'true' then that runs:
@./oracle/myOthersqlfile.sql &&varA &&varB
(or, actually, with the varA and varB variables already replaced with their actual values) and if it is false it runs:
@/dev/null
which does nothing, silently.
You can set verify on around that code to see when and where substitution is happening.
You can't implement procedural logic in SQL*Plus itself. You have these options:
Implement the IF-THEN-ELSE logic inside the shell script that runs sqlplus.
Use PL/SQL, but then your SQL script would have to be invoked from inside an anonymous block, not as an external script.
In your case the easiest way is to change your shell script:
#!/bin/bash
#
# load environment Oracle variables
sqlplus system/$password@$instance @./oracle/mysqlfile.sql $var1 $var2 $var3
# run the second script only when var3 is true
if [ "$var3" == "true" ]
then
    sqlplus system/$password@$instance @./oracle/myOthersqlfile.sql
fi
You should realise that sqlplus is just a CLI (command-line interface), so you can't apply procedural logic to it.
I have no idea what you do in those SQL scripts (running DMLs, creating files, etc.), but the best approach would be to convert them to PL/SQL; then you can apply whatever logic you need to.
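For example, a minimal sketch of that PL/SQL route, assuming the work of myOthersqlfile.sql can be moved inside an anonymous block:
sqlplus -s system/$password@$instance <<EOF
BEGIN
  IF '$var3' = 'true' THEN
    -- do here whatever myOthersqlfile.sql used to do
    NULL;
  END IF;
END;
/
EOF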
I want to save the output of a program to a variable.
I used the following approach, but it fails:
$ PIPE RUN TEST | DEFINE/JOB VALUE @SYS$PIPE
$ x = f$logical("VALUE")
I got an error:
%DCL-W-MAXPARM, too many parameters - reenter command with fewer parameters
 \WORLD\
Reference:
How to assign the output of a program to a variable in a DCL com script on VMS?
The usual way to do this is to write the output to a file, then read from the file and put that into a DCL symbol (or logical). Although not obvious, you can do this with the PIPE command as well; the quotes added around the line below are needed because the output contains a space - an unquoted space is also what caused your MAXPARM error:
$ pipe r 2words
hello world
$ pipe r 2words |(read sys$pipe line ; line=""""+line+"""" ; def/job value &line )
$ sh log value
"VALUE" = "hello world" (LNM$JOB_85AB4440)
$
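To pull the logical back into a DCL symbol afterwards, something like this should work (a sketch; f$trnlnm is the modern lexical, though f$logical exists too):
$ x = f$trnlnm("VALUE")
$ show symbol x
  X = "hello world"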
If you are able to change the program, add some code to it to write the required values into symbols or logicals (see the LIB$ routines).
If you can modify the program, calling LIB$SET_SYMBOL from it defines a DCL symbol (what you are calling a variable) for DCL. That's the cleanest way to do this. If it really needs to be a logical, then there are system calls that define logicals.
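A minimal sketch of that approach in C (assuming the DEC C headers descrip.h and lib$routines.h; the symbol name and value are illustrative):
#include <descrip.h>
#include <lib$routines.h>

int main(void)
{
    /* string descriptors for the symbol name and its value */
    $DESCRIPTOR(sym_name, "VALUE");
    $DESCRIPTOR(sym_value, "hello world");

    /* third argument omitted (0) => local symbol table by default */
    int status = lib$set_symbol(&sym_name, &sym_value, 0);
    return status;   /* odd status means success on VMS */
}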
I'm looking for the SQL equivalent of SET varname = value in Hive QL
I know I can do something like this:
SET CURRENT_DATE = '2012-09-16';
SELECT * FROM foo WHERE day >= #CURRENT_DATE
But then I get this error:
character '#' not supported here
You need to use the special hiveconf namespace for variable substitution, e.g.:
hive> set CURRENT_DATE='2012-09-16';
hive> select * from foo where day >= ${hiveconf:CURRENT_DATE}
Similarly, you can pass it on the command line:
% hive -hiveconf CURRENT_DATE='2012-09-16' -f test.hql
Note that there are env and system variables as well, so you can reference ${env:USER}, for example.
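For example (a sketch; the owner column is made up for illustration):
hive> select * from foo where owner = '${env:USER}';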
To see all the available variables, from the command line, run
% hive -e 'set;'
or from the hive prompt, run
hive> set;
Update:
I've started to use hivevar variables as well, putting them into hql snippets I can include from the hive CLI using the source command (or pass with the -i option from the command line).
The benefit here is that the variable can then be used with or without the hivevar prefix, allowing something akin to global vs. local use.
So, assume you have some setup.hql which sets a tablename variable:
set hivevar:tablename=mytable;
then I can bring it into hive:
hive> source /path/to/setup.hql;
and use in query:
hive> select * from ${tablename}
or
hive> select * from ${hivevar:tablename}
I could also set a "local" tablename, which would affect the use of ${tablename}, but not ${hivevar:tablename}:
hive> set tablename=newtable;
hive> select * from ${tablename} -- uses 'newtable'
vs
hive> select * from ${hivevar:tablename} -- still uses the original 'mytable'
Probably doesn't mean too much from the CLI, but you can have hql in a file that uses source, but sets some of the variables "locally" for use in the rest of the script.
Most of the answers here have suggested using either the hiveconf or the hivevar namespace to store the variable, and all those answers are right. However, there is one more namespace.
In total there are three namespaces available for holding variables:
hiveconf - Hive started with this; all the Hive configuration is stored as part of this conf. Initially, variable substitution was not part of Hive, and when it got introduced, all user-defined variables were stored as part of this as well, which is definitely not a good idea. So two more namespaces were created.
hivevar - to store user variables.
system - to store system variables.
So if you are storing a variable for use in a query (e.g. a date or a product number), you should use the hivevar namespace and not the hiveconf namespace.
And this is how it works.
hiveconf is still the default namespace, so if you don't provide any namespace it will store your variable in the hiveconf namespace.
However, when it comes to referring to a variable, that's not true: by default it refers to the hivevar namespace. Confusing, right? It becomes clearer with the following example.
If you do not provide a namespace, as below, the variable var will be stored in the hiveconf namespace.
set var="default_namespace";
So, to access this you need to specify the hiveconf namespace:
select ${hiveconf:var};
And if you do not provide a namespace, it will give you an error, as mentioned below. The reason is that, by default, a variable lookup checks the hivevar namespace only, and in hivevar there is no variable named var:
select ${var};
Here we explicitly provide the hivevar namespace:
set hivevar:var="hivevar_namespace";
As we are providing the namespace, this will work:
select ${hivevar:var};
And since the default namespace used when referring to a variable is hivevar, the following will work too:
select ${var};
Have you tried using the dollar sign and braces, like this:
SELECT *
FROM foo
WHERE day >= '${CURRENT_DATE}';
Just in case someone needs to parameterize a Hive query via the CLI. For example:
hive_query.sql
SELECT * FROM foo WHERE day >= '${hivevar:CURRENT_DATE}'
Now execute the above SQL file from the CLI:
hive --hivevar CURRENT_DATE="2012-09-16" -f hive_query.sql
Two easy ways:
Using hive conf
hive> set USER_NAME='FOO';
hive> select * from foobar where NAME = '${hiveconf:USER_NAME}';
Using hive vars
On your CLI, set the vars and then use them in Hive:
set hivevar:USER_NAME='FOO';
hive> select * from foobar where NAME = '${USER_NAME}';
hive> select * from foobar where NAME = '${hivevar:USER_NAME}';
Documentation: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+VariableSubstitution
One thing to be mindful of is setting strings and then referring back to them: you have to make sure the quotes aren't colliding.
set start_date = '2019-01-21';
select ${hiveconf:start_date};
When you set a date and then refer to it in code, the quotes can collide. This wouldn't work with the start_date set above:
'${hiveconf:start_date}'
because the variable already contains quotes, so this expands to ''2019-01-21''. We have to be mindful not to end up with doubled single or double quotes when referring back to string variables in a query.
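One way to avoid the collision is to store the value without quotes and add them only at the point of use, e.g.:
set start_date = 2019-01-21;
select * from foo where day >= '${hiveconf:start_date}';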
There are multiple options to set variables in Hive.
If you're looking to set a Hive variable from inside the Hive shell, you can set it using hivevar. You can set string or integer datatypes; there are no problems with them.
SET hivevar:which_date=20200808;
select ${which_date};
If you're planning to set variables from a shell script and want to pass those variables into your Hive script (HQL) file, you can use the --hivevar option while calling the hive or beeline command.
# shell script will invoke script like this
beeline --hivevar tablename=testtable -f select.hql
-- select.hql file
select * from <dbname>.${tablename};
Try this method:
set t=20;
select *
from myTable
where age > '${hiveconf:t}';
It works well on my platform.
You can export the variable in your shell script:
export CURRENT_DATE="2012-09-16"
Then in HiveQL you can reference it like this:
SELECT * FROM foo WHERE day >= '${env:CURRENT_DATE}'
You can store a query in a variable and later run it in your code; note that the variable holds the query text, not its result, and substitution pastes that text back in to be executed:
set var=select count(*) from My_table;
${hiveconf:var};
I want to write a pig script that takes a filter condition as a command line parameter. From the command line I want to type something like:
pig -p "MY_FILTER=field1 == 0 and field2 == 5" myscript.pig
In my script I have a line:
my_filtered_data = filter my_data by $MY_FILTER;
This works as expected when MY_FILTER has no spaces and I pass quotes around my value; so if I type MY_FILTER=\"field1==0\" at the command line, the shell will pass the quotes with the value and Pig does the expansion I want. However, the parameter will fail to expand if I supply it like MY_FILTER=\"field1 == 0\".
I've tried a bunch of different quoting techniques and even tried running the command directly from python's subprocess module to ensure my shell wasn't doing something weird.
Which version of Pig do you use? I use 0.9.2 and the following command works for me:
pig -p "F='field1 == 3 AND field2 == 5'" test.pig
But it doesn't work with 0.8.1.
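If upgrading isn't an option, a parameter file may sidestep the shell quoting entirely (a sketch; myparams.txt is a made-up name, and the quoting mirrors the -p example above):
# myparams.txt
MY_FILTER = 'field1 == 0 and field2 == 5'
Then run:
pig -param_file myparams.txt myscript.pig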