I'm currently running some performance tests and am having trouble converting a string I have extracted from JSON into an int.
The problem is that I need this number both as an int and as a string. It's currently only a string, and I don't see how to create another variable where the number is an int.
Here is the JSON extractor I'm using
How can I have another variable which is an int?
By default JMeter stores values into JMeter Variables in String form. If you need to save a value in Integer form as well, you can do it using, for example, the __groovy() function:
${__groovy(vars.putObject('Fixture_ID_INT'\, vars.get('Fixture_ID') as int),)}
and access it where required like:
${__groovy(vars.getObject('Fixture_ID_INT'),)}
More information: Apache Groovy - Why and How You Should Use It
You can use the code below:
Integer.parseInt(vars.get("urs"));
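For reference, a minimal plain-Java sketch of the same conversion (the variable name urs is taken from the snippet above; inside JMeter, vars.get("urs") would supply the string):

```java
public class ParseDemo {
    // Converts an extracted string value to an int, trimming stray whitespace first.
    static int toInt(String raw) {
        return Integer.parseInt(raw.trim());
    }

    public static void main(String[] args) {
        String urs = "12345"; // stands in for vars.get("urs") in a JSR223 element
        System.out.println(toInt(urs)); // prints 12345
    }
}
```

Note that Integer.parseInt throws a NumberFormatException if the extracted value is not a valid integer, so it is worth checking what the extractor actually matched.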
What I'm doing now:
I have a table with one field that is a JSON value stored as a SUPER type in my staging schema.
The field containing the JSON is called elements.
In my clean table, I typecast this field to VARCHAR in order to search it and use string functions.
I want to search for the string net within that JSON in order to determine the key/value pair that I want to use for my filter.
I tried the following:
select
    elements
    , elements_raw
from clean.events
where 1=1
    and (
        lower(elements) like '%net%'
        or strpos(elements, 'net') > 0
    )
My output
When running the above query, I keep getting an empty set returned.
My issue
I tried running the above code using the elements_raw value instead, but I got an error: ERROR: function strpos(super, "unknown") does not exist. Hint: No function matches the given name and argument types. You may need to add explicit type casts.
I checked the Redshift SUPER documentation page, and it doesn't give any specifics on searching for strings within SUPER types.
Desired result:
Perform string operations on super field
Cast super field to a string type
There are some SUPER-related idiosyncrasies being run into here:
You cannot change the type of a SUPER field via :: or cast()
String operations such as LIKE and strpos do not work on SUPER types
To address both of these issues, you can use the function json_serialize to return your SUPER as a string.
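A minimal sketch of that approach, reusing the table and column names from the question (elements_raw is assumed to be the SUPER column):

```sql
-- json_serialize converts the SUPER value to a VARCHAR,
-- so ordinary string functions can then be applied to it.
select
    elements
    , elements_raw
from clean.events
where strpos(lower(json_serialize(elements_raw)), 'net') > 0;
```

Once serialized, LIKE, strpos, and the other VARCHAR functions behave as usual.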
I have a PCollection<String> and I want to transform it to get the values of a specific column from a BigQuery table. So I used BigQueryIO.readTableRows to get values from BigQuery.
Here is my Code:
PCollection<TableRow> getConfigTable = pipeline.apply("read from Table",
BigQueryIO.readTableRows().from("TableName"));
RetrieveDestTableName retrieveDestTableName = new RetrieveDestTableName();
PCollection<String> getDestTableName = getConfigTable.apply(ParDo.of(new DoFn<TableRow, String>(){
    @ProcessElement
    public void processElement(ProcessContext c){
        c.output(c.element().get("ColoumnName").toString());
    }
}));
As per the above code, I will get an output from getDestTableName of type PCollection<String>, but I want this output in a plain String variable.
Is there any way to convert a PCollection<String> to a String variable so that I can use it in my code?
Converting a PCollection<String> to a String is not possible in the Apache Beam programming model. A PCollection simply describes the state of the pipeline at any given point. During development, you do not have literal access to the strings in the PCollection.
You can process the strings in a PCollection through transforms. However, it seems like you need the table configuration to construct the rest of the pipeline. You'll need to know the destination ahead of time or you can use DynamicDestinations to determine which table to write to during pipeline execution. You cannot get the table configuration value from the PCollection and use it to further construct the pipeline.
It seems that you want something like JdbcIO.readAll() but for BigQuery, allowing the read configuration(s) to be dynamically computed by the pipeline. This is currently not implemented for BigQuery, but it'd be a reasonable request.
Meanwhile your options are:
Express what you're doing as a more complex BigQuery SQL query, and use a single BigQueryIO.read().fromQuery()
Express the part of your pipeline where you extract the table of interest without the Beam API, instead using the BigQuery API directly, so that you are operating on regular Java variables instead of PCollections.
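The second option can be sketched roughly as follows, assuming the google-cloud-bigquery client library is on the classpath; the dataset, table, and column names are placeholders taken from the question, not real resources:

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;

public class ReadDestTableName {
    // Fetches the destination table name with the plain BigQuery client,
    // before the Beam pipeline is constructed.
    static String fetchDestTableName() throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        QueryJobConfiguration query = QueryJobConfiguration
                .of("SELECT ColoumnName FROM dataset.config_table LIMIT 1");
        for (FieldValueList row : bigquery.query(query).iterateAll()) {
            return row.get("ColoumnName").getStringValue();
        }
        throw new IllegalStateException("config table is empty");
    }
}
```

The returned value is then an ordinary String that can be passed to BigQueryIO.writeTableRows().to(...) while building the pipeline.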
How do I auto generate an arbitrary number (Long) for use in a Live Template in IntelliJ?
Example:
public static final long uid = $randomLong$;
where randomLong is replaced with a random long value. I have tried adding the following as an expression for the variable in the live template definition, but nothing is generated when the template expands.
new Random().nextLong()
What I am trying to achieve is very similar to what the IDEA code inspector generates for the Serialization version UID field but with a live template.
Please try adding groovyScript("new Random().nextLong()") as the variable expression instead.
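Putting it together, the template body and the variable binding (under Edit Variables in the Live Template settings) would look something like this:

```text
Template text:
    public static final long uid = $randomLong$;

Variable 'randomLong' -> Expression:
    groovyScript("new Random().nextLong()")
```

Plain Java expressions are not evaluated in that field, which is why new Random().nextLong() on its own produced nothing; groovyScript() is the supported way to run arbitrary code there.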
Is there a way to dynamically compute the input value to a LOAD statement in pig? Conceptually, I want to do something like this:
%declare MYINPUT com.foo.myMethod('2013-04-15');
raw = LOAD '$MYINPUT' ...
myMethod() is a UDF that accepts a date as input and returns a (comma-separated) list of directories as a string. That string is then given as the input to the LOAD statement.
Thanks.
It doesn't sound to me like myMethod() needs to be a UDF. Presuming this list of directories doesn't need to be computed in MapReduce, you could run the function to get the string first, then pass it to Pig as a parameter. A sample, assuming your driver is in Java, is provided below:
String myInput = myMethod("2013-04-15");
PigServer pig = new PigServer(ExecType.MAPREDUCE);
Map<String,String> myProperties = new HashMap<String,String>();
myProperties.put("myInput",myInput);
pig.registerScript("myScriptLocation.pig", myProperties);
and then your script would start with
raw = LOAD '$myInput' USING...
This assumes your myInput string is in a glob format PigStorage can read, or that you have in mind a different LoadFunc that can handle your comma-separated string.
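As an illustration, a hypothetical myMethod that builds such a comma-separated path list from a date might look like this (the directory layout is invented for the example):

```java
import java.util.List;

public class InputPaths {
    // Builds a comma-separated list of input directories for a given date.
    // PigStorage accepts comma-separated paths (and globs) as a LOAD location.
    static String myMethod(String date, List<String> subdirs) {
        StringBuilder sb = new StringBuilder();
        for (String dir : subdirs) {
            if (sb.length() > 0) sb.append(',');
            sb.append("/data/").append(date).append('/').append(dir);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(myMethod("2013-04-15", List.of("logs", "events")));
        // prints /data/2013-04-15/logs,/data/2013-04-15/events
    }
}
```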
I had a similar issue and opted for a Java LoadFunc implementation instead of a pre-processor. Using a custom LoadFunc means the script can still be run by analysts using the stock pig executable, and doesn't require another dependency.
I have a really simple WCF service operation GetCurrentBalance. It returns a decimal.
I also have the odatagen-generated entity files included in the project, which contain an implementation of the GetCurrentBalance operation returning a string. Calling this method returns an XML string with the desired value in it.
I also tried using the executeServiceOperation method in the generated class and passing in the operation name as a parameter; the returned value is again the same XML string.
Is there a way to extract this value? Or do I have to write a custom parser for it?
Thanks in advance.
Without further information, if the returned value is a formatted XML string, you may try extracting the value using XPath queries; have a look at this to get you started.
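For example, a small XPath-based extractor using only the JDK's built-in XML APIs (the element name and XPath below are hypothetical; substitute the actual structure of the XML your service returns):

```java
import java.io.ByteArrayInputStream;
import java.math.BigDecimal;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class BalanceParser {
    // Parses the XML string and evaluates an XPath expression against it,
    // returning the text content of the match.
    static String extract(String xml, String xpath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return XPathFactory.newInstance().newXPath().evaluate(xpath, doc);
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical response shape; adjust the XPath to the real payload.
        String response = "<GetCurrentBalanceResponse>123.45</GetCurrentBalanceResponse>";
        BigDecimal balance = new BigDecimal(extract(response, "/GetCurrentBalanceResponse"));
        System.out.println(balance); // prints 123.45
    }
}
```

Converting the extracted text with new BigDecimal(...) recovers the decimal the service operation was meant to return.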