How to use a CSV Data Set Config variable in a BeanShell PostProcessor in JMeter

In my application I have two scenarios.
1. Create: Here we book a hotel room. After booking, the application returns a transaction ID.
2. Cancel: We need to pass the transaction ID to the application to cancel the booking.
I want to test with JMeter in such a way that after a create call is made, the cancel call for that booking is made automatically with the generated transaction ID.
So I have created two thread groups. One is for create, where I call the create API and save the transaction ID to a CSV file using a Regular Expression Extractor and a BeanShell PostProcessor. The other is for cancel, where I pick up the transaction ID using a CSV Data Set Config and call the cancel API.
The problem is that I want to delete that transaction ID from the CSV file after calling the cancel API. I think a BeanShell PostProcessor will do the job. This is my CSV Data Set Config:
Here is my BeanShell PostProcessor code:
File inputFile = new File("/home/demo/LocalFolder/CSV/result.csv");
File tempFile = new File("/home/demo/LocalFolder/CSV/myTempFile.csv");
BufferedReader reader;
try {
    reader = new BufferedReader(new FileReader(inputFile));
    BufferedWriter writer = new BufferedWriter(new FileWriter(tempFile));
    String lineToRemove = vars.get("transactionId");
    //String lineToRemove = "${transactionId}";
    String currentLine;
    while ((currentLine = reader.readLine()) != null) {
        // trim newline when comparing with lineToRemove
        String trimmedLine = currentLine.trim();
        if (trimmedLine.equals(lineToRemove)) continue;
        writer.write(currentLine + System.getProperty("line.separator"));
    }
    writer.close();
    reader.close();
    boolean successful = tempFile.renameTo(inputFile);
    System.out.println("Completed");
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
But the transaction ID is not getting deleted from the file. I think that vars.get("transactionId") is not returning anything, or is returning the wrong value. If I hardcode a transaction ID then the code works fine. Can anyone help me?

JMeter Variables are local to the current Thread Group only. In order to pass data between Thread Groups you need to use JMeter Properties (props instead of vars). See the Knit One Pearl Two: How to Use Variables in Different Thread Groups article for a more detailed explanation and a usage example.
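For illustration, a minimal sketch of the property-based hand-off (assuming the Regular Expression Extractor stores the value in a variable named transactionId, as in the question):
// In the Create Thread Group, e.g. in the BeanShell PostProcessor after the extractor:
props.put("transactionId", vars.get("transactionId"));

// In the Cancel Thread Group, e.g. in a BeanShell PreProcessor on the cancel sampler:
String transactionId = (String) props.get("transactionId");
vars.put("transactionId", transactionId); // ${transactionId} is now usable in the cancel request
Alternatively, the property can be referenced directly in the cancel sampler with the __P() function, i.e. ${__P(transactionId)}.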
P.S. Maybe it would be easier to use HTTP Simple Table Server instead?


Java - Insert a single row at a time into Google BigQuery?

I am creating an application where, every time a user clicks on an article, I need to capture the article data and the user data to calculate the reach of every article and be able to run analytics on that reach data.
My application is on App Engine.
When I check the documentation for inserts into BigQuery, most of it points towards bulk inserts in the form of jobs or streams.
Question:
Is it even good practice to insert into BigQuery one row at a time every time a user action is initiated? If so, could you point me to some Java code to do this effectively?
There are limits on the number of load jobs and DML queries (1,000 per day), so you'll need to use streaming inserts for this kind of application. Note that streaming inserts are different from loading data from a Java stream.
TableId tableId = TableId.of(datasetName, tableName);
// Values of the row to insert
Map<String, Object> rowContent = new HashMap<>();
rowContent.put("booleanField", true);
// Bytes are passed in base64
rowContent.put("bytesField", "Cg0NDg0="); // 0xA, 0xD, 0xD, 0xE, 0xD in base64
// Records are passed as a map
Map<String, Object> recordsContent = new HashMap<>();
recordsContent.put("stringField", "Hello, World!");
rowContent.put("recordField", recordsContent);
InsertAllResponse response =
    bigquery.insertAll(
        InsertAllRequest.newBuilder(tableId)
            .addRow("rowId", rowContent)
            // More rows can be added in the same RPC by invoking .addRow() on the builder
            .build());
if (response.hasErrors()) {
    // If any of the insertions failed, this lets you inspect the errors
    for (Entry<Long, List<BigQueryError>> entry : response.getInsertErrors().entrySet()) {
        // inspect row error
    }
}
(From the example at https://cloud.google.com/bigquery/streaming-data-into-bigquery#bigquery-stream-data-java)
Note especially that a failed insert does not always throw an exception. You must also check the response object for errors.
Is it even good practice to insert into BigQuery one row at a time every time a user action is initiated?
Yes, it's pretty typical to stream events to BigQuery for analytics. You could get better performance if you buffer multiple events into the same streaming insert request, but one row at a time is definitely supported.
A simplified version of Google's example.
Map<String, Object> row1Data = new HashMap<>();
row1Data.put("booleanField", true);
row1Data.put("stringField", "myString");

Map<String, Object> row2Data = new HashMap<>();
row2Data.put("booleanField", false);
row2Data.put("stringField", "myOtherString");

TableId tableId = TableId.of("myDatasetName", "myTableName");
InsertAllResponse response =
    bigQuery.insertAll(
        InsertAllRequest.newBuilder(tableId)
            .addRow("row1Id", row1Data)
            .addRow("row2Id", row2Data)
            .build());
if (response.hasErrors()) {
    // If any of the insertions failed, this lets you inspect the errors
    for (Map.Entry<Long, List<BigQueryError>> entry : response.getInsertErrors().entrySet()) {
        // inspect row error
    }
}
You can use the Cloud Logging API to write one row at a time.
https://cloud.google.com/logging/docs/reference/libraries
Sample code from the documentation:
public class QuickstartSample {
    /** Expects a new or existing Cloud log name as the first argument. */
    public static void main(String... args) throws Exception {
        // Instantiates a client
        Logging logging = LoggingOptions.getDefaultInstance().getService();

        // The name of the log to write to
        String logName = args[0]; // "my-log";

        // The data to write to the log
        String text = "Hello, world!";

        LogEntry entry =
            LogEntry.newBuilder(StringPayload.of(text))
                .setSeverity(Severity.ERROR)
                .setLogName(logName)
                .setResource(MonitoredResource.newBuilder("global").build())
                .build();

        // Writes the log entry asynchronously
        logging.write(Collections.singleton(entry));
        System.out.printf("Logged: %s%n", text);
    }
}
In this case you need to create a sink for those logs; the messages will then be redirected to the BigQuery table.
https://cloud.google.com/logging/docs/export/configure_export_v2

AmazonS3: Getting warning: S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection

Here's the warning that I am getting:
S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
I tried using try-with-resources, but the S3ObjectInputStream doesn't seem to close via this method.
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
     S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
     BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // some code here blah blah blah
}
I also tried below code and explicitly closing but that doesn't work either:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // some code here blah blah
    s3ObjectInputStream.close();
    s3object.close();
}
Any help would be appreciated.
PS: I am only reading two lines of the file from S3 and the file has more data.
Got the answer via another medium. Sharing it here:
The warning indicates that you called close() without reading the whole file. This is problematic because S3 is still trying to send the data and you're leaving the connection in a sad state.
There are two options here (a short sketch of both follows the list):
1. Read the rest of the data from the input stream so the connection can be reused.
2. Call s3ObjectInputStream.abort() to close the connection without reading the data. The connection won't be reused, so you take some performance hit with the next request to re-create the connection. This may be worth it if it's going to take a long time to read the rest of the file.
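A minimal combined sketch of both options, using the variable names from the question (read only the bytes you need, then either drain or abort):
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
// ... read only the bytes you actually need ...

// Option #1: drain the rest of the stream so the HTTP connection can be reused
com.amazonaws.util.IOUtils.drainInputStream(s3ObjectInputStream);
s3ObjectInputStream.close();

// Option #2: abort instead, discarding the connection without reading the remaining bytes
// s3ObjectInputStream.abort();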
Following option #1 of Chirag Sejpal's answer, I used the statement below to drain the S3AbortableInputStream and ensure the connection can be reused:
com.amazonaws.util.IOUtils.drainInputStream(s3ObjectInputStream);
I ran into the same problem and the following class helped me
@Data
@AllArgsConstructor
public class S3ObjectClosable implements Closeable {

    private final S3Object s3Object;

    @Override
    public void close() throws IOException {
        s3Object.getObjectContent().abort();
        s3Object.close();
    }
}
and now you can use it without the warning:
try (final var s3ObjectClosable = new S3ObjectClosable(s3Client.getObject(bucket, key))) {
//same code
}
To add an example to Chirag Sejpal's answer (elaborating on option #1), the following can be used to read the rest of the data from the input stream before closing it:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
try (S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent()) {
    try {
        // Read from stream as necessary
    } catch (Exception e) {
        // Handle exceptions as necessary
    } finally {
        while (s3ObjectInputStream != null && s3ObjectInputStream.read() != -1) {
            // Read the rest of the stream
        }
    }
    // The stream will be closed automatically by the try-with-resources statement
}
I ran into the same error.
As others have pointed out, the /tmp space in lambda is limited to 512 MB.
And if the Lambda execution context is re-used for a new invocation, the /tmp space may already be partly full from the previous run.
So, when reading the S3 objects and writing all the files to the /tmp directory (as I was doing), I ran out of disk space somewhere in between. Lambda exited with an error, but NOT all bytes from the S3ObjectInputStream had been read.
So, there are two things to keep in mind:
1) If the first execution causes the problem, be stingy with your /tmp space. We have only 512 MB.
2) If a subsequent execution causes the problem, resolve it by attacking the root cause. It's not possible to delete the /tmp folder itself, so delete all the files in the /tmp folder once the execution is finished.
In Java, here is what I did, which successfully resolved the problem:
public String handleRequest(Map<String, String> keyValuePairs, Context lambdaContext) {
    try {
        // All work here
        return "Success"; // placeholder for whatever the handler normally returns
    } catch (Exception e) {
        logger.error("Error {}", e.toString());
        return "Error";
    } finally {
        deleteAllFilesInTmpDir();
    }
}

private void deleteAllFilesInTmpDir() {
    Path path = java.nio.file.Paths.get(File.separator, "tmp", File.separator);
    try {
        if (Files.exists(path)) {
            deleteDir(path.toFile());
            logger.info("Successfully cleaned up the tmp directory");
        }
    } catch (Exception ex) {
        logger.error("Unable to clean up the tmp directory");
    }
}

public void deleteDir(File dir) {
    File[] files = dir.listFiles();
    if (files != null) {
        for (final File file : files) {
            deleteDir(file);
        }
    }
    dir.delete();
}
This is my solution. I'm using Spring Boot 2.4.3.
Create an Amazon S3 client:
AmazonS3 amazonS3Client = AmazonS3ClientBuilder
        .standard()
        .withRegion("your-region")
        .withCredentials(
                new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("your-access-key", "your-secret-access-key")))
        .build();
Create an Amazon transfer manager client:
TransferManager transferManagerClient = TransferManagerBuilder.standard()
        .withS3Client(amazonS3Client)
        .build();
Create a temporary file at /tmp/{your-s3-key} to hold the file we download:
File file = new File(System.getProperty("java.io.tmpdir"), "your-s3-key");
file.getParentFile().mkdirs(); // Create the parent directories of the temporary file, if any
try {
    file.createNewFile(); // Create the temporary file itself
} catch (IOException e) {
    e.printStackTrace();
}
Then we download the file from S3 using the TransferManager client:
// Note that in this line the downloaded S3 object is transferred into the temporary file we created
Download download = transferManagerClient.download(
        new GetObjectRequest("your-s3-bucket-name", "your-s3-key"), file);
// This line blocks the thread until the download is finished
download.waitForCompletion();
Now that the S3 file has been successfully transferred into the temporary file we created, we can get an InputStream for it.
InputStream input = new DataInputStream(new FileInputStream(file));
Because the temporary file is not needed anymore, we just delete it.
file.delete();

Is it possible to execute a SQL file in a servlet using JDBC?

Now I am creating a simple banking project for learning purposes, where I need to do a lot of search, update and insert operations for a single action. For example, if I want to create a transaction for a sample user ID in the "Create Transaction" screen, after inputting the details and pressing the "Submit" button, my application will do the following actions.
1) Insert a row in the login session table with values: IP address, user ID and timing.
2) Check whether the particular user ID has access to the create-transaction option, from the user access table.
3) Check whether the accounts being debited/credited belong to the same branch code as the home branch code of the creating user.
4) Check whether the input inventory (if any), i.e. DD or cheque, is valid or not, from the inventory table.
5) Check whether the account being debited/credited is frozen or not.
6) Check whether the account being debited has enough available balance or not.
7) Check the account status: Active/Inactive or Dormant.
8) Check and create service tax if applicable, i.e. another search from the S.Tax table and an insert into the accounts transaction table,
and finally,
9) Insert a row into the accounts transaction table if all the criteria pass.
Now I do not feel comfortable writing so much PreparedStatement code in my servlet just for creating a transaction. There will be other operations in my application too. So I was wondering whether there is a way to simply write these SQL statements in a file and pass the SQL file to the servlet. Or maybe we can write a function in PL/SQL and call it from the servlet. Are these approaches possible?
Please note, I am using J2EE and an Oracle database.
I did this once in a project some years back, and I achieved something close to what you are looking for. I created a properties file in this format:
trans.getTransactons=select * from whateverTable where onesqlquery
trans.getTranId=select tran_id from whatevertable where anothersqlquery
So when you write your classes, you just load the Properties from the file and the query is populated from the property. For example, this loads the properties file:
public class QueriesLoader {

    Properties prop;

    public QueriesLoader() {
    }

    public Properties getProp() {
        prop = new Properties();
        ClassLoader classLoader = getClass().getClassLoader();
        try {
            InputStream url = classLoader.getResourceAsStream("path/to/your/propertiesFile/databasequeries.properties");
            prop.load(url);
        } catch (IOException asd) {
            System.out.println(asd.getMessage());
        }
        return prop;
    }
}
And then in your Database Access Objects:
public ArrayList getAllTransactions() {
    ArrayList arr = new ArrayList();
    try {
        String sql = que.getProp().getProperty("trans.getTransactons");
        PreparedStatement ps = DBConnection.getDbConnection().prepareStatement(sql);
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            arr.add(rs.getString(1));
        }
        DBConnection.closeConn(DBConnection.getDbConnection());
    } catch (IOException asd) {
        log.debug(Level.FATAL, asd);
    } catch (SQLException asd) {
        log.debug(Level.FATAL, asd);
    }
    return arr;
}
And I ended up not writing a single query inside my classes. I hope this helps you.
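As for the PL/SQL route mentioned in the question: yes, you can also push the whole workflow into an Oracle stored procedure and call it from the servlet with JDBC's CallableStatement. A minimal sketch, assuming a hypothetical procedure CREATE_TRANSACTION(p_user_id IN NUMBER, p_amount IN NUMBER, p_status OUT VARCHAR2); the procedure name, parameters and values are illustrative, and DBConnection is the helper class from the snippet above:
// Hypothetical procedure: CREATE_TRANSACTION(p_user_id IN NUMBER, p_amount IN NUMBER, p_status OUT VARCHAR2)
try (Connection con = DBConnection.getDbConnection();
     CallableStatement cs = con.prepareCall("{call CREATE_TRANSACTION(?, ?, ?)}")) {
    cs.setInt(1, 101);                                       // creating user's ID (placeholder value)
    cs.setBigDecimal(2, new java.math.BigDecimal("5000.00")); // transaction amount (placeholder value)
    cs.registerOutParameter(3, java.sql.Types.VARCHAR);       // status returned by the procedure
    cs.execute();
    String status = cs.getString(3); // e.g. "OK" or an error code decided by the procedure
}
This keeps all the per-step checks and inserts inside the database, so the servlet only issues a single call.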

Redis on Appharbor - Booksleeve GetString exception

I am trying to set up Redis on AppHarbor. I have followed their instructions, and again I have an issue with the Booksleeve API. Here is the code I am using to make it work initially:
var connectionUri = new Uri(url);
using (var redis = new RedisConnection(connectionUri.Host, connectionUri.Port, password: connectionUri.UserInfo.Split(new[] { ':' }, 2)[1]))
{
    redis.Strings.Set(1, "greeting", "welcome to remember your stuff!");
    try
    {
        var task = redis.Strings.GetString(1, "greeting");
        redis.Wait(task);
        ViewBag.Message = task.Result;
    }
    catch (Exception)
    {
        // It throws an exception trying to wait for the task?
    }
}
The issue is that it sets the string correctly, but when trying to retrieve the same string from the key-value store, it throws a timeout exception waiting for the task to execute. However, this code works against my local Redis server connection.
Am I using the API in the wrong way? Or is this something related to AppHarbor?
Thanks
Like a SqlConnection, you need to call Open() (otherwise your messages are queued for delivery).
Unlike SqlConnection, you should not fire up a RedisConnection each time you need it - it is intended to be used as a shared, thread-safe, multiplexer - i.e. a single connection is held somewhere and used by lots and lots of unrelated callers. Unless of course you only need to do one thing!

NHibernate UniqueResult alternative?

We're using NHibernate in a project that gets data out of the database and writes reports to a separate system. In my scenario, a patient will usually, but not always, have a next appointment scheduled when the report gets written. The query below gets the next appointment data, to include in the report.
private NextFollowup GetNextFollowup(int EncounterID)
{
    try
    {
        NextFollowup myNextF = new NextFollowup();
        IQuery myNextQ = this.Session.GetNamedQuery("GetNextFollowup").SetInt32("EncounterID", EncounterID);
        myNextF = myNextQ.UniqueResult<NextFollowup>();
        return myNextF;
    }
    catch (Exception e)
    {
        throw e;
    }
}
Here's the question:
Usually this works fine, as there is a single result when an appointment is scheduled. However, in the cases where there is no next followup, I get the error that there is no unique result. I don't really want to throw an exception in this case, I want to return the empty object. If I were to get a list instead of a UniqueResult, I'd get an empty list in the situations where there is no next followup. Is there a better way to handle the situation of "when there is a value, there will be only one" than using a list in the HQL query?
This may work:
private NextFollowup GetNextFollowup(int encounterID)
{
    IQuery query = this.Session.GetNamedQuery("GetNextFollowup").SetInt32("EncounterID", encounterID);

    // nextFollowup will be either the next instance, or null if none exist in the db.
    var nextFollowup = query.Enumerable<NextFollowup>().SingleOrDefault();

    return nextFollowup;
}
Note: I updated the naming to follow Microsoft conventions.
The try/catch is not serving any purpose here except to lose the stack trace if there is an exception (rethrowing with throw e; resets it), so I've removed it.
If you want to return a new NextFollowup if none exist, you can update the query line to:
var nextFollowup = query.Enumerable<NextFollowup>().SingleOrDefault() ?? new NextFollowup();