I am using a Beanshell Sampler to save the content of one PDF file into another PDF file.
In the Beanshell Sampler I have put the following code:
FileInputStream in = new FileInputStream("C:\\Users\\Dey\\Downloads\\sample.pdf");
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
for (int i; (i = in.read(buffer)) != -1; ) {
bos.write(buffer, 0, i);
}
in.close();
byte[] extractdata = bos.toByteArray();
bos.close();
vars.put("extractdata", new String(extarctdata));
Using a Beanshell PostProcessor I saved this variable, ${extractdata}, into another PDF file.
The file is generated, but when I open it, it is empty; no content is shown.
So, can someone please tell me how to resolve this issue? Is there anything wrong in the above code snippet? Please guide me.
You made a typo:
byte[] extractdata = bos.toByteArray();
and
vars.put("extractdata", new String(extarctdata));
so your test element is failing silently; check the jmeter.log file, it should contain some errors.
It's not possible to state what else is wrong because we don't see your Beanshell PostProcessor code; most probably there is an issue with encoding when converting the byte array to a String and back.
So I would suggest skipping this step and using the vars.putObject() function instead, like:
vars.putObject("extractdata", extractdata);
and then
byte[] extractdata = (byte[]) vars.getObject("extractdata");
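For instance, a minimal PostProcessor sketch that writes the stored bytes back out unchanged (the target path here is an assumption):

byte[] extractdata = (byte[]) vars.getObject("extractdata");
FileOutputStream out = new FileOutputStream("C:\\Users\\Dey\\Downloads\\copy.pdf"); // hypothetical target path
out.write(extractdata); // raw bytes, no String round-trip, so the PDF stays intact
out.close();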
If you just need to copy the file you can use the following snippet:
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
Path source = Paths.get("C:\\Users\\Dey\\Downloads\\sample.pdf");
Path target = Paths.get("/location/for/the/new/file.pdf");
Files.copy(source, target);
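Note that Files.copy() throws a FileAlreadyExistsException if the target file already exists; if overwriting is acceptable, pass the corresponding option:

Files.copy(source, target, java.nio.file.StandardCopyOption.REPLACE_EXISTING);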
Since JMeter 3.1 it is recommended to use JSR223 Test Elements and Groovy for scripting, so you should switch to Groovy; in that case you will be able to do something like:
new File('/location/for/the/new/file.pdf').bytes = new File('C:\\Users\\Dey\\Downloads\\sample.pdf').bytes
More information on Groovy scripting in JMeter: Apache Groovy: What Is Groovy Used For?
I tried to open a locally stored PDF with Xamarin.
Example code:
var files = Directory.GetFiles(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData));
var filepath = "file://" + files[0];
if (File.Exists(filepath))
{
await Launcher.OpenAsync(filepath);
}
But the file does not open. The only message I get (on an Android device) is:
What am I missing?
EDIT
The variable filepath contains:
file:///data/user/0/com.companyname.scgapp_pdfhandler/files/.config/test.pdf
I also tried
file://data/user/0/com.companyname.scgapp_pdfhandler/files/.config/test.pdf
but that does not help either.
Figured I would add my comment as an answer for easier visibility in case others run into it in the future.
Pass an OpenFileRequest object instead; if you use a string, it has to be the correct URI scheme. I suspect the URI scheme you are passing isn't something that is understood by the system.
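A minimal sketch of what that could look like (the title string is an assumption; note the plain file-system path, with no file:// prefix):

var filepath = files[0]; // plain path, e.g. /data/user/0/.../test.pdf
if (File.Exists(filepath))
{
    await Launcher.OpenAsync(new OpenFileRequest
    {
        Title = "Open PDF",               // chooser title, any string
        File = new ReadOnlyFile(filepath) // Xamarin.Essentials wraps the path
    });
}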
I got an exception I never got before when testing my application, which uploads a file from EC2 to S3. The content is:
Exception in thread "Thread-1" com.amazonaws.services.s3.model.AmazonS3Exception: The Content-MD5 you specified did not match what we received. (Service: Amazon S3; Status Code: 400; Error Code: BadDigest; Request ID: 972CB8E04388AB20), S3 Extended Request ID: T7bmFnQ2RlGWlJD+aGYfTy97XZw88pbQrwNB8YCezSjyq6O2joxHRP/6ko+Q2zZeGewkw4x/90k=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1383)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:902)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3676)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1439)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:131)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:123)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:139)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:47)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
What can I do to fix this bug? I used the same code as before in my application.
I think I have solved my problem. I finally found that some of my files actually changed during the upload. The files are generated by another thread, so generation and upload were happening at the same time: a file cannot be generated instantly, and while it was still being written it was already being uploaded, so its content really did change during the upload.
The MD5 of the file is computed by the AmazonS3Client at the beginning of the upload, and then the whole file is uploaded to S3; by then the file differs from the one the digest was taken from, so the MD5 no longer matches. I modified my program to be single-threaded, and the problem never turned up again.
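If single-threading is not an option, another way out is to upload a snapshot of the file rather than the live file, so the bytes cannot change mid-upload. A sketch, with hypothetical names (liveFile, s3Client, bucketName, key):

// Copy the finished file to a temporary snapshot and upload that instead
Path snapshot = Files.createTempFile("upload-", ".tmp");
Files.copy(liveFile.toPath(), snapshot, StandardCopyOption.REPLACE_EXISTING);
s3Client.putObject(bucketName, key, snapshot.toFile());
Files.delete(snapshot);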
Another way to run into this issue is code such as this (Python):
with open(filename, 'r') as fd:
    self._bucket1.put_object(Key=key, Body=fd)
    self._bucket2.put_object(Key=key, Body=fd)
In this case the file object (fd) is pointing at the end of the file by the time the second put_object call runs, so we get the "Content-MD5" error. To avoid it, we need to seek the file reader back to the start of the file:
with open(filename, 'r') as fd:
    bucket1.put_object(Key=key, Body=fd)
    fd.seek(0)
    bucket2.put_object(Key=key, Body=fd)
This way we won't get the aforementioned Boto error.
I also ran into this error when I was doing something like this:
InputStream productInputStream = convertImageFileToInputStream(file);
InputStream thumbnailInputStream = generateThumbnail(productInputStream);
String uploadedFileUrl = amazonS3Uploader.uploadToS3(BUCKET_PRODUCTS_IMAGES, productFilename, productInputStream);
String uploadedThumbnailUrl = amazonS3Uploader.uploadToS3(BUCKET_PRODUCTS_IMAGES, productThumbnailFilename, thumbnailInputStream);
The generateThumbnail method was manipulating the productInputStream using a third-party library. Because I couldn't modify the third-party library, I simply performed the upload first:
InputStream productInputStream = convertImageFileToInputStream(file);
// do this first...
String uploadedFileUrl = amazonS3Uploader.uploadToS3(BUCKET_PRODUCTS_IMAGES, productFilename, productInputStream);
// ... and then this...
InputStream thumbnailInputStream = generateThumbnail(productInputStream);
String uploadedThumbnailUrl = amazonS3Uploader.uploadToS3(BUCKET_PRODUCTS_IMAGES, productThumbnailFilename, thumbnailInputStream);
... and added this line inside my generateThumbnail method:
productInputStream.reset();
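Note that reset() only works on streams that support marking (a ByteArrayInputStream does; a raw FileInputStream does not). A sketch of one way to guarantee it, assuming the original stream is not markable:

// Wrap the stream so it can be rewound; the mark limit must cover all bytes read
InputStream productInputStream = new BufferedInputStream(convertImageFileToInputStream(file));
productInputStream.mark(Integer.MAX_VALUE); // remember the start position
// ... upload reads the stream here ...
productInputStream.reset();                 // rewind before generating the thumbnail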
FWIW, I've managed to find a completely different way of triggering this problem, which requires a different solution.
It turns out that if you decide to assign ObjectMetadata to a PutObjectRequest explicitly, for example to specify a cacheControl setting, or a contentType, then the AWS SDK mutates the ObjectMetadata instance to stash the MD5 that it computes for the put request. This means that if you are putting multiple objects, all of which you think should have the same metadata assigned to them, you still need to create a new ObjectMetadata instance for each and every PutObjectRequest. If you don't do this, then it reuses the MD5 computed from the previous put request and you get the MD5 mismatch error on the second object you try to put.
So, to be explicit, doing something like this will fail on the second iteration:
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("text/html");
for(Put obj: thingsToPut)
{
PutObjectRequest por =
new PutObjectRequest(bucketName, obj.s3Key, obj.file);
por = por.withMetadata(metadata);
PutObjectResult res = s3.putObject(por);
}
You need to do it like this:
for(Put obj: thingsToPut)
{
ObjectMetadata metadata = new ObjectMetadata(); // <<-- New ObjectMetadata every time!
metadata.setContentType("text/html");
PutObjectRequest por =
new PutObjectRequest(bucketName, obj.s3Key, obj.file);
por = por.withMetadata(metadata);
PutObjectResult res = s3.putObject(por);
}
I too ran into this problem. How I solved this:
I have a microservice that processes AWS SQS Messages. Each message would create multiple temporary files that would have to be uploaded to S3.
The issue was that the temporary files were named with fixed names without any salt added to them.
So between two messages, it was possible for the original file that was about to be uploaded to be overwritten.
I fixed it by adding a random salt (this can be a UUID or the current time in millis depending on what you want) to the file names, after which the files were not being over-written and were successfully uploaded to S3.
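A minimal sketch of that fix (the names are hypothetical):

// Salt the temporary file name so concurrent messages cannot overwrite each other
String saltedName = UUID.randomUUID() + "-" + originalFileName;
File tempFile = new File(tempDir, saltedName);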
For me it was that I used ContentLength in the params while executing the upload. When it is commented out, it works just fine (likely because, for a string body, body.length counts UTF-16 code units rather than bytes, so the declared length can disagree with the number of bytes actually sent).
const params = {
Bucket: "",
ContentType: "application/json",
Key: "filename.json",
// ContentLength: body.length, <--- what I have commented out
Body: body
};
await s3.upload(params).promise();
Using Wicket 6.17 and Servlet 2.5, I have a form that allows file upload and also has ReCaptcha (using Recaptcha4j). When the form has ReCaptcha without file upload, it works properly using the code:
final HttpServletRequest servletRequest = (HttpServletRequest ) ((WebRequest) getRequest()).getContainerRequest();
final String remoteAddress = servletRequest.getRemoteAddr();
final String challengeField = servletRequest.getParameter("recaptcha_challenge_field");
final String responseField = servletRequest.getParameter("recaptcha_response_field");
to get the challenge and response fields so that they can be validated.
This doesn't work when the form has the file upload, because the form must be multipart for the upload to work, and when I try to get the parameters that way, it fails.
I have tried getting the parameters differently using ServletFileUpload:
ServletFileUpload fileUpload = new ServletFileUpload(new DiskFileItemFactory(new FileCleaner()) );
String response = IOUtils.toString(servletRequest.getInputStream());
and
ServletFileUpload fileUpload = new ServletFileUpload(new DiskFileItemFactory(new FileCleaner()) );
List<FileItem> requests = fileUpload.parseRequest(servletRequest);
both of which always return empty.
Using Chrome's network console, I see the values that I'm looking for in the Request Payload, so I know that they are there somewhere.
Any advice on why the requests are coming back empty and how to find them would be greatly appreciated.
Update: I have also tried making the ReCaptcha component multipart and left out the file upload. The result is still the same that the response is empty, leaving me with the original conclusion about multipart form submission being the problem.
Thanks to the Wicket In Action book, I have found the solution:
MultipartServletWebRequest multiPartRequest = webRequest.newMultipartWebRequest(getMaxSize(), "ignored");
// multiPartRequest.parseFileParts(); // this is needed since Wicket 6.19.0+
IRequestParameters params = multiPartRequest.getRequestParameters();
allows me to read the values now using the getParameterValue() method.
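Putting it together, the ReCaptcha fields can be read like this (a sketch using the field names from the question):

MultipartServletWebRequest multiPartRequest = webRequest.newMultipartWebRequest(getMaxSize(), "ignored");
// multiPartRequest.parseFileParts(); // needed since Wicket 6.19.0+
IRequestParameters params = multiPartRequest.getRequestParameters();
String challengeField = params.getParameterValue("recaptcha_challenge_field").toString();
String responseField = params.getParameterValue("recaptcha_response_field").toString();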
I am making a call to Splunk and then trying to use the ResultsReaderJson class to get my results.
InputStream results = jobSavedSearch.getResults();
ResultsReaderJson resultsReader = new ResultsReaderJson(results);
And I keep getting this error.
com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 6
I have no access to the JsonReader from this class. Does anybody have any ideas of what I can do to get around this?
You have not asked for the results stream to return JSON; the default is XML. To fix this you could use:
Args outputArgs = new Args();
outputArgs.put("output_mode","json");
InputStream results = jobSavedSearch.getResults(outputArgs);
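With the output mode set to JSON, the reader can then be consumed event by event, for example (a sketch):

ResultsReaderJson resultsReader = new ResultsReaderJson(results);
Event event;
while ((event = resultsReader.getNextEvent()) != null) {
    System.out.println(event.get("_raw")); // each Event maps field names to values
}
resultsReader.close();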
With version 1.3.0 of the Splunk API you can do:
JobExportArgs jobargs = new JobExportArgs();
jobargs.setOutputMode(JobExportArgs.OutputMode.JSON);
InputStream exportSearch = jobSavedSearch.getResults(jobargs);
MultiResultsReaderJson multiResultsReader = new MultiResultsReaderJson(exportSearch);
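Each result set in the multi-reader can then be iterated in turn, for example (a sketch):

for (SearchResults searchResults : multiResultsReader) {
    for (Event event : searchResults) {
        System.out.println(event.get("_raw"));
    }
}
multiResultsReader.close();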