public void testTakeScreenshot()
{
    try {
        File fscreenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        System.out.println(fscreenshot.getPath());
        File fdest = new File("E:/");
        FileUtils.copyFile(fscreenshot, fdest);
        System.out.println(fdest.getPath());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Generated output at the console:
C:\Users\Bunty\AppData\Local\Temp\screenshot1773089913844817102.png
java.io.IOException: Destination 'E:\' exists but is a directory
The test runs OK and the screenshot file is created at the temp path shown in the console, but when I follow that path I can't find any file for it. Also, the copy is failing with the IOException above, so no file is present on the E: drive.
As the error message suggests, you shouldn't pass the path of a directory ('E:\'), but the path of a file. Try:
File fdest = new File("E:/screenshot.png");
Using a service account, the Google Drive API and the Google Spreadsheet API, I create a spreadsheet that I then move to a specific folder, using the following code:
public async Task<File> SaveNewSpreadsheet(Spreadsheet spreadsheet, File folder)
{
    try
    {
        Spreadsheet savedSpreadsheet = await _sheetService.Spreadsheets.Create(spreadsheet).ExecuteAsync();
        string spreadsheetId = GetSpreadsheetId(savedSpreadsheet);
        File spreadsheetFile = await GetFileById(spreadsheetId);
        File spreadsheetFileMoved = await MoveFileToFolder(spreadsheetFile, folder);
        return spreadsheetFileMoved;
    }
    catch (Exception e)
    {
        _logger.LogError(e, "An error has occurred during new spreadsheet save to Google Drive API");
        throw;
    }
}
public async Task<File> MoveFileToFolder(File file, File folder)
{
    try
    {
        var updateRequest = _driveService.Files.Update(new File(), file.Id);
        updateRequest.AddParents = folder.Id;
        if (file.Parents != null)
        {
            string previousParents = String.Join(",", file.Parents);
            updateRequest.RemoveParents = previousParents;
        }
        file = await updateRequest.ExecuteAsync();
        return file;
    }
    catch (Exception e)
    {
        _logger.LogError(e, "An error has occurred during file moving to folder.");
        throw;
    }
}
This used to work fine for a year or so, but since today the MoveFileToFolder request throws the following exception:
Google.GoogleApiException: Google.Apis.Requests.RequestError
Increasing the number of parents is not allowed [403]
Errors [
Message[Increasing the number of parents is not allowed] Location[ - ] Reason[cannotAddParent] Domain[global]
]
The weird thing is that if I create a new service account and use it instead of the previous one, it works fine again.
I've looked for info on this "cannotAddParent" error but I couldn't find anything.
Any ideas on why this error is thrown?
I have the same problem and filed an issue in the Google Issue Tracker. This is intended behavior, unfortunately. You are no longer able to place a file in multiple parents as in your example. See the linked documentation for migration.
Beginning Sept. 30, 2020, you will no longer be able to place a file in multiple parent folders; every file must have exactly one parent folder location. Instead, you can use a combination of status checks and a new shortcut implementation to accomplish file-related operations.
https://developers.google.com/drive/api/v2/multi-parenting
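If the extra parent was only there as a convenience reference, the replacement the migration guide describes is a shortcut. As a rough illustration only (shown here with the Drive v3 Java client rather than the .NET client used above; the helper name and shortcut name are assumptions), creating a shortcut to a file inside another folder looks like:
import java.util.Collections;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.File;

// Hypothetical helper: creates a shortcut pointing at targetFileId inside folderId
File createShortcut(Drive driveService, String targetFileId, String folderId) throws Exception {
    File shortcut = new File()
        .setName("My shortcut")
        .setMimeType("application/vnd.google-apps.shortcut")
        .setShortcutDetails(new File.ShortcutDetails().setTargetId(targetFileId))
        .setParents(Collections.singletonList(folderId));
    return driveService.files().create(shortcut).setFields("id, shortcutDetails").execute();
}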
I've got a problem with Xamarin.UITest, specifically the screenshot feature: it is not working as expected.
I'm trying to copy the created screenshot to another directory, but I get the following error:
Message: System.IO.FileNotFoundException : Could not find file
'C:\Program Files (x86)\Microsoft Visual
Studio\2017\Enterprise\Common7\IDE\screenshot-1.png'.
I'm using this piece of code to copy the image file:
var screen = app.Screenshot("Welcome screen.");
screen.CopyTo(@"C:\Users\someuser\Desktop\screenshotTest.png");
How can I specify the path/location for screenshots in the first place? The original path probably needs admin privileges, which I don't have.
Screenshots saved with App.Screenshot() are located in your test project's output directory, MyTestProject\bin\Debug, where the first screenshot is named screenshot-1.
A half-solution to the problem: I downgraded NUnit from 3.11.0 to 2.7.0, and now it works OK.
Use MoveTo() instead of CopyTo().
var screenshot = app.Screenshot($"{DateTime.Now}_{platform}");
screenshot.MoveTo($@"{Destination}\{screenshot.Name}.{screenshot.Extension}");
My screenshots are saving to C:\Users\username\AppData\Local\Temp
Try this code:
[TearDown]
public void Teardown()
{
    SaveScreenshotIfTestFails();
}

private void SaveScreenshotIfTestFails()
{
    if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed)
    {
        var testName = TestContext.CurrentContext.Test.Name;
        var filename = $"{testName}.png";
        var file = app.Screenshot(testName);
        var dir = file.DirectoryName;
        // Delete any previously saved screenshot for this test, then move the new one
        File.Delete(dir + "\\" + filename);
        file.MoveTo($"{testName}.png");
    }
}
Screenshots are saved to the current working directory. Change it via Directory.SetCurrentDirectory.
Here's the warning that I am getting:
S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
I tried using try-with-resources, but the S3ObjectInputStream doesn't seem to close via this method.
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
     S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
     BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // some code here blah blah blah
}
I also tried the code below, explicitly closing the stream and the object, but that doesn't work either:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // some code here blah blah
    s3ObjectInputStream.close();
    s3object.close();
}
Any help would be appreciated.
PS: I am only reading two lines of the file from S3 and the file has more data.
Got the answer via another medium; sharing it here:
The warning indicates that you called close() without reading the whole file. This is problematic because S3 is still trying to send the data and you're leaving the connection in a sad state.
There are two options here:
Read the rest of the data from the input stream so the connection can be reused.
Call s3ObjectInputStream.abort() to close the connection without reading the data. The connection won't be reused, so you take some performance hit with the next request to re-create the connection. This may be worth it if it's going to take a long time to read the rest of the file.
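A minimal sketch of option #2, assuming the AWS SDK for Java v1 as in the question (the buffer size and read pattern are illustrative):
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try {
    byte[] buffer = new byte[1024];
    int bytesRead = s3ObjectInputStream.read(buffer); // read only what you need
    // ... process the bytes ...
} finally {
    // Give up on the rest of the object; this connection will not be reused
    s3ObjectInputStream.abort();
}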
Following option #1 of Chirag Sejpal's answer, I used the statement below to drain the S3AbortableInputStream and ensure the connection can be reused:
com.amazonaws.util.IOUtils.drainInputStream(s3ObjectInputStream);
I ran into the same problem, and the following class helped me:
@Data
@AllArgsConstructor
public class S3ObjectClosable implements Closeable {
    private final S3Object s3Object;

    @Override
    public void close() throws IOException {
        s3Object.getObjectContent().abort();
        s3Object.close();
    }
}
and now you can use it without the warning:
try (final var s3ObjectClosable = new S3ObjectClosable(s3Client.getObject(bucket, key))) {
    // same code
}
To add an example to Chirag Sejpal's answer (elaborating on option #1), the following can be used to read the rest of the data from the input stream before closing it:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
try (S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent()) {
    try {
        // Read from stream as necessary
    } catch (Exception e) {
        // Handle exceptions as necessary
    } finally {
        while (s3ObjectInputStream != null && s3ObjectInputStream.read() != -1) {
            // Read the rest of the stream
        }
    }
    // The stream will be closed automatically by the try-with-resources statement
}
I ran into the same error.
As others have pointed out, the /tmp space in Lambda is limited to 512 MB, and if the Lambda context is reused for a new invocation, the /tmp space may already be half full.
So, when reading the S3 objects and writing all the files to the /tmp directory (as I was doing), I ran out of disk space somewhere in between. The Lambda exited with an error, but NOT all bytes from the S3ObjectInputStream were read.
So, there are two things to keep in mind:
1) If the first execution causes the problem, be stingy with your /tmp space. We have only 512 MB.
2) If the second execution causes the problem, it can be resolved by attacking the root problem. It's not possible to delete the /tmp folder itself, so delete all the files in the /tmp folder after the execution is finished.
In Java, here is what I did, which successfully resolved the problem:
public String handleRequest(Map<String, String> keyValuePairs, Context lambdaContext) {
    try {
        // All work here
        return "Success"; // placeholder; the original elides the real result
    } catch (Exception e) {
        logger.error("Error {}", e.toString());
        return "Error";
    } finally {
        deleteAllFilesInTmpDir();
    }
}

private void deleteAllFilesInTmpDir() {
    Path path = java.nio.file.Paths.get(File.separator, "tmp", File.separator);
    try {
        if (Files.exists(path)) {
            deleteDir(path.toFile());
            logger.info("Successfully cleaned up the tmp directory");
        }
    } catch (Exception ex) {
        logger.error("Unable to clean up the tmp directory");
    }
}

public void deleteDir(File dir) {
    File[] files = dir.listFiles();
    if (files != null) {
        for (final File file : files) {
            deleteDir(file);
        }
    }
    dir.delete();
}
This is my solution. I'm using Spring Boot 2.4.3.
Create an Amazon S3 client:
AmazonS3 amazonS3Client = AmazonS3ClientBuilder
.standard()
.withRegion("your-region")
.withCredentials(
new AWSStaticCredentialsProvider(
new BasicAWSCredentials("your-access-key", "your-secret-access-key")))
.build();
Create an Amazon transfer manager client:
TransferManager transferManagerClient = TransferManagerBuilder.standard()
.withS3Client(amazonS3Client)
.build();
Create a temporary file at /tmp/{your-s3-key} so that we can put the downloaded file into it:
File file = new File(System.getProperty("java.io.tmpdir"), "your-s3-key");
try {
    file.getParentFile().mkdirs(); // Create the parent directories of the temporary file
    file.createNewFile();          // Create the temporary file itself
} catch (IOException e) {
    e.printStackTrace();
}
Then we download the file from S3 using the transfer manager client:
// Note: this call transfers the downloaded S3 object into the temporary file created above
Download download = transferManagerClient.download(
        new GetObjectRequest("your-s3-bucket-name", "your-s3-key"), file);
// This line blocks the thread until the download is finished
download.waitForCompletion();
Now that the S3 file has been successfully transferred into the temporary file we created, we can get an InputStream for it:
InputStream input = new DataInputStream(new FileInputStream(file));
Because the temporary file is not needed anymore, we just delete it (note that on Windows the delete will fail while the stream is still open, so finish reading or close the stream first there):
file.delete();
I want to measure the time elapsed while opening a PDF file, but I am not able to find a way to do it using PDFBox. I created a PDDocument in my Java program and want to use some API to launch the PDF file from my code, but I can't work out which PDFBox API would serve that purpose.
It would be helpful if I could get some info on that.
Thanks.
Swati
Paste the file 04-Request-Headers.pdf into the Documents folder on the C drive.
If you execute the Java code below, it opens 04-Request-Headers.pdf in Adobe Acrobat and prints the total time taken to open the file to the console. Note that this example uses java.awt.Desktop rather than PDFBox, since PDFBox has no API for launching a file in an external viewer; strictly speaking, the measured time is the Desktop.getDesktop().open() call (the hand-off to the OS), not the time Acrobat takes to render the document.
Code:
package com.pdf.pdfbox.test;

import java.awt.Desktop;
import java.io.File;

public class OpenPDFFileUsingJava {
    public static void main(String[] args) {
        try {
            File file = new File("C:/Documents/04-Request-Headers.pdf");
            if (file.exists()) {
                long startTime = System.currentTimeMillis();
                Desktop.getDesktop().open(file);
                long endTime = System.currentTimeMillis();
                System.out.println("Total time taken to open file -> " + file.getName() + " in " + (endTime - startTime) + " ms");
            } else {
                System.out.println("File does not exist -> " + file.getAbsolutePath());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Output: Total time taken to open file -> 04-Request-Headers.pdf in 94 ms
I have a simple Java program that runs fine in Eclipse, but it cannot find the .txt files I read from and write to when run from the command line. I tried changing the permissions of the files, but since the program runs in Eclipse, that doesn't seem to be the issue. I'm not experienced with reading files in Java, but I think it is a path issue or something. Can anyone help me fix my script (or whatever else is wrong) so it works?
I get a bunch of these:
java.io.FileNotFoundException: helloState.txt (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at bot.FileRead.readByLine(FileRead.java:33)
at bot.BuildStates.buildStates(BuildStates.java:16)
at bot.Kate.main(Kate.java:98)
My file structure is as follows: CS317_A4/src/myPackage/(class and source files).
My text files are in the CS317_A4 directory and my script is in the src directory (I can't seem to run the program from the CS317_A4 directory).
Here is my script for running the program:
#!/bin/bash
set classpath=
java -cp .:.. bot.Kate
Here is how I open the file:
public LinkedList<String> readByLine(String filename) {
    File file = new File(filename);
    FileInputStream fis = null;
    BufferedInputStream bis = null;
    BufferedReader br = null;
    String in;
    LinkedList<String> fileLines = new LinkedList<String>();
    try {
        fis = new FileInputStream(file);
        bis = new BufferedInputStream(fis);
        br = new BufferedReader(new FileReader(file));
        while (br.ready()) {
            /* read the line from the text file */
            in = br.readLine();
            /* if the line is empty stop reading */
            if (in.isEmpty()) {
                break;
            }
            /* add the line to the linked list */
            fileLines.add(in);
        }
        /* dispose all the resources after using them. */
        fis.close();
        bis.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return fileLines;
}
Try to start it from the directory above src (i.e. CS317_A4), so that relative file names like helloState.txt resolve against the directory containing your text files. As the classpath, use src.
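A sketch of what the run script could look like under that assumption (the class name bot.Kate comes from your original script; run this from the CS317_A4 directory):
#!/bin/bash
# run from CS317_A4 so relative paths like helloState.txt resolve
java -cp src bot.Kate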