YUI-Compressor: result file is empty

I am using the YUI Compressor library to minify CSS and JavaScript files. I use the classes CssCompressor and JavaScriptCompressor directly.
Unfortunately, some of the resulting files end up empty, without any warnings or exceptions.
I already tried it with the versions:
yuicompressor-2.4.2.jar
yuicompressor-2.4.6.jar
yuicompressor-2.4.7pre.jar
The code I use is:
public static void compress(File file) {
    try {
        long start = System.currentTimeMillis();
        File targetFile = new File("results", file.getName() + ".min");
        Writer writer = new FileWriter(targetFile);
        if (file.getName().endsWith(".css")) {
            CssCompressor cssCompressor = new CssCompressor(new FileReader(file));
            cssCompressor.compress(writer, -1);
        } else if (file.getName().endsWith(".js")) {
            JavaScriptCompressor jsCompressor = new JavaScriptCompressor(new FileReader(file), new MyErrorReporter());
            jsCompressor.compress(writer, -1, true, false, false, true);
        }
        long end = System.currentTimeMillis();
        System.out.println("\t compressed " + file.getName() + " within " + (end - start) + " milliseconds");
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Example files that do not work (they are empty afterwards):
http://code.google.com/p/open-cooliris/source/browse/trunk/fancy/jquery.fancybox.css?r=2
http://nodejs.org/sh_main.js
I know there are some bugs in the YUI Compressor related to @media rules, but could those be connected to the empty results?

I had the same problem.
In my case it stemmed from my JavaScript code not being valid ECMAScript (we used a variable named double, which is not allowed by the ECMA rules).
I did not have the courage to check whether your JS is valid, but compressing different parts of your JS file separately can quickly lead you to the problem if it exists.

Well, after a while of debugging I figured out a solution.
The problem was not the YUI Compressor itself but the FileWriter given to the method.
Flushing and closing the FileWriter solves the problem with empty result files.
Since I only need the minified string for further processing, I now use a StringWriter, which I flush and close.
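For reference, a minimal sketch of the fixed method, assuming the same CssCompressor/JavaScriptCompressor API and the MyErrorReporter class from the question (try-with-resources needs Java 7+; on older JDKs, call flush() and close() in a finally block instead):

public static void compress(File file) {
    File targetFile = new File("results", file.getName() + ".min");
    // try-with-resources guarantees the writer is flushed and closed,
    // so the buffered output actually reaches the target file
    try (Reader reader = new FileReader(file);
         Writer writer = new FileWriter(targetFile)) {
        if (file.getName().endsWith(".css")) {
            new CssCompressor(reader).compress(writer, -1);
        } else if (file.getName().endsWith(".js")) {
            new JavaScriptCompressor(reader, new MyErrorReporter())
                    .compress(writer, -1, true, false, false, true);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}

The same pattern applies to a StringWriter: compress into it, then read the minified string with toString().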

Changing screen shot name that is saved in /img folder on QAF framework

I have a requirement where I need to add a time stamp to the screenshot image that is saved in the /img folder. Looking at AssertionService.java (https://github.com/qmetry/qaf/blob/master/src/com/qmetry/qaf/automation/ui/selenium/AssertionService.java), I see it is adding a random string at the end.
How can I remove this random string and add a time stamp instead? Thanks for the help in advance!
private String captureScreenShot() {
    String filename = StringUtil.createRandomString(getTestCaseName()) + ".png";
    try {
        selenium.captureEntirePageScreenshot(getScreenShotDir() + filename, "");
    } catch (Exception e) {
        try {
            selenium.windowFocus();
        } catch (Throwable t) {
            logger.error(t);
        }
        selenium.captureScreenshot(getScreenShotDir() + filename);
    }
    lastCapturedScreenShot = filename;
    logger.info("Captured screen shot: " + lastCapturedScreenShot);
    return filename;
}
Are you using the Selenium 1 or Selenium 2 API? Selenium 2 uses the following code: https://github.com/qmetry/qaf/blob/d58b1d1ca01b2df1a916bcd6d555df4f51a13b12/src/com/qmetry/qaf/automation/core/QAFTestBase.java#L351. Regardless of the API, you can't change the naming strategy for automatic screenshots. As an alternative, you can disable auto-capturing of screenshots, capture one as and when needed, and register it by calling setLastCapturedScreenShot. A sketch of that approach follows.
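A minimal sketch of that manual approach with a timestamped name, assuming the Selenium 2 (WebDriver) API; the img path and name format are illustrative, and the final registration step uses the setLastCapturedScreenShot setter mentioned above:

// Capture a screenshot with a timestamp in the name instead of QAF's random suffix
import java.io.File;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public static String captureTimestampedScreenshot(WebDriver driver, String testName) throws Exception {
    String stamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
    File target = new File("img", testName + "_" + stamp + ".png");
    File source = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
    FileUtils.copyFile(source, target);
    // then register the file with QAF by calling setLastCapturedScreenShot,
    // so reports pick up this screenshot instead of an auto-captured one
    return target.getPath();
}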

AmazonS3: Getting warning: S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection

Here's the warning that I am getting:
S3AbortableInputStream:Not all bytes were read from the S3ObjectInputStream, aborting HTTP connection. This is likely an error and may result in sub-optimal behavior. Request only the bytes you need via a ranged GET or drain the input stream after use.
I tried using try-with-resources, but the S3ObjectInputStream doesn't seem to get closed this way:
try (S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
     S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
     BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // some code here blah blah blah
}
I also tried the code below, explicitly closing the stream and the object, but that doesn't work either:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent();
try (BufferedReader reader = new BufferedReader(new InputStreamReader(s3ObjectInputStream, StandardCharsets.UTF_8))) {
    // some code here blah blah
    s3ObjectInputStream.close();
    s3object.close();
}
Any help would be appreciated.
PS: I am only reading two lines of the file from S3 and the file has more data.
Got the answer via another medium. Sharing it here:
The warning indicates that you called close() without reading the whole file. This is problematic because S3 is still trying to send the data and you're leaving the connection in a sad state.
There are two options here:
1. Read the rest of the data from the input stream so the connection can be reused.
2. Call s3ObjectInputStream.abort() to close the connection without reading the data. The connection won't be reused, so you take some performance hit with the next request to re-create the connection. This may be worth it if it's going to take a long time to read the rest of the file.
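Since the question only reads the first two lines, there is also the ranged GET that the warning message itself suggests. A hedged sketch using GetObjectRequest.withRange from the v1 SDK (the 16 KB range is an illustrative guess at how many bytes cover two lines):

// Request only the bytes you need instead of the whole object
GetObjectRequest rangedRequest = new GetObjectRequest(bucket, key).withRange(0, 16 * 1024 - 1);
try (S3Object s3object = s3Client.getObject(rangedRequest);
     BufferedReader reader = new BufferedReader(
             new InputStreamReader(s3object.getObjectContent(), StandardCharsets.UTF_8))) {
    String first = reader.readLine();
    String second = reader.readLine();
    // The stream ends at the range boundary, so closing it leaves almost
    // nothing unread and the warning goes away.
}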
Following option #1 of Chirag Sejpal's answer, I used the statement below to drain the S3AbortableInputStream and ensure the connection can be reused:
com.amazonaws.util.IOUtils.drainInputStream(s3ObjectInputStream);
I ran into the same problem, and the following class helped me:
@Data
@AllArgsConstructor
public class S3ObjectClosable implements Closeable {
    private final S3Object s3Object;

    @Override
    public void close() throws IOException {
        s3Object.getObjectContent().abort();
        s3Object.close();
    }
}
and now you can use it without the warning:
try (final var s3ObjectClosable = new S3ObjectClosable(s3Client.getObject(bucket, key))) {
    // same code
}
To add an example to Chirag Sejpal's answer (elaborating on option #1), the following can be used to read the rest of the data from the input stream before closing it:
S3Object s3object = s3Client.getObject(new GetObjectRequest(bucket, key));
try (S3ObjectInputStream s3ObjectInputStream = s3object.getObjectContent()) {
    try {
        // Read from stream as necessary
    } catch (Exception e) {
        // Handle exceptions as necessary
    } finally {
        while (s3ObjectInputStream != null && s3ObjectInputStream.read() != -1) {
            // Read the rest of the stream
        }
    }
    // The stream will be closed automatically by the try-with-resources statement
}
I ran into the same error.
As others have pointed out, the /tmp space in Lambda is limited to 512 MB.
And if the Lambda context is re-used for a new invocation, the /tmp space may already be half-full.
So, when reading the S3 objects and writing all the files to the /tmp directory (as I was doing),
I ran out of disk space somewhere along the way.
The Lambda exited with an error, but NOT all bytes from the S3ObjectInputStream had been read.
So, there are two things to keep in mind:
1) If the first execution causes the problem, be stingy with your /tmp space; we only have 512 MB.
2) If the second execution causes the problem, resolve it by attacking the root cause. It's not possible to delete the /tmp folder itself, so delete all the files in the /tmp folder after the execution has finished.
In Java, here is what I did, which successfully resolved the problem.
public String handleRequest(Map<String, String> keyValuePairs, Context lambdaContext) {
    try {
        // All work here
    } catch (Exception e) {
        logger.error("Error {}", e.toString());
        return "Error";
    } finally {
        deleteAllFilesInTmpDir();
    }
}

private void deleteAllFilesInTmpDir() {
    Path path = java.nio.file.Paths.get(File.separator, "tmp", File.separator);
    try {
        if (Files.exists(path)) {
            deleteDir(path.toFile());
            logger.info("Successfully cleaned up the tmp directory");
        }
    } catch (Exception ex) {
        logger.error("Unable to clean up the tmp directory");
    }
}

public void deleteDir(File dir) {
    File[] files = dir.listFiles();
    if (files != null) {
        for (final File file : files) {
            deleteDir(file);
        }
    }
    dir.delete();
}
This is my solution. I'm using Spring Boot 2.4.3.
Create an Amazon S3 client:
AmazonS3 amazonS3Client = AmazonS3ClientBuilder
        .standard()
        .withRegion("your-region")
        .withCredentials(
                new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("your-access-key", "your-secret-access-key")))
        .build();
Create an Amazon transfer manager client:
TransferManager transferManagerClient = TransferManagerBuilder.standard()
        .withS3Client(amazonS3Client)
        .build();
Create a temporary file at /tmp/{your-s3-key} so that we can put the downloaded file into it:
File file = new File(System.getProperty("java.io.tmpdir"), "your-s3-key");
try {
    file.getParentFile().mkdirs(); // create the directories of the temporary file first
    file.createNewFile();          // then create the temporary file itself
} catch (IOException e) {
    e.printStackTrace();
}
Then we download the file from S3 using the transfer manager client:
// Note that on this line the S3 file is transferred into the temporary file we created
Download download = transferManagerClient.download(
        new GetObjectRequest("your-s3-bucket-name", "your-s3-key"), file);
// This line blocks the thread until the download is finished
download.waitForCompletion();
Now that the S3 file has been transferred into the temporary file, we can get an InputStream for it.
InputStream input = new DataInputStream(new FileInputStream(file));
Because the temporary file is not needed anymore, we just delete it (on Linux this is safe even while the stream is open; the data stays readable until the stream is closed).
file.delete();
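Putting those steps together, a hedged end-to-end sketch (the bucket name and key are the same placeholders as in the snippets above):

// End-to-end version of the steps above
public static InputStream downloadViaTransferManager(TransferManager transferManagerClient)
        throws IOException, InterruptedException {
    File file = new File(System.getProperty("java.io.tmpdir"), "your-s3-key");
    file.getParentFile().mkdirs();  // create parent directories first
    file.createNewFile();           // then the temporary file itself
    Download download = transferManagerClient.download(
            new GetObjectRequest("your-s3-bucket-name", "your-s3-key"), file);
    download.waitForCompletion();   // blocks until the download finishes
    InputStream input = new DataInputStream(new FileInputStream(file));
    file.delete();                  // safe on Linux while the stream is still open
    return input;
}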

org.apache.poi.openxml4j.exceptions.OpenXML4JRuntimeException: Fail to save

I'm facing org.apache.poi.openxml4j.exceptions.OpenXML4JRuntimeException: Fail to save: an error occurs while saving the package : The part /docProps/app.xml fail to be saved in the stream with marshaller org.apache.poi.openxml4j.opc.internal.marshallers.DefaultMarshaller@7c81475b
The exception occurs when I try to write each test scenario result (PASS or FAIL) into an Excel sheet (.xlsx) after each test scenario finishes. I wrote the following two methods for this purpose.
Please tell me where the problem is and how to resolve it.
// Method for writing results into the report
public void putResultstoReport(String values[]) {
    int j = NoofTimesExecuted;
    NoofTimesExecuted++;
    XSSFRow row = sheet.createRow(j);
    for (int i = 0; i < values.length; i++) {
        XSSFCell cell = row.createCell(i);
        cell.setCellValue(values[i]);
    }
    try {
        System.out.println("Times:" + NoofTimesExecuted);
        wb.write(fileOut);
        //fileOut.flush();
        //fileOut.close();
    } catch (Exception e) {
        System.out.println("Exception at closing opened Report :" + e);
    }
}
// Method for creating the Excel report
public void createReport() {
    String FileLocation = getProperty("WorkSpace") + "//SCH_Registration//OutPut//TestResults.xlsx";
    try {
        fileOut = new FileOutputStream(FileLocation);
        String sheetName = "TestResults"; // name of sheet
        wb = new XSSFWorkbook();
        sheet = wb.createSheet(sheetName);
        fileOut.flush();
        fileOut.close();
    } catch (Exception e) {
        System.out.println("Exception at Create Report file:" + e);
    }
}
I had this problem today and have already fixed it.
The problem is in putResultstoReport(): you can't call wb.write(fileOut) inside your loop.
Resolution: first call putResultstoReport() for every result, then call wb.write(fileOut) once at the end. A sketch follows.
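A minimal sketch of that fix, assuming the same wb field as in the question (the saveReport name is illustrative): add all rows in memory via putResultstoReport(), keep createReport() free of stream handling, and open, write, and close the output stream exactly once when all results are in.

// Called once, after every putResultstoReport() call is done
public void saveReport(String fileLocation) {
    try (FileOutputStream fileOut = new FileOutputStream(fileLocation)) {
        wb.write(fileOut); // single write; the stream lives only inside this method
    } catch (IOException e) {
        System.out.println("Exception while saving report: " + e);
    }
}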
I also had this error.
I found that my mistake was caused by opening the same file / workbook multiple times.
So I would recommend making sure you open it just once before attempting to close it as well.
This can happen if a timeout occurs. I have code that works for a small dataset and throws this error with a huge dataset.
I had the same issue.
When I shortened the output Excel file name, it stopped.
I had a similar issue.
I finally found the reason: the version of the jar file below was being overridden.
org.apache.xmlgraphics:batik-dom
Hence, I added the dependency below and now it works fine.
<dependency>
    <groupId>org.apache.xmlgraphics</groupId>
    <artifactId>batik-dom</artifactId>
    <version>1.8</version>
</dependency>
This jar pulls in the Xalan dependency; Xalan is required to generate the report.
I had the same problem when a user refreshed the page and sent the request again before the previous request had completed.
Using milliseconds in the file name avoids the name conflict; these updates to the name-creation code resolved the issue:
String sheetName = "projectName" + System.currentTimeMillis() + ".xlsx";
FileOutputStream fileOut = new FileOutputStream(sheetName);
workbook.write(fileOut);
fileOut.close();

Best way to extract .zip and put in .jar

I have been trying to find the best way to do this. I have thought of extracting the contents of the .jar, moving the files into the directory, and then putting it back together as a jar. I'm not sure that is the best solution or how I would do it. I have looked at DotNetZip and SharpZipLib but don't know which one to use.
If anyone can give me a link to the code on how to do this it would be appreciated.
For DotNetZip you can find very simple VB.NET examples of both creating a zip archive and extracting a zip archive into a directory here. Just remember to save the compressed file with the extension .jar.
For SharpZipLib there are somewhat more comprehensive examples of archive creation and extraction here.
If none of these libraries manage to extract the full JAR archive, you could also consider accessing a more full-fledged compression software such as 7-zip, either starting it as a separate process using Process.Start or using its COM interface to access the relevant methods in the 7za.dll. More information on COM usage can be found here.
I think you are working with Minecraft 1.3.1, no? If you are, there is a file contained in the zip called aux.class, which unfortunately is a reserved filename on Windows. I've been trying to automate the process of modding while manipulating the jar file myself, and have had little success. The only option I have yet to explore is to extract the contents of the jar file to a temporary location while watching for that exception; when it occurs, rename the file to a temp name, extract, and move on. Then, while recreating the zip file, give the file its original name in the archive. From my own experience, SharpZipLib doesn't do what you need nicely, or at least I couldn't figure out how. I suggest using Ionic Zip (DotNetZip) instead and trying the rename route on the offending files. In addition, I also posted a question about this. You can see how far I got at Extract zip entries to another Zip file
Edit - I tested out DotNetZip more (available from http://dotnetzip.codeplex.com/), and here's what you need. I imagine it will work with any zip file that contains reserved file names. I know it's in C#, but hey, can't do all the work for ya :P
public static void CopyToZip(string inArchive, string outArchive, string tempPath)
{
    ZipFile inZip = null;
    ZipFile outZip = null;
    try
    {
        inZip = new ZipFile(inArchive);
        outZip = new ZipFile(outArchive);
        foreach (ZipEntry entry in inZip)
        {
            if (!entry.IsDirectory)
            {
                // Extract to a temporary file with a safe name, so reserved
                // names like aux.class never hit the file system directly.
                string tempName = Path.Combine(tempPath, "tmp.tmp");
                string oldName = entry.FileName;
                byte[] buffer = new byte[4096];
                Stream inStream = null;
                FileStream stream = null;
                try
                {
                    inStream = entry.OpenReader();
                    stream = new FileStream(tempName, FileMode.Create, FileAccess.ReadWrite);
                    int size = 0;
                    while ((size = inStream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        stream.Write(buffer, 0, size);
                    }
                    inStream.Close();
                    stream.Flush();
                    stream.Close();
                    // Re-add the temporary file to the output archive under its original name.
                    inStream = new FileStream(tempName, FileMode.Open, FileAccess.Read);
                    outZip.AddEntry(oldName, inStream);
                    outZip.Save();
                }
                finally
                {
                    try { inStream.Close(); } catch { }
                    try { stream.Close(); } catch { }
                }
            }
        }
    }
    finally
    {
        if (inZip != null) inZip.Dispose();
        if (outZip != null) outZip.Dispose();
    }
}

Returning binary content from a JPF action with Weblogic Portal 10.2

One of the actions of my JPF controller builds up a PDF file and I would like to return this file to the user so that he can download it.
Is it possible to do that or am I forced to write the file somewhere and have my action forward a link to this file? Note that I would like to avoid that as much as possible for security reasons and because I have no way to know when the user has downloaded the file so that I can delete it.
I've tried to access the HttpServletResponse but nothing happens:
getResponse().setContentLength(file.getSize());
getResponse().setContentType(file.getMimeType());
getResponse().setHeader("Content-Disposition", "attachment;filename=\"" + file.getTitle() + "\"");
getResponse().getOutputStream().write(file.getContent());
getResponse().flushBuffer();
We have something similar, except returning images instead of a PDF; should be a similar solution, though, I'm guessing.
On a JSP, we have an IMG tag, where the src is set to:
<c:url value="/path/getImage.do?imageId=${imageID}" />
(I'm not showing everything, because I'm trying to simplify.) In your case, maybe it would be a link, where the href is done in a similar way.
That getImage.do maps to our JPF controller, obviously. Here's the code from the JPF getImage() method, which is the part you're trying to work on:
@Jpf.Action(forwards = {
        @Jpf.Forward(name = FWD_SUCCESS, navigateTo = Jpf.NavigateTo.currentPage),
        @Jpf.Forward(name = FWD_FAILURE, navigateTo = Jpf.NavigateTo.currentPage) })
public Forward getImage(final FormType pForm) throws Exception {
    final HttpServletRequest lRequest = getRequest();
    final HttpServletResponse lResponse = getResponse();
    final HttpSession lHttpSession = getSession();
    final String imageIdParam = lRequest.getParameter("imageId");
    final long header = lRequest.getDateHeader("If-Modified-Since");
    final long current = System.currentTimeMillis();
    if (header > 0 && current - header < MAX_AGE_IN_SECS * 1000) {
        lResponse.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
        return null;
    }
    try {
        if (imageIdParam == null) {
            throw new IllegalArgumentException("imageId is null.");
        }
        // Call to EJB, which is retrieving the image from
        // a separate back-end system
        final ImageType image = getImage(lHttpSession, Long.parseLong(imageIdParam));
        if (image == null) {
            lResponse.sendError(404, IMAGE_DOES_NOT_EXIST);
            return null;
        }
        lResponse.setContentType(image.getType());
        lResponse.addDateHeader("Last-Modified", current);
        // public: Allows authenticated responses to be cached.
        lResponse.setHeader("Cache-Control", "max-age=" + MAX_AGE_IN_SECS + ", public");
        lResponse.setHeader("Expires", null);
        lResponse.setHeader("Pragma", null);
        lResponse.getOutputStream().write(image.getContent());
    } catch (final IllegalArgumentException e) {
        LogHelper.error(this.getClass(), "Illegal argument.", e);
        lResponse.sendError(404, IMAGE_DOES_NOT_EXIST);
    } catch (final Exception e) {
        LogHelper.error(this.getClass(), "General exception.", e);
        lResponse.sendError(500);
    }
    return null;
}
I've actually removed very little from this method, because there's very little in there that I need to hide from prying eyes; the code is pretty generic, concerned with images, not with business logic. (I changed some of the data type names, but no big deal.)
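Building on that, a hedged sketch of how the same pattern could look for the original PDF case. It reuses the question's file object (getContent(), getMimeType(), and getTitle() come from the question's code), and the key point, as in getImage() above, is returning null so the JPF runtime does not forward to a page and overwrite the response:

@Jpf.Action()
public Forward downloadPdf(final FormType pForm) throws Exception {
    final HttpServletResponse response = getResponse();
    final byte[] content = file.getContent();
    response.setContentType(file.getMimeType());
    response.setContentLength(content.length);
    response.setHeader("Content-Disposition", "attachment; filename=\"" + file.getTitle() + "\"");
    response.getOutputStream().write(content);
    response.flushBuffer();
    return null; // do not forward; let the raw response stand
}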