I used the code below to compress files and they keep growing instead of shrinking. I compressed a 4 KB file and it became 6 KB. That is understandable for a small file because of the compression overhead. Then I tried a 400 MB file and it became 628 MB after compressing. What is wrong? See the code. (.NET 2.0)
Public Sub Compress(ByVal infile As String, ByVal outfile As String)
    Dim sourceFile As FileStream = File.OpenRead(infile)
    Dim destFile As FileStream = File.Create(outfile)
    Dim compStream As New GZipStream(destFile, CompressionMode.Compress)
    Dim myByte As Integer = sourceFile.ReadByte()
    While myByte <> -1
        compStream.WriteByte(CType(myByte, Byte))
        myByte = sourceFile.ReadByte()
    End While
    sourceFile.Close()
    destFile.Close()
End Sub
If the underlying file is itself highly unpredictable (already compressed or largely random), then attempting to compress it will cause the file to become bigger.
Going from 400 MB to 628 MB sounds highly improbable as an expansion factor, since the deflate algorithm (used by GZip) tends towards a maximum expansion factor of about 0.03%. The overhead of the GZip header should be negligible.
Edit: The .NET 4.0 release notes indicate that the compression libraries have been improved so that incompressible data is no longer significantly expanded. This suggests that the older versions did not implement the "fall back to raw (stored) blocks" mode of deflate. As a quick test, try SharpZipLib instead; it should give you close to identical output size when the stream is incompressible by deflate. If it does, consider moving to that library, or wait for the 4.0 release for a better-behaved BCL implementation. Note that the lack of compression you are getting strongly suggests that there is no point in trying to compress this data further anyway.
Are you sure that writing byte by byte to the stream is really a good idea? It will certainly not have ideal performance characteristics, and maybe that's what confuses the gzip compression algorithm too.
Also, it might happen that the data you are trying to compress is just not very compressible. If I were you, I would try your code with a text document of the same size, as text documents tend to compress much better than random binary data.
Also, you could try using a plain DeflateStream instead of a GZipStream. They both use the same compression algorithm (deflate); the only difference is that gzip adds some additional data (like error checking), so a DeflateStream might yield slightly smaller results.
My VB.NET is a bit rusty, so I'd rather not try to write a code example in VB.NET. Instead, here's how you should do it in C#; it should be relatively straightforward to translate to VB.NET for someone with a bit of experience (or maybe someone who is good at VB.NET could edit my post and translate it):
// inFile and outFile are the parameters from your Compress method.
FileStream sourceFile = File.OpenRead(inFile);
FileStream destFile = File.Create(outFile);
GZipStream compStream = new GZipStream(destFile, CompressionMode.Compress);
byte[] buffer = new byte[65536];
int bytesRead;
// Read and compress in 64 KB chunks instead of one byte at a time.
while ((bytesRead = sourceFile.Read(buffer, 0, buffer.Length)) > 0)
{
    compStream.Write(buffer, 0, bytesRead);
}
compStream.Close(); // closing the GZipStream flushes the final compressed block
destFile.Close();
sourceFile.Close();
This is a known anomaly with the built-in GZipStream (and DeflateStream).
I can think of two workarounds:
Use an alternative compressor.
Build some logic that examines the size of the "compressed" output and compares it to the size of the input. If it is larger, chuck the output and just store the original data (a rough sketch of this approach follows the next paragraph).
DotNetZip includes a "fixed" GZipStream based on a managed port of zlib. (It takes approach #1 from above). The Ionic.Zlib.GZipStream can replace the built-in GZipStream in your apps with a simple namespace swap.
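For the second workaround, here is a minimal sketch of the "compare and fall back to storing" idea, written against .NET 2.0. The temporary file and the one-byte marker that records which variant was written are just illustrative choices for this sketch, not part of any standard format:

using System.IO;
using System.IO.Compression;

class CompressOrStore
{
    // Compress inFile to outFile; if the "compressed" result ends up larger,
    // throw it away and store a plain copy of the original instead.
    public static void Compress(string inFile, string outFile)
    {
        string tempFile = Path.GetTempFileName();
        using (FileStream source = File.OpenRead(inFile))
        using (FileStream temp = File.Create(tempFile))
        using (GZipStream gzip = new GZipStream(temp, CompressionMode.Compress))
        {
            byte[] buffer = new byte[65536];
            int bytesRead;
            while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
                gzip.Write(buffer, 0, bytesRead);
        }

        bool useCompressed = new FileInfo(tempFile).Length < new FileInfo(inFile).Length;
        using (FileStream dest = File.Create(outFile))
        using (FileStream payload = File.OpenRead(useCompressed ? tempFile : inFile))
        {
            dest.WriteByte(useCompressed ? (byte)1 : (byte)0); // marker: 1 = gzip, 0 = stored
            byte[] buffer = new byte[65536];
            int bytesRead;
            while ((bytesRead = payload.Read(buffer, 0, buffer.Length)) > 0)
                dest.Write(buffer, 0, bytesRead);
        }
        File.Delete(tempFile);
    }
}

The decompressor would read the marker byte first and either inflate the rest or copy it straight through.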
Thank you all for the good answers. Earlier on I tried to compress .wmv files and one text file. I changed the code to use DeflateStream and it seems to work now. Cheers.
Related
I have a speed and memory-efficiency problem. I'm reading a big chunk of bytes from a .bin file and then writing it to another file. The problem is that to read the file I need to create a huge byte array for it:
Dim data3(endOFfile) As Byte ' endOFfile is around 270 MB here
Using fs As New FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None)
    fs.Seek(startOFfile, SeekOrigin.Begin)
    fs.Read(data3, 0, endOFfile)
End Using
Using vFs As New FileStream(Environment.GetFolderPath(Environment.SpecialFolder.Desktop) & "\test.bin", FileMode.Create) ' save
    vFs.Write(data3, 0, endOFfile)
End Using
so the procedure takes a long time. What's a more efficient way to do it?
Can I somehow read and write using the same file stream, without using a byte array?
I've never done it this way, but I would think that the Stream.CopyTo method should be the easiest approach and as quick as anything.
Using inputStream As New FileStream(...),
      outputStream As New FileStream(...)
    inputStream.CopyTo(outputStream)
End Using
I'm not sure whether that overload will read all the data in one go or use a default buffer size. If it's the former or you want to specify a buffer size other than the default, there's an overload for that:
inputStream.CopyTo(outputStream, bufferSize)
You can experiment with different buffer sizes to see whether it makes a difference to performance. Smaller is better for memory usage but I would expect bigger to be faster, at least up to a point.
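Putting that together with the offset handling from your original code, a minimal sketch might look like this (shown in C#; the file paths, the startOFfile variable, and the 80 KB buffer size are just placeholder choices):

long startOFfile = 0; // offset where the chunk you want begins, as in your question
using (FileStream input = new FileStream("source.bin", FileMode.Open, FileAccess.Read))
using (FileStream output = new FileStream("dest.bin", FileMode.Create, FileAccess.Write))
{
    input.Seek(startOFfile, SeekOrigin.Begin); // skip to the starting offset
    input.CopyTo(output, 81920);               // copy the rest in 80 KB chunks, no huge array
}

Note that CopyTo copies from the current position to the end of the stream; if you only want a fixed number of bytes, you would still need a small read/write loop with a buffer.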
Note that the CopyTo method requires at least .NET Framework 4.0. If you're executing this code on the UI thread, you might like to call CopyToAsync instead, to avoid freezing the UI. The same two overloads are available, plus a third that accepts a CancellationToken. I'm not going to teach you how to use Async/Await here, so research that yourself if you want to go that way. Note that CopyToAsync requires at least .NET Framework 4.5.
I am using iText v5.5.1 to read a PDF and extract plain text from it:
pdfReader = new PdfReader(new CloseShieldInputStream(is));
pdfParser = new PdfReaderContentParser(pdfReader);
int maxPageNumber = pdfReader.getNumberOfPages();
int pageNumber = 1;
StringBuilder sb = new StringBuilder();
SimpleTextExtractionStrategy extractionStrategy = new SimpleTextExtractionStrategy();
while (pageNumber <= maxPageNumber) {
    pdfParser.processContent(pageNumber, extractionStrategy);
    sb.append(extractionStrategy.getText());
    pageNumber++;
}
On one PDF file the following exception is thrown:
java.lang.ClassCastException: com.itextpdf.text.pdf.PdfNumber cannot be cast to com.itextpdf.text.pdf.PdfLiteral
at com.itextpdf.text.pdf.parser.PdfContentStreamProcessor.processContent(PdfContentStreamProcessor.java:382)
at com.itextpdf.text.pdf.parser.PdfReaderContentParser.processContent(PdfReaderContentParser.java:80)
That PDF file seems to be broken, but maybe its contents still make sense...
Indeed
That PDF file seems to be broken
The content streams of all pages look like this:
/GS1 gs
q
595.00 0 0
It looks like they all are cut off early, as the last line is not a complete operation. This can certainly make a parser hiccup, as it does with iText.
Furthermore, the content should be longer: even the size of the compressed stream is a bit larger than the length of this fragment. This indicates streams that are broken at the byte level.
Looking at the bytes of the PDF file, one cannot help but notice that even inside binary streams the byte values 13 and 10 only occur together, and that cross-reference offset values are less than the actual positions.
So I assume that this PDF has been transmitted using a transport method handling it as textual data, especially replacing any kind of assumed line break (CR or LF or CR LF) with the CR LF now omnipresent in the file (CR = Carriage Return = 13; LF = Line Feed = 10). Such replacements will automatically break any compressed data stream like the content streams in your file.
Unfortunately, though...
but maybe its contents still make sense
Not much. There is one big image associated with each page. Considering the small size of the content streams and the large image sizes, I would assume that the PDF only contains scanned pages. But the images are also broken due to the replacements mentioned above.
This isn't the best solution, but I had this exact problem and unfortunately can't share the exact PDFs I was having issues with.
I made a fork of itextpdf that catches the ClassCastException and just skips PdfObjects that it takes issue with. It prints to System.out what the text contained and what type itextpdf thinks it was. I haven't been able to map this out to some systemic problem with my PDFs (someone smarter than me will need to do that), and this exception only happens once in a blue moon. Anyway, in case it helps anyone, this fork at least doesn't crash your code, lets you parse the majority of your PDFs, and gives you a bit of info on what types of bytestrings seem to give itextpdf indigestion.
https://github.com/njhwang/itextpdf
Recently I've been creating checksums for files in Go. My code has to work with both small and big files. I tried two methods; the first uses ioutil.ReadFile("filename") and the second works with os.Open("filename").
Examples:
The first function works with io/ioutil and is fine for small files, but when I try to copy a big file my RAM gets blasted: for a 1.5 GB ISO it uses 3 GB of RAM.
func byteCopy(fileToCopy string) {
    file, err := ioutil.ReadFile(fileToCopy) // 1.5 GB file
    omg(err)                                 // error handling function
    ioutil.WriteFile("2.iso", file, 0777)
    os.Remove("2.iso")
}
It gets even worse when I want to create a checksum with crypto/sha512 and io/ioutil: it never finishes and aborts because it runs out of memory.
func ioutilHash() {
    file, _ := ioutil.ReadFile(iso)
    h := sha512.New()
    fmt.Printf("%x", h.Sum(file))
}
When using the function below everything works fine.
func ioHash() {
    f, err := os.Open(iso) // iso is the big ~1.5 GB file
    omg(err)               // error handling function
    defer f.Close()
    h := sha512.New()
    io.Copy(h, f)
    fmt.Printf("%x", h.Sum(nil))
}
My Question:
Why is the ioutil.ReadFile() function not working right? The 1.5 GB file should not fill my 16 GB of RAM. I don't know where to look right now.
Could somebody explain the differences between the methods? I don't get it from reading the go-doc and the examples.
Having usable code is nice, but understanding why it works is way above that.
Thanks in advance!
The following code doesn't do what you think it does.
func ioutilHash() {
    file, _ := ioutil.ReadFile(iso)
    h := sha512.New()
    fmt.Printf("%x", h.Sum(file))
}
This first reads your 1.5 GB ISO. As jnml pointed out, it continuously makes bigger and bigger buffers to fill it. In the end, the total buffer size is no less than 1.5 GB and no greater than 1.875 GB (by the current implementation).
However, after that you then make another buffer! h.Sum(file) doesn't hash file. It appends the current hash to file! This may or may not cause yet another allocation.
The real problem is that you are taking that file, now with the hash appended, and printing it with %x. Fmt actually pre-computes using the same kind of method jnml pointed out that ioutil.ReadAll uses, so it keeps allocating bigger and bigger buffers to store the hex of your file. Since each hex character encodes 4 bits, the output is two characters per input byte, which means no less than a 3 GB buffer for that and no greater than 3.75 GB.
This means your active buffers may be as big as 5.625 GB. Combine that with the GC not being perfect and not removing all the intermediate buffers, and it could very easily fill your available memory.
The correct way to write that code would have been:
func ioutilHash() {
    file, _ := ioutil.ReadFile(iso)
    h := sha512.New()
    h.Write(file)
    fmt.Printf("%x", h.Sum(nil))
}
This doesn't do nearly the same number of allocations.
The bottom line is that ReadFile is rarely what you want to use. IO streaming (using readers and writers) is always the best way when it is an option. Not only do you allocate much less when you use io.Copy, you also read from the disk and hash concurrently. In your ReadFile example, the two resources are used sequentially even though they don't depend on each other.
ioutil.ReadFile is working right. It's your fault for abusing system resources by using that function for things you know are huge.
ioutil.ReadFile is a handy helper for files you're pretty sure in advance that they're going to be small. Like configuration files, most source code files etc. (Actually it's optimizing things for files <= 1e9 bytes, but that's an implementation detail and not part of the API contract. Your 1.5GB file forces it to use slice growing and thus allocating more than one big buffer for your data in the process of reading the file.)
Even your other approach using os.File is not okay. You definitely should be using the "bufio" package for sequential processing of large files, see bufio.NewReader.
I'd like to reduce the message size when sending a serialized integer over the network.
In the section below, buff.Length is 256 - too great an overhead to be efficient!
How can it be reduced to the minimum (4 bytes + minimal overhead)?
int val = RollDice(6);
// Should 'memoryStream' be allocated each time!
MemoryStream memoryStream = new MemoryStream();
BinaryFormatter formatter = new BinaryFormatter();
formatter.Serialize(memoryStream, val);
byte[] buff = memoryStream.GetBuffer();
Thanks in advance,
--- KostaZ
Have a look at protobuf-net... it is a very good serialization lib (you can get it on NuGet). Also, ideally you should be using a "using" statement around your memory stream.
To respond to the comment below: the most efficient method depends on your use case. If you know exactly what you need to serialize and don't need a general-purpose serializer, then you could write your own binary formatter, which might have no overhead at all (there is some detail here: custom formatters).
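If all you ever need to send is a single Int32, such a hand-rolled "formatter" can be as small as the value itself. A minimal sketch (the Serialize/Deserialize names are just for illustration):

using System;
using System.IO;

class IntSerializer
{
    // Serialize a single Int32 to exactly 4 bytes - no formatter overhead at all.
    static byte[] Serialize(int value)
    {
        using (MemoryStream ms = new MemoryStream(4))
        using (BinaryWriter writer = new BinaryWriter(ms))
        {
            writer.Write(value);
            writer.Flush();
            return ms.ToArray(); // ToArray returns only the bytes actually written
        }
    }

    static int Deserialize(byte[] buffer)
    {
        return BitConverter.ToInt32(buffer, 0);
    }

    static void Main()
    {
        byte[] buff = Serialize(42);
        Console.WriteLine(buff.Length);       // 4
        Console.WriteLine(Deserialize(buff)); // 42
    }
}

Note also that MemoryStream.GetBuffer() in your snippet returns the stream's entire internal buffer, including unused capacity, which is part of why buff.Length is 256; ToArray() returns only the written bytes.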
This link has a comparison of the BinaryFormatter and protobuf.net for your reference.
I am trying to build a resource file for a website, basically jamming all the images into a compressed file that is then unpacked to the output buffer sent to the client.
My question is: in VB 2005, can a FileStream be used from multiple threads if you know the size of the finished file, à la BitTorrent, so that you work on pieces of the stream (the individual files in this case) and add them to the resource FileStream as they are done instead of one at a time?
If you need something similar to the way torrents write to a file, this is how I would implement it (a rough sketch follows the steps below):
Open a FileStream on thread T1, and create a queue "monitor" for step 2.
Create a queue that will be read from T1 but written to by multiple network reader threads. (Each queue entry would look like this: the position to write at, the size of the data buffer, the data buffer itself.)
Fire up the threads.
:)
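Here is a minimal sketch of that single-writer/queue idea in C# (the WriteRequest and FileAssembler names are invented for this sketch; translating it to VB 2005 is mostly mechanical):

using System.Collections.Generic;
using System.IO;
using System.Threading;

// One writer thread owns the FileStream; the downloader threads only enqueue
// (position, data) work items, so the stream itself is never shared.
class WriteRequest
{
    public long Position;
    public byte[] Data;
}

class FileAssembler
{
    private readonly Queue<WriteRequest> queue = new Queue<WriteRequest>();
    private readonly object gate = new object();
    private bool finished;

    // Called by the network reader threads.
    public void Enqueue(long position, byte[] data)
    {
        WriteRequest req = new WriteRequest();
        req.Position = position;
        req.Data = data;
        lock (gate)
        {
            queue.Enqueue(req);
            Monitor.Pulse(gate);
        }
    }

    // Called once, when all pieces have been queued.
    public void Finish()
    {
        lock (gate) { finished = true; Monitor.Pulse(gate); }
    }

    // Run this on the single writer thread (T1).
    public void WriterLoop(string path, long totalLength)
    {
        using (FileStream fs = new FileStream(path, FileMode.Create, FileAccess.Write))
        {
            fs.SetLength(totalLength); // pre-size the file so any piece can be written
            while (true)
            {
                WriteRequest req;
                lock (gate)
                {
                    while (queue.Count == 0 && !finished) Monitor.Wait(gate);
                    if (queue.Count == 0) break; // finished and nothing left to write
                    req = queue.Dequeue();
                }
                fs.Seek(req.Position, SeekOrigin.Begin);
                fs.Write(req.Data, 0, req.Data.Length);
            }
        }
    }
}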
Anyway, from your comments, your problem seems to be a different one...
I have found the following, but I'm not sure if it works:
If you want to write data to a file, two parallel methods are available, WriteByte() and Write(). WriteByte() writes a single byte to the stream:
byte NextByte = 100;
fs.WriteByte(NextByte);
Write(), on the other hand, writes out an array of bytes. For instance, if you initialized the ByteArray mentioned before with some values, you could use the following code to write out the first nBytes of the array:
fs.Write(ByteArray, 0, nBytes);
Citation from: Nagel, Christian, Bill Evjen, Jay Glynn, Morgan Skinner, and Karli Watson. "Chapter 24 - Manipulating Files and the Registry". Professional C# 2005 with .NET 3.0. Wrox Press. © 2007. Books24x7. http://common.books24x7.com/book/id_20568/book.asp (accessed July 22, 2009)
I'm not sure if you're asking whether a System.IO.FileStream object can be read from or written to in a multi-threaded fashion, but the answer in both cases is no. This is not a supported scenario. You will need to add some form of locking to ensure serialized access to the resource.
The documentation calls out multi-threaded access to the object as an unsupported scenario
http://msdn.microsoft.com/en-us/library/system.io.filestream.aspx
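If you do end up sharing a single FileStream between threads, the simplest form of that locking is to funnel every Seek/Write pair through one lock so the calls never interleave. A minimal sketch (the SharedStreamWriter name is just for illustration):

using System.IO;

class SharedStreamWriter
{
    private readonly FileStream stream;
    private readonly object gate = new object();

    public SharedStreamWriter(string path)
    {
        stream = new FileStream(path, FileMode.Create, FileAccess.Write);
    }

    // Any thread may call this; the lock serializes access so the
    // Seek + Write pair is never interleaved with another thread's.
    public void WriteAt(long position, byte[] data)
    {
        lock (gate)
        {
            stream.Seek(position, SeekOrigin.Begin);
            stream.Write(data, 0, data.Length);
        }
    }

    public void Close()
    {
        lock (gate) { stream.Close(); }
    }
}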