How to increase the file size of a JPG file? - Photoshop

My JPG file is 5 KB. How can I make it a 15 KB file?

Please go through these links:
http://www.mkyong.com/java/how-to-resize-an-image-in-java/
http://www.java2s.com/Code/Java/2D-Graphics-GUI/Imagesize.htm

JPEG is a compression algorithm, hence the 5 KB. Save it as a 32-bit or higher bitmap to get the maximum possible size for that image. Also, your question has a lot of scope for improvement: give context, reasons, and the methods you have already tried.
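If the goal is simply a bigger file on disk, here is a minimal Java sketch along the lines of the tutorials linked above (the file names are placeholders, and the resulting size depends on the image, so hitting exactly 15 KB is not guaranteed): re-save the image either as an uncompressed BMP or as a JPEG at maximum quality.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.ImageOutputStream;

public class GrowJpeg {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("input.jpg")); // the 5 KB source

        // Option 1: save as an uncompressed bitmap (size is roughly width * height * 3 bytes).
        ImageIO.write(img, "bmp", new File("output.bmp"));

        // Option 2: re-encode as JPEG with the least compression the writer allows.
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(1.0f); // 1.0 = best quality, largest file
        try (ImageOutputStream out = ImageIO.createImageOutputStream(new File("output.jpg"))) {
            writer.setOutput(out);
            writer.write(null, new IIOImage(img, null, null), param);
        }
        writer.dispose();
    }
}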

Related

How to convert scanned document images to a PDF document with high compression?

I need to convert scanned document images to a PDF document with high compression. The compression ratio is very important. Can someone recommend a solution in C# for this task?
Best regards, Alexander
There is a free program called PDFBeads that can do it. It requires Ruby, ImageMagick and optionally jbig2enc.
The PDF format itself will probably add next to no overhead in your case. I mean your images will account for most of the output file size.
So, you should compress your images with the highest possible compression. For black-and-white images you might get the smallest output using the FAX4 or JBIG2 compression schemes (both supported in PDF files).
For other images (grayscale, colour), either use the smallest possible size, lowest resolution and quality, or convert the images to black-and-white and use the FAX4/JBIG2 compression scheme.
Please note that you will most probably lose some detail of the images when converting them to black-and-white.
If you are looking for a library that can help you with recompression, then have a look at the Docotic.Pdf library (disclaimer: I am one of the developers of the library).
The Optimize images sample code shows how to recompress images before adding them to a PDF. The sample shows how to recompress with JPEG, but for FAX4 the code will be almost the same.
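For readers who can use Java instead of C#, here is a minimal sketch with Apache PDFBox (my own substitution, not the Docotic.Pdf library mentioned above; the file names and the 0.25 JPEG quality are just illustrative) showing the same idea of recompressing a scan before embedding it in the PDF:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDPageContentStream;
import org.apache.pdfbox.pdmodel.common.PDRectangle;
import org.apache.pdfbox.pdmodel.graphics.image.JPEGFactory;
import org.apache.pdfbox.pdmodel.graphics.image.PDImageXObject;

public class ScansToPdf {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = new PDDocument()) {
            BufferedImage scan = ImageIO.read(new File("scan-0001.png"));

            // Recompress the scan as a low-quality JPEG before embedding it.
            // For 1-bit black-and-white scans, recent PDFBox versions also offer
            // CCITTFactory.createFromImage(doc, scan), i.e. Group 4 / FAX4,
            // which usually gives a much smaller result, as the answer above suggests.
            PDImageXObject image = JPEGFactory.createFromImage(doc, scan, 0.25f);

            PDPage page = new PDPage(new PDRectangle(scan.getWidth(), scan.getHeight()));
            doc.addPage(page);
            try (PDPageContentStream cs = new PDPageContentStream(doc, page)) {
                cs.drawImage(image, 0, 0, scan.getWidth(), scan.getHeight());
            }
            doc.save(new File("scans.pdf"));
        }
    }
}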

Batch files for saving several PDFs as reduced-size PDFs

I have several PDFs on my server machine. They are taking up a lot of space. We are trying to reduce the size of the PDFs to optimize scrolling performance and reduce space consumption. We have thought of using the Adobe Acrobat 11 SDK. While using the trial, I have found a way of reducing PDF size via
File > Save As Other > Reduced Size PDF
Is there any batch file that can reduce the size of the PDFs in the given folders? Alternatively, a JavaScript or IAC solution would also be possible.
Any help would be greatly appreciated.
Thanks.
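No Acrobat-based batch solution is shown here, but as an alternative sketch (my own suggestion, not the Acrobat SDK): if Ghostscript is installed on the server, a small Java program can walk a folder and re-save every PDF with Ghostscript's pdfwrite device, which gives results broadly comparable to Acrobat's Reduced Size PDF. The folder handling and the /ebook preset are assumptions to adjust:

import java.io.File;

public class BatchShrinkPdfs {
    public static void main(String[] args) throws Exception {
        File folder = new File(args.length > 0 ? args[0] : ".");
        File[] pdfs = folder.listFiles((dir, name) -> name.toLowerCase().endsWith(".pdf"));
        if (pdfs == null) return;

        for (File pdf : pdfs) {
            File reduced = new File(folder, "reduced-" + pdf.getName());
            // /ebook downsamples images to roughly 150 dpi; /screen is smaller still.
            // On Windows the Ghostscript executable is typically gswin64c instead of gs.
            Process gs = new ProcessBuilder(
                    "gs", "-sDEVICE=pdfwrite", "-dCompatibilityLevel=1.4",
                    "-dPDFSETTINGS=/ebook", "-dNOPAUSE", "-dBATCH", "-dQUIET",
                    "-sOutputFile=" + reduced.getAbsolutePath(),
                    pdf.getAbsolutePath())
                    .inheritIO()
                    .start();
            gs.waitFor();
        }
    }
}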

What is the maximum extent of compressing a PDF file?

Whenever I try to compress a PDF file to the lowest possible size, using either Ghostscript, pdftk or pdfopt, I end up with a file close to half the size of the original. But lately I have been getting files in the 1000 MB range, which compress down to, say, a few hundred MB. Can we reduce them further?
The PDF is made from JPG images at high resolutions; can't we reduce the size of those images and bring in some more reduction in size?
As far as I know, without degrading the JPEG streams and losing quality, you can try the special feature offered by Multivalent:
https://rg.to/file/c6bd7f31bf8885bcaa69b50ffab7e355/Multivalent20060102.jar.html
java -cp path/to.../multivalent.jar tool.pdf.Compress -compact file.pdf
The resulting output will be compressed in a special way; the resulting file needs the Multivalent browser to be read again.
It is unpredictable how much space you can save (many times you cannot save any further space).

Efficient thumbnail generation of huge PDF files?

In a system I'm working on we're generating thumbnails as part of the workflow.
Sometimes the PDF files are quite large (print size 3 m²) and can contain huge bitmap images.
Are there any thumbnail-generation programs that are optimized for memory footprint when handling such large PDF files?
The resulting thumbnail can be PNG or JPG.
ImageMagick is what I use for all my CLI graphics, so maybe it can work for you:
convert foo.pdf foo-%d.png
For a three-page PDF, this produces three separate PNG files:
foo-0.png
foo-1.png
foo-2.png
To create only one thumbnail, treat the PDF as if it were an array ([0] is the first page, [1] is the second, etc.):
convert foo.pdf[0] foo-thumb.png
Since you're worried about memory, you can restrict memory usage with the -cache option:
-cache threshold: megabytes of memory available to the pixel cache. Image pixels are stored in memory until threshold megabytes of memory have been consumed. Subsequent pixel operations are cached on disk. Operations to memory are significantly faster, but if your computer does not have a sufficient amount of free memory you may want to adjust this threshold value.
So to thumbnail a PDF file and resize it, you could run this command, which should have a maximum memory usage of around 20 MB:
convert -cache 20 foo.pdf[0] -resize 10%x10% foo-thumb.png
Or you could use -density to specify the resolution at which the PDF page is rasterized (it has to come before the input file; the default is 72 DPI, so values below 72 scale the page down, while something like 900 would scale it up a lot):
convert -cache 20 -density 36 foo.pdf[0] foo-thumb.png
Should you care? Current affordable servers have 512 GB of RAM. That is enough to hold an uncompressed full-colour bitmap roughly 350 inches (about 9 m) square at 1200 dpi, assuming 3 bytes per pixel. The performance hit you take from using disk is large.

Create discrepancy between size on disk and actual size in NTFS

I keep finding files which show a size of 10 KB but a size on disk of 10 GB. I am trying to figure out how this is done; does anyone have any ideas?
You can make sparse files on NTFS, as well as on any real filesystem. :-)
Seek to (10 GB - 10 kB), write 10 kB of data. There, you have a so-called 10 GB file, which in reality is only 10 kB big. :-)
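A minimal Java sketch of that trick (the file name and sizes are placeholders; StandardOpenOption.SPARSE is only a hint, which NTFS honours):

import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class SparseFileDemo {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("sparse.bin");
        long tenGb = 10L * 1024 * 1024 * 1024;
        byte[] tail = new byte[10 * 1024]; // the only 10 kB that is really stored

        try (SeekableByteChannel ch = Files.newByteChannel(file,
                StandardOpenOption.CREATE_NEW,
                StandardOpenOption.WRITE,
                StandardOpenOption.SPARSE)) {
            ch.position(tenGb - tail.length); // seek to (10 GB - 10 kB)
            ch.write(ByteBuffer.wrap(tail));  // Size: ~10 GB, Size on disk: ~10 kB
        }
    }
}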
You can create streams in NTFS files. It's like a separate file, but with the same filename. See here: Alternate Data Streams
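A small sketch of writing an alternate data stream from Java (the stream name is made up; the java.io classes accept the file:stream syntax on Windows, while java.nio.file.Paths may reject the colon):

import java.io.FileOutputStream;

public class AdsDemo {
    public static void main(String[] args) throws Exception {
        // Writes into the alternate data stream "payload" attached to notes.txt on NTFS.
        // The bytes belong to notes.txt but are not part of its normal (main stream) size.
        try (FileOutputStream out = new FileOutputStream("notes.txt:payload")) {
            out.write(new byte[1024 * 1024]); // 1 MB of data hidden in the stream
        }
    }
}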
I'm not sure about your case (or it might be a mistake in your question), but when you create an NTFS sparse file it will show different sizes for these fields.
When I create a 10 MB sparse file and fill it with 1 MB of data, Windows Explorer will show:
Size: 10 MB
Size on disk: 1 MB
But in your case it's the opposite (or a mistake).