Maximum near-lossless compression via dcm2dcm - dcm4che

I have uncompressed CT DICOM files and I am using dcm2dcm to compress them losslessly, going from ~500 KB to ~120 KB:
dcm2dcm --j2kr src.dcm dest.dcm
I want to push the compression much further. It will have to be lossy, but as close to near-lossless as possible; I just don't know which encodingRate is best. My goal is to get ~500 KB down to under 50 KB:
dcm2dcm --j2ki -Q [encodingRate] src.dcm dest.dcm
In the Oviyam viewer, images come down to 20-30 KB JPEGs and the image quality is quite good.
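Since the best rate depends on the images, one practical approach is to sweep a few candidate encodingRate values and compare the resulting file sizes, then check the image quality by eye. Below is a minimal sketch of such a sweep, assuming dcm2dcm is on the PATH and takes --j2ki and -Q exactly as in the command above; the candidate rates are placeholders, not recommendations:

import subprocess
from pathlib import Path

src = Path("src.dcm")
# Candidate encoding rates to try -- placeholders, not recommendations.
for rate in (2.0, 1.5, 1.0, 0.75, 0.5):
    dest = Path(f"dest_q{rate}.dcm")
    subprocess.run(["dcm2dcm", "--j2ki", "-Q", str(rate), str(src), str(dest)], check=True)
    print(f"encodingRate={rate}: {dest.stat().st_size / 1024:.0f} KB")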

Related

parquet file codec conversion

I have a Parquet file whose compression codec is BROTLI, which is not supported by Trino.
Therefore, I need to convert it to a supported codec such as GZIP or SNAPPY. The conversion doesn't seem straightforward, or at least I could not find any Python library that does it. Please share your ideas or strategies for this codec conversion.
You should be able to do this with pyarrow. It can read Brotli-compressed Parquet files.
import pyarrow.parquet as pq
table = pq.read_table(<filename>)
pq.write_table(table, <filename>)
This will save it as a snappy-compressed file by default. You can specify different compression schemes using the compression keyword argument.
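For example, a minimal sketch of the Brotli-to-GZIP conversion (the file names are placeholders):

import pyarrow.parquet as pq

# Read the Brotli-compressed file and rewrite it with GZIP compression.
table = pq.read_table("data_brotli.parquet")
pq.write_table(table, "data_gzip.parquet", compression="gzip")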

How to compress a pdf with images

I'm trying to compress a 30 MB PDF file which contains a scanned textbook.
I want to reduce the size to something less than 10 MB.
I have tried many programs, such as Ghostscript, Scribus, GIMP, Inkscape and more,
but with no luck.
Any ideas are appreciated.
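Since Ghostscript is already among the tools tried, it may be worth re-running it with an explicit downsampling preset before giving up on it. A minimal sketch, wrapped in Python for scripting; the file names are placeholders, and /ebook downsamples images to roughly 150 dpi:

import subprocess

# Re-distill the scanned PDF; /ebook downsamples images to ~150 dpi
# (use /screen for ~72 dpi if the result is still too large).
subprocess.run(
    [
        "gs", "-sDEVICE=pdfwrite", "-dCompatibilityLevel=1.4",
        "-dPDFSETTINGS=/ebook", "-dNOPAUSE", "-dBATCH", "-dQUIET",
        "-sOutputFile=book_small.pdf", "book_scanned.pdf",
    ],
    check=True,
)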

Batch extract Hex colour from images to file

I have around 10k images and I need to get the hex colour from each one. I can obviously do this manually with PS or other tools, but I'm looking for a solution that would ideally:
Run against a folder full of JPG images.
Extract the hex colour from the dead center of the image.
Output the result to a text file, ideally a CSV, containing the file name and the resulting Hex code on each row.
Can anyone suggest something that will save my sanity please? Cheers!
I would suggest ImageMagick, which is installed on most Linux distros and is available for OS X (via Homebrew) and Windows.
So, just at the command-line, in a directory full of JPG images, you could run this:
convert *.jpg -gravity center -crop 1x1+0+0 -format "%f,%[fx:int(mean.r*255)],%[fx:int(mean.g*255)],%[fx:int(mean.b*255)]\n" info:
Sample Output
a.png,127,0,128
b.jpg,127,0,129
b.png,255,0,0
Notes:
If you have more files in a directory than your shell can glob, you may be better off letting ImageMagick do the globbing internally rather than using the shell, with:
convert '*.jpg' ...
If your files are large, you may be better off processing them one at a time in a loop rather than loading them all into memory:
for f in *.jpg; do convert "$f" ....... ; done
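If a scripted route outside ImageMagick is acceptable, here is a minimal Python sketch using Pillow (my assumption; it is not mentioned in the thread) that samples the dead-center pixel of every JPG in the current folder and writes a filename,hex CSV:

import csv
from pathlib import Path
from PIL import Image

with open("colours.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for path in sorted(Path(".").glob("*.jpg")):
        with Image.open(path) as img:
            # Sample the pixel at the dead center of the image.
            r, g, b = img.convert("RGB").getpixel((img.width // 2, img.height // 2))
        writer.writerow([path.name, f"#{r:02X}{g:02X}{b:02X}"])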

Compress m4a file created on the iphone before uploading them to the server

I have merged two streams of CAF files into one stereo file in m4a/CAF format.
The properties of the file are the following:
44100 Hz, 16-bit stereo, 256 kb/sec
For a 31-second file I get 667 KB.
What can I do to reduce the size of this file after the fact?
Can I convert it to a single channel (mono)? Can I reduce the sample size or something like that?
I tried several sample applications out there, but none of them gave me a good solution.
Do you have any idea?
Using this command line on the Mac worked, but I don't know how to do it on the iPhone:
sudo afconvert -d aac -f 'caff' -b 32768 call_record.m4a test_32.caf
Normally you'd use the ExtAudioFile API to do the conversion. To reduce the size you could convert to a compressed format like AAC. See some sample code here: https://developer.apple.com/library/ios/samplecode/iPhoneExtAudioFileConvertTest/Introduction/Intro.html

JPG to PDF Conversion, How to Fit Full Page

I have page scans of various sizes in JPG format which I convert to a single PDF using ImageMagick. However, I noticed that every type of scan produces a different PDF page size, even if I use the -page A4 option in ImageMagick. I would like every JPG, whatever its size, to "fill" its PDF page, and every PDF page to be the same size. I also have access to tools like pdftk and pdfjam.
Any ideas?
As a hack you can use pdflatex and the wallpaper package. It does the trick, has the advantage over most other methods of not altering the image content (resolution, compression, pixel content), and adds only about 1.2 kB of overhead.
To keep the aspect ratio, use:
filename=test.jpg;
echo "\documentclass[a4paper]{article}\
\usepackage{wallpaper}\usepackage{grffile}\
\begin{document}\
\thispagestyle{empty}\
\ThisCenterWallPaper{1}{$filename}~\
\end{document}"\
| pdflatex --jobname "$filename";
rm "$filename".aux "$filename".log
To fill the page completely, use:
filename=test.jpg;
echo "\documentclass[a4paper]{article}\
\usepackage{wallpaper}\usepackage{grffile}\
\begin{document}\
\thispagestyle{empty}\
\ThisTileWallPaper{\paperwidth}{\paperheight}{$filename}~\
\end{document}"\
| pdflatex --jobname "$filename";
rm "$filename".aux "$filename".log
Finally, you can concatenate your pages using pdftk:
pdftk page1.pdf ... page2.pdf cat out final_document.pdf
When I used -density 50% with ImageMagick's convert, it managed to zoom lower-resolution images to bigger PDF pages.
This should do the trick, assuming your PDF output should have A4 sized pages (portrait):
convert -scale 595x842\! *.jpg output.pdf