How can I reduce the file size of the Viewport3D XAML? - xaml

I have a 3ds Max file for a shape. I convert it to a .3ds file, then convert that to a XAML file with Zam3D. But the resulting file is too big to load; my application crashes with an "Out of Memory" exception. What can I do to reduce the XAML file size?

How large is your file? I am working with 3D model files of about 8 MB that are very detailed. The largest I imported successfully was 13.4 MB, and a 29 MB model crashed with the same exception you got, but that one was not well designed.
To convert .3ds into XAML I use the Reader3ds class from Wpf-Graphics, which saves you the step of converting twice (through Zam3D). Reader3ds can read even large files, and you can work with the resulting elements comfortably. Besides, I am not happy with Zam3D; I suspect it writes more information into the file than is needed. Even the geometry I get back is not the same as before, and lights are added that I never used.
To reduce your XAML you can also try working with a ResourceDictionary: swap the information out into another file and load it only when it is needed.
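A minimal sketch of what I mean, in C#/WPF. It assumes the Zam3D output has been moved into a separate dictionary file; the file name "ShapeGeometry.xaml" and the resource key "shapeModel" are just placeholders:

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media.Media3D;

public static class ShapeLoader
{
    // Load the heavy geometry only when the shape is actually needed,
    // instead of parsing the whole exported XAML at application start-up.
    public static void AddShape(Viewport3D viewport)
    {
        var dictionary = new ResourceDictionary
        {
            // Assumes ShapeGeometry.xaml is compiled into the project as a resource/page.
            Source = new Uri("ShapeGeometry.xaml", UriKind.Relative)
        };

        var model = (Model3D)dictionary["shapeModel"];
        viewport.Children.Add(new ModelVisual3D { Content = model });
    }
}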
Hope this helps
Stef

Related

PDF Table Lines Missing from GhostScript

I am trying to convert a PDF file to an image format (ideally PNG), but some of the table lines do not render in the output, which is an issue since the purpose of my conversion is to use computer vision on it.
I unfortunately do not have access to the file used to generate the PDF.
Thank you in advance for your help!
Attached is the Ghostscript rendering vs. the actual PDF (two screenshots were attached: "Original" and "GhostScript").
EDIT: Thanks for the answers. Here is what I had already tried:
Changing the scaling and changing the antialiasing (I doubt that any combination of these will work in Ghostscript at this point)
Converting to PostScript and then to PNG/PDF
Saving from a Browser
Saving from various virtual printers to PDF
Using Poppler to do the rendering
All to no avail. Digging deeper, I found some interesting things which may be helpful. Ghostscript does recognize the lines when using -sDEVICE=x11 and -sDEVICE=ps2write. That is, using Ghostscript to visualize the PDF does work, but not to convert it into anything other than PostScript.
Also, printing to a new PDF from Adobe Acrobat does fix my problem; however, I need to be able to do this from the command line on thousands of files.
Hope this helps!
EDIT2:
Link to an affected file:
https://transfer.sh/PuIF90/e176ad9824ddc6cb5e6aead2d389c131-filer.pdf
I thought I would share the fix I found. It turns out that a number of the PDFs we need to process were generated by a specific HTML5-to-PDF conversion tool that draws each line of the PDF as a rectangle of size 0. My solution has been to automate decompressing the PDFs and searching the resulting text for "A A A A re", where each "A" is a number. If the last or next-to-last A is zero, I change it to 1.
For instance (once again, after decompressing the PDF):
1000 2000 0 14 re
to
1000 2000 1 14 re
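For what it's worth, this is roughly how I automated it, as a C# sketch. It assumes the PDF has already been decompressed into mostly plain text (for example with qpdf --qdf --object-streams=disable in.pdf out.pdf), and the file arguments are placeholders:

using System;
using System.Globalization;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

class RectangleFixer
{
    static void Main(string[] args)
    {
        // Latin-1 keeps any remaining binary stream bytes intact on the round trip.
        var latin1 = Encoding.GetEncoding("ISO-8859-1");
        string text = File.ReadAllText(args[0], latin1);

        var number = @"-?\d+(?:\.\d+)?";
        string pattern = $"({number}) ({number}) ({number}) ({number}) re";

        string fixedText = Regex.Replace(text, pattern, m =>
        {
            // "x y w h re": groups 3 and 4 are the width and height of the rectangle.
            double w = double.Parse(m.Groups[3].Value, CultureInfo.InvariantCulture);
            double h = double.Parse(m.Groups[4].Value, CultureInfo.InvariantCulture);
            if (w == 0) w = 1;   // zero-width rectangle: an invisible vertical line
            if (h == 0) h = 1;   // zero-height rectangle: an invisible horizontal line
            return string.Format(CultureInfo.InvariantCulture, "{0} {1} {2} {3} re",
                m.Groups[1].Value, m.Groups[2].Value, w, h);
        });

        File.WriteAllText(args[1], fixedText, latin1);
    }
}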
Hope this helps someone else out there. Let me know if there is a more elegant way of doing this; I am still a beginner in all things PDF.

Large PDF sizes but lower quality

I'm organizing a large number of PDFs, some of which need to be inverted or have their contrast adjusted. But when I use convert to modify a PDF, the new file becomes much larger than the original, even though I use the density and quality options to try to preserve the original quality. A typical command looks like this:
convert -density 300 OriginalPDF.pdf -quality 100 -negate NewPDF.pdf
This results in a pdf that looks very nearly as sharp as the original, but when switching between the two (with the original inverted within the pdf viewer's settings (qpdfview)), one notices that the new one seems very slightly shrunken and that all the lines become slightly thicker/bolder. Obviously this isn't too bad, but shouldn't I be able to invert the colors with almost no noticeable changes?
This slight change becomes even more ridiculous when one notices the size disparity: the original image was 276 KB and the modified file is 28 MB. That's more than 100 times larger! Given that I have hundreds of PDFs, out of which more than 20 or 30 need to be (custom) modified, how can I keep the total size near the original total size, while retaining quality?
ImageMagick's documentation says:
However the reading of these formats is very complicated, as they are full computer languages designed specifically to generate a printed page on high quality laser printers. This is well beyond the scope of ImageMagick, and so it relies on a specialized delegate program known as "ghostscript" to read, and convert Postscript and PDF pages to a raster image.
So ImageMagick converts the PDF to a raster image first and then builds a simple PDF from that raster image. The output PDF is unsearchable and contains no vectors, no hidden text, etc., just a page-wide raster image. But a PDF (or PostScript file) is not just a set of images; it is a set of commands, text, vectors, fonts, and even sub-scripts (to calculate output colors, for example). A PDF is more like an application than a static image.
Anyway, I suppose you may have two types of input PDF files:
with page-wide images inside (for example, scanned documents). Only this first type should be processed with ImageMagick; these files will be converted to nearly the same file size.
with pure text and vectors inside (for example, PDF invoices). Files of this type should not be processed with ImageMagick, as the conversion damages the input (and ultimately increases the output file size). If you still need to adjust the contrast or the compression of images inside files of this type, consider using Ghostscript directly; check this tutorial.
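For the compression side, for example, a single Ghostscript call keeps the text and vectors as vectors and only re-encodes and downsamples the embedded images (file names are placeholders, and /ebook is just one of the preset quality levels):
gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -dNOPAUSE -dBATCH -sOutputFile=Compressed.pdf Original.pdf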

Adjusting format of PDF to print it faster

I am using a combination of iTextSharp and PdfSharp to assemble a large PDF file for printing to a Canon Oce VarioPrint 6000 series printer. The PDF is replacing a postscript file.
Both this new file and the old are transferred to the printer via an LPR command.
The postscript file would take maybe 10 minutes to rip to the printer. My PDF version of the same file is taking over 30 minutes to process before it is ready to print.
Can anyone give me pointers into ways I could change the way this file is written / created that would decrease the processing time on the Vario?
EDIT: I took the file that was ripping so slowly and ran it through Acrobat Preflight, and it found many RGB images that it wanted to convert to CMYK. When I look at the PDF, though, they are all black-and-white logos, so I had Preflight do a fix-up to convert all images to print black and white.
I also noticed that Preflight was consolidating backgrounds. Half of the pages have the same logo on them, so leveraging this consolidation is probably also helpful.
When I LPR'd that file, it copied and ripped in less than 5 minutes! So I guess the real question is: how can I do that programmatically?
I am modifying the title and tags.
Thanks!
An equivalent result to the Preflight repair process can, in this case, be obtained by using iText (or in my case, iTextSharp). I replaced the PdfSharp method of aggregating the PDFs with the PdfSmartCopy class. Combined with iText's reader.RemoveUnusedObjects(), this brought down the size of the output PDF significantly, and my rip time to the printer dropped to the same as or below the rip times we previously had with the PostScript file. Very pleased.
So the RGB images that were probably contributing to the long processing time were cut down by the smart copy removing duplicates.
More info on PdfSmartCopy can be found at: http://api.itextpdf.com/itext/com/itextpdf/text/pdf/PdfSmartCopy.html
and in Bruno's book, iText In Action, more specifically in Chapter 6.
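For reference, the PdfSmartCopy aggregation described above looks roughly like this (an iTextSharp 5.x sketch; the file names and the helper method are placeholders, not the exact production code):

using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

class SmartMerge
{
    static void Merge(string[] inputFiles, string outputFile)
    {
        var document = new Document();
        var copy = new PdfSmartCopy(document, new FileStream(outputFile, FileMode.Create));
        document.Open();

        foreach (string file in inputFiles)
        {
            var reader = new PdfReader(file);
            reader.RemoveUnusedObjects();            // drop objects nothing references

            // PdfSmartCopy detects identical objects (e.g. the repeated logo)
            // and writes them to the output only once.
            for (int page = 1; page <= reader.NumberOfPages; page++)
                copy.AddPage(copy.GetImportedPage(reader, page));

            copy.FreeReader(reader);
            reader.Close();
        }

        document.Close();                            // also closes the PdfSmartCopy writer
    }
}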

Extract large icon from .exe file in VB.net

I'm trying to get several large (48x48, 256x256, etc.) icons from an .exe file and place them in a ListView control. Icon.ExtractAssociatedIcon() only gets a 32x32 image, which is too small. Are there any alternative methods I could use? I've looked at solutions like this one, but they don't work.

Saving "heavy" figure to PDF in MATLAB - rendering problem

I generate a figure in MATLAB with a large number of elements (100,000+) and want to save it to a PDF file. With the zbuffer or painters renderer I get a very large file (over 4 MB) that opens slowly; all points are in vector format. Using the OpenGL renderer rasterizes the figure in the PDF, which is fine for the plot but not good for the text labels. The file size is about 150 KB.
Try this simplified code, for example:
x=linspace(1,10,100000);
y=sin(x)+randn(size(x));
plot(x,y,'.')
set(gcf,'Renderer','zbuffer')
print -dpdf -r300 testpdf_zb
set(gcf,'Renderer','painters')
print -dpdf -r300 testpdf_pa
set(gcf,'Renderer','opengl')
print -dpdf -r300 testpdf_op
The actual figure is much more complex with several axes and different types of plots.
Is there a way to rasterize the figure, but keep text labels as vectors?
Another problem with OpenGL is that it does not work in terminal mode (-nosplash -nodesktop -nodisplay) under Mac OS X; it looks like OpenGL is not supported there. I have to use terminal mode for automation. The MATLAB version I run is 2007b, on Mac OS X Server 10.4.
This is a funny one. Your problem is not MATLAB, it's Ghostscript (MATLAB creates PDFs by calling Ghostscript, at least on Windows). When I run
x=linspace(1,10,100000);
y=sin(x)+randn(size(x));
plot(x,y,'.')
print -dpsc2 test.ps
I got a 2 MB PS file (all vector, of course), which compressed down to a 164 KB ZIP. One would expect more or less the same result when converting PS to PDF, but ps2pdf test.ps produced your 4 MB file!
Since you are on a Mac, you probably have Distiller. I'd give it a try — generate PS files as above, and then run them through Distiller; you should get a 150K vector PDF.
If you insist on rasterizing, I can suggest printing the figure without any axes or labels to a tiff, opening the tiff, and recreating axes and labels on top of it.
If you don't want to go with a 2D histogram (i.e. an image where pixel brightness corresponds to density of points) as BlessedKey suggests, it looks like the only good way is to do the rasterizing yourself, as mentioned by AB.
getframe followed by frame2im seems to be the way to go for that. Unfortunately, getframe returns empty if you run with -nodisplay. Therefore, you'd have to save the figure as .fig, and on another computer run a script that
opens the figure, gets the content of the axes with getframe, displays the image from getframe and then saves to pdf.
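A minimal sketch of such a script, assuming the figure was saved as heavy.fig on the headless machine (file and label names are placeholders):

fig = openfig('heavy.fig');                    % open the figure saved from the -nodisplay run
ax = findobj(fig, 'Type', 'axes');
frame = getframe(ax(1));                       % rasterize the contents of the axes
img = frame2im(frame);

figure                                         % rebuild a light figure around the bitmap
image(img)
set(gca, 'XTick', [], 'YTick', [])             % hide the raster pixel coordinates
title('my plot'); xlabel('x'); ylabel('y')     % re-created labels stay as vector text
print('-dpdf', '-r300', 'testpdf_raster')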
By the way, as an alternative to simple plotting or a 2D histogram, you may also want to look into scattercloud, which combines plotting the points with density information.
If at all possible you should try to subsample your problem before building the illustration. If you are plotting points on a curve, then 10,000 is probably more than you need; a modern printer is only about 600 DPI, after all.
If the points are illustrating a cloud with some density properties, a better solution may be to build a two-dimensional histogram first and illustrate that with imshow or imagesc, as sketched below.
If multiple clouds are being illustrated with different colors, you may be interested in building one such image for each cloud and then combining them with transparency.
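A minimal 2D-histogram sketch along those lines, using the x and y from the example above (hist3 needs the Statistics Toolbox):

[counts, centers] = hist3([x(:) y(:)], [200 200]);   % bin the point cloud
imagesc(centers{1}, centers{2}, counts.')             % pixel brightness = point density
axis xy                                                % put the y axis back the usual way up
colormap(hot)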