Differences between PDType0Font and PDType1Font file sizes - pdfbox

I've started to use PDType0Font instead of PDType1Font (due to an IllegalArgumentException) and noticed that the PDF files I generate are now much larger: one jumped from 1.5MB to 2.8MB, another from 19KB to 298KB.
I also noticed that the size increase is directly related to the number of form fields that I need to fill with data.
Is this the expected behavior? If so, is there a 'best practice' way to use PDType0Font? Is there a way to optimize it?
UPDATE: I've changed my code to prevent subsetting.
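The change looks roughly like this (a minimal sketch; the font path is a placeholder, and the three-argument PDType0Font.load overload with the embedSubset flag is the PDFBox 2.x API):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.font.PDType0Font;

public class FontLoading {
    static PDType0Font loadFullFont(PDDocument doc) throws IOException {
        try (InputStream in = new FileInputStream("fonts/MyFont.ttf")) {
            // Third argument is embedSubset: false embeds the full font once,
            // true (the default when the flag is omitted) builds a subset.
            return PDType0Font.load(doc, in, false);
        }
    }
}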
Thanks.

How to fix the auto code formatting in Pharo?

When I save a method and come back to it later, all of my variable names become temp, all of my parameters become arg, and the code indentation gets changed.
Any thoughts on how I can fix this?
The behaviour that you are experiencing is not code formatting at all. Your image is hitting an issue where it can't access the original source code. It therefore falls back to decompiling the method bytecode. During compilation the variable names are erased, so they can't be re-created during decompilation, and generic substitutes are used instead.
Now, why you are missing sources is another question. First of all, it's important to check whether you are getting any exceptions. These often happen when you open or save your image, but they may also occur when you save methods.
Depending on the Pharo version, you may be missing the .changes or .sources files. This often happens when you move an image without moving its supporting files.

! Dimension too large when knitting file to PDF using rmarkdown in RStudio

I'm receiving the following error when I try to knit to a PDF:
! Dimension too large.
\fb@put@frame ...p \ifdim \dimen@ >\ht \@tempboxa
\fb@putboxa #1\fb@afterfra...
It's an extremely long block of code (about 5000 lines) that I need to knit into a PDF, mostly data preprocessing. The output itself is quite small, maybe a line or so. Has anyone had this issue with huge blocks of code? If so, could you tell me how you solved it? I'm open to suggestions.
That's a LaTeX framed package error. RMarkdown tries to put all of that code into a single environment (I believe it's a snugshade environment, but I might be wrong), and that environment isn't built for content that stretches over many pages. The most I managed to get was about 1300 lines, which were broken up into 48 pages of code.
The simplest solution would be to break that up into 4 or 5 pieces, but that might not be easy for you to do.
Next simplest might be not to show it as code at all: use echo = FALSE in the code chunk that runs it, and include it some other way (e.g. in a verbatim environment, or using the listings package). With that much code, showing it as a couple of hundred pages of text doesn't really make much sense anyway.

Realm database performance

I'm trying to use this database with react-native. First of all, I've found out that it can't retrieve plain objects - I have to retrieve all of the properties in the desired object tree recursively, and it takes about a second per object (~50 numeric props). Slow-ish!
Now, I've somehow imported ~9000 objects into it (each up to 1000 chars including titles). It looks like there is no easy way to import data, at least none is described in the docs. Anyway, that's acceptable. But now I've found that my database file (default.realm) is 3.49GB (!). The JSON file I was importing is only 6.5MB. I've opened default.realm with Realm Browser and it shows only those ~9000 objects, nothing else. Why so heavy?
Either I don't understand something very fundamental about this database, or it is complete garbage. I really hope I'm wrong. What am I doing wrong?
Please make sure you are not running in Chrome debug mode; that is probably why things seem so slow. As for the file size issue, it would be helpful if you posted your code to help figure out why that is happening.

PS PDF Conversion text issues

I'm having a bit of a weird issue that I can't for the life of me figure out how to solve. It may just be a blond moment on my part, but I've been scratching my head over this one for a while.
Basically, I've made a load of Photoshop PDF files in CS6 and used Acrobat Pro DC to merge them into one big file. Granted, this may not be the most efficient way to work with PDFs, but it seemed like the way that made the most sense to me. Anyway, I've been doing this for a while without any real issues, but when I did it today I hit a little snag. Several pages in the document come out with their text all mangled. The odd thing is that the text looks fine in the DC viewer, but if I print the pages or view them in Windows Reader they look deformed.
I've tried a load of different ways to solve the problem but can't find the answer I'm looking for. I've tried a number of different file formats and different printing settings (which I now realize are useless, since it's messed up in Reader as well as in print). I've also tried rasterizing the text, which seems to work, but obviously the text then becomes unselectable in PDF viewers, so I'd prefer to keep that as a last resort and find the actual root of the problem.
I can only assume it's an issue on Photoshop's side, as the majority of the pages come out fine and they all use the same base template and fonts.
Any insight into this will be really helpful.

Error checking overkill?

What error checking do you do? What error checking is actually necessary? Do we really need to check if a file has saved successfully? Shouldn't it always work if it's tested and works ok from day one?
I find myself error checking for every little thing, and most of the time it feels like overkill. Things like checking whether a file has been written to the file system successfully, or whether a database statement failed... shouldn't these be things that either work or don't?
How much error checking do you do? Are there elements of error checking that you leave out because you trust that it'll just work?
I'm sure I remember reading somewhere something along the lines of "don't test for things that'll never really happen"... I can't remember the source though.
So should everything that could possibly fail be checked for failure? Or should we just trust those simpler operations? For example, if we can open a file, should we check to see if reading each line failed or not? Perhaps it depends on the context within the application or the application itself.
It'd be interesting to hear what others do.
UPDATE: As a quick example: I save an object that represents an image in a gallery, then save the image to disc. If the saving of the file fails, I'll have no image to display even though the object thinks there is an image. I could check for failure of the image saving to disc and then delete the object, or alternatively wrap the image save in a transaction (unit of work) - but that can get expensive when using a db engine that uses table locking.
Thanks,
James.
If you run out of free space, try to write a file, and don't check for errors, your application will fail silently or with confusing messages. I hate it when I see this in other apps.
I'm not addressing the entire question, just this part:
So should everything that could possibly fail be checked for failure? Or should we just trust those simpler operations?
It seems to me that error checking is most important when the NEXT step matters. If failure to open a file will allow error messages to get permanently lost, then that is a problem. If the application will simply die and give the user an error, then I would consider that a different kind of problem. But silently dying, or silently hanging, is a problem that you should really do your best to code against. So whether something is a "simple operation" or not is irrelevant to me; it depends on what happens next, or what would be the result if it failed.
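A tiny Java sketch of that difference (the method names and the Runnable standing in for "the save operation" are made up for the example):

import java.util.logging.Logger;

public class SaveHandling {
    private static final Logger LOG = Logger.getLogger(SaveHandling.class.getName());

    // Bad: the failure vanishes and the app silently carries on in a broken state.
    static void saveSwallowing(Runnable save) {
        try {
            save.run();
        } catch (RuntimeException e) {
            // nothing - this is the "silently dying" case
        }
    }

    // Better: record the failure and let the caller decide what happens next.
    static void saveSurfacing(Runnable save) {
        try {
            save.run();
        } catch (RuntimeException e) {
            LOG.severe("save failed: " + e.getMessage());
            throw e; // the next step matters, so don't hide the failure
        }
    }
}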
I generally follow these rules (sketched in code after the list).
Excessively validate user input.
Validate public APIs.
Use Asserts that get compiled out of production code for everything else.
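As a rough Java illustration of those rules (the class and method names are invented for the example):

public final class AccountService {

    // Public API: validate arguments aggressively, every time.
    public void rename(String accountId, String newName) {
        if (accountId == null || accountId.isEmpty()) {
            throw new IllegalArgumentException("accountId must not be empty");
        }
        if (newName == null) {
            throw new IllegalArgumentException("newName must not be null");
        }
        doRename(accountId.trim(), newName.trim());
    }

    // Internal helper: callers have already validated, so an assert is
    // enough (in Java, asserts only run when the JVM is started with -ea).
    private void doRename(String accountId, String newName) {
        assert !accountId.isEmpty() : "caller must pass a validated id";
        // ... actual rename logic ...
    }
}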
Regarding your example...
I save an object that represents an image in a gallery, then save the image to disc. If the saving of the file fails, I'll have no image to display even though the object thinks there is an image. I could check for failure of the image saving to disc and then delete the object, or alternatively wrap the image save in a transaction (unit of work) - but that can get expensive when using a db engine that uses table locking.
In this case, I would recommend saving the image to disk first before saving the object. That way, if the image can't be saved, you don't have to try to roll back the gallery. In general, dependencies should get written to disk (or put in a database) first.
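A minimal sketch of that ordering in Java (the repository interface is a hypothetical stand-in for the real persistence layer):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class GalleryImages {

    // Hypothetical persistence interface, standing in for the real database layer.
    public interface GalleryRepository {
        void saveRecord(String imageFileName);
    }

    public static void addImage(Path target, byte[] imageBytes, GalleryRepository repo)
            throws IOException {
        // Write the dependency (the image file) first; if this throws,
        // no gallery record is ever created that points at a missing image.
        Files.write(target, imageBytes);

        // Only record the object once the file is safely on disk,
        // so no rollback of the gallery record is ever needed.
        repo.saveRecord(target.getFileName().toString());
    }
}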
As for error checking... check for errors that make sense. If fopen() gives you a file ID and you don't get an error, then you don't generally need to check for fclose() on that file ID returning "invalid file ID". If, however, file opening and closing are disjoint tasks, it might be a good idea to check for that error.
This may not be the answer you are looking for, but there is only ever a 'right' answer when looked at in the full context of what you're trying to do.
If you're writing a prototype for internal use and the odd error doesn't matter, then you're wasting time and company money by adding in the extra checking.
On the other hand, if you're writing production software for air traffic control, then the extra time to handle every conceivable error may be well spent.
I see it as a trade-off: the extra time spent writing the error code versus the benefit of having handled that error if and when it occurs. Religiously handling every error is not necessarily optimal, IMO.