How to deal with a PhantomJS core dump?

Earlier, PhantomJS core-dumped during a URL request and related processing. (My shell does not capture core files, so I can't tell you what happened.)
I presume this is a bug in PhantomJS. But more generally, what is the recommended best practice for handling fatal behavior in PhantomJS? Does the process just return non-zero if it fails?
Thanks.

PhantomJS has lots of little problems. A crash may happen because it's out of memory (even if you have free pagefile space), because you've reused one instance too many times without restarting it, because you tried loading a resource it doesn't process properly (like .otf web fonts), because there's an error in your script, because you tried rendering an image that's too big, or... well, you get the picture.
The best way to troubleshoot this is to add console.log() statements throughout your script (everywhere, including in all the WebPage callbacks) and use them to narrow down where the crash occurs.
The troubleshooting technique is rather crude, but I have used it to great effect in my own PhantomJS work.
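As a concrete illustration, here is a minimal sketch of that kind of instrumentation (the URL and filenames are placeholders, and onResourceError assumes PhantomJS 1.9+). The last message printed before a crash tells you roughly which phase died, and calling phantom.exit() with a non-zero code lets your shell detect the failure, which also answers the exit-status question:

var page = require('webpage').create();

phantom.onError = function (msg, trace) {
    // Script-level failure: report it and exit non-zero.
    console.log('phantom error: ' + msg);
    phantom.exit(1);
};

page.onConsoleMessage = function (msg) { console.log('page says: ' + msg); };
page.onError = function (msg) { console.log('page script error: ' + msg); };
page.onResourceRequested = function (req) { console.log('requesting ' + req.url); };
page.onResourceError = function (err) { console.log('resource failed: ' + err.url); };

console.log('opening page');
page.open('http://example.com/', function (status) {
    console.log('open finished, status = ' + status);
    if (status !== 'success') { phantom.exit(1); }
    console.log('rendering');
    page.render('out.png');
    console.log('render done');
    phantom.exit(0);
});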

Related

Pharo Smalltalk image unresponsive with FileStream on Windows 10

I'm using a Pharo 6.1 image to do some CSV file processing. Sometimes my image just becomes unresponsive — I have to terminate the Pharo VM from Task Manager.
Is it necessary to use something like:
ensure:[toStream close]
ensure:[fromStream close]
What's a simple but reliable approach to reading and especially writing files with Pharo?
The image is not really unresponsive, but the UI thread certainly is when you do a long operation like reading a big CSV file.
The best would be to fork a process to do it.
Like in:
[ target doSomeLongThing ] fork.
You can view the process in the process browser from the world menu and terminate it there.
Now that is not really something one will think about when trying things out interactively in a playground.
What I do to alleviate this situation is twofold:
Put these things in a test. Why? Because tests have a maximum duration, and if things get stuck, they come back with a time-exceeded notification.
Start the Zinc REPL on a port, so that I can access the system from a web browser if the UI thread is stuck, and restart the UI thread from there.
I wish the interrupt key worked in all cases, but it doesn't, and that is a solid annoyance.
I am of the opinion that images should be considered discardable artifacts, and a CI job should rebuild them regularly (e.g. daily) so that there is some recovery path.
This is not really cool, because being able to come back to an image without rebuilding stuff all the time is the whole point of exploratory work, and UI thread blockages get in the way of that. I hate that. I have an image that is stuck on restart because of some Glamour problem; it is infuriating, as it cannot be debugged or anything.
You can also use:
(FileLocator imageDirectory / 'somefile.csv') asFileReference readStreamDo: [ :stream |
    "do something with the stream" ].
This will do the ensure: bit for you. Cleaner. Easier.
For CSV specifically, also give NeoCSV a shot, as it will handle the file reading and writing for you as well.
HTH

Does Println() use memory in a live app?

In other words, should a production app eliminate all logging and printing to the console to reduce memory usage (or is the memory usage negligible)?
In general, logging takes some very small amount of device resources, but it is so small that you shouldn't feel any change unless you are printing a whole book to the console ;)
Despite this, I would recommend removing all logs that don't provide any valuable information to the user. Pure debug logs can be very annoying, and if someone works with your code in the future, they may call you unprofessional ;)
I would recommend removing logs from methods that are called frequently due to the nature of the app (e.g. an update method in a game loop). In such methods a single println can cause noticeable stuttering.
You can use a logging method that prints only in debug (scheme-wise) and does nothing on a release scheme.
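A minimal sketch of that pattern, written in JavaScript since the idea is language-agnostic (the DEBUG flag and debugLog name are hypothetical; in an Xcode project you would key the flag off the build configuration instead):

// Hypothetical flag: true in development builds, false in release.
var DEBUG = false;

function debugLog() {
    if (DEBUG) {
        console.log.apply(console, arguments);
    }
    // In release this is a near-free no-op, but note that the arguments
    // are still evaluated, which can matter inside a hot game loop.
}

debugLog('player position:', 10, 20); // prints only when DEBUG is true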
You could keep your logs in debug builds, but it's better to disable them completely in release: there have been apps that printed passwords and tokens to the log, which isn't the best thing to do.

Should app crash or continue on normally while noting the problem?

Options:
1) When there is bad input, the app crashes and prints a message to the console saying what happened
2) When there is bad input, the app throws away the input and continues on as if nothing happened (though noting the problem in a separate log file).
While 2 may seem like the obvious solution, the app is an engine and framework for game development, so if a user writes something and makes a mistake, it may be beneficial for that problem to be immediately obvious (the app crashing) rather than silently ignored: the user may forget to check the log for problems (easy to do if the programmed behavior isn't very noticeable on screen, so they never notice that it's missing).
There is no one-size-fits-all solution. It really depends on the situation and how bad the input is.
However, since you specifically mentioned this is for an engine or framework, then I would say it should never crash. It should raise exceptions or provide notable return codes or whatever is relevant for your environment, and then the application developer using your framework can decide how to handle. The framework itself should not make this decision for all apps that utilize the framework.
I would use exceptions if the language you are using allows them.
Since your framework will be used by other developers, you shouldn't really constrain their approach; let them catch your exceptions (or errors) and decide what to do.
Generally speaking nothing should crash on user input. Whether the app can continue with the error logged or stop right there is something that is useful to be able to configure.
If it's too easy to ignore errors, people will just do so, instead of fixing them. On the other hand, sometimes an error is not something you can fix, or it's totally unrelated to what you're working on, and it's holding up your current task. So it depends a bit on who the user is.
Logging libraries often let you switch logs on and off by module and severity. It might be that you want something similar, to let users configure the "stop on error" behaviour for certain modules or only when above a certain level of severity.
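A minimal sketch of that combination in JavaScript (all names here are hypothetical): the engine throws a descriptive error instead of crashing, and the application configures whether errors halt execution or are logged and skipped:

// Engine side: validate input and throw, rather than crash the process.
class BadInputError extends Error {
    constructor(message) {
        super(message);
        this.name = 'BadInputError';
    }
}

function loadSprite(def) {
    if (!def || typeof def.path !== 'string') {
        throw new BadInputError("loadSprite: missing or invalid 'path'");
    }
    return { path: def.path }; // real loading would happen here
}

// Application side: the engine's user chooses the policy.
const config = { stopOnError: true }; // false = log and continue

function tryLoad(def) {
    try {
        return loadSprite(def);
    } catch (err) {
        if (config.stopOnError || !(err instanceof BadInputError)) {
            throw err; // make the problem immediately obvious
        }
        console.error('ignored bad input: ' + err.message);
        return null;
    }
}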
Personally, I would avoid the crash approach and opt for (2). That said, make sure the error is detected and logged, and above all avoid swallowing errors (e.g. an empty catch block).
It is always helpful to have some kind of tracing/logging module, for instance later when you are doing performance tuning or general troubleshooting.
It depends on what the problem is. When I'm programming and writing error handling I use this as my mantra:
Is this exception really exceptional?
Meaning: is the error in the input, or whatever condition is "not normal", recoverable? In the case of a game, a file-not-found exception on a texture could be recoverable; you could show a default texture so you know something broke.
However, if you have textures in a compressed file and you keep getting checksum errors, that would be an exceptional exception and I would crash the game with the details.
It really boils down to: can the application keep running without issue?
The one exception to this rule, though (ha ha), is corruption: if something is corrupted, you can no longer trust your validation methods, and you should crash as quickly as you can to prevent the corruption from spreading.
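Sketched in JavaScript (loadTextureFromDisk, DEFAULT_TEXTURE, and the ENOENT check are hypothetical names), the mantra looks something like this: recover from a missing file, fail fast on corruption:

function getTexture(name) {
    try {
        return loadTextureFromDisk(name);  // may throw
    } catch (err) {
        if (err.code === 'ENOENT') {       // file not found: recoverable
            return DEFAULT_TEXTURE;        // an obvious placeholder texture
        }
        throw err;                         // checksum/corruption: fail fast
    }
}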

How to locally test cross-domain builds?

Using the dojo toolkit, what is the proper way of locally testing code that will be executed as cross-domain, without making the actual build?
It appears there are three possible options (each with its own drawbacks):
Using local (non xd) XMLHttpRequest dojo.require
This option does not really test the xd behavior, since it dojo.require[s] the js synchronously via XHR.
djConfig.debugAtAllCosts = true;
Although this option does load the required code asynchronously (via the 'script' tag), it also pulls the code in via XHR, parses the dojo.require[s] inside that, and pulls them in. This (using the loader_debug), again, is not what the loader_xd is doing. More info on this topic in a different question.
Creating a cross-domain build
This approach requires a build, which is not possible in the environment which I'm running the code in (We're using our own on-the-fly build process, which includes only the js that is necessary for a particular page. This process is not suitable for development).
Thus, my question: is there a way to use the loader_xd, which does not require an xd build (which adds the xd prefix / suffix to every file)?
The second way (using debugAtAllCosts) also makes me question the motivation for pre-parsing the dojo.require[s]. If the loader_xd will not (or rather cannot) pre-parse, why does the method created for testing/debugging do so?
peller has described the situation. If you wanted to just generate .xd.js file for your modules, you could look at util/buildscripts/jslib/buildUtilXd.js and its buildUtilXd.xdgen() function.
It would take a bit of work to make your own script, but you could look at util/buildscripts/build.js for pointers.
I am hoping that in the future (maybe the Dojo 2.x timeframe) we can switch to a loader that just uses script tags, with a module format that has a function wrapper around the module, written by the developer. That would allow the same module format to work in both the local and xd cases.
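For what it's worth, a wrapper of that shape is essentially the AMD define() format that Dojo eventually adopted in 1.7; roughly sketched (the module id and dependency here are illustrative):

// The module body lives inside a function, so the file can be fetched
// with a plain script tag and executed once its dependencies arrive;
// the same mechanism then works locally and cross-domain.
define('my/module', ['dojo/_base/lang'], function (lang) {
    return {
        greet: function (name) { return 'hello ' + name; }
    };
});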
I don't think there's any way to do XD loading without building and deploying it. Your analysis of the various options seems about right.
debugAtAllCosts is there specifically to solve a debugging problem: most browsers, until recently, could not do anything intelligent with code brought in through eval. Even today, Firefox reports exceptions in the console as occurring at the eval site (bootstrap.js), with a line number offset from the eval rather than from the actual eval buffer, and normally that eval buffer is anonymous. Firebug was the first debugger to jump through some hoops to improve this experience: it recognizes special metadata that Dojo's loader injects between the XHR and the eval to determine a file path for the source. WebKit/Safari have recently implemented this as well. I believe debugAtAllCosts pre-dates the XD loader.

How would I go about taking a snapshot of a process to preserve its state for future investigation? Is this possible?

Whether this is possible I don't know, but it would be mighty useful!
I have a process that fails periodically (running on Windows 2000). I then have just one chance to react to it before having to restart it and painfully wait for it to fail again. I didn't write the process, so I don't have the source to debug. The failure is seemingly random.
With a snapshot of the process I could repeatedly and quickly test reactions to the failure.
I had thought of running inside a VM but this isn't possible in this instance.
EDIT:
@Jon Cage asked:
When you say a snapshot, you mean capturing a process when it's about to fail (including memory, program state etc. etc.) ...and then replaying its final few seconds repeatedly to see what effect it has on some other component?
This is exactly what I mean!
I think minidump is what you are looking for.
You can also use Userdump:
The User Mode Process Dumper (userdump) dumps any running Win32 process's memory image (including system processes such as csrss.exe, winlogon.exe, services.exe, etc.) on the fly, without attaching a debugger or terminating target processes. The generated dump file can be analyzed or debugged by using the standard debugging tools.
This article shows you how to use it.
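For reference, the basic invocation takes the target process ID and, optionally, a dump-file path; the PID and path below are placeholders (check the article for the exact options of your userdump version):
userdump 1234 C:\dumps\myproc.dmp
The resulting dump can then be opened with the standard debugging tools (e.g. WinDbg).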
My best bet would be to start the process in a debugger (OllyDbg being my preferred tool).
The process will pause on an exception, and you can try to figure out what happened shortly before that.
This requires some understanding of assembly and does not let you create a snapshot of the process for later analysis. You would need to write your own debugger for that; it should be theoretically possible.