I ran /home/foo/bar.p6 from /path/to/data and it says "Segmentation fault (core dumped)"
I cannot find the core dump file in /var/crash, in my home directory, or in the current working directory.
I think it is Raku itself that core-dumped.
Where would Raku put the core dump file if my program caused the dump, and where would I find the dump file if Raku itself core-dumped?
Thanks.
/var/crash is for system crash dumps. On a systemd system, core dumps are usually saved under /var/lib/systemd/coredump/. If they are not there, it would help to tell us your distribution and whether your system uses systemd. Also check /etc/systemd/coredump.conf for custom settings.
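If your system does use systemd-coredump, the coredumpctl tool is the simplest way to find and extract the dump. A minimal sketch, assuming systemd-coredump is active (note the crashing process may be named moar, the Raku VM, rather than raku, depending on how Raku is installed):
$ coredumpctl list                  # list captured dumps, newest at the bottom
$ coredumpctl info moar             # details for the most recent matching crash
$ coredumpctl dump moar -o ./core   # write the core file out, e.g. for gdb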
In my home folder in Linux I have several config files that have "rc" as a file name extension:
$ ls -a ~/|pcregrep 'rc$'
.bashrc
.octaverc
.perltidyrc
.screenrc
.vimrc
What does the "rc" in these names mean?
It looks like one of the following:
run commands
resource control
run control
runtime configuration
I've also found a citation:
The ‘rc’ suffix goes back to Unix's grandparent, CTSS. It had a command-script feature called "runcom". Early Unixes used ‘rc’ for the name of the operating system's boot script, as a tribute to CTSS runcom.
Normally it's "runtime configuration" if the file is in a config directory; I think of them as resource files. Note that "rc" in a file name can also indicate a version, i.e. a Release Candidate.
Edit: No, I take it back officially... "run commands"
[Unix: from runcom files on the CTSS system 1962-63, via the startup script /etc/rc]
Script file containing startup instructions for an application program (or an entire operating system), usually a text file containing commands of the sort that might have been invoked manually once the system was running but are to be executed automatically each time the system starts up.
Thus, it would seem that the "rc" part stands for "runcom", which I believe can be expanded to "run commands". In fact, this is exactly what the file contains: commands that bash should run.
Quoted from What does “rc” in .bashrc stand for?
I learnt something new! :)
In the context of Unix-like systems, the term rc stands for the phrase "run commands". It is used for any file that contains startup information for a command. It is believed to have originated somewhere in 1965 from a runcom facility from the MIT Compatible Time-Sharing System (CTSS).
Reference: https://en.wikipedia.org/wiki/Run_commands
In the Unix world, RC stands for "Run Control".
http://www.catb.org/~esr/writings/taoup/html/ch10s03.html
To understand rc files, it helps to know that Ubuntu boots into several different runlevels, numbered 0-6: 0 is "halt", 1 is "single-user", 2 is "multi-user" (the default runlevel), and so on. This system has since been superseded by Upstart and similar init programs in most Linux distributions, but it is still maintained for backwards compatibility.
Within the /etc directory are several folders labeled rc0.d, rc1.d, and so on through rc6.d. These are the directories the init system consults to know which init scripts to run for each runlevel. They contain symbolic links to the system service scripts residing in the /etc/init.d directory.
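For example, on a sysvinit-style Ubuntu system you can inspect those links for the default runlevel (the entries vary per machine; the ssh one below is only illustrative):
$ ls -l /etc/rc2.d
# S-prefixed links start a service at this runlevel, K-prefixed links stop it,
# and the two-digit number controls the ordering, e.g.:
# S20ssh -> ../init.d/ssh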
In the context you are using it, it appears that you are listing any files with "rc" in the name. The code in these files sets the way the corresponding services/tasks start up and run when initialized.
I have a few Clojure applications that load sensitive info from a .properties file in /etc/, and this has worked well so far.
Recently, I have had to deal with a few Windows machines added to our server collection, and I need to run the Clojure applications there as well. Windows obviously doesn't have or understand the /etc/ path; I got around that by looking for /etc/ and, if it's missing, falling back to d:\configs.
But I don't quite like this way of doing it, because if another Windows developer looks into it and doesn't have a d:\ drive, or prefers to keep configs elsewhere, it would get messy.
Is there any way I can load a file from Clojure, no matter what operating system it is on? My initial thought was to save the config path in an environment variable and access it from Clojure.
I am just wondering if there is a better way of doing it.
Thanks.
Have a look at environ. It offers some flexibility when it comes to configuring your Clojure app, letting you choose between a number of options:
environment variables: This seems to be the way to go in Clojureland, so I'd say your initial thought wasn't the worst (see the sketch below);
in ~/.lein/profiles.clj: You can store them in the :user profile as Clojure data - that sounds quite nice, I guess;
Java CLI properties: Finally, you can pass them to the java executable directly via the command line.
environ will collect data from all these places.
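As a minimal sketch of the environment-variable route, assuming environ is on your classpath and using a made-up CONFIG_PATH variable to point at the config file: environ exposes one merged map, environ.core/env, whose keys are lower-cased and dash-separated, so CONFIG_PATH shows up as :config-path.
(require '[environ.core :refer [env]])

;; environ merges environment variables, Java system properties and
;; .lein-env data into the single map `env`.
(defn load-config
  "Load the .properties file whose location is given by CONFIG_PATH
  (or -Dconfig.path=... on the JVM command line)."
  []
  (when-let [path (env :config-path)]
    (with-open [r (java.io.FileReader. path)]
      (doto (java.util.Properties.)
        (.load r)))))
This keeps the path out of the code entirely; each machine, Linux or Windows, just sets its own CONFIG_PATH.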
If I try to build an application with the application class outside the default package, so the application file path is /app/AppClass.mxml instead of /AppClass.mxml (as would normally be the case), Flash Builder cannot launch the application for debugging, because it looks for the SWF in debug/app/AppClass.swf while the SWF is output to debug/AppClass.swf instead. Changing the output folder to debug/app makes it put the SWF in debug/app, but then it puts the application configuration file "AppClass-app.xml" in /debug/app/app, and that can't be found.
Is there a way to change only the SWF output folder, or the location of the xml configuration file in the run-configuration?
You can use a symbolic link to the created SWF file - http://en.wikipedia.org/wiki/Symbolic_link
For example, on Windows:
cd project/path/bin-debug/package/path/
MKLINK ClassName.swf project/path/bin-debug/ClassName.swf
and it works.
Or you can use a symbolic link for the folder:
cd project/path/bin-debug/package/
MKLINK /D path project/path/bin-debug/
I think I remember this worked for me, but it was a long time ago. And yes, it is a known problem; I also recall Adobe people mentioning it as a limitation of FB.
In my Ant script, you'll need to make adjustments to reflect your actual file names and directory structure. Also note that this makes it more cumbersome to debug from FB: you'll need to use the debugging target in Ant and then connect the debugger to the running application, so some info (especially from startup) will be lost. The only other way you would be able to debug it, though I've never tried it, is with the command-line tools (I'm not sure of the adl syntax for breakpoints / printing / stack frames, so I don't know how to do it).
Also, for the released application you will probably want to change the signing mechanism.
We have a desktop application using JNI that occasionally causes the JVM to crash. Luckily the JVM produces a hs_err_pidXXXX.log file, which is quite useful in debugging such errors. However, it always seems to go to the current working directory, and it's annoying to dig it out from there, since our other log files all go to a specific "log file place".
Is it possible to specify different location for those "crash dump" files? How?
Joonas,
Although HeapDumpPath works for the heap dump, it is not the answer to your question. The heap dump and the JVM crash log are two separate things.
To change the destination of the JVM crash log, run java with this option:
-XX:ErrorFile=/path/to/file
where /path/to/file is the location you want the JVM crash log written to.
By default the heap dump is created in a file called java_pid<pid>.hprof in the working directory of the VM. You can specify an alternative file name or directory with the -XX:HeapDumpPath= option. For example, -XX:HeapDumpPath=/disk2/dumps will cause the heap dump to be generated in the /disk2/dumps directory.
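For example, to collect both files in one log directory instead of the working directory (the directory and jar name here are placeholders; %p expands to the process id in the error-file name):
$ java -XX:ErrorFile=/var/log/myapp/hs_err_%p.log \
       -XX:HeapDumpPath=/var/log/myapp \
       -jar myapp.jar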
My visual studio solution includes a web application and a unit test application. My web application uses log4net. I want to be able to use msbuild from the command-line to build my solution. However, whenever I build the solution from the command-line, I get build errors because it can't copy log4net.xml to the test project's bin directory.
The error message is:
"Unable to copy file '\bin\log4net.xml' to 'bin\Debug\log4net.xml'. Access to the path '\bin\log4net.xml' is denied."
It looks like Visual Studio is locking this file, but I can't figure out why it would need to. Is there a way to prevent VS from locking the XML documentation files in a project that it has loaded?
I've found the following solution:
In a VS post-build event, or in a NAnt/MSBuild script, execute this cmd script:
handle.exe -p devenv [Path to the folder with locked files] > handles.txt
FOR /F "skip=5 tokens=3,4 delims=: " %%i IN (handles.txt) DO handle -p %%i -c %%j -y
handle.exe is available here http://technet.microsoft.com/en-us/sysinternals/bb896655.aspx
The first line of the script dumps to handles.txt all handles for files locked by VS.
The second line reads the handle IDs from that file and closes the handles.
After the script has executed, the files can be removed, replaced, moved, etc.
If you're fine with omitting the xml & pdb files altogether from the output, you can pass /p:AllowedReferenceRelatedFileExtensions=none to msbuild on the command line.
(Thanks to related answer https://stackoverflow.com/a/8757941/251011 )
EDIT: If you also have problems with dll files having this error, I recently discovered an environment variable solution: https://stackoverflow.com/a/23069603/251011
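For example (the solution name is a placeholder):
msbuild MySolution.sln /p:AllowedReferenceRelatedFileExtensions=none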
I've had this problem with Visual Studio, too. We use NAnt instead of MSBuild, but the problem is the same. I was able to work around it by modifying the build file to ignore failures when copying xml documentation.
Note that this doesn't actually solve the original problem since the xml files are still locked, but this workaround was good enough for us since the actual content of our xml documentation doesn't change very often.
Krystan wrote:
You could drop this file into another directory and reference it from there, or place the code that uses it into a library and have the post-build event on that library copy the file to its bin directory, and then reference it.
Our xml file locking problem is not in the project's bin directory, but rather in an external reference directory. We hit it when performing TortoiseSVN->Update where a new version is available. I assume it's because VS is using the file for IntelliSense.
For those who hit this locking issue due to TortoiseSVN->Update: I'm currently experimenting with a pre-update hook which deletes the offending file(s) before updating (they will be restored if no update is needed). So far this seems to work (which is weird), but I haven't tested it thoroughly enough to say for sure. I will update this answer if it proves reliable.
Here's hoping MS fix it in VS 2010.
Basically, don't check files into the bin folder; it's a bad idea.
You could drop this file into another directory and reference it from there, or place the code that uses it into a library and have the post-build event on that library copy the file to its bin directory, and then reference it.
MSBuild will then copy that to the web project's bin directory for you :)
We have this exact issue with people checking things into the bin directory. Unless you absolutely have to, bin directories should either not be checked in at all, or should contain only .refresh files, to avoid these sorts of locking issues.
Bit late on the reply, sorry :)