What is the best way to flush precompiled perl6 modules? - raku

I am trying to refactor some code. My approach (using vi) is to copy my old libraries from /lib to /lib2. That way I can hack out big sections, but still have a framework to refactor.
So I go ahead and change mymain.p6 header from use lib '../lib'; to use lib '../lib2';. Then I delete a chunk of the lines in ../lib2/mylibrary.pm6 and make darn sure :w is doing what I expect.
Imagine my surprise when my program still works perfectly despite having been largely deleted. It even works when I rm -R /lib, so nothing back there is persisting.
Is there a chance that I have a precomp of the old lib module lying around? If so, how can I flush it?
This is Rakudo Star version 2019.03.1 built on MoarVM version 2019.03
implementing Perl 6.d.

Precompiled modules are stored in a .precomp directory. You can try renaming or deleting the ~/.precomp directory.
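For example, assuming the modules live in ../lib2, clearing the caches from the shell might look like this (paths are assumptions; adjust to your layout):

# per-project precompilation cache, stored alongside the module sources
rm -r ../lib2/.precomp
# user-wide cache, if one exists
rm -r ~/.precomp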
See also this SO question here.

Update. Well I thought I'd replicated the scenario. It was reliably showing the bug during a one hour period. But now it isn't. Which is pretty disturbing. Investigation continues...
I've replicated @p6steve's scenario in case someone wishes to report this as a bug. At the moment I'm with @p6steve (per comment below) in that I'm going to treat this as a DIHWIDT rather than a reportable bug. That said, we now have a golfed summary.
Original main program using path1, followed by the module it uses directly (lib1) and then the module lib1 itself uses (lib2):

# main.p6
use lib 'path1';
use lib1;
say $lib1::value;

# path1/lib1.pm6
unit module lib1;
use lib2;
our $value = $lib2::value;

# path1/lib2.pm6
unit module lib2;
our $value = 1;
This displays 1.
If the libs are copied to a fresh directory, including the .precomp directory, and then lib2 is edited but lib1 is not, the change to lib2 is ignored.
Here it is on glot.io before and after copying the libs and their .precomp directory and then editing the libs.
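For those who'd rather not click through, a hedged sketch of the reproduction from the shell (the edit shown is arbitrary; any visible change to lib2 will do):

cp -r path1 path2              # silently copies path1/.precomp as well
vi path2/lib2.pm6              # change to: our $value = 2;
# point main.p6 at path2 via use lib 'path2'; and rerun: it still prints 1,
# because the stale precompiled lib1 in path2/.precomp is reused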
Original answer
Thank you for editing your question. That gives us all more to go on. :)
I'd like to try to get to the bottom of it and hope you're willing to have a go too. This (n)answer and comments below it will record our progress.
From your comment on @ValleLukas' answer:
Then I noticed ../lib2/.precomp directory - so realised library precomps are stored in the library folder. That did the job!
Here's my first guess at what happened:
You copied lib en masse to lib2. This copied the .precomp directory with it.
You modified the use lib ... statement in mymain.p6 to refer to lib2.
Your mymain.p6 code includes a use module-that-directly-or-indirectly-uses-mylibrary.
You modify mylibrary.pm6.
But nothing changes! Why not?
You haven't touched module-that-directly-or-indirectly-uses-mylibrary, so Rakudo uses the precompiled version of that module from the lib2/.precomp directory.
Speculating...
Perhaps the existence of that precompiled version leads the precompilation logic to presume that, if it also finds a precompiled version of a module that's used by module-that-directly-or-indirectly-uses-mylibrary, it can go ahead and use it without even checking how its timestamp compares to that of the source version.
Does this match your scenario? If not, which bits does it get wrong?

Related

How to keep CMake generated files?

I'm using add_custom_command() to generate some files. ninja clean removes them, as it should. One of the files is intended as a default/example implementation, to be modified by the user. It is only generated if it does not already exist. I would like for ninja clean not to remove this file.
I have tried a number of things but without success:
add_custom_target(): CMake complains about the missing file unless I name it in BYPRODUCTS, but doing this also leads to removal on clean
set_file_properties(... GENERATED FALSE) doesn't work because CMake complains about the file missing.
set_directory_properties() failed in a similar way: "folder doesn't exist or not yet processed" (it does exist)
I previously generated the example implementation and just let the user copy it or model their code on it. This works, but isn't entirely satisfactory. Is my use-case so unlikely that CMake doesn't support it?
I am afraid your requirement (conceptually: have make create something which make clean does not remove) is rather unusual. I can think of two potential solutions/workarounds.
One, move the file's generation to CMake time. That is, create it using execute_process() instead of add_custom_command(). This may or may not be possible, based on whether the file-generation process (the current custom command) depends on the rest of the build or not.
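A minimal sketch of that approach (the generator command and file name are placeholders):

# run the generator at configure time, but only if the user
# hasn't already created or customized the file
if(NOT EXISTS "${CMAKE_CURRENT_BINARY_DIR}/example_impl.c")
  execute_process(
    COMMAND my_generator --out "${CMAKE_CURRENT_BINARY_DIR}/example_impl.c"
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}")
endif()

Since the file is created at configure time and never enters the build graph, ninja clean has no reason to touch it.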
Two, totally hide the example file's existence from CMake. That is, have the custom command also generate some other file (maybe just a timestamp file) and have its driving custom target depend on that one instead. Do not list the example file as either the custom command's dependency, output, or byproduct. That way, nothing will depend on it, and neither CMake nor Ninja should care whether it exists, so they will not complain or try to clean it up.
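A sketch of that arrangement (again, names are placeholders):

# only the stamp is declared as OUTPUT; CMake never learns about
# example_impl.c, so clean won't remove it
add_custom_command(
  OUTPUT example.stamp
  COMMAND my_generator --out example_impl.c
  COMMAND ${CMAKE_COMMAND} -E touch example.stamp)
add_custom_target(generate_example ALL DEPENDS example.stamp)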
If it is an example for the user, it should not be in your build folder, but in the install folder. I don't see why you would need add_custom_command or the other commands you listed.
Therefore, you have to provide install() instructions.
You can then call make install. Cleaning will not remove those and only installing again will overwrite them if necessary.
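For example (the destination path is an assumption):

# stage the generated example into the install tree instead of
# leaving it among the build products
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/example_impl.c"
        DESTINATION share/myproject/examples)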
For those who come here a long time after the original question was asked (like me), here is my solution:
The tool called in add_custom_command generates two files with identical content:
one that is saved in sources, never mentioned anywhere
and one that's marked as a byproduct and then depended on
So the first one is the file we wanted in the first place.
And the second one is actually used in build process, and gets deleted on clean.
For me the issue is that I actually want to save generated files in VCS so I can track changes. And this approach gives me what I need.
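One way to wire that up, as a sketch (the generator and file names are hypothetical):

# the generator emits two identical files:
#   generated.c      -- written into the source tree, never mentioned to CMake
#   generated_copy.c -- declared as a byproduct; the build depends on it
add_custom_target(run_generator
  BYPRODUCTS "${CMAKE_CURRENT_BINARY_DIR}/generated_copy.c"
  COMMAND my_generator "${CMAKE_CURRENT_SOURCE_DIR}/generated.c"
                       "${CMAKE_CURRENT_BINARY_DIR}/generated_copy.c")
add_executable(app main.c "${CMAKE_CURRENT_BINARY_DIR}/generated_copy.c")
add_dependencies(app run_generator)

On clean, generated_copy.c is removed, but generated.c in the source tree survives, which is what lets it live in VCS.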

.precomp...repo-id subfolder in working folder of Perl 6

I usually find hidden subfolders in working directories which, I suppose, were produced by the Perl 6 compiler, e.g.:
.precomp/0717742595706FA8D59800F9F9F7074236546DE7.1505852292.23535/0B/0BDF8C54D33921FEA066491D8D13C96A7CB144B9.repo-id
So, I have two questions:
Is it normal?
Is it indispensable for the compiler, or is there a way to avoid it?
The .precomp folder houses the precompiled form of Perl 6 modules.
The first time you use a module it gets compiled and stored in .precomp so that it doesn't have to be compiled again. (Currently this applies only to modules, not programs.)
You can delete the directory and your code will continue to function; it will just be slower. Note that it will be recreated the next time you use a module, unless the directory can't be written to. I occasionally delete it myself, though that is because I regularly rebuild Rakudo from git, and I do it just to clean out the remnants of older installs.
The long, seemingly arbitrary directory names are due to the fact that multiple versions of a module from multiple authors may be installed at once, plus the possibility of Unicode module names. There has been talk of using another system that would give the files and directories more reasonable names; it just hasn't happened yet.

Config.cmake file for custom made shared libraries

I have this project that I made to experiment with Qt and shared libraries. It is basically a couple of Qt Widgets from the Qt tutorials, plus what I think is the right CMakeLists configuration, so that a MylibConfig.cmake is automatically generated from a MylibConfig.cmake.in to share the library.
The problem is that I don't want the end user to have to add my library's dependencies to their own CMakeLists.txt. In my case the library depends on Qt4, but I don't want the end user to have to call find_package(Qt4 REQUIRED). Imagine that I want to provide enclosed functionality to someone who does not need or want to know what my library is built on. Is there a way to make the automatically generated MylibConfig.cmake find all the necessary packages itself, or is the only option to add the find_package() calls manually to MylibConfig.cmake.in?
Thank you very much.
In fact, both of the projects mentioned do find their dependencies from their *Config.cmake files. And nowadays that is the only option -- CMake can't help you do it "automatically".
So, some way or another, your config module should do the same.
The easy way is to add find_dependency() calls (because you know exactly which other packages your project is based on).
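For the easy way, MylibConfig.cmake.in could contain something along these lines (a sketch; the targets-file name is an assumption):

include(CMakeFindDependencyMacro)
# resolve Mylib's own dependencies so consumers don't have to
find_dependency(Qt4)
include("${CMAKE_CURRENT_LIST_DIR}/MylibTargets.cmake")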
A little bit harder is to do it "automatically" (by writing your own helper function) -- for example, by inspecting the properties of your target(s), "searching" where all those libraries come from, and finally generating the find_dependency() calls anyway.

Why is cmake file GLOB evil?

The CMake doc says about the command file GLOB:
We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.
Several discussion threads on the web second the view that globbing source files is evil.
However, to make the build system know that a source has been added or removed, it's sufficient to say
touch CMakeLists.txt
Right?
Then that's less effort than editing CMakeLists.txt to insert or delete a source file name. Nor is it more difficult to remember. So I don't see any good reason to advise against file GLOB.
What's wrong with this argument?
The problem is when you're not alone working on a project.
Let's say project has developer A and B.
A adds a new source file x.c. He doesn't change CMakeLists.txt and commits after he's finished implementing x.c.
Now B does a git pull, and since there have been no modifications to CMakeLists.txt, CMake isn't run again, and B gets linker errors when compiling, because x.c has not been added to the source files list.
2020 Edit: CMake 3.12 introduces the CONFIGURE_DEPENDS argument to file(GLOB), which makes globbing scan for new files: https://cmake.org/cmake/help/v3.12/command/file.html#filesystem
This is, however, not portable (Visual Studio and Xcode solutions don't support the feature), so please only use it as a first approximation, or other people may have trouble building your CMake files under their IDE of choice!
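A sketch of that usage:

# re-check the glob at build time and rerun CMake when the file list changes
# (needs CMake >= 3.12 and a cooperating generator such as Ninja)
file(GLOB sources CONFIGURE_DEPENDS "src/*.c")
add_executable(app ${sources})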
It's not inherently evil - it has advantages and disadvantages, covered relatively well in this answer here on StackOverflow. But if you use it carelessly, you could end up ignoring dependency changes and requiring clean rebuilds of large parts of your codebase.
I'm personally in favor of using it - in smaller projects, or on certain subdirectories in larger ones - to avoid having to enter every file manually into the build files. Edit: My preference has changed and I currently tend to avoid it.
On top of the reasons other people have posted here, IMHO the worst issue with glob is that it can yield DIFFERENT file lists on different platforms. As I see it, that's a bug. On OSX glob ignores case sensitivity; on an Ubuntu box it doesn't.
Globbing breaks all code inspection in things like CLion, which otherwise understands limited subsets of CMakeLists.txt and does not (and never will) support globbing, as it is unsafe.
Write a script to dump the globbed list and paste it in; it's very simple, and then CLion can actually find the referenced files and treat them as useful. Maybe even put such a script into the tree so that the other devs can run it themselves, or set up git hooks to make it happen.
In no case should some random file dropped into some directory ever get automatically linked; that's how trojans happen.
Also, CLion without context (jumping to known definitions and whatnot) is like hiking barefoot; why bother.

Linker chooses "wrong" main with Boost.Test

When using Boost.Test, there is generally no need to define a main() function, since Boost.Test provides one itself.
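For reference, a minimal test file that relies on that generated main() might look like this (module and test names are placeholders):

// defining BOOST_TEST_MODULE makes Boost.Test supply main() for us
#define BOOST_TEST_MODULE MyTests
#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_CASE(sanity)
{
    BOOST_CHECK_EQUAL(1 + 1, 2);
}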
I recently had to convert my project to use static linking of 3rd party libraries (on VS2010). Naturally, I had to link to multiple .libs so that the build succeeds, and my build ran just fine.
However, when I ran my test project, something really strange happened. It seems that one of the 3rd party .libs (libpng), required by one of my dependent libraries, contained a test file with a main() function defined within (pngtest.c, if you must know).
Since my project did not have a main() function, the linker chose that one as my "test" application. Thus, none of my tests run.
Does anyone know how I prevent this from happening? How can I tell the linker/compiler to use the Boost.Test main()?
Answering my own question, and clarifying @Tom's answer.
Turns out that the libpng build script I was using was not the original one shipped with libpng, but one created by the OpenCV build system. The file pngtest.c was mistakenly included in the build.
The solution to the problem was to remove pngtest.c from the libpng build script.
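In CMake terms the fix amounts to something like this (the variable name is hypothetical; the real change lives in OpenCV's 3rdparty libpng script):

# drop the sample program from libpng's source list so its main()
# never ends up in the static library
list(REMOVE_ITEM lib_srcs "${CMAKE_CURRENT_SOURCE_DIR}/pngtest.c")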
The latest OpenCV version does not include this file anymore.
For more details see my post to Boost mailing list here and my OpenCV bug report here.
Adi, I had the same problem. Looks like you were already all over this one. Thanks to Google and your efforts, I was able to figure it out.
Here's some info to round out the answer:
discussion:
http://boost.2283326.n4.nabble.com/Boost-Test-Linker-chooses-wrong-main-function-td4634872.html
solution:
http://code.opencv.org/issues/2322
Basically, I just excluded the pngtest.c file from the libpng project, and recompiled OpenCV. Looks like it will be fixed in the next release of OpenCV.
Thanks!