I keep finding hidden subfolders in my working directories which, I suppose, were produced by the Perl 6 compiler, e.g.:
.precomp/0717742595706FA8D59800F9F9F7074236546DE7.1505852292.23535/0B/0BDF8C54D33921FEA066491D8D13C96A7CB144B9.repo-id
So, I have two questions:
1. Is it normal?
2. Is it indispensable for the compiler, or is there a way to avoid it?
The .precomp folder houses the precompiled form of Perl 6 modules.
The first time you use a module it gets compiled and stored in .precomp so that it doesn't have to be compiled again. (Currently only modules are precompiled, not programs.)
You can delete the directory and your code will continue to function; it will just be slower. Note that it will be recreated the next time you use a module, unless the directory can't be written to. I occasionally delete it myself, though that is because I regularly rebuild Rakudo from git, and I do it just to clean out the remnants of older installs.
The reason for the long, seemingly arbitrary directory names is that multiple versions of a module, from multiple authors, may be installed at once, and that module names may contain Unicode. There has been talk of using another system that would give the files and directories more reasonable names; it just hasn't happened yet.
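To see the cache at work, time a first load against a second one. A hypothetical sketch: Acme::Example stands in for any module under lib.

time perl6 -Ilib -e 'use Acme::Example'   # first use: compiles and populates lib/.precomp
time perl6 -Ilib -e 'use Acme::Example'   # later uses: load the precompiled form, noticeably faster
rm -r lib/.precomp                        # safe to delete; it is rebuilt on the next use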
Related
I am trying to refactor some code. My approach (using vi) is to copy my old libraries from /lib to /lib2. That way I can hack out big sections, but still have a framework to refactor.
So I go ahead and change the mymain.p6 header from use lib '../lib'; to use lib '../lib2';. Then I delete a chunk of lines in ../lib2/mylibrary.pm6 and make darn sure :w is doing what I expect.
Imagine my surprise when my program still works perfectly despite having been largely deleted. It even works when I rm -R /lib, so nothing back there is persisting.
Is there a chance that I have a precomp of the old lib module lying around? If so, how can I flush it?
This is Rakudo Star version 2019.03.1, built on MoarVM version 2019.03, implementing Perl 6.d.
Precompiled modules are stored in a .precomp directory. You can try renaming or deleting the ~/.precomp directory.
See also this SO question here.
Update. Well, I thought I'd replicated the scenario. It was reliably showing the bug during a one-hour period, but now it isn't, which is pretty disturbing. Investigation continues...
I've replicated @p6steve's scenario in case someone wishes to report this as a bug. At the moment I'm with @p6steve (per the comment below) in that I'm going to treat this as a DIHWIDT rather than a reportable bug. That said, we now have a golfed summary.
Original main program using path1, followed by the module it uses directly, and then the module that one uses:
# main program
use lib 'path1';
use lib1;
say $lib1::value;

# lib1 (in path1)
unit module lib1;
use lib2;
our $value = $lib2::value;

# lib2 (in path1)
unit module lib2;
our $value = 1;
This displays 1.
If the libs are copied to a fresh directory, including the .precomp directory, and lib2 is then edited but lib1 is not, the change to lib2 is ignored.
Here it is on glot.io before and after copying the libs and their .precomp directory and then editing the libs.
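In shell terms the repro boils down to something like this (a sketch; main.p6 is a stand-in name for the main program above):

cp -R path1 path2                 # the .precomp directory comes along for the ride
# point the main program at the copy: use lib 'path2';
# edit path2/lib2.pm6: change `our $value = 1;` to `our $value = 2;`
perl6 main.p6                     # still displays 1: the stale precomp of lib1 wins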
Original answer
Thank you for editing your question. That gives us all more to go on. :)
I'd like to try to get to the bottom of it and hope you're willing to have a go too. This (non-)answer and the comments below it will record our progress.
From your comment on @ValleLukas' answer:
Then I noticed ../lib2/.precomp directory - so realised library precomps are stored in the library folder. That did the job!
Here's my first guess at what happened:
1. You copied lib en masse to lib2. This copied the .precomp directory with it.
2. You modified the use lib ... statement in mymain.p6 to refer to lib2.
3. Your mymain.p6 code includes a use module-that-directly-or-indirectly-uses-mylibrary.
4. You modify mylibrary.pm6.
But nothing changes! Why not?
You haven't touched module-that-directly-or-indirectly-uses-mylibrary, so Rakudo uses the precompiled version of that module from the lib2/.precomp directory.
Speculating...
Perhaps the existence of that precompiled version leads the precompilation logic to presume that, if it also finds a precompiled version of a module used by module-that-directly-or-indirectly-uses-mylibrary, it can go ahead and use that without even checking how its timestamp compares to the source version.
Does this match your scenario? If not, which bits does it get wrong?
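If the guess above is right, invalidating the copied cache should unstick things. A workaround sketch, not a confirmed diagnosis:

rm -r lib2/.precomp    # drop the cache copied over from lib, forcing fresh compilation
# or touch the intermediate module so its precomp is considered stale:
touch lib2/module-that-directly-or-indirectly-uses-mylibrary.pm6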
I'm using add_custom_command() to generate some files. ninja clean removes them, as it should. One of the files is intended as a default/example implementation, to be modified by the user. It is only generated if it does not already exist. I would like for ninja clean not to remove this file.
I have tried a number of things but without success:
- add_custom_target(): CMake complains about the missing file unless I name it in BYPRODUCTS, but doing this also leads to removal on clean.
- set_file_properties(... GENERATED FALSE) doesn't work, because CMake complains about the file missing.
- set_directory_properties() failed in a similar way: "folder doesn't exist or not yet processed" (it does exist).
I previously generated the example implementation and just let the user copy it or model their code on it. This works, but isn't entirely satisfactory. Is my use-case so unlikely that CMake doesn't support it?
I am afraid your requirement (conceptually: have make create something which make clean does not remove) is rather unusual. I can think of two potential solutions/workarounds.
One: move the file's generation to CMake time. That is, create it using execute_process() instead of add_custom_command(). This may or may not be possible, depending on whether the file-generation process (the current custom command) depends on the rest of the build.
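A minimal sketch of option one, where GEN_TOOL, its --emit-example flag, and example_impl.c are placeholders for your actual generator and output file:

# runs at configure time, so the generated file is never part of the build graph
if(NOT EXISTS "${CMAKE_CURRENT_BINARY_DIR}/example_impl.c")
  execute_process(
    COMMAND "${GEN_TOOL}" --emit-example "${CMAKE_CURRENT_BINARY_DIR}/example_impl.c"
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
  )
endif()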
Two: totally hide the example file's existence from CMake. That is, have the custom command also generate some other file (maybe just a timestamp file) and have its driving custom target depend on that one instead. Do not list the example file as either the custom command's dependency, output, or byproduct. That way, nothing will depend on it, and neither CMake nor Ninja should care whether it exists, so they will not complain or try to clean it up.
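Sketched with the same placeholder tool as above; only the stamp is ever named to CMake:

add_custom_command(
  OUTPUT example.stamp
  COMMAND "${GEN_TOOL}" --emit-example "${CMAKE_CURRENT_BINARY_DIR}/example_impl.c"
  COMMAND "${CMAKE_COMMAND}" -E touch example.stamp
)
add_custom_target(generate_example ALL DEPENDS example.stamp)

ninja clean removes the stamp but leaves example_impl.c alone, since nothing declares it.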
If it is an example for the user, it should not be in your build folder, but in the install folder. I don't see why you would need add_custom_command or the other commands you listed.
Therefore, you have to provide install() instructions.
You can then call make install. Cleaning will not remove those and only installing again will overwrite them if necessary.
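For instance (a sketch; the example file name and destination are assumptions):

# ship the example implementation into the install tree instead of the build tree
install(FILES "${CMAKE_CURRENT_SOURCE_DIR}/examples/example_impl.c"
        DESTINATION share/myproject/examples)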
For those who come here a long time after the original question was asked (like me), I'll write up my solution:
The tool called in add_custom_command generates two files with identical content:
- one that is saved in sources, never mentioned anywhere,
- and one that's marked as a byproduct, and then is depended on.
So the first one is the file we wanted in the first place.
And the second one is actually used in the build process, and gets deleted on clean.
For me the issue is that I actually want to keep generated files in VCS so I can track changes, and this approach gives me what I need.
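Roughly like this (a sketch; gen_tool, foo.c, and foo.in are stand-ins, and here the build copy is made with a plain file copy rather than a second run of the tool):

add_custom_command(
  OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/foo.gen.c"              # build copy: used by the build, removed on clean
  COMMAND gen_tool --out "${CMAKE_CURRENT_SOURCE_DIR}/foo.c"  # source copy: tracked in VCS, never named to CMake
  COMMAND "${CMAKE_COMMAND}" -E copy
          "${CMAKE_CURRENT_SOURCE_DIR}/foo.c"
          "${CMAKE_CURRENT_BINARY_DIR}/foo.gen.c"
  DEPENDS foo.in                                              # hypothetical input that drives regeneration
)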
Preface: I'm new to cmake and only have a barely-passing knowledge of how it works, so this may not be as hard as it looks to me. :-)
For various "legacy" reasons, I'm using cmake 2.8.12, and I have a large complicated project that I did not create, but now I'm trying to figure out how to make it build faster. In the top-level CMakeLists.txt file that drives everything, there are lots of targets defined for the app build and other things, and at the end it has something like "include(StaticResources)".
StaticResources.cmake defines (and at the end, calls) a function that just iterates over a ton of dirs with static files and installs each of those dirs to the output dir, as well as installing a few explicitly enumerated files, some of which are renamed.
The problem I'm trying to eliminate is that even if a build does nothing at all, this function takes many seconds as cmake goes through and visits each dir/file to see if it needs to be copied. I'm trying to figure out how to structure this so that it doesn't even look at that stuff if nothing has changed.
Any ideas? Things I have thought of trying (but am not sure how):
- possibly make that StaticResources function depend on the main app target in some way such that it only runs when the main app has to be built (this isn't ideal, of course, because lots of things can change the app without requiring resources to be deployed)
- having a StaticResources "build" target that computes a hash of the state of all of these static resources files and writes it to an output file only if something is changed, then having this static resources function install that target with custom code that actually installs the static files. It seems like this would make sure that the install is only considered if/when the resources actually change, but I'm not sure if cmake will allow me to "install(StaticResources "
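Roughly what I have in mind for the second idea (a sketch; the resources/ layout, the OUTPUT_DIR variable, and a stamp file standing in for the hash are all assumptions):

file(GLOB_RECURSE STATIC_RESOURCES "${CMAKE_CURRENT_SOURCE_DIR}/resources/*")
add_custom_command(
  OUTPUT resources.stamp
  COMMAND "${CMAKE_COMMAND}" -E copy_directory
          "${CMAKE_CURRENT_SOURCE_DIR}/resources" "${OUTPUT_DIR}"
  COMMAND "${CMAKE_COMMAND}" -E touch resources.stamp
  DEPENDS ${STATIC_RESOURCES}            # the copy re-runs only when a resource changes
)
add_custom_target(static_resources ALL DEPENDS resources.stamp)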
The CMake doc says this about the command file GLOB:
We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.
Several discussion threads on the web second the view that globbing source files is evil.
However, to make the build system know that a source has been added or removed, it's sufficient to say
touch CMakeLists.txt
Right?
Then that's less effort than editing CMakeLists.txt to insert or delete a source file name. Nor is it more difficult to remember. So I don't see any good reason to advise against file GLOB.
What's wrong with this argument?
The problem is when you're not alone working on a project.
Let's say project has developer A and B.
A adds a new source file x.c. He doesn't change CMakeLists.txt and commits after he's finished implementing x.c.
Now B does a git pull, and since there have been no modifications to CMakeLists.txt, CMake isn't run again and B gets linker errors when compiling, because x.c has not been added to the source file list.
2020 Edit: CMake 3.12 introduces the CONFIGURE_DEPENDS argument to file(GLOB), which makes globbing scan for new files: https://cmake.org/cmake/help/v3.12/command/file.html#filesystem
This is, however, not portable (Visual Studio and Xcode solutions don't support the feature), so please only use it as a first approximation, or other people may have trouble building your CMake files under their IDE of choice!
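Usage is a one-word change (a sketch; the source layout is an assumption):

# CMake >= 3.12: the glob is re-run at build time and the project reconfigures when it changes
file(GLOB sources CONFIGURE_DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/*.c")
add_executable(app ${sources})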
It's not inherently evil - it has advantages and disadvantages, covered relatively well in this answer here on StackOverflow. But if you use it carelessly, you could end up ignoring dependency changes and requiring clean rebuilds of large parts of your codebase.
I'm personally in favor of using it - in smaller projects, or on certain subdirectories in larger ones - to avoid having to enter every file manually into the build files. Edit: My preference has changed and I currently tend to avoid it.
On top of the reasons other people have posted here, IMHO the worst issue with glob is that it can yield DIFFERENT file lists on different platforms. As I see it, that's a bug. On OS X, glob ignores case sensitivity; on an Ubuntu box it doesn't.
Globbing breaks all code inspection in tools like CLion, which otherwise understand limited subsets of CMakeLists.txt and do not (and never will) support globbing, as it is unsafe.
Write a script to dump the globbed list and paste it in; it's very simple, and then CLion can actually find the referenced files and treat them as meaningful. Maybe even put the script into the tree so that the other devs can run it, or set up git hooks to make it happen.
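Something along these lines, run with cmake -P (a sketch; the file name and layout are assumptions):

# list_sources.cmake: print the globbed list so it can be pasted into CMakeLists.txt
file(GLOB_RECURSE srcs RELATIVE "${CMAKE_CURRENT_LIST_DIR}" "${CMAKE_CURRENT_LIST_DIR}/src/*.cpp")
foreach(f IN LISTS srcs)
  message(STATUS "${f}")
endforeach()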
In no case should some random file dropped into some directory ever get automatically linked; that's how trojans happen.
Also, CLion without context jumping to known definitions and whatnot is like hiking barefoot: why bother?
We use InstallShield to build installers for a number of very similar products, each of which has shared files parked in fixed directories underneath our vendor directory. We have a DLL hell problem that I thought installers should handle.
Imagine products A and B both have file FOO, installed in directory D. FOO isn't necessarily a DLL; it can be any file.
In an ideal world, both copies of FOO are identical, and when both products are installed FOO gets (over)written in D and everything is fine. And that's how it seems to work when everything is identical.
In practice, A and B may be from different generations, so in fact A has FOO variant FOO_a and B has variant FOO_b. Now only one of FOO_a or FOO_b will end up in D if both products are installed, and consequently one of A or B will be confused because the wrong file is there.
What I would have expected from installers (and InstallShield) is that for each product P installed on a system, a reference counter is kept somewhere (the registry?) for each installed file. So when A is installed and FOO(_a) is placed, FOO is marked as belonging to A, with its reference count set to 1. If B is now installed with a FOO_b identical to FOO_a, FOO should be marked as also belonging to B, and the reference count incremented. If B is installed and FOO_b differs from FOO_a, I'd expect the installer to complain that incompatible files FOO were being proposed for installation, and the install should abort. When a product is de-installed, I'd expect reference counters for each installed file to be decremented, and the installed file only removed when the reference count reaches zero.
That's not what InstallShield appears to do. It simply smashes FOO into its target directory during an install. This is in effect the same thing as DLL hell. Is this the intended behavior of installers? Of InstallShield? Of Windows?
We have looked for a way to ask InstallShield to manage this, but can't find anything. I'd be pleased to have somebody tell me where to look to configure this correctly. Or is it that InstallShield can't/won't do this? [If so, why am I giving them money?]
If variants of FOO are versioned and properly cumulative (this is key to all installation technologies that operate on shared locations), and if your component is shared, shares the same GUID in both products (if MSI), and shares the same installation location, then everything will work. As parts of that list of conditions become untrue, various bad things can and will happen; some have workarounds, some are unrecoverable.
If the installation location differs, little or none of this matters, unless both products end up searching in the same location for some other reason.
If MSI, and the GUID differs (or the contents of the component differ), you've broken component rules. This may result in files not uninstalling at the final uninstall, unexpected changes in a repair, or other hard to predict results.
If the file is not properly versioned, it is not possible to know when to replace it with the other one; it is likely that, in some install ordering, a new install will end up running with an old version. This can be mitigated in an MSI with companion files, if another file has properly increasing versions.
If the file is not cumulative with respect to versions, it is likely that a new version of the file will break the other consumer of it. In this scenario, you should not install it to a shared location.
One size fits all does get a little messy sometimes, so generally the problem space is simplified to one that can be addressed. In this case the cost is that all installation authors have to follow some rules if they want well-defined behavior.
If you are truly looking for a solution, use separate DLLs. This is simple and reliable, and in fact it's easier to talk about (as you demonstrated when you mentioned FOO_a and FOO_b). It also avoids the error message, which is not helpful for your users.
Windows did have a reference counting scheme for shared DLLs, but it was by convention, so it was unreliable. If you've ever had an uninstaller ask if you really want to delete a shared component, now you know why.
No one can possibly begin to answer your question without knowing what installation technology you are using. InstallShield is a product that supports many different frameworks.
Windows only keeps track of shared references on DLL files (SharedDllCount). Windows Installer (assuming you are even using it) also keeps track of component reference counts.
However, it should be noted (again, IF you are using Windows Installer) that the file costing process that decides whether to overwrite a file really has nothing to do with reference counts. Reference counts come into play on uninstall.
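If you want to see the counts Windows keeps for shared DLLs, they live under the conventional SharedDLLs registry key (shown here with the stock reg tool; the values depend on what's installed):

reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\SharedDLLs"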
Assuming you are using Windows Installer, take a look at:
File Versioning Rules
Also look at:
What happens if the component rules are broken?
This *is* something Windows Installer and InstallShield know how to handle, assuming that you implement it correctly. Any programming language can be misused and abused, intentionally or not.
If you don't want the files to be common, why in the world are you placing them in the same directory? I would rather put them in different locations. Now suppose your FOO_a is being used by many programs at the same version; then for FOO_b, which is required by your new program and others that might be added later, add another similar common directory, and name them using generation names or something.
I believe you are correct about how the reference counting works if you are dealing with a pure MSI-based installer (it would help to know whether A and B are Basic MSI or InstallScript projects). In any case, I would verify that the components containing FOO in the A and B install projects have it set as the key file, and have the Shared property set to Yes. That should cause the reference counting to be managed correctly.
Regarding what should happen when you run the installer and it has a newer version of a shared file, to my knowledge that depends mostly on whether the file has a version or not, whether it's a newer version, and properties like its last changed timestamp. If you are building an installer for B that contains FOO_b that is different from FOO_a, I'm not sure how InstallShield could know that it will break A (I could be wrong, maybe it can). One possible workaround is to use custom actions to check if FOO_b is different and if it is, make a copy of the existing one early in the install, then copy it back at the end of the process.