How to calculate a module's dist hash - raku

I have Perl 6 installed in ~/.rakudo-star/rakudo-star-2018.04, using LoneStar. When zef installs a module, it gets installed into a subdirectory of the Rakudo Perl 6 directory. In here is a directory named perl6/site/resources, which seems to hold all the installed files. How can I figure out which module is contained in which file, using Perl 6?

If you want to get the source of a namespace that would get loaded, you can do:
my $module-name = 'Test';
# Get a CompUnit for the namespace from the repository chain
my $compunit = $*REPO.resolve(CompUnit::DependencySpecification.new(:short-name($module-name)));
# Get a Distribution object which provides an IO interface to its contents
my $distribution = $compunit.distribution;
# meta<provides> maps the module name to the name-path it was installed from
my $handle-from-name = $distribution.content($distribution.meta<provides>{$module-name}.keys[0]).open;
say $handle-from-name.slurp(:close);
# Or if we already know the name-path:
my $handle-from-path = $distribution.content("lib/Test.pm6").open;
say $handle-from-path.slurp(:close);
Note that $compunit.distribution will only work if resolve returned a CompUnit from a CompUnit::Repository::Installation repository.
rakudo#1812 is the framework to improve this further, allowing individual repositories to be queried ($*REPO.resolve iterates the linked list of repos to give a result) and unifying behavior for resolve/candidates/etc. between CompUnit::Repository::Installation and CompUnit::Repository::FileSystem.

If I remember correctly, you shouldn't; zef is the one that should take care of it. But if you positively have to, look up the SHA1 signatures in that directory with zef locate:
zef locate --sha1 5417D0588AE3C30CF7F84DA87D27D4521713522A
will output (in my system)
===> From Distribution: zef:ver<0.4.4>:auth<github:ugexe>:api<>
lib/Zef/Service/Shell/PowerShell/download.pm6 => /home/jmerelo/.rakudobrew/moar-2018.06/install/share/perl6/site/sources/5417D0588AE3C30CF7F84DA87D27D4521713522A
From your question, it's not too clear if what you want to do is the opposite, that is, find out which SHA1 corresponds to which file; in that case, try this:
zef locate bin/lwp-download.pl
which will return
===> From Distribution: LWP::Simple:ver<0.103>:auth<Cosimo Streppone>:api<>
bin/lwp-download.pl => /home/jmerelo/.rakudobrew/moar-2018.06/install/share/perl6/site/resources/059BD7DBF74D1598B0ACDB48574CC351A3AD16BC
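If you want to do this mapping programmatically rather than through zef, a rough sketch along the following lines should work; note that the shape of meta<provides> for installed distributions is an implementation detail, so treat this as an assumption rather than a stable API:
# Walk every installed distribution and print which SHA1-named source
# file each module name-path was installed under.
for $*REPO.repo-chain.grep(CompUnit::Repository::Installation) -> $repo {
    for $repo.installed -> $dist {
        for $dist.meta<provides>.kv -> $module, $sources {
            # For installed distributions each value is a hash mapping the
            # relative name-path to metadata, including the <file> id the
            # content is stored under in site/sources.
            for $sources.kv -> $path, $info {
                say "$module ($path) => $info<file>";
            }
        }
    }
}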

Running manifests (classes) from a task or plan in Puppet Enterprise

TL;DR
In Puppet Enterprise, how do I run a manifest (testpp.pp) from a task or plan (not Bolt)?
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp': }
  }
  $apply_results.each |$result| {
    notice($result.report)
  }
}
apply_prep seems to succeed, but apply is failing with the following error:
{
  "msg" : "Evaluation Error: Unknown function: 'report'. (file: /opt/puppetlabs/server/data/orchestration-services/code/environments/development/modules/base_windows/plans/testplan.pp, line: 16, column: 19)",
  "kind" : "bolt/plan-failure",
  "details" : {
    "class" : "Bolt::PAL::PALError"
  }
}
If I change the code to:
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    # Is this how to call a class? I cannot find an example.
    class { 'base_windows::testpp': }
  }
  $apply_results.each |$result| {
    $target = $result.target.name
    if $result.ok {
      out::message("${target} returned a value: ${result.value}")
    } else {
      out::message("${target} errored with a message: ${result.error.message}")
    }
  }
}
The plan tells me it has failed, but there are no errors in the node's report. In fact, there is no entry for the time the plan was executed.
I cannot find any examples on how to call a class from a plan, so the above apply() is a guess, based on this documentation.
I have installed the puppetlabs_reboot module and successfully ran a plan using it; therefore, I conclude my system is set up correctly and it's just my code that is wrong.
Background
I may be going about this all wrong, so here is some background to the problem. Currently, I have a series of manifests that install various packages from the public Chocolatey repository depending on a node's classification. Package definitions are stored in Hiera data and each package's version is set to latest. At the end of the Package{} resource, some manifests include a reboot.
These manifests are used to provision new nodes and keep existing nodes up-to-date with the latest package version.
The Puppet agent is set to run once per hour and if the source package is updated in the Chocolatey repo, on the next Puppet run, the manifest will update the package, rebooting the node, if required.
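(For reference, that hourly schedule corresponds to something like the following in the agent's puppet.conf; the value shown is illustrative.)
[agent]
# check in every hour instead of the default 30 minutes
runinterval = 1h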
Goal
New nodes are provisioned with the latest package version.
Prevent package updates at undetermined times on existing nodes.
Continue to allow Puppet agent runs every hour.
Make use of existing manifests.
Ideas
Split out the package{} code from the profile manifest and place them in tasks / plans, allowing packages to be updated out-of-hours.
Specify the actual package version in Hiera. Although this is more declarative and idempotent, it means keeping an eye on over 100 package versions (see the sketch after this list). I guess it would be fairly simple to interrogate the Chocolatey repos with code to pull the latest version number, but even so I am no better off.
Create a task with a script that runs choco upgrade all; however, the next Puppet run would revert package versions according to the version defined in Hiera, meaning Hiera still needs to be kept up-to-date.
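For concreteness, pinning a version in Hiera (the second idea) might look like the sketch below; the key name and version are hypothetical, not taken from my manifests:
profiles::chocolatey_packages:
  googlechrome:
    ensure: '1.2.3'       # pinned instead of latest
    provider: chocolatey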
Problems
As per the main crux of this question, how do I run manifests (classes) from plans? If I understand correctly, tasks are for ad-hoc scripts, whereas plans can run tasks and manifests. As a lot of time has been invested in writing manifests, I would prefer not to rewrite all my manifests as scripts.
I am confused by the Puppet documentation as it seems to switch between PE and Bolt syntax. I am using Puppet Enterprise, where Puppet says they don't recommend using Bolt, but their examples seem to cite Bolt commands.
No errors in the node's report. apply_prep() reports it executed successfully (albeit taking far longer than the puppetlabs_reboot module), but apply() results in a failure, and nothing is logged in the node's reports.
Using the puppetlabs_reboot module as a reference, it appears their plan uses a bunch of tasks rather than apply() to run their reboot{} class. Is this not duplicating the work?
If anyone has any suggestions or ideas, I'd be grateful if you could share.
I've got it to work. The class I was trying to run required parameters that I hadn't provided!
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp':
      filename => $filename,
      contents => $contents,
    }
  }
  # Output the whole result_set in the PE console
  return $apply_results
}
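For completeness, the plan can then be run from the PE console host with something like this (the node name is a placeholder, and plan parameters, including the targets, are passed as key=value pairs):
puppet plan run base_windows::testplan targets=windows-node.example.com filename=test.txt contents='some content'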
I found this out using the logs.
Turn on debug level logging in /etc/puppetlabs/puppetserver/logback.xml (root level="debug")
Tail the following logs:
tail -f /var/log/puppetlabs/bolt-server/bolt-server.log
tail -f /var/log/puppetlabs/puppetserver/puppetserver.log | grep -B 5 -A 5 'testplan'
tail -f /var/log/puppetlabs/orchestration-services/orchestration-services.log
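For reference, the logback.xml change from step 1 is just the level attribute on the root logger; keep whatever appender-ref entries your stock file already defines (F1 below is illustrative):
<root level="debug">
    <appender-ref ref="F1"/>
</root>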

How to use :since with CompUnit

I am trying to create a cache of POD6 by precompiling them using the CompUnit set of classes.
I can create, store and retrieve pod as follows:
use v6.c;
use nqp;
my $precomp-store = CompUnit::PrecompilationStore::File.new(prefix=>'cache'.IO);
my $precomp = CompUnit::PrecompilationRepository::Default.new(store=> $precomp-store );
my $key = nqp::sha1('test.pod6');
'test.pod6'.IO.spurt(q:to/CONTENT/);
=begin pod
=TITLE More and more
Some more text
=end pod
CONTENT
$precomp.precompile('test.pod6'.IO, $key, :force);
my $handle = $precomp.load($key)[0];
my $resurrected = nqp::atkey($handle.unit,'$=pod')[0];
say $resurrected ~~ Pod::Block::Named;
So now I change the POD. How do I use the :since flag? I thought that if :since contains a time after the compilation, then the value of the handle would be Nil. That does not seem to be the case.
my $new-handle = $precomp.load($key, :since('test.pod6'.IO.modified));
say 'I got a new handle' with $new-handle;
Output is 'I got a new handle'.
What am I doing wrong?
Here is a pastebin link with code and output: https://pastebin.com/wtA9a0nP
Module loading code caches lookups and essentially starts with:
$lock.protect: {
    return %loaded{$id} if %loaded{$id}:exists;
}
So the question becomes "how do I load a module and then unload it (so I can load it again)?", to which the answer is: you cannot unload a module. You can, however, change the filename, the distribution longname (via changing the name, auth, api, or version), or the precomp id -- whatever a specific CompUnit::Repository uses to uniquely identify modules -- to bypass the cache.
What seems to be overlooked is that $key is intended to represent an immutable name, such that it will always point at the same content. What version of foo.pm should be loaded for module Used::Inside::A if foo.pm is being loaded by modules A and B at the same time, module A loads foo first, and then module B modifies foo? The old version module A loaded, or the (possibly conflicting with the previous) version module B loaded? And how would this differentiation work for precomp file generation itself (which again can happen in parallel)?
Of course, if we ignore the above, we could add code to make expensive .IO.modified calls for every single module load for all CompUnit::Repository types (slowing down startup speed) to say "hey, this immutable thing changed". But the granularity of file-system modified timestamps on some OSes made the check pretty fragile (especially for multi-threaded module loading with precomp files being generated), meaning that even more expensive calls to get a checksum would be necessary every single time.
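That said, if the goal is just to have edited POD picked up, one workaround consistent with the above is to make the key itself depend on the content. A minimal sketch, reusing $precomp and nqp::sha1 from the question:
# Derive the precomp id from the path *and* the modification time, so
# edited content gets a fresh id (and a fresh compilation) instead of
# hitting the cache under the old key.
my $source = 'test.pod6'.IO;
my $changing-key = nqp::sha1($source.absolute ~ '|' ~ $source.modified);
$precomp.precompile($source, $changing-key, :force);
my $fresh-handle = $precomp.load($changing-key)[0];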

extra-paths not added to python path with zc.recipe.testrunner

I am trying to run tests with a version of tornado, downloaded from github.com, added to sys.path.
[tests]
recipe = zc.recipe.testrunner
extra-paths = ${buildout:directory}/parts/tornado/
defaults = ['--auto-color', '--auto-progress', '-v']
But when I run bin/tests I get the following error:
ImportError: No module named tornado
Am I not understanding how to use extra-paths?
Martin
Have you tried looking into the generated bin/tests script to see if it contains your path? That will tell you definitively whether your buildout.cfg is correct or not. Your configuration looks OK, so maybe the problem is elsewhere.
If you happen to regularly include various branches from git/mercurial or elsewhere in your buildout, you might be interested in mr.developer. mr.developer can download packages and add them to develop =, so you won't need to set extra-paths in every section.
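A minimal sketch of that approach (the URL and package name are illustrative):
[buildout]
extensions = mr.developer
auto-checkout = tornado

[sources]
tornado = git https://github.com/tornadoweb/tornado.git

With this, mr.developer checks tornado out into src/ and registers it as a develop egg, so it ends up on the sys.path of generated scripts without any extra-paths.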

Using sub-repo with hgwebdir difficulties in mercurial

Alright, I got myself in a deadlock with Mercurial and sub-repos... Here's what happened:
I had a large Mercurial repo that I serve via Apache and hgweb.cgi.
Due to the size of the repo I decided to move to sub-repositories and share these with hgwebdir.cgi.
Using the convert tool with the filemap option I created several sub-repositories:
/main/foo
/main/bar
Nicely created an entry for the sub-repositories in .hgsub:
foo = foo
bar = bar
And set hgwebdir.cgi up to show $/** as the root folder.
Now when I went to my site (foo.com/hg) I saw my sub-repositories with one empty repository among them (no name, no content), but I could not download it (archive location unknown):
(screenshot: http://img707.imageshack.us/img707/8237/emptysubrepo.png)
That was allright until I added a new sub-repository.
I could not push the new .hgsub file to foo.com/hg, since that page is served by hgwebdir.
The only method that currently works for me is to switch from hgwebdir to hgweb, commit .hgsubstate and switch back to hgwebdir.
Does someone have a good setup for such a mess?
On the webserver your main and its subrepos should appear as siblings -- not with the subrepos inside main.
Main
ASCII
AlignDistribute
And the URLs in your .hgsub should look like:
ASCII = ../ASCII
AlignDistribute = ../AlignDistribute
Then you'll be able to push/pull to http://foo.com/hg/Main and when you clone it the clone/update will automatically attach and clone down the separate subrepos.
From what I've read on https://www.mercurial-scm.org/wiki/PublishingRepositories#multiple
The keys (on the left) and the values (on the right) are both filesystem paths
The keys should be prefixes of the values and are "subtracted" from the values in order to generate the URL paths to each repository
What I'm guessing happened is that in your hgweb(dir) configuration you're specifying the same value for the key as for the collection's path, so during subtraction it ends up with a blank name and no way to get to it.
When I used [collections] to set /a/full/path = /a/full/path pointing directly at a repo, the name ended up blank too: that folder was read as a single repo (because it is one) instead of each sub-directory being treated as an individual repo. After I removed the .hg folder, .hgsub and everything else from the root of my collection entry, all the subfolders started showing up properly.
Similarly, I originally used /path/to/my/project = /path/to/my/project in [paths], and since that is a single referenced repository, the value is subtracted from the key, leaving you once again with ''. Instead I used project = /path/to/my/project and it came out as 'project'.
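To make that concrete, a minimal hgweb config along these lines (paths are placeholders) avoids the blank entry:
[paths]
# key equals value: the name subtracts away to '' and the repo is unreachable
# /path/to/my/project = /path/to/my/project
# named key: the repo shows up as 'project'
project = /path/to/my/project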
Hopefully that URL or these descriptions will get you out of your pickle!

Branching with clearcase remote client

I am trying to branch a file in ClearCase Remote Client.
I have the branch set up and the config spec is updated to handle the branch.
But I can't find the option, and the googling isn't helping much.
The way I understand your question, it sounds like you want to somehow select a command from a Clearcase RC menu(s) and have the branch explicitly created(?)
ClearCase has no explicit "Generate Branch for this File" command; you would want the "Checkout" command in this case. Branching is indirect and is a result of checking out a version of a file in a view that has a config spec with the '-mkbranch' operation in it. I.e. the following config spec will create the dev_1.0_branch once I check a file out (for any and all VOBs and files):
element * CHECKEDOUT
element * .../dev_1.0_branch/LATEST
element * /main/LATEST -mkbranch dev_1.0_branch
The first line is standard for views in which you are doing development, line 2 will assure that I see any file that has a dev_1.0_branch (particularly important for the checkout+mkbranch to work as expected :-), and line 3 will select the latest version of any file that does not have a dev_1.0_branch and will create the branch if (and only if) the file version selected by that rule is checked out.
Please let me know if any of the above sounds Greek to you, particularly any of the config spec rules. Having worked with ClearCase for a long time, I assume and use a lot of its terminology and concepts as if they're common knowledge :-P.
One thing of note: if you check out the file, then immediately uncheckout the file, you will leave an empty branch on that file (i.e. in the above you would have a file with a version such as foo.c@@/main/dev_1.0_branch/0, but no /main/dev_1.0_branch/1 version). Many sites prefer to keep the version tree clean and remove empty branches (a script for doing so can be found in this IBM Rational technical article).
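For illustration (my addition, not from that article), removing such an empty branch from the command line would look like:
cleartool rmbranch -force -c 'remove empty branch' foo.c@@/main/dev_1.0_branch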
Just to be clear, I'm familiar with ClearCase Base & ClearCase MultiSite, but have not worked with the Remote Client yet.
--- 2009-Jun-29 Update
In response to Paul's comment below, if you want to be selective in what files are branched, you can modify the "*" to be more specific. For example, if you want to only branch foo.c in the FOODEV VOB, but leave everything else on main:
UNIX config spec:
element * CHECKEDOUT
element * .../my_dev_branch/LATEST
element /vobs/FOODEV/src/foo.c -mkbranch my_dev_branch
element * /main/LATEST
(For Windows, you would want to use Windows conventions, i.e. \FOODEV\src\foo.c.)
You can also select a directory and all elements below the directory (again UNIX config spec):
element * CHECKEDOUT
element * .../my_dev_branch/LATEST
element /vobs/FOODEV/src/mycomponent/... -mkbranch my_dev_branch
element * /main/LATEST
The main page for config_spec (cleartool man config_spec from the command line on windows or unix) provides decent guidance in the "Pattern" section for how to write the element/version selector (2nd column).
You can do a lot of complex version selection with the config specs. Please let me know if you would like more details or specifics.
Here's a config spec that I used for fixing a particular bug, with names changed to disguise some of the guilty.
element * CHECKEDOUT
element * .../TEMP.bugnum171238.jleffler/LATEST
mkbranch -override TEMP.bugnum171238.jleffler
include /clearcase/cspecs/project/version-1.23.45
To create the branch, in each VOB, I used a command:
ct mkbrtype -c 'Branch for bug 171238' TEMP.bugnum171238.jleffler@/vobs/project
Previously, we used config specs with -mkbranch rules appended to the various element lines.