Basic usage of modules in Puppet with Hiera

I want to use Puppet to manage some servers. Even after reading dozens of documentation pages, it is not clear to me how to use modules, and how to use them with Hiera. As a first experiment I wanted a user "admin" to be created on one node, and found this module: https://github.com/camptocamp/puppet-accounts
My /etc/puppet/hiera.yaml looks as simple as this:
---
:backends:
  - yaml
:hierarchy:
  - node/%{::fqdn}
  - common
:yaml:
  :datadir: /etc/puppet/hieradata
My /etc/puppet/hieradata/node/node1.example.com.yaml contains this:
---
accounts::users:
  admin:
    uid: 1010
    comment: admin
accounts::ssh_keys:
  admin:
    comment: ad
    type: ssh-rsa
    public: AAAAAAAAAAAAAA
This worked after I put this in my /etc/puppet/manifests/site.pp:
hiera_include('classes')

class { 'accounts':
  ssh_keys   => hiera_hash('accounts::ssh_keys', {}),
  users      => hiera_hash('accounts::users', {}),
  usergroups => hiera_hash('accounts::usergroups', {}),
}

accounts::account { 'admin': }
Is this good practice? To me it feels wrong to put that stuff into site.pp, since it gets messed up when I later use more modules. But where else should it go? I also don't understand how this separates data from logic, since I have data in both node1.example.com.yaml and site.pp (admin). Some help would be great.

To understand what Hiera is, think of it simply as a database for Puppet: a database of variables/values and nothing more.
For a beginner I would suggest focusing on other parts of the system first, like how to create modules and how to manage your needs (without complexity), and then slowly building the "smart" or reusable recipes...
Puppet will first look for a file called site.pp, which usually lives in your main $confdir (a puppet.conf variable; I am not going to mention environments, that is for later). If the path is /etc/puppet, inside that directory you have a manifests directory. That is the place for your site.pp.
Usually a site.pp structure looks like:
node default {
  include module
  include module2
}

node /server\.fqdn\.local/ {
  include module2
  include module3
}
This means that you have a default node: if the node name doesn't match any other node definition, Puppet will use the default; otherwise it will use the node whose regex matches the node's FQDN, in this case server.fqdn.local.
The modules (module, module2 and module3) are stored inside the $modulepath set in your puppet.conf. In this case I will use /etc/puppet/modules.
The tree will look like:
/etc/puppet/modules/
/etc/puppet/modules/module/
/etc/puppet/modules/module/manifests/
/etc/puppet/modules/module/manifests/init.pp
/etc/puppet/modules/module2/
/etc/puppet/modules/module2/manifests/
/etc/puppet/modules/module2/manifests/init.pp
/etc/puppet/modules/module3/
/etc/puppet/modules/module3/manifests/
/etc/puppet/modules/module3/manifests/init.pp
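For illustration, a minimal init.pp for the first module could look like this (a sketch; the class body and the motd file are just examples, not part of the original answer):
# /etc/puppet/modules/module/manifests/init.pp
# The class name must match the module (directory) name.
class module {
  file { '/etc/motd':
    ensure  => file,
    content => "Managed by Puppet\n",
  }
}
With that in place, "include module" in site.pp is enough for Puppet to find and apply the class.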
About classes: https://docs.puppetlabs.com/puppet/latest/reference/lang_classes.html
Generally what I explained, but from Puppet Labs: https://docs.puppetlabs.com/puppet/latest/reference/dirs_manifest.html

Please note that the example from the README
class { 'accounts':
  ssh_keys   => hiera_hash('accounts::ssh_keys', {}),
  users      => hiera_hash('accounts::users', {}),
  usergroups => hiera_hash('accounts::usergroups', {}),
}
is catering to users of Puppet versions before 3.x, which had no automatic parameter lookup. With a recent version, you should just use this manifest:
include accounts
Since the Hiera keys have appropriate names, Puppet will look them up implicitly.
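To sketch how that implicit lookup works (simplified; the real module's parameter list differs), a parameterized class such as
class accounts (
  $users      = {},
  $ssh_keys   = {},
  $usergroups = {},
) {
  # create the accounts from the hashes...
}
declared with "include accounts" makes Puppet 3+ automatically look up the Hiera keys accounts::users, accounts::ssh_keys and accounts::usergroups to fill in the parameters. That is what the explicit hiera_hash() calls above did by hand (with the caveat that, on Puppet 3, automatic lookup is a priority lookup rather than a hash merge).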

This whole thing still makes no sense to me. Since I have to put
accounts::account { 'admin': }
in a manifest file to create that user, what is Hiera useful for in this case? It doesn't separate data from logic: I have data in both the .yaml file (SSH keys, other account data) and in a manifest file (the snippet above). By using Hiera I expected to be able to create that user inside /etc/puppet/hieradata/node/node1.example.com.yaml, but this is not the case. What is the right way to do this? What is the example Hiera file of this module useful for? Wouldn't it be easier to create the account the old-style way in site.pp?

Related

Running manifests (classes) from a task or plan in Puppet Enterprise

TL;DR
In Puppet Enterprise, how do I run a manifest (testpp.pp) from a task or plan (not Bolt)?
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  $apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp': }
  }
  $apply_results.each | $result | {
    notice($result.report)
  }
}
apply_prep seems to succeed, but apply is failing with the following error:
{
  "msg" : "Evaluation Error: Unknown function: 'report'. (file: /opt/puppetlabs/server/data/orchestration-services/code/environments/development/modules/base_windows/plans/testplan.pp, line: 16, column: 19)",
  "kind" : "bolt/plan-failure",
  "details" : {
    "class" : "Bolt::PAL::PALError"
  }
}
If I change the code to:
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    # Is this how to call a class? I cannot find an example.
    class { 'base_windows::testpp': }
  }
  $apply_results.each |$result| {
    $target = $result.target.name
    if $result.ok {
      out::message("${target} returned a value: ${result.value}")
    } else {
      out::message("${target} errored with a message: ${result.error.message}")
    }
  }
}
The plan tells me it has failed, but there are no errors in the node's report. In fact, there is no entry for the time the plan was executed.
I cannot find any examples on how to call a class from a plan, so the above apply() is a guess, based on this documentation.
I have installed the puppetlabs_reboot module and successfully ran a plan using it, therefore, I conclude my system is set up correctly, it's just my code that is wrong.
Background
I may be going about this all wrong, so here is some background to the problem. Currently, I have a series of manifests that install various packages from the public Chocolatey repository depending on a node's classification. Package definitions are stored in Hiera data and each package's version is set to latest. At the end of the package {} resource, some manifests include a reboot.
These manifests are used to provision new nodes and keep existing nodes up-to-date with the latest package version.
The Puppet agent is set to run once per hour and if the source package is updated in the Chocolatey repo, on the next Puppet run, the manifest will update the package, rebooting the node, if required.
Goal
New nodes are provisioned with the latest package version.
Prevent package updates at undetermined times on existing nodes.
Continue to allow Puppet agent runs every hour.
Make use of existing manifests.
Ideas
Split the package {} code out of the profile manifests and place it in tasks/plans, allowing packages to be updated out of hours.
Specify the actual package version in Hiera. Although this is more declarative and idempotent, it means keeping an eye on over 100 package versions. I guess it would be fairly simple to interrogate the Chocolatey repos with code to pull the latest version number, but even so I am no better off.
Create a task with a script that runs choco upgrade all; however, the next Puppet run would revert package versions according to the versions defined in Hiera, meaning Hiera still needs to be kept up to date.
Problems
As per the main crux of this question, how do I run manifests (classes) from plans? If I understand correctly, tasks are for ad-hoc scripts, whereas plans can run tasks and manifests. As a lot of time has been invested in writing manifests, I would prefer not to rewrite all my manifests as scripts.
I am confused by the Puppet documentation, as it seems to switch between PE and Bolt syntax. I am using Puppet Enterprise, where Puppet says they don't recommend using Bolt, but their examples seem to cite Bolt commands.
There are no errors in the node's report. apply_prep() reports that it executed successfully, albeit taking far longer to execute than the puppetlabs_reboot module, but apply() results in a failure, and nothing is logged in the node's reports.
Using the puppetlabs_reboot module as a reference, it appears their plan uses a bunch of tasks. It appears that they don't use apply() to run their reboot {} class. Is this not duplicating the work?
If anyone has any suggestions or ideas, I'd be grateful if you could share.
I've got it to work. The class I was trying to run required parameters that I hadn't provided!
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp':
      filename => $filename,
      contents => $contents,
    }
  }
  # Output the whole result_set in the PE console
  return $apply_results
}
I found this out using the logs.
Turn on debug level logging in /etc/puppetlabs/puppetserver/logback.xml (root level="debug")
Tail the following logs:
tail -f /var/log/puppetlabs/bolt-server/bolt-server.log
tail -f /var/log/puppetlabs/puppetserver/puppetserver.log | grep -B 5 -A 5 'testplan'
tail -f /var/log/puppetlabs/orchestration-services/orchestration-services.log
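For completeness, a plan like this would then be invoked from the PE client tools along these lines (a sketch, assuming the client tools are installed; the target name and parameter values are made up):
puppet plan run base_windows::testplan targets=win-node1.example.com filename='C:/temp/test.txt' contents='hello'
Because $targets is an ordinary plan parameter of type TargetSpec, the nodes to act on are passed as targets=... like any other parameter.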

Spring Cloud Server serving multiple property files for the same application

Let's say I have applicationA, which has 3 property files:
-> applicationA
   - datasource.properties
   - security.properties
   - jms.properties
How do I move all properties to a spring cloud config server and keep them separate?
As of today I have configured the config server so that it only reads ONE property file, as this seems to be the standard way. The file the config server picks up is resolved using spring.application.name. In my case it will only read ONE file with this name:
-> applicationA.properties
How can I add the other files to be resolved by the config server?
Not possible in the way you requested. Spring Cloud Config Server uses NativeEnvironmentRepository, which is:
Simple implementation of {@link EnvironmentRepository} that uses a SpringApplication and configuration files located through the normal protocols. The resulting Environment is composed of property sources located using the application name as the config file stem (spring.config.name) and the environment name as a Spring profile.
See: https://github.com/spring-cloud/spring-cloud-config/blob/master/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/environment/NativeEnvironmentRepository.java
So basically, every time a client requests properties from the Config Server, the server creates a ConfigurableApplicationContext using SpringApplicationBuilder, launched with the following configuration property:
String config = application;
if (!config.startsWith("application")) {
    config = "application," + config;
}
list.add("--spring.config.name=" + config);
So the only possible names for property files are application.properties (or .yml) and the name of the config client application that is requesting the configuration; in your case, applicationA.properties.
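On the client side, the name the server receives as {application} is whatever the client declares, e.g. in a bootstrap.yml like this (a sketch; the URI is made up):
spring:
  application:
    name: applicationA
  cloud:
    config:
      uri: http://localhost:8888
So the server will always resolve application.properties plus applicationA.properties, and nothing else.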
But you can "cheat".
In the config server configuration you can add a property like this:
spring:
  cloud:
    config:
      server:
        git:
          search-paths: '{application}, {application}/your-subdirectory'
In this case the Config Server will search for the same property file names, but in a few directories, and you can use subdirectories to keep your properties separate.
So with the configuration above you will be able to load configuration from:
applicationA/application.properties
applicationA/your-subdirectory/application.properties
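In the Git backend, that corresponds to a repository layout roughly like this (illustrative; directory names are from the example above):
config-repo/application.properties (shared defaults)
config-repo/applicationA/application.properties (e.g. datasource settings)
config-repo/applicationA/your-subdirectory/application.properties (e.g. security/jms settings)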
This can be done.
You need to create your own EnvironmentRepository, which loads your property files.
org.springframework.cloud.config.server.support.AbstractScmAccessor#getSearchLocations searches for the property files to load:
for (String prof : profiles) {
    for (String app : apps) {
        String value = location;
        if (app != null) {
            value = value.replace("{application}", app);
        }
        if (prof != null) {
            value = value.replace("{profile}", prof);
        }
        if (label != null) {
            value = value.replace("{label}", label);
        }
        if (!value.endsWith("/")) {
            value = value + "/";
        }
        output.addAll(matchingDirectories(dir, value));
    }
}
There you could add custom code that reads the required property files.
The above code matches exactly the behaviour described in the Spring docs.
The NativeEnvironmentRepository does NOT access Git/SCM in any way, so you should use JGitEnvironmentRepository as the base for your own implementation.
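A minimal sketch of what such a custom repository could look like (hypothetical; the class name and the merging step are illustrative, only the EnvironmentRepository interface and its findOne signature come from Spring Cloud Config):
import org.springframework.cloud.config.environment.Environment;
import org.springframework.cloud.config.server.environment.EnvironmentRepository;
import org.springframework.cloud.config.server.environment.JGitEnvironmentRepository;

// Delegates standard resolution to the Git-backed repository, then merges in
// additional property sources (e.g. datasource/security/jms files).
public class MultiFileEnvironmentRepository implements EnvironmentRepository {

    private final JGitEnvironmentRepository delegate;

    public MultiFileEnvironmentRepository(JGitEnvironmentRepository delegate) {
        this.delegate = delegate;
    }

    @Override
    public Environment findOne(String application, String profile, String label) {
        Environment env = delegate.findOne(application, profile, label);
        // custom code here: locate the extra files in the checked-out repo
        // and add a PropertySource to env for each one
        return env;
    }
}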
As @nmyk pointed out, NativeEnvironmentRepository boots a mini app in order to collect the properties, providing it with, so to speak, "hardcoded" {appname}.* and application.* supported property file names. (@Stefan Isele: JGitEnvironmentRepository ends up using NativeEnvironmentRepository as well, for that matter.)
I have issued a pull request for spring-cloud-config-server 1.4.x that supports defining additional file names through a spring.cloud.config.server.searchNames environment property, in the same sense one can do for a single Spring Boot app, as defined in the "Externalized Configuration: Application Property Files" section of the documentation, using the spring.config.name environment property. I hope they review it soon, since it seems many have asked about this feature on Stack Overflow, and surely many more search for it and read the currently advised solutions.
It is worth mentioning that many people advise "abusing" the profile feature to achieve this, which is a bad practice, in my humble opinion, as I describe in this answer.

How do I find the version and authority of a Perl 6 module?

In Bar.pm, I declare a class with an authority (author) and a version:
class Bar:auth<Camelia>:ver<4.8.12> {
}
If I use it in a program, how do I see which version of a module I'm using, who wrote it, and how the module loader found it? As always, links to documentation are important.
This question was also asked on perl6-users but died before a satisfactory answer (or links to docs) appeared.
Another wrinkle in this problem is that many people aren't adding that information to their class or module definitions. It shows up in the META.json file but not the code.
(Probably not a satisfying answer, because the facts of the matter are not very satisfying, especially regarding the state of the documentation, but here it goes...)
If the module or class was versioned directly in the source code à la class Bar:auth<Camelia>:ver<4.8.12>, then any code that imports it can introspect it:
use Bar;
say Bar.^ver; # v4.8.12
say Bar.^auth; # Camelia
# ...which is short for:
say Bar.HOW.ver(Bar); # v4.8.12
say Bar.HOW.auth(Bar); # Camelia
The ver and auth methods are provided by:
Metamodel::ClassHOW (although that documentation page doesn't mention them yet)
Metamodel::ModuleHOW (although that documentation page doesn't exist at all yet)
Unfortunately, I don't think the meta-object currently provides a way to get at the source path of the module/class.
By manually going through the steps that use and require take to load compilation units, you can at least get at the prefix path (i.e. which location from $PERL6LIB or use lib or -I etc. it was loaded from):
my $comp-spec = CompUnit::DependencySpecification.new: short-name => 'Bar';
my $comp-unit = $*REPO.resolve: $comp-spec;
my $comp-repo = $comp-unit.repo;
say $comp-repo.path-spec; # file#/home/smls/dev/lib
say $comp-repo.prefix; # "/home/smls/dev/lib".IO
$comp-unit is an object of type CompUnit.
$comp-repo is a CompUnit::Repository::FileSystem.
Both documentation pages don't exist yet, and $*REPO is only briefly mentioned in the list of dynamic variables.
If the module is part of a properly set-up distribution, you can get at the meta-info defined in its META6.json (as posted by Lloyd Fournier in the mailing list thread you mentioned):
if try $comp-unit.distribution.meta -> %dist-meta {
    say %dist-meta<ver>;
    say %dist-meta<auth>;
    say %dist-meta<license>;
}

Protractor - How to separate each test into its own file and separate out variables

I have some complex Protractor tests written, but everything is in one file.
At the top of it I load all the variables, like:
var userLogin = "John";
and then use them somewhere later in the code.
What I need to do is:
1. Separate all variables into an additional file (some config file).
2. Put each test into its own file.
1. I tried to make a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, it doesn't work (the test fails with "userName is not defined"). I know there is a way to require the config file in each test script, but that's really not the best option in my eyes.
2. How can I know what I did in the last script if the tests are separate? For example, how do I know I am logged in?
Thanks.
There are multiple things you can make use of.
2) How can I know what I did in the last script if it's separate, for example how to know I am logged in?
This is where beforeEach(), afterEach() can help:
To help a test suite DRY up any duplicated setup and teardown code, Jasmine provides the global beforeEach and afterEach functions. As the name implies, the beforeEach function is called once before each spec in the describe is run, and the afterEach function is called once after each spec.
There are also beforeAll() and afterAll() available in Jasmine 2, or via the jasmine-beforeAll third-party package for Jasmine 1:
The beforeAll function is called only once before all the specs in describe are run, and the afterAll function is called after all specs finish. These functions can be used to speed up test suites with expensive setup and teardown.
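For instance, a login that only needs to happen once per suite could live in a beforeAll (a sketch; loginPage and dashboard are hypothetical page objects, not from the question):
describe("admin area", function () {
    beforeAll(function () {
        // log in once for the whole suite
        loginPage.loginAs(browser.params.user.login, browser.params.user.password);
    });

    it("shows the dashboard after login", function () {
        expect(dashboard.title.getText()).toEqual("Dashboard");
    });
});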
1) I try to make config.js where I add all variables and I required it in protractor.conf.js; it loads correctly. The problem is that when I use any of these variables in some test it's not working (test fails with "userName is not defined"). I know there is a way where I require the config file in each test script, but that's really not the best option in my eyes.
One option which I've personally used would be to create a config.js file with all the reusable configuration variables you would need in multiple tests and require the file once - in the protractor config - then set it as a params configuration key value:
var config = require("./config.js");

exports.config = {
    ...
    params: config,
    ...
};
where config.js is, for example:
var config;

config = {
    user: {
        login: "user",
        password: "password"
    }
};

module.exports = config;
Then, you would not need to require config.js in every test, but instead, you'll use browser.params. For example:
expect(browser.params.user.login).toEqual("user");
Also, if you need some sort of a global test preparation step, you can do it in onPrepare() function, see Setting Up the System Under Test. Example configuration that performs a "global" login step is available here.
And another quick note: you can have custom globally defined variables (like the built-in browser or protractor); set them using global in onPrepare(). For example, I've defined protractor.ExpectedConditions as a custom global variable:
onPrepare: function () {
    global.EC = protractor.ExpectedConditions;
}
Then, in tests, you don't need to require anything; the EC variable will be available in scope, e.g.:
browser.wait(EC.invisibilityOf(scope.page.dropdown), 5000)
Also, organizing your tests using the Page Object pattern would help to solve the reusability and modularity problems.
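A minimal sketch of that pattern (hypothetical file and selector names, assuming Protractor with Jasmine):
// login-page.js - encapsulates the login screen behind a small API
var LoginPage = function () {
    this.userInput = element(by.model("user.login"));
    this.passwordInput = element(by.model("user.password"));
    this.submit = element(by.buttonText("Log in"));

    this.loginAs = function (login, password) {
        this.userInput.sendKeys(login);
        this.passwordInput.sendKeys(password);
        this.submit.click();
    };
};
module.exports = new LoginPage();

// in a spec file:
var loginPage = require("./login-page.js");
loginPage.loginAs(browser.params.user.login, browser.params.user.password);
Each spec file then depends on small page modules instead of duplicating selectors and login steps.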

How does R3 use the Needs field of a script header? Is there any impact on namespaces?

I'd like to know the behaviour of R3 when processing the Needs field of a script header and what implications for word binding it has.
Background. I'm currently trying to port some R2 scripts to R3 in order to learn R3. In R2 the Needs field of a script header was essentially just documentation, though I made use of it with a custom function to reference scripts that are required to make my script run.
R3 appears to run the Needs-referenced scripts itself, but the binding seems different from DOing the other scripts.
For example, when %test-parent.r is:
REBOL [
    title: {test parent}
    needs: [%test-child.r]
]
parent: now
?? parent
?? child
and %test-child.r is:
REBOL [
    title: {test child}
]
child: now
?? child
R3 Alpha (Saphirion build 22-Feb-2013/11:09:25) returns:
>> do %test-parent.r
Script: "test parent" Version: none Date: none
child: 9-May-2013/22:51:52+10:00
parent: 9-May-2013/22:51:52+10:00
** Script error: child has no value
** Where: get ajoin case ?? catch either either -apply- do
** Near: get :name
I don't understand why test-parent cannot access Child set by %test-child.r
If I remove the Needs field from test-parent.r header and instead insert a line to just DO %test-child.r then there is no error and the script performs as expected.
Ah, you've run into Rebol 3's policy to "do what you say, it can't read your mind". R3's Needs header is part of its module system, so anything you load with Needs is actually imported as a module, even if it isn't declared as such.
Loading scripts with Needs is a quick way to get them treated as modules even if the original author didn't declare them as such. Modules get their own contexts where their words are defined. Loading a script as a module is a great way to use a script that isn't tidy, one that leaks words into the shared script context. Your %test-child.r script leaks the word child into the script context; what if you didn't want that to happen? Load it with Needs or import and that will clean that right up.
If you want a script treated as a script, use do to run it. Regular scripts use a (mostly) shared context, so when you do a script it has effect on the same context as the script you called it from. That is why the child: now statement affected child in the parent script. Sometimes that's what you want to do, which is why we worked so hard to make scripts work that way in R3.
If you are going to use Needs or import to load your own scripts, you might as well make them modules and export what you want, like this:
REBOL [
    type: module
    title: {test child}
    exports: [child]
]
child: now
?? child
As before, you don't even have to include the type: module if you are going to be using Needs or import anyway, but it would help just in case you run your module with do. R3 assumes that if you declare your module to be a module, that you wrote it to be a module and depend on it working that way even if it's called with do. At the least, declaring a type header is a stronger statement than not declaring a type header at all, so it takes precedence in the conflicting "do what you say" situation.
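With the child rewritten as a module like that, the parent script from the question should work unchanged (a sketch of the expected behaviour, not verified output):
REBOL [
    title: {test parent}
    needs: [%test-child.r]
]
parent: now
?? parent
?? child   ; now resolves, because CHILD is exported by the module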
Look here for more details about how the module system works: How are words bound within a Rebol module?