Puppet - how to set up a conditional to check whether a package is installed and, if it is not installed, skip it?

Sorry, but I'm starting my journey in Puppet and I'm facing my first challenge with the tool.
I want to check in Puppet whether a package is installed on a server; if it is, guarantee it is running, and if it is NOT installed, just skip to the next task.
class profile::windows::rapid7 {
  $manage_rapid7         = lookup('manage_rapid7', Optional[Boolean], 'first', true)
  $rapid7_filepath       = 'C:\Program Files\Rapid7\Insight Agent\ir_agent.exe'
  $rapid7_service_exists = find_file($rapid7_filepath)

  if $facts['kernel'] == 'Windows' {
    if $manage_rapid7 {
      if $rapid7_service_exists {
        service { 'ir_agent':
          ensure => 'running',
          enable => true,
        }
      }
    }
  }
}
As you can see above, if Rapid7 is installed I make sure it is running, but I now have some servers that don't have this package, and because of that I'm getting an error.
So my question is:
Is it possible to just skip this task when the package isn't installed?
Best Regards,

I want to check in Puppet whether a package is installed on a server; if it is, guarantee it is running, and if it is NOT installed, just skip to the next task.
The most appropriate approach is for Puppet to know, based on node identity, whether the package is supposed to be installed. This is the realm of node classification, generally realized via node blocks, via an external node classifier, and/or via node-specific external data. Using these tools, you would inform Puppet during catalog building whether to apply the class that manages the service in question, and with what class parameters. It would be typical to also choose on the same basis whether to apply a class that manages the package itself. You can find many examples of this approach on the Forge.
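Since the class in the question already does lookup('manage_rapid7', ...), one concrete sketch of this approach is to set that key in node-specific Hiera data (the per-node data path below is hypothetical):

# hieradata/nodes/some-server.example.com.yaml
manage_rapid7: false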
But where you do need to feed data about the current state of a node into the catalog-building process, the Puppet mechanism that serves is facts. (Note that find_file() in your example is evaluated on the Puppet server during catalog compilation, so it cannot see files on the agent node.) It would be possible to write a custom or external fact that tests whether the package is installed. You could then write Puppet conditional statements to influence the contents of the node's catalog based on the value of that fact. Note well, however, that this speaks to node state before the run, and also that it is superfluous if you want Puppet to manage the installation status of the package.
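A minimal sketch of the fact-based route, assuming a hypothetical external fact named rapid7_installed distributed to the agents' facts.d directory (external fact values are strings, hence the string comparison):

# C:\ProgramData\PuppetLabs\facter\facts.d\rapid7_installed.ps1 (hypothetical)
if (Test-Path 'C:\Program Files\Rapid7\Insight Agent\ir_agent.exe') {
  Write-Output 'rapid7_installed=true'
} else {
  Write-Output 'rapid7_installed=false'
}

The class would then branch on the fact instead of on find_file():

if $facts['rapid7_installed'] == 'true' {
  service { 'ir_agent':
    ensure => 'running',
    enable => true,
  }
}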

Related

How to run Odoo with OCA repositories' modules in Odoo.sh?

I am testing Odoo.sh, trying to run an Odoo 15 Enterprise instance. I read all the documentation and watched several webinars about it, but I am not able to run an instance with any OCA module.
To do that, I followed these steps:
In the Odoo.sh interface, I created a new branch in the Development category, forking from the main branch (the one in the Production category). Note: the main branch is the one created by default by Odoo.sh; I didn't make any modifications to it, and in fact it works fine, I can connect to it.
Also in the Odoo.sh interface, I clicked on the Submodule button and then on Run on Odoo.sh. In the pop-up that opened, I added the OCA repository l10n-spain (version 15.0, of course). The repository works perfectly on a local server. In fact, you can try with any other OCA repository; the result is going to be the same.
After doing that, Odoo.sh adds the repo to the project with a new [ADD] commit and tries to make a build of it. However, the tests always fail.
If I go to the log, first, in the install.log section, I can see errors with pip libraries, so I open a shell and try to fix them with pip3 check, then adjust the versions of the libraries it complains about.
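For illustration, that loop looks roughly like this (the package names and output below are hypothetical):

$ pip3 check
somepackage 1.0.0 has requirement otherlib>=2.0, but you have otherlib 1.5.0.
$ pip3 install 'otherlib>=2.0'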
After that, when I try to connect to the new build, the odoo.log starts being filled but also with errors, particularly this one:
WARNING xxx odoo.addons.base.models.ir_cron: Tried to poll an undefined table on database xxx.
ERROR xxx odoo.sql_db: bad query:
SELECT latest_version
FROM ir_module_module
WHERE name='base'
ERROR: relation "ir_module_module" does not exist
LINE 3: FROM ir_module_module
^
This error usually appears when you do a broken installation of Odoo, but the installation is done by Odoo.sh, so... how can I fix this?
Has anyone experienced the same? Any ideas? Maybe the Python libraries are the problem?
One problem can be that the requirements file breaks the installation. Odoo.sh tries to install it automatically, and because Odoo.sh is using outdated Python modules, the installation usually breaks.
https://github.com/OCA/l10n-spain/blob/15.0/requirements.txt
You can try to copy the required modules directly to your repository.
Well, in the end I managed to connect to the build after opening a shell and running these commands:
odoosh-restart http
odoo-update all
I still haven't checked which of them did the trick.

Unable to get image details : Environment version Autosave_(date)T(time)Z_******** provided in request doesn't match environ

On an AzureML batch endpoint, I've recently been hitting the following error:
Unable to get image details : Environment version Autosave_(date)T(time)Z_******** provided in request doesn't match environ.
when I set up the batch endpoint with a YAML config:
environment: azureml:env-name:env-version
So AzureML creates and builds the environment with the version I specify in env-version, which is just a number (in my case, 3).
Then, for some weird reason, AzureML creates an extra environment version called Autosave_(date)T(time)Z_********, which is not built but is based on the one just created, and it becomes the latest version of that environment.
In summary, instead of looking for the version I specified, env-name:3, AzureML seems to be looking for env-name:Autosave_(date)T(time)Z_******** and then throws the error message mentioned above.
I found the problem was that, when creating an environment from a YAML specification file, one of my conda dependencies was cmake, which I needed to allow installation of another Python module. The Docker image was exactly the same as that of a previously created environment.
Removing the cmake dependency from the YAML file eliminated the issue. So the workaround is to install it using a Dockerfile.
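A minimal sketch of that workaround (the base image below is only illustrative; use whatever your environment already builds on): drop cmake from the conda dependencies and bake it into the image instead.

FROM mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04
RUN apt-get update && apt-get install -y cmake && rm -rf /var/lib/apt/lists/*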
The error message was very misleading to start with, but I got there in the end after understanding that AzureML reuses a cached image, based on the hash value computed from the environment definition, according to this.
So, for that reason, the automatically created Autosave image references that same build, which only happens once, when the first job is sent.

setting NODE_EXTRA_CA_CERTS with dotenv does not work as an export

I feel puzzled by the following behavior. In the very beginning of my main index.js, I am using
require('dotenv').config();
console.log(process.env); // everything seems in order
I know that the rest of my code successfully accesses all the relevant process.env.${VARS}. However, I get SSL exceptions; exceptions that I can easily solve by
export NODE_EXTRA_CA_CERTS=/some/absolute/path/to/ca.pem
npm start
Is there something special about NODE_EXTRA_CA_CERTS that would explain why this specific variable set with require('dotenv').config() does not work, while the others work like a charm?
Does it need to be set before running npm? If so, why is that the case, and is there any workaround so I could keep things simple?
Environment:
dotenv 16.0.0
node v16.13.2
Near-duplicate: How to properly configure node.js to use Self Signed root certificates?
Your problem is not in npm. npm start runs your application, typically (but not necessarily) by running node (or whatever its spelling is on your platform) to run your JS code. When you use node to run JS, NODE_EXTRA_CA_CERTS is read and saved in the C-code part of node at startup, before it begins executing JS, and subsequent changes to JS variables like process.env do not affect it.
The clean way to do this in js is to pass the desired CAlist -- which can consist of the standard list (from tls.rootCertificates) plus any additions (or replacements or deletions) you choose -- in the (relevant) TLS socket creation, or any https request that implicitly creates a TLS socket; or alternatively to use --use-openssl-ca and select an OpenSSL-format store provided by your system (modified if necessary by system means like update-ca-certificates on Debian/Ubuntu) or one you create.
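A minimal sketch of that per-connection route (the URL below is hypothetical; the CA path is the one from the question):

const fs = require('fs');
const tls = require('tls');
const https = require('https');

// Build a CA list: the bundled standard roots plus your extra CA
const ca = [
  ...tls.rootCertificates,
  fs.readFileSync('/some/absolute/path/to/ca.pem', 'utf8'),
];

// Pass it on the request; it is used for the implicitly created TLS socket
https.get('https://internal.example.com/', { ca }, (res) => {
  console.log('status:', res.statusCode);
});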
Or when using npm as you do, it should be possible to configure your package.json to set the envvar before running the application in node.
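For example, a package.json sketch (the path is the one from the question; this form assumes a POSIX shell, so on Windows you would need a helper such as cross-env):

{
  "scripts": {
    "start": "NODE_EXTRA_CA_CERTS=/some/absolute/path/to/ca.pem node index.js"
  }
}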
If you can't do either/any of those, especially where you control the toplevel (and startup) but call libraries you can't [safely] change, see the Q I linked above. For https connections that use the default https.globalAgent, you can (documentedly) set that per the A there. For all connections, you can monkeypatch tls.createSecureContext to use the undocumented context.addCACert as in the Q, which the OP confirmed in the A does actually work when using a correct cert.
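Roughly, that monkeypatch looks like the sketch below (it relies on the undocumented native addCACert, so it may break across Node versions; the path is the one from the question):

const fs = require('fs');
const tls = require('tls');

const origCreateSecureContext = tls.createSecureContext;
const extraCa = fs.readFileSync('/some/absolute/path/to/ca.pem', 'utf8');

// Every TLS context created from here on also trusts the extra CA
tls.createSecureContext = (options) => {
  const ctx = origCreateSecureContext(options);
  ctx.context.addCACert(extraCa); // undocumented native API
  return ctx;
};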

Sitecore databases missing after SIFLess install

I am using SIFLess to install Sitecore 9.1 Update 1 on my local machine in order to get started with development with my team. However, the install is not creating certain databases on my system that are needed to get up and running, most notably the Reporting database. This of course causes problems when I deploy code from my team's repo to my local instance as it references these databases. I see that the SIFLess-generated PowerShell script has calls to a 'RemoveDatabase' function that references these databases in the uninstall method, but no code to create them in the first place during an install. The missing databases are:
MarketingAutomation
Messaging
Processing.Pools
ProcessingEngineStorage
ProcessingEngineTasks
ReferenceData
Reporting
Xdb.Collection.Shard0 and 1
Xdb.Collection.ShardMapManager
These are what I have gleaned from the uninstall logic in the PowerShell script generated by SIFLess. Again, no logic exists to create them in the first place in the install section. My team members all have these databases on their systems. What am I doing wrong? I am a Sitecore novice here.
Please make sure you are using the right package: you have to download the XP package, not XM (just to be sure here). Afterwards, the database installation is done with DacPacs found within the Sitecore Web Deploy Package (*.scwdp).
Please also make sure that within this scwdp you can see (you can double-click or extract it) the missing databases:
MarketingAutomation
Messaging
Processing.Pools
ProcessingEngineStorage
ProcessingEngineTasks
ReferenceData
And do the same with the xConnect SCWDP, making sure you see the missing databases there:
Xdb.Collection.Shard0
Xdb.Collection.Shard1
Reporting
Sometimes, if you have tried the installation script more than once, you can get some undesired behavior. You are possibly trying to go forward with the wrong certificates. Also, some services may actually have been created on previous installation attempts.
Here is what I think should help you get through.
Clean your workspace:
Remove the databases related to the installation, if they exist.
Remove your certificates (using certlm: type "cert" in your Windows search bar and you should be able to pick "Manage Computer Certificates").
On the left sidebar, click on Personal > Certificates.
Remove your installation-related certificates:
nameOfYourInstallation.identityserver
nameOfYourInstallation.sc
nameOfYourInstallation.xconnect
Open your Windows Services Manager (type "services" in your Windows search bar and select the Services app).
You should be able to see these services:
Sitecore Marketing Automation Engine - nameOfYourInstallation (might be from one of your previous installs)
Sitecore Processing Engine - nameOfYourInstallation
Sitecore XConnect Search Indexer - nameOfYourInstallation
Write those down. Keep your Services app open.
Using NSSM (probably already installed from one of your previous installs; if not, you can get it with Chocolatey: https://chocolatey.org/packages/NSSM ), remove those services.
In a cmd: nssm remove serviceName (see the sketch after these steps).
Note that you can also remove them by right-clicking etc.; I just prefer the nssm way.
When it's done, restart your computer (some services end up in a pending-removal state and need a restart to be completely removed).
Try to install again.
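For the nssm step above, the removal commands would look roughly like this (using the placeholder instance name from the service list; the trailing confirm skips nssm's confirmation prompt):

nssm remove "Sitecore Marketing Automation Engine - nameOfYourInstallation" confirm
nssm remove "Sitecore Processing Engine - nameOfYourInstallation" confirm
nssm remove "Sitecore XConnect Search Indexer - nameOfYourInstallation" confirm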
Hope it helps, cheers!

Ignore packagist.org on composer install | update

I'm using Composer to manage our internal software dependencies. Our repository server is on our private network, and we aren't using any package from any repository other than ours.
Every time you run
composer.phar [install | update]
It checks the packagist.org repositories after checking our own repository. Besides being unnecessary, it takes longer when Packagist is slow (or even down) or when our internet connection is having a bad day.
Is there any way to tell composer to ignore checking for packagist repositories?
Yes, and it is even documented at https://getcomposer.org/doc/05-repositories.md#disabling-packagist-org
You may try to use this command:
$ composer config repositories.packagist false
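Equivalently, per the documentation linked above, you can disable Packagist directly in composer.json (newer Composer versions spell the key packagist.org; older ones used packagist):

{
    "repositories": [
        {
            "packagist.org": false
        }
    ]
}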
You probably want to have a look at Satis: http://getcomposer.org/doc/articles/handling-private-packages-with-satis.md
It will make your life easier if you deal with more than a few local/private packages, because otherwise you'd have to mention EVERY repository that might host required code. And you can use Satis to grab a copy of the package versions into ZIP files that can be hosted locally as well. See http://www.naderman.de/slippy/src/?file=2012-11-22-You-Thought-Composer-Couldnt-Do-That.html#13 for some hints on how to do it (press the left/right cursor keys to step through the presentation).
For extra bonus points, you'd add packagist.org as a Composer repository to Satis, require some needed packages, and set { "require-dependencies": true } to grab their dependencies as well. In your own code, you'd only set your Satis repository and disable Packagist.
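A minimal satis.json sketch of that setup (the name, homepage, and required package below are hypothetical):

{
    "name": "acme/internal-repo",
    "homepage": "https://composer.example.internal",
    "repositories": [
        { "type": "composer", "url": "https://packagist.org" }
    ],
    "require": {
        "monolog/monolog": "^1.0"
    },
    "require-dependencies": true
}

Your projects would then list only the Satis repository and disable Packagist as shown in the other answer.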