How to purge unmanaged NFS mounts in Puppet? - nfs

How can I purge all unmanaged NFS mounts with Puppet?
Example 1: The following Puppet code purges all users not explicitly managed by Puppet:
resources { "user":
purge => true,
}
Example 2: The following code purges all unmanaged Nginx virtual hosts:
file { "/etc/nginx/sites-enabled/":
recurse => true,
purge => true,
}
But how can I purge all unmanaged NFS mounts?
Here's what I tried. I have my own definition for NFS mounts:
define nfs-client::mount() {
  ...
}
However, the following did not work:
resources { "nfs-client::mount":
purge => true,
}

Based on this bug report, I don't believe this works on defines (which are collections of resources), only on types (built-in or custom) that implement an initialize() method (these are individual resources).
However, as "mount" is a built in resource, you should be able to just do:
resources{'mount': purge => true}
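For completeness, a sketch of how this might look next to a managed mount (the server name and paths are placeholders, not from the question). Be aware that purging the mount type removes every fstab entry Puppet does not manage, not only NFS mounts, so declare everything you want to keep:
# Hypothetical managed NFS mount; every mount Puppet does not manage gets purged.
mount { '/mnt/data':
  ensure  => mounted,
  device  => 'nfs-server.example.com:/export/data',
  fstype  => 'nfs',
  options => 'defaults',
}

resources { 'mount':
  purge => true,
}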

Related

How can I use environment specific variables in micronaut?

I'm new to Micronaut and server-side programming in general. The Micronaut documentation, unfortunately, does not make a lot of sense to me, as I do not have a Java background. A lot of the terms like "ApplicationContext" make sense in English, but I have no idea how to use them in practice.
I'm trying to start with a very basic app that prints different configurations ("localhost", "dev", "prod") depending on the environment it is in.
Here's my controller:
@Controller("/")
class EnvironmentController {
    // this should return "localhost", "DEV", "PROD" depending on the environment
    @Get("/env")
    @Produces(MediaType.TEXT_PLAIN)
    fun env() = "???" // what should I put here?

    // this should return the correct mongodb connection string for the environment
    @Get("/mongo")
    @Produces(MediaType.TEXT_PLAIN)
    fun mongo() = "???" // what should I put here?
}
Here's the application.yml. Ideally I'd have 1 yml file for each environment
micronaut:
  application:
    name: myApp
  server:
    port: 8090
environment: localhost
mongodb:
  uri: 'mongodb://localhost:27017'
Application.kt is untouched, as are the rest of the files generated by the mn CLI tool. How can I set per-environment parameters, or pass the yml file as a parameter when starting Micronaut?
Are there any conventions around this?
You can specify an environment with -Dmicronaut.environments, or by specifying it in the context builder (Micronaut.run) in your Application class.
https://docs.micronaut.io/latest/guide/index.html#environments
Then, for example, application-env.yml will be loaded.
https://docs.micronaut.io/latest/guide/index.html#propertySource
The docs are pretty clear on this:
By default Micronaut only looks for application.yml. Then, for the test, dev and prod environments, it loads application.yml and overrides any values there with the ones defined in application-test.yml, application-dev.yml and application-prod.yml respectively.
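For example (my own illustration, with hypothetical values), an application-dev.yml next to the question's application.yml only needs the keys that differ:
# application-dev.yml - merged over application.yml when the dev environment is active
mongodb:
  uri: 'mongodb://dev-mongo.internal:27017'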
If you want to enable any other environment, you need to do it manually
public static void main(String[] args) {
    Micronaut.build(args)
        .mainClass(Application.class)
        .defaultEnvironments("dev")
        .start();
}
https://docs.micronaut.io/latest/guide/index.html#_default_environment
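To tie this back to the controller in the question, here is a minimal sketch (mine, not from the answers) of how the endpoints could read those values once the per-environment files exist. It assumes mongodb.uri is defined in each application-<env>.yml, and uses Micronaut's Environment bean and the @Value annotation:
import io.micronaut.context.annotation.Value
import io.micronaut.context.env.Environment
import io.micronaut.http.MediaType
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get
import io.micronaut.http.annotation.Produces

@Controller("/")
class EnvironmentController(
    private val environment: Environment,                   // injected by Micronaut
    @Value("\${mongodb.uri}") private val mongoUri: String   // resolved from the active environment's yml
) {
    // returns the active environment names, e.g. "dev"
    @Get("/env")
    @Produces(MediaType.TEXT_PLAIN)
    fun env() = environment.activeNames.joinToString(",")

    // returns whatever mongodb.uri resolves to for the active environment
    @Get("/mongo")
    @Produces(MediaType.TEXT_PLAIN)
    fun mongo() = mongoUri
}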

How to handle JWT authentication with RxDB?

I have a local RxDB database and I want to connect it with CouchDB. Everything seems to work fine except for authentication. I have no idea how to add it other than inserting credentials into the database URL:
database.tasks.sync({
  remote: `http://${username}:${pass}@127.0.0.1:5984/tododb`,
});
I would like to use JWT auth but can't find how to add a token to the sync request. I found only some solutions for PouchDB (the pouchdb-authentication plugin) but can't get them working with RxDB.
RxDB is tightly coupled with PouchDB and uses its sync implementation under the hood. To my understanding, the only way to add custom headers to a remote PouchDB instance (which is what is created for you when you pass a URL as the remote argument to sync) is to intercept the HTTP request:
var db = new PouchDB('http://example.com/dbname', {
  fetch: function (url, opts) {
    opts.headers.set('X-Some-Special-Header', 'foo');
    return PouchDB.fetch(url, opts);
  }
});
PouchDB replication documentation (sync) also states that:
The remoteDB can either be a string or a PouchDB object. If you have a fetch override on a remote database, you will want to use PouchDB objects instead of strings, so that the options are used.
Luckily, RxDB's RxCollection.sync accepts not only a server URL as the remote argument, but also another RxCollection or a PouchDB instance.
RxDB even re-exports the internally used PouchDB module, so you do not have to install PouchDB as a direct dependency.
import { ..., PouchDB } from 'rxdb';
// ...
const remotePouch = new PouchDB('http://127.0.0.1:5984/tododb', {
  fetch: function (url, opts) {
    opts.headers.set('Authorization', `Bearer ${getYourJWTToken()}`);
    return PouchDB.fetch(url, opts);
  }
});
database.tasks.sync({
  remote: remotePouch,
});
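A usage note (mine, not from the original answer): the fetch override runs on every HTTP request, so getYourJWTToken() can simply return the current token and refreshed tokens are picked up automatically. If you also want continuous replication, sync() accepts PouchDB replication options; the option names below assume the RxDB 8.x API:
database.tasks.sync({
  remote: remotePouch,
  waitForLeadership: true, // replicate only from the elected leader tab
  options: {
    live: true,            // keep replicating after the initial sync
    retry: true            // retry automatically on connection loss
  }
});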

In Ratpack, how can I configure loading configuration from an external file?

I have a Ratpack app written with the Groovy DSL. (Embedded in Java, so not a script.)
I want to load the server's SSL certificates from a config file supplied in the command line options. (The certs will be directly embedded in the config, or possibly be in a PEM file referenced somewhere in the config.)
For example:
java -jar httpd.jar /etc/app/sslConfig.yml
sslConfig.yml:
---
ssl:
  privateKey: file:///etc/app/privateKey.pem
  certChain: file:///etc/app/certChain.pem
I seem to have a chicken-and-egg problem using the serverConfig's facilities for reading the config file in order to configure the SslContext later in the serverConfig. The server config isn't created at the point I want to load the SslContext.
To illustrate, the DSL definition I have is something like this:
// SSL Config POJO definition
class SslConfig {
    String privateKey
    String certChain
    SslContext build() { /* ... */ }
}
// ... other declarations here...
Path configPath = Paths.get(args[1]) // get this path from the CLI options
ratpack {
    serverConfig {
        yaml "/defaultConfig.yaml"   // Defaults defined in this resource
        yaml configPath              // The user-supplied config file
        env()
        sysProps('genset-server')
        require("/ssl", SslConfig)   // Map the config to a POJO
        ssl sslConfig                // HOW DO I GET AN INSTANCE OF that SslConfig POJO HERE?
        baseDir BaseDir.find()
    }
    handlers {
        get { // ...
        }
    }
}
Possibly there is a solution to this (loading the SSL context in a later block?)
Or possibly just a better way to go about the whole thing..?
You could create a separate ConfigDataBuilder to load up a config object to deserialize your ssl config.
Alternatively, you can bind directly to server.ssl. All of the ServerConfig properties bind to the server space within the config.
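A minimal sketch of the first suggestion (mine, assuming the configPath and SslConfig POJO from the question): build a small, separate ConfigData just for the SSL section up front, then hand the result to serverConfig:
import ratpack.config.ConfigData

// Load just enough configuration, before the server config is built, to create the SslContext
def sslData = ConfigData.of { builder ->
    builder.yaml(configPath)
}
SslConfig sslConfig = sslData.get("/ssl", SslConfig)

ratpack {
    serverConfig {
        // ... same config sources as before ...
        ssl sslConfig.build()   // build() returns the SslContext, as defined on the POJO
        baseDir BaseDir.find()
    }
    // ...
}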
The solution I am currently using is this, with the addition of a builder() method to SslConfig which returns an SslContextBuilder configured from its other fields.
ratpack {
    serverConfig {
        // Defaults defined in this resource
        yaml RatpackEntryPoint.getResource("/defaultConfig.yaml")
        // Optionally load the config path passed via the configFile parameter (if not null)
        switch (configPath) {
            case ~/.*[.]ya?ml/: yaml configPath; break
            case ~/.*[.]json/: json configPath; break
            case ~/.*[.]properties/: props configPath; break
        }
        env()
        sysProps('genset-server')
        require("/ssl", SslConfig) // Map the config to a POJO
        baseDir BaseDir.find()
        // This is the important change.
        // It apparently needs to come last, because it prevents
        // later config directives working without errors
        ssl build().getAsConfigObject('/ssl', SslConfig).object.builder().build()
    }
    handlers {
        get { // ...
        }
    }
}
Essentially this performs an extra build of the ServerConfig in order to redefine the input to the second build, but it works.

How To Create REST Endpoints and Remote Methods in Loopback at Runtime Without Restarting Node?

We have successfully created REST remote methods in code in Loopback using a boot script, which has allowed us to completely eliminate the JSON schema files. However, our ultimate goal is to be able to create a new REST endpoint and remote methods at runtime, on the fly, at any time after startup. In the example below the new endpoint ('newEndPoint') should be created by calling /api/example/createNewMethod, but 'newMethod' is not being exposed in the REST API. Here's the code:
// **************
// Initialize 'createNewMethod' in a boot script. (THIS IS WORKING)
// **************
model.createNewMethod = function createNewMethod(data, callback) {
  // **************
  // Initialize 'newMethod' at runtime by calling createNewMethod
  // **************
  console.log("Initializing 'newMethod'...");

  // This is called by calling /api/example/newMethod
  model.newMethod = function newMethod(data, callback) {
    // THIS IS NOT WORKING
    console.log("'newMethod' works!");

    // Return from newMethod()
    callback({return: true});
  };

  model.remoteMethod(
    'newMethod',
    {
      http: { verb: 'get' },
      returns: [
        { arg: 'eventinfo', type: 'data' },
      ]
    }
  );

  // Return from createNewMethod()
  callback({return: true});
};

model.remoteMethod(
  'createNewMethod',
  {
    http: { verb: 'get' },
    returns: [
      { arg: 'eventinfo', type: 'data' }
    ]
  }
);
This issue has been resolved. We can allow users to add new tables or new columns to existing tables without restarting Node, and expose the changes via a Loopback REST endpoint, by simply recreating the endpoint.

yii-user extension include(WebUser.php) No such file or directory

I'm trying to install the yii-user extension following this official tutorial:
http://www.yiiframework.com/extension/yii-user/#hh2
But I'm having some problems, especially when I add this
'user'=>array(
    // enable cookie-based authentication
    'class' => 'WebUser',
    'allowAutoLogin'=>true,
    'loginUrl' => array('/user/login'),
),
to the main configuration. When I add this code, I get this error message:
include(WebUser.php) [function.include]: failed to open stream: No such file or directory
Any clue? Do I need to do something first?
Thanks in advance
I searched a little and found the solution, but it wasn't in the documentation.
You should create WebUser.php in protected/components like this:
<?php
// this file must be stored in:
// protected/components/WebUser.php
class WebUser extends CWebUser {

    // Store model to not repeat the query.
    private $UserLogin;

    // Return first name.
    // access it by Yii::app()->user->first_name
    function getFirst_Name() {
        $user = $this->loadUserLogin(Yii::app()->user->user_id);
        return $user->first_name;
    }

    // This is a function that checks the field 'user_role_id'
    // in the User model to be equal to 1, which means admin.
    // access it by Yii::app()->user->isAdmin()
    function isAdmin() {
        $user = $this->loadUserLogin(Yii::app()->user->user_id);
        return intval($user->user_role_id) == 1;
    }

    // Load user model.
    protected function loadUserLogin($id = null) {
        if ($this->UserLogin === null) {
            if ($id !== null)
                $this->UserLogin = UserLogin::model()->findByPk($id);
        }
        return $this->UserLogin;
    }
}
?>
and it should work.
Did you follow the instructions at http://www.yiiframework.com/extension/yii-user/#hh2?
You probably forgot to specify import paths to the user module in config.php
'import'=>array(
    ...
    'application.modules.user.models.*',
    'application.modules.user.components.*',
),
I had the same problem and found that it was a permissions problem: the Apache user (www-data in my case) couldn't access the protected/modules/users/* files.
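If that is the cause, something along these lines usually fixes it (a sketch only; adjust the user/group and the module path to your setup):
# give the web server user ownership of the module files
chown -R www-data:www-data protected/modules/user
# directories must be listable, files readable
find protected/modules/user -type d -exec chmod 755 {} \;
find protected/modules/user -type f -exec chmod 644 {} \;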