LDAP log file is not appearing when auth_ldap.log = network in RabbitMQ

I've created a rabbitmq.conf and advanced.config for RabbitMQ intended to allow LDAP authentication with internal fallback. Because RabbitMQ defaults to the installing user's AppData, which is a terrible design for a Windows service, I've also redirected its locations with environment variables:
RABBITMQ_BASE = D:\RabbitMQData\
RABBITMQ_CONFIG_FILE = D:\RabbitMQData\config\rabbitmq.conf
RABBITMQ_ADVANCED_CONFIG_FILE = D:\RabbitMQData\config\advanced.config
The config locations appear to be working correctly as they are referenced in the startup information and cause no errors on startup.
rabbitmq.conf (trimmed to relevant portions)
auth_backends.1 = ldap
auth_backends.2 = internal
auth_ldap.servers.1 = domain.local
auth_ldap.use_ssl = true
auth_ldap.port = 636
auth_ldap.dn_lookup_bind = as_user
auth_ldap.log = network
log.dir = D:\\RabbitMQData\\log
log.file.level = info
log.file.rotation.date = $D0
log.file.rotation.size = 10485760
advanced.config
[
  {rabbitmq_auth_backend_ldap, [
    {ssl_options, [{cacertfile,           "D:\\RabbitMQData\\SSL\\ca.pem"},
                   {certfile,             "D:\\RabbitMQData\\SSL\\server_certificate.pem"},
                   {keyfile,              "D:\\RabbitMQData\\SSL\\server_key.pem"},
                   {verify,               verify_peer},
                   {fail_if_no_peer_cert, true}
                  ]},
    {user_bind_pattern, ""},
    {user_dn_pattern, ""},
    {dn_lookup_attribute, "sAMAccountName"},
    {dn_lookup_base, "DC=domain,DC=local"},
    {group_lookup_base, "OU=Groups,DC=domain,DC=local"},
    {vhost_access_query, {in_group, "cn=RabbitUsers,OU=Groups,DC=domain,DC=local"}},
    {tag_queries, [
      {administrator, {in_group, "CN=RabbitAdmins,OU=Groups,DC=domain,DC=local"}},
      {management,    {in_group, "CN=RabbitAdmins,OU=Groups,DC=domain,DC=local"}}
    ]}
  ]}
].
I'm using auth_ldap.log = network, so I expected an auth_ldap log file in my log directory that would help me troubleshoot, but it's not there. Why would this occur? I've not seen any documented settings for LDAP logging other than auth_ldap.log, so I assumed the output would end up with the other logs.
I'm currently running into issues with LDAP, specifically the error LDAP bind error: "xxxx" anonymous_auth. As I'm using simple bind via auth_ldap.dn_lookup_bind = as_user, I should not be getting anonymous authentication. Without the detailed log, however, I can't get any additional information.

Okay, it looks like I made two mistakes here:
Going back and re-reading, it appears I misinterpreted the documentation and believed auth_ldap.log referred to a separate log file rather than just a setting. All LDAP logging goes into the normal RabbitMQ log.
I had pulled Luke Bakken's config from https://groups.google.com/g/rabbitmq-users/c/Dby1OWQKLs8/discussion but the following lines ended up as:
{user_bind_pattern, ""},
{user_dn_pattern, ""}
instead of
    {user_bind_pattern, "${username}"},
    {user_dn_pattern, "${ad_user}"},
I had used a PowerShell script with a here-string to create the config file, and because it was an expandable (double-quoted) here-string, PowerShell interpolated ${username} and ${ad_user} as empty strings. Fixing that (for example, by using a literal single-quoted here-string or escaping the $) let me log on with "domain\username".

Related

Groovy URL getText() returns a PasswordAuthentication instance

I am trying to download the content of a password-protected Gerrit URL in a Jenkins pipeline Groovy script. HTTPBuilder is not accessible so I am using the URL class with Authenticator:
// To avoid the pipeline bailing out, since PasswordAuthentication is non-serializable
@NonCPS
def getToString(data) {
    data.toString()
}

def fetchCommit(host, project, version) {
    withCredentials([usernamePassword(credentialsId: 'my-credentials',
                                      usernameVariable: 'user',
                                      passwordVariable: 'PASSWORD')]) {
        proj = java.net.URLEncoder.encode(project, 'UTF-8')
        echo "Setting default authentication"
        Authenticator.default = {
            new PasswordAuthentication(env.user, env.PASSWORD as char[])
        } as Authenticator
        echo "https://${host}/a/projects/${proj}/commits/${version}"
        url = "https://${host}/a/projects/${proj}/commits/${version}".toURL()
        result = getToString(url.getText())
        echo "${result}"
    }
}
The result is a PasswordAuthentication instance, and not the expected data:
[Pipeline] echo
java.net.PasswordAuthentication@3938b0f1
I have been wrestling with this for a while. I have tried different ways to set up the authentication and read the data, but those mostly end up with an exception. Using eachLine() on the URL does not enter the closure at all. The job also exits far too quickly, giving the impression that it does not even try to make a connection.
Refs:
https://kousenit.org/2012/06/07/password-authentication-using-groovy/

Turning off Deployd dashboard authentication

I have a Deployd application that uses the standard built-in authentication to access the "DEPLOYD DASHBOARD", the one where you enter the key that is revealed by dpd showkey.
The whole website is now secured with a username/password requirement to access it.
How do I turn off the authentication required to access the deployd dashboard?
I've tried deleting the ./.dpd/keys.json file.
I haven't yet found anything useful in the docs.
This doesn't seem like the best solution, but it does do exactly what is required:
From: http://docs.deployd.com/docs/server/
Note: If options.env is "development", the dashboard will not require authentication and configuration will not be cached. Make sure to change this to "production" or something similar when deploying.
Example
('env': 'development' has been added):
var deployd = require('deployd')
  , options = {
      'port': 7777,
      'db': {
        'host': '127.0.0.1',
        'name': 'my-database'
      },
      'env': 'development'
    };
var dpd = deployd(options);
dpd.listen();
I won't mark this as the correct answer in case there is a solution that doesn't require doing something explicitly discouraged (i.e. "make sure to change [this] when deploying").

deepstream error listen EADDRINUSE 127.0.0.1:6020

I'm trying to run my first deepstream.io server from this link but I get this error:
error:
CONNECTION_ERROR | Error: listen EADDRINUSE 127.0.0.1:3003
PLUGIN_ERROR | connectionEndpoint wasn't initialised in time
f:\try\deep\node_modules\deepstream.io\src\utils\dependency-initialiser.js:96
    throw error
    ^
Error: connectionEndpoint wasn't initialised in time
    at DependencyInitialiser._onTimeout (f:\try\deep\node_modules\deepstream.io\src\utils\dependency-initialiser.js:94:17)
    at ontimeout (timers.js:386:14)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)
and this is my code:
const DeepStreamServer = require("deepstream.io")
const C = DeepStreamServer.constants;
const server = new DeepStreamServer({
  host: 'localhost',
  port: 3003
})
server.start();
In deepstream 3.0 we released our HTTP endpoint; by default this runs alongside our websocket endpoint.
Because of this, passing the port option at the root level of the config no longer works (it overrides both the HTTP and websocket port options, so, as you can see in the output provided, both endpoints try to start on the same port).
You can override each of these ports as follows:
const deepstream = require('deepstream.io')
const server = new deepstream({
  connectionEndpoints: {
    http: {
      options: {
        port: ...
      }
    },
    websocket: {
      options: {
        port: ...
      }
    }
  }
})
server.start()
Or you can define your config in a file and point to that while initialising deepstream[1].
[1] deepstream server configuration
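For example, a minimal sketch of the file-based approach (assuming the deepstream constructor accepts a path to a YAML config file, as the configuration docs linked above describe; the conf/config.yml location is illustrative):
const Deepstream = require('deepstream.io')
const path = require('path')

// Load the server settings from a config file instead of an inline options object.
// The per-endpoint ports are then defined in that file rather than in code.
const server = new Deepstream(path.join(__dirname, 'conf', 'config.yml'))
server.start()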
One solution that I found is passing an empty config object, so instead of:
const server = new DeepStreamServer({
  host: 'localhost',
  port: 3003
})
I'm just using this:
const server = new DeepStreamServer({})
and now everything works well.
All the below is for version 4.2.2 (the latest version as of now).
I was having the same port-in-use or config-file-not-found errors. I was also using TypeScript and didn't pay attention to the output directory and the build (which can be a problem when you use TypeScript and build). I was able to run the server in the end, after a lot of analysis.
I checked the source code and saw how the config is loaded:
const SUPPORTED_EXTENSIONS = ['.yml', '.yaml', '.json', '.js']
const DEFAULT_CONFIG_DIRS = [
  path.join('.', 'conf', 'config'), path.join('..', 'conf', 'config'),
  '/etc/deepstream/config', '/usr/local/etc/deepstream/config',
  '/usr/local/etc/deepstream/conf/config',
]
DEFAULT_CONFIG_DIRS.push(path.join(process.argv[1], '..', 'conf', 'config'))
DEFAULT_CONFIG_DIRS.push(path.join(process.argv[1], '..', '..', 'conf', 'config'))
I also tested different things. Here is what I came up with:
First of all, if we don't pass any parameter to the constructor, a config from the default directories gets loaded. If there isn't one, the server fails to run.
One of the places where we can put a config is ./conf in the same folder as the server Node script.
Secondly, we can pass a config as a string path (as the constructor parameter): config.yml or one of the supported extensions. That allows the server to load the server config plus the permission.yml and users.yml configs, which are also expected to be in the same folder. If they are not in the same folder their load will fail, and therefore the permission plugin will not load, and neither will the users config; no fallback to defaults will happen.
Thirdly, the supported extensions for the config files are: yml, yaml, json, js.
In a Node.js context, if nothing is specified, there is no fallback to some default config. The config needs to be provided in one of the default folders, by specifying a path to it, or by passing a config object. All the optional options will default to some values if not provided (a bit below there is an example that shows that). Know, however, that specifying a connection endpoint is important and required.
To specify the path, we need to point at the config.yml file itself (the server config), for example path.join(__dirname, './conf/config.yml'); see the sketch below. Then permission.yml and users.yml will be retrieved from the same directory (the extension can be any of the supported ones). We cannot pass a path to a directory; that will fail.
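As a rough sketch of those two constructor styles (the require shown assumes the version 4 package exposes a Deepstream constructor; adjust the package name to match your install, and treat the paths as illustrative):
const { Deepstream } = require('@deepstream/server')  // assumed v4 package/export name; adjust if yours differs
const path = require('path')

// Option 1: no argument -- the server looks for conf/config(.yml|.yaml|.json|.js)
// in the default directories listed above and fails to start if none is found.
// const server = new Deepstream()

// Option 2: explicit string path -- point directly at the config.yml file;
// the permission and users config files are then resolved from that same folder.
const server = new Deepstream(path.join(__dirname, 'conf', 'config.yml'))

server.start()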
We can specify the path to the permission config or the user config separately within config.yml, as shown below:
# Permissioning example with default values for config-based permissioning
permission:
  type: config
  options:
    path: ./permissions.yml
    maxRuleIterations: 3
    cacheEvacuationInterval: 60000
Finally, we can pass an object to configure the server, or pass null as a parameter and use the .set methods (I didn't test the second method). To configure the server we need to follow the same structure as the YAML file, sometimes with slightly different naming. The TypeScript declaration files (types) show us the way; with an editor like VS Code, even if we are not using TypeScript, we still get auto-completion and type definitions.
And the simplest equivalent to the previous version is:
const webSocketServer = new Deepstream({
  connectionEndpoints: [
    {
      type: 'ws-websocket',
      options: {
        port: 6020,
        host: '127.0.0.1',
        urlPath: '/deepstream'
      }
    }
  ]
});
webSocketServer.start();
The above is the new syntax and way.
const server = new DeepStreamServer({
  host: 'localhost',
  port: 3003
})
^^^^^^^ This is completely deprecated and not supported in version 4 (the docs are not updated).

Where do I find the amqp_userinfo of the amqp_URI for connecting to RabbitMQ?

So I'm trying to connect to my local RabbitMQ server through a Java application using the amqp_URI, which has the following format:
amqp_URI = "amqp://" amqp_authority [ "/" vhost ] [ "?" query ]
amqp_authority = [ amqp_userinfo "@" ] host [ ":" port ]
amqp_userinfo = username [ ":" password ]
The question is: where do I find the amqp_userinfo on my server for the connection?
The user guest with password guest from http://localhost:15672/ doesn't work. I have also tried creating a new user in http://localhost:15672/ and using it; that doesn't help either.
Thanks in advance
Got it. Apparently the password was too short; the connection was successful when I created a longer password for the user.

Spring Cloud Config (Vault backend) terminating too early

I am using Spring Cloud Config Server to serve configuration for my client apps. To facilitate secrets configuration I am using HashiCorp Vault as a back end. For the remainder of the configuration I am using a Git repo, so I have configured the config server in composite mode. See my config server bootstrap.yml below:
server:
  port: 8888
spring:
  profiles:
    active: local, git, vault
  application:
    name: my-domain-configuration-server
  cloud:
    config:
      server:
        git:
          uri: https://mygit/my-domain-configuration
          order: 1
        vault:
          order: 2
          host: vault.mydomain.com
          port: 8200
          scheme: https
          backend: mymount/generic
This is all working as expected. However, the token I am using is secured with a Vault auth policy. See below:
{
"rules": "path "mymount/generic/myapp-app,local" {
  policy = "read"
}
path "mymount/generic/myapp-app,local/*" {
  policy = "read"
}
path "mymount/generic/myapp-app" {
  policy = "read"
}
path "mymount/generic/myapp-app/*" {
  policy = "read"
}
path "mymount/generic/application,local" {
  policy = "read"
}
path "mymount/generic/application,local/*" {
  policy = "read"
}
path "mymount/generic/application" {
  policy = "read"
}
path "mymount/generic/application/*" {
  policy = "read"
}"
}
My issue is that I am not storing secrets in all these scopes. I need to specify all these paths just so I can authorize the token to read one secret from mymount/generic/myapp-app,local. If I do not authorize all the other paths the VaultEnvironmentRepository.read() method returns a 403 HTTP status code (Forbidden) and throws a VaultException. This results in complete failure to retrieve any configuration for the app, including GIT based configuration. This is very limiting as client apps may have multiple Spring profiles that have nothing to do with retrieving configuration items. The issue is that config server will attempt to retrieve configuration for all the active profiles provided by the client.
Is there a way to enable fault tolerance or lenience on the config server, so that VaultEnvironmentRepository does not abort and returns any configuration that it is actually authorized to return?
Do you absolutely need the local profile? Would you not be able to get by with just the 'vault' and 'git' profiles in Config Server and use the 'default' profile in each Spring Boot application?
If you use the above suggestion then the only two paths you'd need in your rules (.hcl) file are:
path "mymount/generic/application" {
capabilities = ["read", "list"]
}
and
path "mymount/generic/myapp-app" {
capabilities = ["read", "list"]
}
This assumes that you're writing configuration to
vault write mymount/generic/myapp-app
and not
vault write mymount/generic/myapp-app,local
or similar.