ELK - X-Pack Custom realm - authentication

I've developed a custom realm for my ELK cluster.
This module works well on a one-node Elasticsearch, but when I install it on my production cluster, nothing works.
Elasticsearch startup logs:
- nothing special, everything seems to work and the X-Pack module is loaded (it generates a log line on stdout)
Elasticsearch cluster diagnostics (the custom realm seems to be disabled and not available):
{
  "security" : {
    "available" : true,
    "enabled" : true,
    "realms" : {
      "file" : {
        "available" : true,
        "enabled" : false
      },
      "ldap" : {
        "available" : true,
        "enabled" : false
      },
      "native" : {
        "name" : [ "realm2" ],
        "available" : true,
        "size" : [ 2 ],
        "enabled" : true,
        "order" : [ 1 ]
      },
      "custom" : {
        "available" : false,
        "enabled" : false
      },
      ...
}
Elasticsearch configuration :
cluster.name: "production-cluster-1"
network.host: 0.0.0.0
bootstrap.memory_lock: true
xpack.security.enabled: true
xpack.security.audit.enabled: true
xpack.monitoring.enabled: true
xpack.graph.enabled: false
xpack.watcher.enabled: false
xpack.ml.enabled: false
discovery.zen.ping.unicast.hosts: "-------------------"
network.publish_host: "----"
discovery.zen.minimum_master_nodes: 3
xpack.security:
  authc:
    realms:
      realm1:
        type: custom
        order: 0
      realm2:
        type: native
        order: 1
The native authentication works fine.
How can I troubleshoot this correctly? :)
Thanks

Let's start by turning up the logging:
curl -u<user> -XPUT '<host>:<http-port>/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
"transient": {
"logger.org.elasticsearch.xpack.security.authc": "DEBUG"
}
}
'
That will give you all authentication logging at the debug level. I think the right package for the native realm is logger.org.elasticsearch.xpack.security.authc.esnative, in case you want to limit it down to just that.
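For example, a minimal sketch of the same settings call narrowed to that package, plus a call to reset the logger afterwards by setting it back to null (using the same user, host and port placeholders as above):

curl -u<user> -XPUT '<host>:<http-port>/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "logger.org.elasticsearch.xpack.security.authc.esnative": "DEBUG"
  }
}
'

# Reset the transient logger once you are done troubleshooting
curl -u<user> -XPUT '<host>:<http-port>/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "logger.org.elasticsearch.xpack.security.authc": null
  }
}
'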


Context not set in time after fetching it from the basesites request after upgrading to 2.1

We have our Spartacus project set up to fetch the context from the basesites request. A sample response can be seen here:
{
"baseSites" : [ {
"defaultLanguage" : {
"isocode" : "sl"
},
"geoRecommended" : false,
"showTeaser" : true,
"stores" : [ {
"currencies" : [ {
"isocode" : "EUR"
} ],
"defaultCurrency" : {
"isocode" : "EUR"
},
"defaultLanguage" : {
"isocode" : "sl"
},
"languages" : [ {
"isocode" : "sl"
} ]
} ],
"uid" : "ung-site-si",
"urlEncodingAttributes" : [ "languageCountry" ],
"urlPatterns" : [ "(?i)^https?://localhost(:[\\d]+)?/rest/.*$", "(?i)^https?://[^/]+/(sl-SI)/?.*$" ]
}, ...
]
We have two basesites set up at the moment. The urlPatterns are used to find the correct baseSite. Then the context (baseSite, language, currency) is set in our custom occ-loaded-config-converter. So we are not using any static context or fetching it from the URL, but getting the context from the response of the basesites request.
The site-context-interceptor then subscribes to e.g. this.languageService.getActive() and then sets the correct context (language, currency) for the backend requests:
/rest/v2/ung-site-rs/cms/pages?fields=DEFAULT&pageType=ContentPage&pageLabelOrId=/shoppster-akcija&lang=sr&curr=RSD
Before the Spartacus upgrade to 2.0 this worked fine. Right after the context was set from the basesites request, the subscription in the site-context-interceptor was triggered and the right context was sent with the subsequent backend requests. Now, after upgrading to 2.1, the context is not set in time anymore. So the first few backend requests are sent with the wrong context (default USD, en) and then at some point the subscriptions are triggered and the correct context is set.
This may be related to this change:
https://sap.github.io/spartacus-docs/technical-changes-version-2/#context-change-action-not-dispatched-on-the-initial-setting-of-the-value
Is it no longer possible to use the basesites request to set the context?
As discussed in the comments: bumping to the latest patch version, 2.1.4, fixed the issue.
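For reference, a minimal sketch of that bump, assuming the app uses the standard @spartacus/core and @spartacus/storefront packages (adjust to whatever Spartacus packages are actually listed in your package.json):

# Bump Spartacus to the 2.1.4 patch release and reinstall
npm install @spartacus/core@2.1.4 @spartacus/storefront@2.1.4 --save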

Why are TSLint problems not shown in IntelliJ IDEA?

I'm using IntelliJ IDEA (Ultimate) and I'm trying to set up TSLint. What I have done so far:
created a new project
installed TS & TSLint npm i tslint typescript -D
enabled TSLint in Languages and Frameworks | TypeScript | TSLint | Enable (as suggested here)
added additional (rather random) tslint.json (see below)
applied it via right click | Apply TSLint Code Style Rules
created some ugly .ts file (see below)
but still none of the problems are highlighted. When I run tslint ugly.ts in the console, it shows many problems. TS problems are highlighted:
I've also checked highlighting settings:
And here's how TSLint settings look now (Enable checked, Automatic search is set):
Any ideas what could be wrong?
ugly.ts:
/* tslint:enable */
// npm i tslint typescript -D
function a (): void {
var b = 1;
if(b){var c=2; var d=5;}
var a =4
a+b
var s1 = 'haha'
, s2 ="ololo"
,s3 = `wow`
return ;;
}
function b(){return 9}
tslint.json:
{
"extends": ["tslint:recommended"],
"linterOptions": {
"exclude": [
"node_modules/**/*"
]
},
"rules": {
"quotemark": {
"single": true,
"severity": "error"
}, //"jsx-double"
"interface-name": false,
"whitespace": [ true, "check-module" ],
"max-classes-per-file": [ false ],
"member-access": false,
"object-literal-sort-keys": false,
"member-ordering": false,
"semicolon": [ true, "always", "ignore-bound-class-methods" ],
"variable-name": [ true, "check-format", "allow-leading-underscore", "allow-pascal-case" ],
"no-console": false,
"indent": [ true, "spaces", 2 ],
"no-empty-interface": false
}
}

Defining a queue with a config file in RabbitMQ

Is there a way to define a queue in a configuration file, like in ActiveMQ:
http://activemq.apache.org/configure-startup-destinations.html
Yes, it is possible.
The easiest way:
Add the queue manually, from the webUI.
By default the webUI is exposed on port 15672.
Add the queue at http://localhost:15672/#/queues
Export the config file from the webUI.
Access the main page http://localhost:15672/#/. At the bottom there is a section Import / export definitions, with a download broker definitions button.
Just download the file; it will contain all defined queues.
Below is a sample config file, with users, a virtual host and a queue.
I have formatted the file using the JSTool plugin (JSFormat option) from Notepad++.
By default the file is a single line and not very readable.
Next to 'download broker definitions' there is an 'upload broker definitions' button. You may upload your file (it will work with a pretty-formatted file).
{
"rabbit_version" : "3.5.7",
"users" : [{
"name" : "guest",
"password_hash" : "42234423423",
"tags" : "administrator"
}
],
"vhosts" : [ {
"name" : "/uat"
}
],
"permissions" : [{
"user" : "guest",
"vhost" : "/uat",
"configure" : ".*",
"write" : ".*",
"read" : ".*"
}
],
"parameters" : [],
"policies" : [],
"queues" : [{
"name" : "sms",
"vhost" : "/uat",
"durable" : false,
"auto_delete" : false,
"arguments" : {}
}
],
"exchanges" : [],
"bindings" : []
}
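The same export/import can also be scripted against the management HTTP API instead of the webUI. A minimal sketch, assuming the management plugin on its default port 15672 and the default guest/guest credentials:

# Export all definitions (users, vhosts, permissions, queues, exchanges, bindings)
curl -u guest:guest http://localhost:15672/api/definitions > definitions.json

# Import (upload) a definitions file
curl -u guest:guest -H "Content-Type: application/json" -X POST \
     -d @definitions.json http://localhost:15672/api/definitions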

SailsJs - problems with lifting (orm hook failed to load)

I am having problems running my app under Windows. Normally I develop on a MacBook, but temporarily I had to switch. The thing is, the app was already working on Windows without problems. Here is the error message:
error: A hook (orm) failed to load!
verbose: Lowering sails...
verbose: Sent kill signal to child process (8684)...
verbose: Shutting down HTTP server...
verbose: HTTP server shut down successfully.
error: TypeError: Cannot read property 'config' of undefined
at validateModelDef (C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\lib\validate-model-def.js:109:84)
at C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\lib\initialize.js:218:36
at arrayEach (C:\projects\elearning-builder\node_modules\sails\node_modules\lodash\index.js:1289:13)
at Function.<anonymous> (C:\projects\elearning-builder\node_modules\sails\node_modules\lodash\index.js:3345:13)
at Array.async.auto._normalizeModelDefs (C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\lib\initialize.js:216:11)
at listener (C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:605:42)
at C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:544:17
at _arrayEach (C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:85:13)
at Immediate.taskComplete (C:\projects\elearning-builder\node_modules\sails\node_modules\sails-hook-orm\node_modules\async\lib\async.js:543:13)
at processImmediate [as _immediateCallback] (timers.js:383:17)
PS C:\projects\elearning-builder>
I tried to check what exactly is happening in \node_modules\sails\node_modules\sails-hook-orm\lib\validate-model-def.js:109:84, so I temporarily added a simple console.log:
console.log("error in line below", hook);
var normalizedDatastoreConfig = hook.datastores[normalizedModelDef.connection[0]].config;
And as a result I see:
error in line below Hook {
load: [Function: wrapper],
defaults:
{ globals: { adapters: true, models: true },
orm: { skipProductionWarnings: false, moduleDefinitions: [Object] },
models: { connection: 'localDiskDb' },
connections: { localDiskDb: [Object] } },
configure: [Function: wrapper],
loadModules: [Function: wrapper],
initialize: [Function: wrapper],
config: { envs: [] },
middleware: {},
routes: { before: {}, after: {} },
reload: [Function: wrapper],
teardown: [Function: wrapper],
identity: 'orm',
configKey: 'orm',
models:
{ /* models here, I removed this as it was too long */ },
adapters: {},
datastores: {} }
So normalizedModelDef.connection[0] has the value development, but hook.datastores is empty? That is why there is no config property.
But the thing is, I do have connections defined in my config/connections.js, like here:
development: {
module : 'sails-mysql',
host : 'localhost',
port : 3306,
user : 'ebuilder',
password : 'ebuilder',
database : 'ebuilder'
},
production: {
/* details hidden ;) */
},
testing: {
/* details hidden ;) */
}
Any suggestions/tips highly appreciated.
You have some connections defined, but do you have the default connection that is specified in config/models.js defined as well? If, for example, you have:
module.exports.models = {
connection: 'mysql',
...
then 'mysql' needs to be defined in your connections.js
As I see in your config/connections.js
development: {
module : 'sails-mysql',
host : 'localhost',
port : 3306,
user : 'ebuilder',
password : 'ebuilder',
database : 'ebuilder'
},
You have given module: 'sails-mysql', which is not correct. It should be adapter: 'sails-mysql':
development: {
adapter : 'sails-mysql',
host : 'localhost',
port : 3306,
user : 'ebuilder',
password : 'ebuilder',
database : 'ebuilder'
},
Check whether your controllers or models contain any stray code, such as a stray symbol or character. I faced the same problem when my controller contained a stray character before or after the API code.

Elasticsearch: Auto Indices Deletion/Expiry

I want to configure my Elasticsearch 0.19.11 to delete indices every 60s. My Elasticsearch config has these lines:
node.name: "Saurajeet"
index.ttl.disable_purge: false
index.ttl.interval: 60s
indices.ttl.interval: 60s
And it's not working.
I have 2 default docs indexed and would expect them to be gone after 60s:
$ curl -XGET http://localhost:9200/twitter/_settings?pretty=true
{
"twitter" : {
"settings" : {
"index.version.created" : "191199",
"index.number_of_replicas" : "1",
"index.number_of_shards" : "5"
}
}
Also, if I try to do the following, it does not have any effect:
$ curl -XPUT http://localhost:9200/twitter/_settings -d '
> { "twitter": {
> "settings" : {
> "index.ttl.interval": "60s"
> }
> }
> }
> '
{"ok":true}~/bin/elasticsearc
$ curl -XGET http://localhost:9200/twitter/_settings?pretty=true
{
"twitter" : {
"settings" : {
"index.version.created" : "191199",
"index.number_of_replicas" : "1",
"index.number_of_shards" : "5"
}
}
}
I have indexed 2 documents and they are still showing up after 1 hr:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elastic Search, so far so good?"
}'
$ curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elastic Search, so far so good?"
}'
What did I do wrong?
P.S. I want to deploy this config with Logstash, so any other alternative can be suggested. For scaling reasons I don't want this auto-purge to be a script.
I believe the indices.ttl.interval setting is only to tweak the cleanup process timing.
You would need to set the _ttl field for the index/type in order to expire it. It looks like this:
{
"tweet" : {
"_ttl" : { "enabled" : true, "default" : "60s" }
}
}
http://www.elasticsearch.org/guide/reference/mapping/ttl-field/
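For example, a sketch of applying that mapping to the existing twitter/tweet type with the put mapping API (assuming the mapping endpoint of that era; note the mapping only affects documents indexed after it is applied):

curl -XPUT 'http://localhost:9200/twitter/tweet/_mapping' -d '
{
  "tweet" : {
    "_ttl" : { "enabled" : true, "default" : "60s" }
  }
}'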
Finally figured it out myself. I upgraded Elasticsearch to version 1.2.0. You can set TTLs through the Mapping API -> Put Mapping -> TTL.
Enabling TTL at the type level on an index:
$ curl -XPOST http://localhost:9200/abc/a/_mapping -d '
{
"a": {
"_ttl": {
"enabled": true,
"default": "10000ms"
}
}
}'
$ curl -XPOST http://localhost:9200/abc/a/a1 -d '{"test": "true"}'
$ curl -XGET http://localhost:9200/abc/a/a1?pretty
{
"_index" : "abc",
"_type" : "a",
"_id" : "a1",
"_version" : 1,
"found" : true,
"_source":{"test": "true"}
}
$ # After 10s
$ curl -XGET http://localhost:9200/abc/a/a1?pretty
{
"_index" : "abc",
"_type" : "a",
"_id" : "a1",
"found" : false
}
Note:
The mapping applies to docs created after the mapping was created.
Also, the mapping was created for type a, so if you post to type b and expect it to expire via TTL, that's not going to happen.
If you need indices to expire, you can also create the type-level mappings as part of the create index call, to pre-create indices from your application logic.
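A sketch of that approach, creating the index and the type mapping with the TTL default in a single request (the index name abc and type a are just the placeholders from the example above):

curl -XPUT 'http://localhost:9200/abc' -d '
{
  "mappings" : {
    "a" : {
      "_ttl" : { "enabled" : true, "default" : "10000ms" }
    }
  }
}'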