Defining a queue with a config file in RabbitMQ

Is there a way to define a queue in a configuration file, like in ActiveMQ:
http://activemq.apache.org/configure-startup-destinations.html

Yes, it is possible.
The easiest way:
Add the queue manually from the web UI. By default the web UI is exposed on port 15672; add the queue at http://localhost:15672/#/queues.
Export the config file from the web UI. On the main page, http://localhost:15672/#/, there is an Import / export definitions section at the bottom with a 'download broker definitions' button. Download the file and it will contain all defined queues.
Next to 'download broker definitions' there is an 'upload broker definitions' button, so you can upload such a file as well (it works with a pretty-formatted file too).
Sample config file, with users, virtual host and queue (I have formatted it with the JSTool plugin, JSFormat option, in Notepad++; by default the file is a single line and not very readable):
{
  "rabbit_version" : "3.5.7",
  "users" : [{
      "name" : "guest",
      "password_hash" : "42234423423",
      "tags" : "administrator"
    }
  ],
  "vhosts" : [ {
      "name" : "/uat"
    }
  ],
  "permissions" : [{
      "user" : "guest",
      "vhost" : "/uat",
      "configure" : ".*",
      "write" : ".*",
      "read" : ".*"
    }
  ],
  "parameters" : [],
  "policies" : [],
  "queues" : [{
      "name" : "sms",
      "vhost" : "/uat",
      "durable" : false,
      "auto_delete" : false,
      "arguments" : {}
    }
  ],
  "exchanges" : [],
  "bindings" : []
}
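If you prefer to script this instead of going through the web UI, the same definitions can be exported and imported over the management plugin's HTTP API. A minimal sketch, assuming the default guest/guest credentials and the management API listening on localhost:15672:
# export all definitions (users, vhosts, permissions, queues, ...) to a file
curl -u guest:guest http://localhost:15672/api/definitions -o definitions.json
# import (upload) a definitions file back into the broker
curl -u guest:guest -H "Content-Type: application/json" -X POST \
     -d @definitions.json http://localhost:15672/api/definitions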

Related

Context not set in time after fetching it from the basesites request after Upgrading to 2.1

We have our Spartacus project set up to fetch the context from the basesites request. A sample response can be seen here:
{
  "baseSites" : [ {
    "defaultLanguage" : {
      "isocode" : "sl"
    },
    "geoRecommended" : false,
    "showTeaser" : true,
    "stores" : [ {
      "currencies" : [ {
        "isocode" : "EUR"
      } ],
      "defaultCurrency" : {
        "isocode" : "EUR"
      },
      "defaultLanguage" : {
        "isocode" : "sl"
      },
      "languages" : [ {
        "isocode" : "sl"
      } ]
    } ],
    "uid" : "ung-site-si",
    "urlEncodingAttributes" : [ "languageCountry" ],
    "urlPatterns" : [ "(?i)^https?://localhost(:[\\d]+)?/rest/.*$", "(?i)^https?://[^/]+/(sl-SI)/?.*$" ]
  }, ...
  ]
}
We have two basesites set up at the moment. The urlPatterns are used to find the correct baseSite. Then the context (baseSite, language, currency) is set in our custom occ-loaded-config-converter. So we are not using a static context or fetching it from the URL, but getting the context from the response of the basesites request.
The site-context-interceptor then subscribes to e.g. this.languageService.getActive() and sets the correct context (language, currency) for the backend requests:
/rest/v2/ung-site-rs/cms/pages?fields=DEFAULT&pageType=ContentPage&pageLabelOrId=/shoppster-akcija&lang=sr&curr=RSD
Before the Spartacus upgrade to 2.0 this worked fine. Right after the context was set from the basesites request, the subscription in the site-context-interceptor was triggered and the right context was sent with the subsequent backend requests. Now, after upgrading to 2.1, the context is not set in time anymore. So the first few backend requests are sent with the wrong context (default USD, en), and only at some later point are the subscriptions triggered and the correct context set.
This may be related to this change:
https://sap.github.io/spartacus-docs/technical-changes-version-2/#context-change-action-not-dispatched-on-the-initial-setting-of-the-value
Is it no longer possible to use the basesites request to set the context?
As discussed in the comments: bumping to the latest patch version, 2.1.4, fixed the issue.
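For reference, one way to do the bump, sketched under the assumption that the project uses the standard Spartacus packages (adjust the list to what is actually in your package.json):
npm install @spartacus/core@2.1.4 @spartacus/storefront@2.1.4 @spartacus/assets@2.1.4 @spartacus/styles@2.1.4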

ELK - X-Pack Custom realm

I've developed a custom realm for my ELK cluster.
This module works well on a one-node Elasticsearch, but when I install it on my production cluster, nothing works.
Elasticsearch startup logs:
- nothing special, everything seems to work and the X-Pack module is loaded (it prints a line to stdout)
Elasticsearch cluster diagnostics (the custom realm appears to be disabled and not available):
{
  "security" : {
    "available" : true,
    "enabled" : true,
    "realms" : {
      "file" : {
        "available" : true,
        "enabled" : false
      },
      "ldap" : {
        "available" : true,
        "enabled" : false
      },
      "native" : {
        "name" : [
          "realm2"
        ],
        "available" : true,
        "size" : [
          2
        ],
        "enabled" : true,
        "order" : [
          1
        ]
      },
      "custom" : {
        "available" : false,
        "enabled" : false
      },
      ...
}
Elasticsearch configuration:
cluster.name: "production-cluster-1"
network.host: 0.0.0.0
bootstrap.memory_lock: true
xpack.security.enabled: true
xpack.security.audit.enabled: true
xpack.monitoring.enabled: true
xpack.graph.enabled: false
xpack.watcher.enabled: false
xpack.ml.enabled: false
discovery.zen.ping.unicast.hosts: "-------------------"
network.publish_host: "----"
discovery.zen.minimum_master_nodes: 3
xpack.security:
  authc:
    realms:
      realm1:
        type: custom
        order: 0
      realm2:
        type: native
        order: 1
The native authentication works fine.
How can I troubleshoot this correctly? :)
Thanks
Let's start by turning up the logging:
curl -u<user> -XPUT '<host>:<http-port>/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "logger.org.elasticsearch.xpack.security.authc": "DEBUG"
  }
}
'
That will give you all the authentication logs at debug level. I think the right package for the native realm should be logger.org.elasticsearch.xpack.security.authc.esnative, in case you want to limit it down to just that.
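For example, a sketch of scoping it down to the native realm only, and of resetting the logger to its default afterwards (same placeholder host, port and credentials as above):
curl -u<user> -XPUT '<host>:<http-port>/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "logger.org.elasticsearch.xpack.security.authc.esnative": "DEBUG"
  }
}
'
# reset the logger to its default once you are done
curl -u<user> -XPUT '<host>:<http-port>/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "logger.org.elasticsearch.xpack.security.authc": null
  }
}
'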

Where is the ModeShape binary store?

First I configured my ModeShape configuration file like this:
"storage" : {
  "persistence" : {
    "type" : "db",
    "connectionUrl": "${database.url}",
    "driver": "${database.driver}",
    "username": "${database.user}",
    "password": "${database.password}",
    "tableName": "GOVERNANCE_MODESHAPE",
    "poolSize" : 5,
    "createOnStart" : true,
    "dropOnExit" : false
  }
}
After I create a node, set a property on it, and save it, I can still find the node and the property in my local environment, but it can't be found in my colleague's local environment.
Then I changed the configuration like this:
"storage" : {
  "persistence" : {
    "type" : "db",
    "connectionUrl": "${database.url}",
    "driver": "${database.driver}",
    "username": "${database.user}",
    "password": "${database.password}",
    "tableName": "GOVERNANCE_MODESHAPE",
    "poolSize" : 5,
    "createOnStart" : true,
    "dropOnExit" : false
  },
  "binaryStorage" : {
    "type" : "file",
    "directory": "/var/thinkbig/modeshape",
    "minimumBinarySizeInBytes" : 5000000
  }
}
Now I can find the node and property created in my local environment, and my colleague can also find it in his local environment. But I can't find the directory /var/thinkbig/modeshape.
So where does ModeShape store the binaries? And why is it that after I add the "binaryStorage" config to the configuration file, everybody can find the node and property? Thanks in advance!
Per the doc, minimumBinarySizeInBytes is "the minimum size (in bytes) above which binary values will be stored in the store. Any binary value lower in size will be stored together with the other node information."
This means that binaries smaller than the specified size are stored in the database rather than the file system. You could change this to a value of 1 byte if you want to ensure that all binaries get stored in the file system.
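For example, sticking with the question's own configuration, a sketch that pushes effectively all binaries into the file-system store by lowering the threshold to a single byte:
"binaryStorage" : {
  "type" : "file",
  "directory": "/var/thinkbig/modeshape",
  "minimumBinarySizeInBytes" : 1
}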

Druid RabbitMQ Firehose

I'm trying to set up Druid to work with the RabbitMQ firehose, but I'm getting the following error from Tranquility:
java.lang.IllegalArgumentException: Could not resolve type id 'rabbitmq' into a subtype of [simple type, class io.druid.data.input.FirehoseFactory]
I did the following
1. Installed Druid
2. Downloaded extension druid-rabbitmq
3. Copied druid-rabbitmq into druid extensions
4. Copied amqp-client jar to druid lib
5. Added druid-rabbitmq to druid.extensions.loadList in common.runtime.properties (see the sketch after the config below)
6. In the Tranquility server.json configuration, added the firehose config
"ioConfig" : {
  "type" : "realtime",
  "firehose" : {
    "type" : "rabbitmq",
    "connection" : {
      "host": "localhost",
      "port": "5672",
      "username": "blackbox",
      "password": "blackbox",
      "virtualHost": "blackbox-vhost",
      "uri": "amqp://localhost:5672/blackbox-vhost"
    },
    "config" : {
      "exchange": "test-exchange",
      "queue" : "test-q",
      "routingKey": "#",
      "durable": "true",
      "exclusive": "false",
      "autoDelete": "false",
      "maxRetries": "10",
      "retryIntervalSeconds": "1",
      "maxDurationSeconds": "300"
    }
  }
}
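For reference, a sketch of what step 5 above might look like in common.runtime.properties, assuming the extension directory is named druid-rabbitmq (adjust the list to the extensions you actually load):
druid.extensions.loadList=["druid-rabbitmq"]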
I'm using Imply 1.3.0, but I think Tranquility is for stream pushing while a firehose is used for stream pulling, so that was probably the problem. I have now created a realtime node and it's running fine. I also had to copy the Lyra jar file into the Druid lib directory. Now I can publish data from RabbitMQ, it gets inserted into Druid, and I can query the data, but the problem is that in RabbitMQ the message is still showing as unacked. Any idea?

Making storage plugin on Apache Drill to HDFS

I'm trying to make a storage plugin for Hadoop (HDFS) in Apache Drill.
Actually I'm confused: I don't know what port to set for the hdfs:// connection, and what to set for the location.
This is my plugin:
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://localhost:54310",
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "psv": {
      "type": "text",
      "extensions": [
        "tbl"
      ],
      "delimiter": "|"
    },
    "csv": {
      "type": "text",
      "extensions": [
        "csv"
      ],
      "delimiter": ","
    },
    "tsv": {
      "type": "text",
      "extensions": [
        "tsv"
      ],
      "delimiter": "\t"
    },
    "parquet": {
      "type": "parquet"
    },
    "json": {
      "type": "json"
    },
    "avro": {
      "type": "avro"
    }
  }
}
So, is it correct to set localhost:54310? I got that with the command:
hdfs -getconf -nnRpcAddresses
or should it be :8020?
Second question: what do I need to set for location? My Hadoop folder is in:
/usr/local/hadoop
and there you can find /etc, /bin, /lib, /log and so on. So, do I need to point the location at my datanode, or somewhere else?
Third question. When I'm connecting to Drill, I go through sqlline and then connect to my ZooKeeper like:
!connect jdbc:drill:zk=localhost:2181
My question here is: after I make the storage plugin and connect to Drill with zk, can I query an HDFS file?
I'm very sorry if this is a noob question, but I haven't found anything useful on the internet, or at least it hasn't helped me.
If you are able to explain some of this to me, I'll be very grateful.
As per the Drill docs:
{
  "type" : "file",
  "enabled" : true,
  "connection" : "hdfs://10.10.30.156:8020/",
  "workspaces" : {
    "root" : {
      "location" : "/user/root/drill",
      "writable" : true,
      "defaultInputFormat" : null
    }
  },
  "formats" : {
    "json" : {
      "type" : "json"
    }
  }
}
In "connection", put the namenode server address. If you are not sure about this address, check the fs.default.name or fs.defaultFS property in core-site.xml.
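For example, the relevant entry in core-site.xml typically looks something like this (the host and port here are only placeholders):
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:8020</value>
</property>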
Coming to "workspaces", you can define workspaces here. In the above example, there is a workspace named root with location /user/root/drill. This is your HDFS location.
If you have files under the /user/root/drill HDFS directory, you can query them using this workspace name.
Example: abc.csv is under this directory:
select * from dfs.root.`abc.csv`
After successfully creating the plugin, you can start Drill and start querying.
You can query any directory irrespective of workspaces.
Say you want to query employee.json in the /tmp/data HDFS directory. The query is:
select * from dfs.`/tmp/data/employee.json`
I had a similar problem where Drill could not read the dfs server; it turned out the problem was caused by the namenode port.
The default address of the namenode web UI is http://localhost:50070/.
The default address of the namenode server is hdfs://localhost:8020/.
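If in doubt, you can also ask Hadoop directly which address the namenode is using; a quick check, assuming the hdfs client is on your PATH:
hdfs getconf -confKey fs.defaultFS
# prints something like hdfs://localhost:8020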