I am trying to automate the setting and removal of downtimes on Icinga hosts.
I am currently using the following command:
(note that I'm running this in an Ansible playbook, so {{item}} is the hostname and any other double-bracketed values are filled in with Ansible variables)
curl -k -s -u {{username}}:{{password}} -H 'Accept: application/json' -X POST "https://localhost:5665/v1/actions/schedule-downtime?filter=host.name==%22{{item}}%22&type=Host" -d "{ \"start_time\": \"{{now}}\", \"end_time\": \"{{end}}\", \"duration\": 1000, \"author\": \"{{username}}\", \"comment\": \"auto set downtime on {{item}}\" }"
This is able to put the host into downtime. However, it doesn't put any services on that host into that downtime. It is as if I went into the web UI and put the host into downtime without selecting the "all services" checkbox.
How can I change this command to put the host into a downtime, while also putting all services on that host into a downtime?
I would also be interested if there were an Ansible task that could perform this function.
The answer is to change the &type=Host part at the end of the URL to &type=Service to schedule service downtimes instead of host downtimes. Run both variants (one with type=Host, one with type=Service) to put the host and all of its services into downtime.
curl -k -s -u {{username}}:{{password}} -H 'Accept: application/json' -X POST "https://localhost:5665/v1/actions/schedule-downtime?filter=host.name==%22{{item}}%22&type=Service" -d "{ \"start_time\": \"{{now}}\", \"end_time\": \"{{end}}\", \"duration\": 1000, \"author\": \"{{username}}\", \"comment\": \"auto set downtime on {{item}}\" }"
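For the removal side, the same API exposes a remove-downtime action that accepts the same kind of filter and type parameter. A sketch along the same lines (same variables as above; one call per type, as with scheduling):
curl -k -s -u {{username}}:{{password}} -H 'Accept: application/json' -X POST "https://localhost:5665/v1/actions/remove-downtime?filter=host.name==%22{{item}}%22&type=Host"
curl -k -s -u {{username}}:{{password}} -H 'Accept: application/json' -X POST "https://localhost:5665/v1/actions/remove-downtime?filter=host.name==%22{{item}}%22&type=Service"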
I am using the OpenDaylight Carbon release and the OpenFlow plugin. I am writing code to install a flow. The flow gets written to MDSAL and picked up and installed by the southbound plugin. I want to see what is in the config database for the switch. How can I do this? Thanks.
With the MDSAL OpenFlow plugin (and MDSAL usage in general), the flows get written to the config datastore (which effectively records the intended state); then, if a switch is connected for those flows, they are written to the switch, and the result is stored in the operational datastore.
Let's assume you're using OVS and have set the manager and controller to OpenDaylight. You can query the flows in the config and operational datastores as follows:
Get the OVS datapath ID (needed in the queries below):
curl -H "Content-Type: application/json" -X GET --user admin:admin http://localhost:8181/restconf/config/opendaylight-inventory:nodes/ | python -m json.tool | grep "openflow:"
"id": "openflow:156930464280132",
"id": "openflow:156930464280132:1",
"id": "openflow:156930464280132:LOCAL",
Query the flows in the configuration data store:
curl -H "Content-Type: application/json" -X GET --user admin:admin http://localhost:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:156930464280132 | python -m json.tool
Query the flows in the operational data store:
curl -H "Content-Type: application/json" -X GET --user admin:admin http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/node/openflow:156930464280132 | python -m json.tool
Notice, you can go into more detail with the URL to get flows in specific tables, for instance, do this to get table 4 flows:
curl -H "Content-Type: application/json" -X GET --user admin:admin http://localhost:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:156930464280132/table/4 | python -m json.tool
Also notice that piping through "python -m json.tool" formats the output so it's not all on one line. It's not mandatory to use.
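If you know the flow ID you wrote in your code, you should also be able to drill down to one specific flow by extending the URL further; a sketch, assuming a flow with ID "42" in table 4:
curl -H "Content-Type: application/json" -X GET --user admin:admin http://localhost:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:156930464280132/table/4/flow/42 | python -m json.tool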
I know that rabbitmq_tracing, a RabbitMQ plugin, can provide a GUI to capture traced messages and log them in text or JSON format files. But the plugin is costly in terms of performance. Is there a way to log all messages without this plugin?
Or is there a middle-ground way to log messages automatically without using the management plugin? Configuring traces in the GUI is not acceptable for some customers.
Any response would be appreciated.
I can't find a good solution to log all messages without rabbitmq_management. But with the plugin turned on, you can add and delete RabbitMQ traces from the command line:
Add a new trace:
Windows:
curl -i -u guest:guest -H "content-type:application/json" -XPUT ^
  http://localhost:15672/api/traces/%2f/my-trace ^
  -d"{""format"":""json"",""pattern"":""#"",""max_payload_bytes"":1000}"
Linux:
curl -i -u guest:guest -H "content-type:application/json" -XPUT \
  http://localhost:15672/api/traces/%2f/my-trace \
  -d'{"format":"text","pattern":"#","max_payload_bytes":1000}'
Delete a trace:
Windows:
curl -i -u guest:guest -H "content-type:application/json" -XDELETE ^
  http://localhost:15672/api/traces/%2f/my-trace
Linux:
curl -i -u guest:guest -H "content-type:application/json" -XDELETE \
  http://localhost:15672/api/traces/%2f/my-trace
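Note that the /api/traces endpoints come from the rabbitmq_tracing plugin, so it has to be enabled first; and, assuming the listing endpoint mirrors the PUT/DELETE paths above, you can check which traces exist with a plain GET:
rabbitmq-plugins enable rabbitmq_tracing
curl -i -u guest:guest http://localhost:15672/api/traces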
I am on Artifactory version 4.6 and have the following requirements for the Docker registry:
Allow anonymous pulls on the Docker repository
Force authentication on the SAME Docker repository
I know this is available out of the box on later versions of Artifactory. However, upgrading isn't an option for us for a while.
Does the following work around work?
Create a virtual Docker repository on port 8443 and don't force authentication; call it docker-virtual
Create a local Docker repository on port 8444 and force authentication; call it docker-local
Configure 'docker-virtual' with the default deployment directory as 'docker-local'
docker pull docker-virtual should work
docker push docker-virtual should ask for credentials
Upon failure, I should be able to docker login docker-virtual
and docker push docker-virtual/myImage
Not sure about the Artifactory side, but perhaps the following Docker advice helps.
You can run two registries in Docker: one read-write with authentication, and a second read-only without any authentication:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs:ro \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/host-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/host-key.pem" \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=My Registry" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
docker run -d -p 5001:5000 --restart=always --name registry-ro \
-v `pwd`/certs:/certs:ro \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry:ro \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/host-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/host-key.pem" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
Note the volume settings for /var/lib/registry in each container. Then, to pull from the anonymous registry, you'd just need to change the port. Since its filesystem is mounted read-only, any attempt to push to 5001 will fail.
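Usage would then look something like this (host and image names are placeholders):
# anonymous pull from the read-only registry
docker pull myhost:5001/myimage
# authenticated push to the read-write registry
docker login myhost:5000
docker tag myimage myhost:5000/myimage
docker push myhost:5000/myimage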
The closest thing you can achieve is failing on docker push without credentials (while succeeding with pull).
No idea if this works with Artifactory, sorry... but you could try this handy project for Docker registry auth.
Configure the registry to use https://hub.docker.com/r/cesanta/docker_auth/:
# registry config.yml
...
auth:
  token:
    # can be the same as your docker registry if you use nginx to proxy /auth to docker_auth
    # https://docs.docker.com/registry/recipes/nginx/
    realm: "example.com:5001/auth"
    service: "Docker registry"
    issuer: "Docker Registry auth server"
    rootcertbundle: /certs/domain.crt
And allow anonymous with the corresponding ACL
# cesanta/docker_auth auth_config.yml
...
users:
  # Password is specified as a BCrypt hash. Use htpasswd -B to generate.
  "admin":
    password: "$2y$05$LO.vzwpWC5LZGqThvEfznu8qhb5SGqvBSWY1J3yZ4AxtMRZ3kN5jC" # badmin
  "": {} # Allow anonymous (no "docker login") access.
ldap_auth:
  # See: https://github.com/cesanta/docker_auth/blob/master/examples/ldap_auth.yml
acl:
  # See https://github.com/cesanta/docker_auth/blob/master/examples/reference.yml#L178
  - match: {account: "/.+/"}
    actions: ["*"]
    comment: "Logged in users do anything."
  - match: {account: ""}
    actions: ["pull"]
    comment: "Anonymous users can pull anything."
  # Access is denied by default.
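To wire this up you'd run docker_auth alongside the registry; a sketch based on the image's docs (paths and container name are assumptions):
docker run -d --name docker_auth -p 5001:5001 \
  -v `pwd`/auth_config.yml:/config/auth_config.yml:ro \
  -v `pwd`/certs:/certs:ro \
  cesanta/docker_auth /config/auth_config.yml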
If I have RabbitMQ installed on my machine, is there a way to create a message queue from the command line and bind it to a certain exchange without using a client?
I think it is not possible, but I want to be sure.
Summary:
Other answers are good alternatives to what was asked for. Below are commands you can use from the command line.
First, do all the necessary prep work, e.g. install RabbitMQ, rabbitmqadmin, and rabbitmqctl. The idea is to use commands from rabbitmqctl and rabbitmqadmin. You can see some command examples here: https://www.rabbitmq.com/management-cli.html
Example Commands/Setup:
The following commands should give you the majority if not all of what you need:
# Get the cli and make it available to use.
wget http://127.0.0.1:15672/cli/rabbitmqadmin
chmod +x rabbitmqadmin
mv rabbitmqadmin /etc/rabbitmq
Add a User and Permissions
rabbitmqctl add_user testuser testpassword
rabbitmqctl set_user_tags testuser administrator
rabbitmqctl set_permissions -p / testuser ".*" ".*" ".*"
Make a Virtual Host and Set Permissions
rabbitmqctl add_vhost Some_Virtual_Host
rabbitmqctl set_permissions -p Some_Virtual_Host guest ".*" ".*" ".*"
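You can sanity-check the vhost and permissions afterwards (a quick verification, not strictly required):
rabbitmqctl list_vhosts
rabbitmqctl list_permissions -p Some_Virtual_Host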
Make an Exchange
./rabbitmqadmin declare exchange --vhost=Some_Virtual_Host name=some_exchange type=direct
Make a Queue
./rabbitmqadmin declare queue --vhost=Some_Virtual_Host name=some_outgoing_queue durable=true
Make a Binding
./rabbitmqadmin --vhost="Some_Virtual_Host" declare binding source="some_exchange" destination_type="queue" destination="some_incoming_queue" routing_key="some_routing_key"
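To confirm everything exists, rabbitmqadmin can list the objects back (same vhost as above):
./rabbitmqadmin --vhost=Some_Virtual_Host list exchanges
./rabbitmqadmin --vhost=Some_Virtual_Host list queues
./rabbitmqadmin --vhost=Some_Virtual_Host list bindings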
Alternative Way to Bind with Python
The following is an alternative to command-line binding, as I've sometimes had issues with the CLI and found the following Python code to be more reliable.
#!/usr/bin/env python
import pika

rabbitmq_host = "127.0.0.1"
rabbitmq_port = 5672
rabbitmq_virtual_host = "Some_Virtual_Host"
rabbitmq_user = "testuser"          # the user created with rabbitmqctl above
rabbitmq_password = "testpassword"
rabbitmq_send_exchange = "some_exchange"
rabbitmq_rcv_exchange = "some_exchange"
rabbitmq_rcv_queue = "some_incoming_queue"
rabbitmq_rcv_key = "some_routing_key"
outgoingRoutingKeys = ["outgoing_routing_key"]
outgoingQueues = ["some_outgoing_queue"]

# The binding area
credentials = pika.PlainCredentials(rabbitmq_user, rabbitmq_password)
connection = pika.BlockingConnection(pika.ConnectionParameters(rabbitmq_host, rabbitmq_port, rabbitmq_virtual_host, credentials))
channel = connection.channel()
channel.queue_bind(exchange=rabbitmq_rcv_exchange, queue=rabbitmq_rcv_queue, routing_key=rabbitmq_rcv_key)
for index in range(len(outgoingRoutingKeys)):
    channel.queue_bind(exchange=rabbitmq_send_exchange, queue=outgoingQueues[index], routing_key=outgoingRoutingKeys[index])
connection.close()
The above can be run as part of a script using Python. Notice I put the outgoing routing keys and queues into lists, which will allow you to iterate through them. This should make things easy for deploys.
Last Thoughts
I think the above should get you moving in the right direction; use Google if any specific commands don't make sense, or read more with rabbitmqadmin help subcommands. I tried to use variables that explain themselves.
Install the RabbitMQ management plugin. It comes with a command line tool which you can use to configure all of your queues/exchanges/etc.
Create Exchange:
rabbitmqadmin -u {user} -p {password} -V {vhost} declare exchange name={name} type={type}
Create Queue:
rabbitmqadmin -u {user} -p {password} -V {vhost} declare queue name={name}
Bind Queue to Exchange:
rabbitmqadmin -u {user} -p {password} -V {vhost} declare binding source={Exchange} destination={queue}
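If you need a non-empty routing key on the binding (the command above defaults to an empty one), rabbitmqadmin accepts it as an extra argument:
rabbitmqadmin -u {user} -p {password} -V {vhost} declare binding source={Exchange} destination={queue} routing_key={key}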
Maybe a little late to the party, but I've done so using curl.
For queues:
curl -i -u RABBITUSER:RABBITPASSWORD -H "content-type:application/json" \
-XPUT -d'{"durable":true}' \
http://192.168.99.100:15672/api/queues/%2f/QUEUENAME
And for bindings
curl -i -u RABBITUSER:RABBITPASSWORD -H "content-type:application/json" \
-XPOST -d"{\"routing_key\":\"QUEUENAME\"}" \
http://192.168.99.100:15672/api/bindings/%2f/e/EXCHANGENAME/q/QUEUENAME
Note that 192.168.99.100:15672 points to my RabbitMQ management endpoint.
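You can also publish a test message through the same HTTP API to check that the binding works; a sketch using the default exchange, which the API names amq.default:
curl -i -u RABBITUSER:RABBITPASSWORD -H "content-type:application/json" \
-XPOST -d'{"properties":{},"routing_key":"QUEUENAME","payload":"hello","payload_encoding":"string"}' \
http://192.168.99.100:15672/api/exchanges/%2f/amq.default/publish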
If you are using Debian Linux, there's a package called "amqp-tools". Install it with:
apt-get install amqp-tools
You can then use commands such as amqp-publish to send messages to your queue:
amqp-publish -e exchange_name -b "your message"
Then you can collect message(s) from the queue using
amqp-get -q queue_name
or
amqp-consume -q queue_name
There are also command-line examples in the rabbitmq-c package/library. After you build it, you can send messages from the command line, such as:
amqp_sendstring localhost 5672 amq.direct test "hello world"
Have fun ...
rabbitmqctl, the provided command-line interface, doesn't expose the ability to create a queue and bind it.
However, it is quite trivial to do with a quick script, and the RabbitMQ getting started guide shows several examples of it, on both the publisher and the consumer side.
import pika

# connecting and opening a channel are one-liners with pika
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='helloworld')
Creating the queue is a literal one-liner (the connection boilerplate above is filled in with pika for completeness). The operation is also idempotent, meaning you can include the statement in a script and be safe, knowing that it won't keep recreating the queue or blowing out an existing one of the same name.
Create RabbitMQ exchanges, queues and bindings dynamically from the CLI on Windows
I already had a RabbitMQ server installed and running with multiple queues and exchanges, and now wanted to create them on the fly from the command line. I know it is an old question, but I thought this information would be helpful.
Following is what I did:
Setup
Downloaded and installed the Python 2.6.6-201008-24 Windows x86-64 MSI installer (any version of Python should work).
Downloaded rabbitmqadmin: the RabbitMQ web UI has a Command Line link which navigates to http://server-name:15672/cli/ (server-name: the server on which RabbitMQ is installed). Alternatively, use the above URL and save the file as rabbitmqadmin.exe in the Python install location, e.g.:
C:\Python26
C:\Python26\python
C:\Python26\rabbitmqadmin.exe
Code: in a batch file, use the commands below.
Create exchange:
c:\python26\python.exe rabbitmqadmin.exe declare exchange name=*ExchangeName1* type=topic durable=true
Create queue:
c:\python26\python.exe rabbitmqadmin.exe declare queue name=*NameofQueue1* durable=true
Create binding:
c:\python26\python.exe rabbitmqadmin.exe declare binding source=ExchangeName1 destination_type=queue destination=*NameofQueue1* routing_key=*RoutingKey1*
Executing rabbitmqadmin.exe help subcommands lists all the available commands,
e.g.: c:\python26\python.exe rabbitmqadmin.exe help subcommands
For me, my RabbitMQ management UI kept trying to redirect to the https version... everything in my setup is vanilla, I don't even have a config file... anyway, my workaround was to manually create rabbitmqadmin.py in the sbin folder, then fill it with the contents of https://raw.githubusercontent.com/rabbitmq/rabbitmq-management/v3.8.1/bin/rabbitmqadmin
Then, make sure that python is in your PATH and run this to, for example, add an exchange:
python rabbitmqadmin.py declare exchange --vhost=/ name=CompletedMessageExchange type=direct
Here is a more minimal Python example, taken from the RabbitMQ Python tutorial.
First, install pika:
sudo easy_install pika
# (or use pip)
This is all you need to send a message to localhost:
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='test-queue')
channel.basic_publish(exchange='', routing_key='test-queue', body='Hello World!')
connection.close()
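To check that the message arrived, you could pull it back off the queue with rabbitmqadmin (assuming the management plugin is enabled):
rabbitmqadmin get queue=test-queue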
If any Windows users are looking for a PowerShell-based solution, here is a function I wrote:
Function createQueue([string]$QueueName){
    $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
    $headers.Add("content-type", "application/json")
    # Basic auth header is base64 of "guest:guest"; replace with your own credentials
    $headers.Add("Authorization", "Basic Z3Vlc3Q6Z3Vlc3Q=")
    # Note: "durable" must be a JSON boolean, not the string "true"
    $body = "{
    `n `"vhost`": `"/`",
    `n `"name`": `"$QueueName`",
    `n `"durable`": true,
    `n `"arguments`": {}
    `n}"
    # Write-Host $body
    $url = 'http://localhost:15672/api/queues/%2f/' + $QueueName
    # Write-Host $url
    $response = Invoke-RestMethod $url -Method 'PUT' -Headers $headers -Body $body
    $response | ConvertTo-Json
}
Save this into a helper.ps1 file and include it in your script like this:
$queueName = 'my-queue-name'
. .\helper.ps1
createQueue $queueName
Walkthrough to create a queue in RabbitMQ:
I couldn't find a command-line command to do it. Here is how I did it in code with Java.
RabbitMQ server version 3.3.5 on Ubuntu.
List the queues, no queues yet:
sudo rabbitmqctl list_queues
[sudo] password for eric:
Listing queues ...
...done.
Put this in CreateQueue.java
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
import java.util.*;

public class CreateQueue {
    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // optional queue arguments: expire messages after 60 seconds
        Map<String, Object> args = new HashMap<String, Object>();
        args.put("x-message-ttl", 60000);

        // queueDeclare(name, durable, exclusive, autoDelete, arguments)
        channel.queueDeclare("kowalski", false, false, false, args);

        channel.close();
        connection.close();
    }
}
Supply the jar file that came with your RabbitMQ installation; I'm using rabbitmq-client.jar version 0.9.1. Use the one that comes with your version of RabbitMQ.
Compile and run:
javac -cp .:rabbitmq-client.jar CreateQueue.java
java -cp .:rabbitmq-client.jar CreateQueue
It should finish without errors, check your queues now:
sudo rabbitmqctl list_queues
Listing queues ...
kowalski 0
...done.
the kowalski queue exists.
It helps to bind the queue to an exchange while you're at it; in the Java client that's (exchange and routing key names here are placeholders):
channel.queueBind("kowalski", "some_exchange", "some_routing_key");