How to make RabbitMQ API calls with vhost "/"?

The following API call to RabbitMQ:
http -a USER:PASS localhost:15001/api/queues/
Returns a list of queues:
[
  {
    ...
    "messages_unacknowledged_ram": 0,
    "name": "foo_queue",
    "node": "rabbit@queue-monster-01",
    "policy": "",
    "state": "running",
    "vhost": "/"
  },
  ...
]
Note that the vhost parameter is /.
How do I use a / vhost for the /api/queues/vhost/name call, which returns the details for a specific queue?
I have tried:
localhost:15001/api/queues/\//foo_queue
localhost:15001/api/queues///foo_queue
But both failed with 404 Object Not Found.

URL Encoding did the trick. The URL should be:
localhost:15001/api/queues/%2F/foo_queue
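For example, mirroring the call from the question (same port and credentials assumed):
http -a USER:PASS localhost:15001/api/queues/%2F/foo_queue
The same rule applies to any vhost: percent-encode its name before putting it in the path.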
For the record, I think that REST resources should not be named /, especially not by default.

Related

GoDaddy API to create subdomain returns "The given domain is not registered, or does not have a zone file"

I'm trying to use GoDaddy's API to create a subdomain using the following http request:
PATCH /v1/domains/domainName.com/records
Host: api.ote-godaddy.com
Authorization: sso-key API_KEY:API_SECRET
Content-Type: application/json
Content-Length: 100
[
  {
    "data": "111.111.111.111",
    "name": "subdomainName",
    "ttl": 6000,
    "type": "A"
  }
]
but I get the following response:
{
  "code": "UNKNOWN_DOMAIN",
  "message": "The given domain is not registered, or does not have a zone file"
}
Please change the host name to https://api.godaddy.com. Your request will only work against the production URL.
Please generate a production-level API key and secret.
Body (raw JSON):
[
  {
    "data": "YourServerIp",
    "name": "subdomainName",
    "port": 80,
    "priority": 10,
    "protocol": "string",
    "service": "string",
    "ttl": 600,
    "type": "A"
  }
]
Note:
This only works when the primary domain already exists in your GoDaddy account.
I figured out that these requests only work using production authorization against the production URL; they won't work using OTE authorization against the OTE URL. Maybe the URL has to be set as an OTE domain and not a production domain. Not sure.
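For reference, a sketch of the same request against production with curl (the key and secret are placeholders for your production credentials):
curl -X PATCH "https://api.godaddy.com/v1/domains/domainName.com/records" \
  -H "Authorization: sso-key PROD_API_KEY:PROD_API_SECRET" \
  -H "Content-Type: application/json" \
  -d '[{"data": "111.111.111.111", "name": "subdomainName", "ttl": 6000, "type": "A"}]'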

Ocelot microgateway hosted in IIS

Hi, I was trying to implement Ocelot for our experimental tests on dev.
Here is the endpoint of the API that I want to reach via Ocelot (both projects use port 443),
but I am getting 502 Bad Gateway all the time.
end point => https://localhost/document/api/v1/Documents/XYZ
"ReRoutes": [
{
"DownstreamPathTemplate": "/document/api/v1/Documents/{name}",
"DownstreamScheme": "https",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": 443
}
],
"UpstreamPathTemplate": "/apigateway/{name}/document",
"UpstreamHttpMethod": [ "Post" ],
"Priority": 0
}
],
"GlobalConfiguration": {
"BaseUrl": "https://localhost:443"
}
}
Microgateway alias name => "apigateway"
API alias name => "document"
In addition to this, I was able to debug in Visual Studio, but whenever I host both apps on my local IIS I get 502 Bad Gateway.
It appears that the configuration you have used is redirecting the request to the gateway itself, resulting in a loop:
i.e. the upstream call to the base URL "localhost:443" is redirected to the downstream "localhost:443", which is the same address.
Furthermore, later versions of Ocelot look for Routes in the configuration instead of ReRoutes (see the documentation).
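As a sketch, assuming the document API is moved to its own IIS site/port so the gateway no longer calls itself (port 5001 here is illustrative), with the newer Routes key:
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/document/api/v1/Documents/{name}",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        { "Host": "localhost", "Port": 5001 }
      ],
      "UpstreamPathTemplate": "/apigateway/{name}/document",
      "UpstreamHttpMethod": [ "Post" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "https://localhost:443"
  }
}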

How to configure Sensu with RMQ and InfluxDB

I am trying to get started with a monitoring server solution. I have the Sensu clients, RabbitMQ, and Uchiwa configured. I first tried Graphite, but there were so many parts to configure that I switched to InfluxDB instead. Now I am stuck connecting Sensu to InfluxDB.
Is there a part missing in the below configuration?
Client [Sensu] > RabbitMQ <> Sensu Server <> InfluxDB <> Grafana
Any suggestions?
cat influx.json
{
  "influxdb": {
    "hosts": ["192.168.1.1"],
    "host": "192.168.1.1",
    "port": "8086",
    "database": "sensumetrics",
    "time_precision": "s",
    "use_ssl": false,
    "verify_ssl": false,
    "initial_delay": 0.01,
    "max_delay": 30,
    "open_timeout": 5,
    "read_timeout": 300,
    "retry": null,
    "prefix": "",
    "denormalize": true,
    "status": true
  }
}
cat handler.json
{
  "handlers": {
    "influxdb": {
      "type": "pipe",
      "command": "/opt/sensu/embedded/bin/metrics-influxdb.rb"
    }
  }
}
checks1.json:
{
  "checks": {
    "check_memory_linux": {
      "handlers": ["influxdb", "default"],
      "command": "/opt/sensu/embedded/bin/check-memory-percent.rb -w 90 -c 95",
      "interval": 60,
      "occurrences": 5,
      "subscribers": [ "TEST" ]
    }
  }
}
checks2.json:
{
  "checks": {
    "check_cpu_linux-elkctrl-pipe": {
      "type": "metric",
      "command": "/opt/sensu/embedded/bin/check-cpu.rb -w 80 -c 90",
      "subscribers": ["TEST"],
      "interval": 10,
      "handlers": ["debug", "influxdb"]
    }
  }
}
To use InfluxDB to persist your data, you must have:
- The InfluxDB plugin installed (installation and usage instructions here)
- Definitions for the plugin (an influxdb.json containing at least the host, port, user, password and database to be used by Sensu)
- The definition, like other config files, placed in /etc/sensu/conf.d/
- Handler configuration set properly (also in conf.d)
- Mutator for InfluxDB (extensions)
Your checks must send results to the handler, so their definition must contain:
"handlers": [
"influxdb"
]
Or whatever name you gave your handler.
In any case, if the influxdb config you provided above is the full extent of your configuration, it seems to be missing the username/password attributes required by the InfluxDB configuration. If they're present but just not included in the post, no big deal. However, I'd recommend running the following against your Sensu logs:
grep -i influxdb /var/log/sensu/sensu-server.log
And seeing whether the check results are getting sent to your InfluxDB instance. If they are, you should be seeing an error that points a bit more precisely to what's going on.
You can also check your InfluxDB logs to see if they're getting a POST from your Sensu server:
journalctl -u influxdb.service -f
But yeah, if the username/password is missing from the configuration, that'd be the first place I'd look.
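For reference, a minimal influx.json sketch with the credential fields added (the values are placeholders, and the exact key names can vary between plugin versions, so check the plugin's README):
{
  "influxdb": {
    "host": "192.168.1.1",
    "port": "8086",
    "username": "sensu",
    "password": "changeme",
    "database": "sensumetrics",
    "time_precision": "s",
    "use_ssl": false
  }
}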

Can't Connect to Service via Marathon-lb using DCOS

I recently went through the tutorial for load balancing apps in DC/OS using marathon-lb (in the example they balance some nginx containers: https://dcos.io/docs/1.9/networking/marathon-lb/marathon-lb-advanced-tutorial/). I am trying to use this approach to internally load balance my own custom application, a Play Scala app.
I have the internal marathon-lb set up and can use it successfully for the nginx container, but when I try to use my own Docker image I cannot get it to work. I start the service with my custom image and can access it fine via the IP and port assigned to it (i.e. if the service gets deployed on 10.0.0.0 and is available on port 1234, then curl http://10.0.0.0:1234/ works as expected, and I can also make the API calls defined in my application routes). However, when I try to access the app through the load balancer (curl -i http://marathon-lb-internal.marathon.mesos:10002, where 10002 is the service port), I get this message:
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
For reference, here is my json file I'm using to start my custom service:
{
  "id": "my-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my_repo/my_image:1.0.0",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 9000, "servicePort": 10002, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "env", "value": "USER_NAME=user" },
        { "key": "env", "value": "USER_PASSWORD=password" }
      ],
      "forcePullImage": true
    }
  },
  "instances": 1,
  "cpus": 1,
  "mem": 1000,
  "healthChecks": [{
    "protocol": "HTTP",
    "path": "/v1/health",
    "portIndex": 0,
    "timeoutSeconds": 10,
    "gracePeriodSeconds": 10,
    "intervalSeconds": 2,
    "maxConsecutiveFailures": 10
  }],
  "labels": {
    "HAPROXY_GROUP": "internal"
  },
  "uris": [ "https://s3.amazonaws.com/my_bucket/my_docker_credentials" ]
}
I had the same problem and found the solution here:
marathon-lb health check failing on all spray.io containers
You need to add
"HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
to your config so that the REST layer doesn't bark at the health check from Marathon.
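In the app definition above, that means extending the labels block, for example:
"labels": {
  "HAPROXY_GROUP": "internal",
  "HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
}
Then redeploy the app so that marathon-lb regenerates its HAProxy configuration with the new option.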

OpenShift Aggregated Logging: Parse Apache access log

When using OpenShift Aggregated Logging, I get logs nicely fed into Elasticsearch. However, the line as logged by Apache ends up in a single message field.
I'd like to create queries in Kibana where I can access the URL, the status code, and other fields individually. For that, the Apache access log needs dedicated parsing.
How can I do that?
This is an example entry as seen in kibana:
{
  "_index": "42-steinbruchsteiner-staging.3af0bedd-eebc-11e6-af4b-005056a62fa6.2017.03.29",
  "_type": "fluentd",
  "_id": "AVsY3aSK190OXhxv4GIF",
  "_score": null,
  "_source": {
    "time": "2017-03-29T07:00:25.595959397Z",
    "docker_container_id": "9f4fa85a626d2f5197f0028c05e8e42271db7a4c674cc145204b67b6578f3378",
    "kubernetes_namespace_name": "42-steinbruchsteiner-staging",
    "kubernetes_pod_id": "56c61b65-0b0e-11e7-82e9-005056a62fa6",
    "kubernetes_pod_name": "php-app-3-weice",
    "kubernetes_container_name": "php-app",
    "kubernetes_labels_deployment": "php-app-3",
    "kubernetes_labels_deploymentconfig": "php-app",
    "kubernetes_labels_name": "php-app",
    "kubernetes_host": "itsrv1564.esrv.local",
    "kubernetes_namespace_id": "3af0bedd-eebc-11e6-af4b-005056a62fa6",
    "hostname": "itsrv1564.esrv.local",
    "message": "10.1.3.1 - - [29/Mar/2017:01:59:21 +0200] \"GET /kwf/status/health HTTP/1.1\" 200 2 \"-\" \"Go-http-client/1.1\"\n",
    "version": "1.3.0"
  },
  "fields": {
    "time": [
      1490770825595
    ]
  },
  "sort": [
    1490770825595
  ]
}
Disclaimer: I did not test this out in OpenShift. I don't know which tech stack you are using for your microservice.
This is how I do this in a spring boot application (with logback) deployed in Kubernetes.
1. Use the Logstash encoder for Logback (this will write logs in JSON format, which is more ELK-stack friendly)
I have a Gradle dependency to enable this:
compile "net.logstash.logback:logstash-logback-encoder:3.5"
Then configure LogstashEncoder as the encoder in the appender, in logback-spring.groovy/logback-spring.xml (or logback.xml), for example:
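A minimal sketch of such an appender (the appender name and root log level are illustrative):
<configuration>
  <appender name="json" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="json"/>
  </root>
</configuration>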
2. Have some filters or libraries to write the access log
For 2, either use:
A. The "net.rakugakibox.springbootext:spring-boot-ext-logback-access:1.6" library
(this is what I am using)
It gives a nice JSON format, as follows:
{
  "@timestamp": "2017-03-29T09:43:09.536-05:00",
  "@version": 1,
  "@message": "0:0:0:0:0:0:0:1 - - [2017-03-29T09:43:09.536-05:00] \"GET /orders/v1/items/42 HTTP/1.1\" 200 991",
  "@fields.method": "GET",
  "@fields.protocol": "HTTP/1.1",
  "@fields.status_code": 200,
  "@fields.requested_url": "GET /orders/v1/items/42 HTTP/1.1",
  "@fields.requested_uri": "/orders/v1/items/42",
  "@fields.remote_host": "0:0:0:0:0:0:0:1",
  "@fields.HOSTNAME": "0:0:0:0:0:0:0:1",
  "@fields.content_length": 991,
  "@fields.elapsed_time": 48,
  "HOSTNAME": "ABCD"
}
OR
B. Use Logback's Tee Filter
OR
C. Spring's CommonsRequestLoggingFilter (Did not really test this out)
Add a bean definition
@Bean
public CommonsRequestLoggingFilter requestLoggingFilter() {
    CommonsRequestLoggingFilter crlf = new CommonsRequestLoggingFilter();
    crlf.setIncludeClientInfo(true);
    crlf.setIncludeQueryString(true);
    crlf.setIncludePayload(true);
    return crlf;
}
Then set org.springframework.web.filter.CommonsRequestLoggingFilter to DEBUG. This can be done in application.properties by adding:
logging.level.org.springframework.web.filter.CommonsRequestLoggingFilter=DEBUG