I would like to expose Traefik's internal metrics.
After reading the documentation, I created the following configuration file:
logLevel = "INFO"
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.dashboard]
address = ":16081"
# API definition
[api]
entryPoint = "dashboard"
dashboard = true
debug = false
[api.statistics]
recentErrors = 10
# Metrics definition
[metrics]
# DataDog metrics exporter type
[metrics.datadog]
address = "172.17.0.1:8125"
pushInterval = "10s"
################################################################
# Mesos/Marathon Provider
################################################################
# Enable Marathon Provider.
[marathon]
endpoint = "http://mesos.lan:8080/"
watch = true
domain = "service.lan"
exposedByDefault = false
When I query the dashboard entrypoint, I get a 404 error on /metrics:
curl -s http://localhost:16081/health | jq
{
"pid": 1,
"uptime": "3h31m3.5252748s",
"uptime_sec": 12663.5252748,
"time": "2018-09-04 16:53:17.7128687 +0000 UTC m=+12663.602939001",
"unixtime": 1536079997,
"status_code_count": {},
"total_status_code_count": {
"404": 5
},
"count": 0,
"total_count": 5,
"total_response_time": "390.7µs",
"total_response_time_sec": 0.0003907,
"average_response_time": "78.14µs",
"average_response_time_sec": 7.814e-05,
"recent_errors": [
{
"status_code": 404,
"status": "Not Found",
"method": "GET",
"host": "localhost:16081",
"path": "/metrics",
"time": "2018-09-04T16:53:12.0232879Z"
},
{
"status_code": 404,
"status": "Not Found",
"method": "GET",
"host": "localhost:16081",
"path": "/metrics",
"time": "2018-09-04T13:18:52.7206202Z"
},
{
"status_code": 404,
"status": "Not Found",
"method": "GET",
"host": "localhost:16081",
"path": "/metrics",
"time": "2018-09-04T13:18:51.853093Z"
},
{
"status_code": 404,
"status": "Not Found",
"method": "GET",
"host": "localhost:16081",
"path": "/metrics",
"time": "2018-09-04T13:18:50.9894516Z"
},
{
"status_code": 404,
"status": "Not Found",
"method": "GET",
"host": "localhost:16081",
"path": "/metrics",
"time": "2018-09-04T13:18:49.8598176Z"
}
]
}
curl -s http://localhost:16081/metrics
404 page not found
Did I miss something?
My main objective is to be able to get metrics per frontend/backend.
I would like to know the number of requests and the returned status codes per frontend.
Thanks,
Renaud
This is solved. Long story short, /metrics is only exposed when the Prometheus provider is enabled. When the Datadog provider is enabled, all metrics are sent to Datadog.
Details can be found here: github.com/containous/traefik/issues/3877
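For anyone landing here with the same goal, a minimal sketch of the Prometheus exporter block that exposes /metrics (assuming Traefik 1.x TOML; the entryPoint and buckets values are only illustrative and should be adapted to the configuration above):
[metrics]
  [metrics.prometheus]
    entryPoint = "dashboard"
    buckets = [0.1, 0.3, 1.2, 5.0]
With this in place, /metrics should be served on the chosen entrypoint (here the dashboard one on :16081).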
I have recently joined a company that uses Vonage for WhatsApp communications. These were working fine in both dev and production, but both have suddenly stopped working.
The shape of the incoming JSON appears to have changed significantly, but I have updated the code that reads it and can now ingest the messages into the system.
The problem I have now is that the outgoing message is not being accepted.
If I send this json body over
{
"from": "4474183xxxxx",
"to": "4474719xxxxx",
"message": {
"Type": "text",
"content": "Hello! I’m CAI, A Virtual Chat-bot assistant. blurb, more blurb.... .\n\n1. Okay I understand\n\nChoose any one option. Type \"1\" to choose first option."
}
}
I get the following response
{
"Status": "fail", // custom one to my company
"e3": {
"Error": {
"body": {
"type": "https://developer.vonage.com/api-errors",
"title": "Your request parameters didn't validate.",
"detail": "Found errors validating 1 of your submitted parameters.",
"invalid_parameters": [
{
"name": "to",
"reason": "Malformed JSON body."
}
],
"instance": "cf4bce73-2db5-4102-b7af-xxxxxx"
},
"headers": {
"date": "Wed, 02 Nov 2022 14:02:08 GMT",
"content-type": "application/problem+json",
"content-length": "287",
"connection": "close",
"x-envoy-upstream-service-time": "3",
"x-frame-options": "deny",
"x-xss-protection": "1; mode=block",
"strict-transport-security": "max-age=31536000; includeSubdomains",
"x-content-type-options": "nosniff",
"server": "envoy"
},
"statusCode": 400
},
"StatCode": 400,
"Response": null
},
"a": {
"label": 6,
"trys": [
[
0,
6,
null,
7
]
],
"ops": []
}
}
However, if I follow the shape of the message shown on the website (https://dashboard.nexmo.com/messages/sandbox) and send this message
{
"from": "xxxx",
"to": "xxxx",
"message_type": "text",
"text": "Hello! I’m CAI, blurb... .\n\n1. Okay I understand\n\nChoose any one option. Type \"1\" to choose first option.",
"channel" : "whatsapp"
}
I get this response
{
"Status": "fail",
"e3": {},
"a": {
"label": 6,
"trys": [
[
0,
6,
null,
7
]
],
"ops": []
},
"Message": "Cannot read property 'Type' of undefined"
}
I would be grateful if someone could help me with the shape of the message that needs to be sent to Vonage so that it can be delivered to the end user/recipient correctly.
Thanks
Simon
Using test listing information, I'm trying to send an HTTP PUT request to eBay using Power Automate. To set up my HTTP action, I've followed the API instructions for the createOrReplaceInventoryItem call.
eBay API Documentation
I continue to get the below error:
{"errors":[{"errorId":25709,"domain":"API_INVENTORY","subdomain":"Selling","category":"REQUEST","message":"Invalid
value for header Content-Language."}]}
I have read many articles suggesting changing Content-Language to Accepted-Language, but that has not resolved the issue. Another article said to assign it to the HTTPContent instance, but I'm not sure how to do this within Power Apps.
Link to Suggestion on this thread
Here is the Request:
{
"uri": "https://api.ebay.com/sell/inventory/v1/inventory_item/TestMonkey",
"method": "PUT",
"headers": {
"Authorization": "*sanitized*",
"Content-Language\t": "en_US",
"Content-Type": "application/json",
"X-EBAY-C-MARKETPLACE-ID": "EBAY_US"
},
"body": {
"availability": {
"shipToLocationAvailability": {
"quantity": 50
}
},
"condition": "NEW",
"product": {
"title": "GoPro Hero4 Helmet Cam",
"description": "New GoPro Hero4 Helmet Cam. Unopened box.",
"aspects": {
"Brand": [
"GoPro"
],
"Type": [
"Helmet/Action"
],
"Storage Type": [
"Removable"
],
"Recording Definition": [
"High Definition"
],
"Media Format": [
"Flash Drive (SSD)"
],
"Optical Zoom": [
"10x"
]
},
"brand": "GoPro",
"mpn": "CHDHX-401",
"imageUrls": [
"https://coolbackgrounds.io/images/backgrounds/white/pure-white-background-85a2a7fd.jpg"
]
}
}
}
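One thing I notice when re-reading the raw request: the header name appears to contain a literal tab character ("Content-Language\t"). A cleaned-up headers object, assuming eBay wants the hyphenated en-US locale form rather than en_US (I have not confirmed which form it accepts), might look like this:
"headers": {
  "Authorization": "*sanitized*",
  "Content-Language": "en-US",
  "Content-Type": "application/json",
  "X-EBAY-C-MARKETPLACE-ID": "EBAY_US"
}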
How should I configure the Content-Language header value in Power Automate?
Thank you in advance.
I am currently using KrakenD (https://krakend.io) as an API gateway to proxy requests to my backend services. One of my backend API responses is a redirect with HTTP 303. The redirect response looks like this:
HTTP/1.1 303 See Other
content-length: 48
content-type: text/plain; charset=utf-8
date: Thu, 16 Jul 2020 10:25:41 GMT
location: https://www.detik.com/
vary: Accept
x-powered-by: Express
x-envoy-upstream-service-time: 17
server: istio-envoy
The problem is that, instead of returning the HTTP 303 response to the client as-is (with the Location response header), KrakenD is actually following the redirect and returning the response of the redirect URL, i.e. the HTML of https://www.detik.com/.
My current KrakenD configuration looks like this:
{
"version": 2,
"extra_config": {
"github_com/devopsfaith/krakend-cors": {
"allow_origins": [],
"expose_headers": [
"Content-Length",
"Content-Type",
"Location"
],
"allow_headers": [
"Content-Type",
"Origin",
"X-Requested-With",
"Accept",
"Authorization",
"secret",
"Host"
],
"max_age": "12h",
"allow_methods": [
"GET",
"POST",
"PUT"
]
},
"github_com/devopsfaith/krakend-gologging": {
"level": "ERROR",
"prefix": "[GATEWAY]",
"syslog": false,
"stdout": true,
"format": "default"
},
"github_com/devopsfaith/krakend-logstash": {
"enabled": false
}
},
"timeout": "10000ms",
"cache_ttl": "300s",
"output_encoding": "json",
"name": "api-gateway",
"port": 8080,
"endpoints": [
{
"endpoint": "/ramatestredirect",
"method": "GET",
"extra_config": {},
"output_encoding": "no-op",
"concurrent_calls": 1,
"backend": [
{
"url_pattern": "/",
"encoding": "no-op",
"sd": "static",
"extra_config": {},
"method": "GET",
"host": [
"http://ramatestredirect.default.svc.cluster.local"
],
"disable_host_sanitize": false
}
]
}
]
}
So how can I make KrakenD return the original HTTP 303 response from my backend service to the client unaltered?
Thank You
I assume that you're calling the /ramatestredirect endpoint.
To get the backend HTTP status code (as you said, it returns a 303), you can do it this way:
{
"endpoint": "/ramatestredirect",
"method": "GET",
"extra_config": {},
"output_encoding": "no-op",
"concurrent_calls": 1,
"backend": [
{
"url_pattern": "/",
"encoding": "no-op",
"sd": "static",
"extra_config": {
"github.com/devopsfaith/krakend/http": {
"return_error_details": "authentication"
}
},
"method": "GET",
"host": [
"http://ramatestredirect.default.svc.cluster.local"
],
"disable_host_sanitize": false
}
]
}
So, basically, with this plugin you can get the original backend HTTP status code:
"github.com/devopsfaith/krakend/http": {
"return_error_details": "authentication"
}
If you use the Lura framework (formerly known as the KrakenD framework), then you may have to disable redirects in your HTTP client.
client := &http.Client{
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
}
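As a quick sketch of the effect (this continues from the client above, needs the fmt, log and net/http imports, and uses the backend host from the config as an example; not verified against a live cluster):
resp, err := client.Get("http://ramatestredirect.default.svc.cluster.local/")
if err != nil {
	log.Fatal(err)
}
defer resp.Body.Close()
// Because CheckRedirect returns http.ErrUseLastResponse, the client stops at the
// redirect: StatusCode is 303 and the Location header is still intact.
fmt.Println(resp.StatusCode, resp.Header.Get("Location"))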
When trying to use IPFS from localhost, I am having trouble accessing the IPFS API. I tried setting my config to allow localhost as an origin and everything else I could find, but nothing seems to work.
The error:
Failed to load http://127.0.0.1:5001/api/v0/files/stat?arg=0x6db883c6f3b2824d26f3b2e9c30256b490d125b10a3942f49a1ac715dd2def89&stream-channels=true: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:63342' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
IPFS Config:
{
"API": {
"HTTPHeaders": {
"Access-Control-Allow-Origin": [
"*"
]
}
},
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Announce": [],
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"NoAnnounce": [],
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
},
"Bootstrap": [
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
"/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
"/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
"/ip6/2604:a880:1:20::203:d001/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip6/2400:6180:0:d0::151:6001/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip6/2604:a880:800:10::4a:5001/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd"
],
"Datastore": {
"BloomFilterSize": 0,
"GCPeriod": "1h",
"HashOnRead": false,
"Spec": {
"mounts": [
{
"child": {
"path": "blocks",
"shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
"sync": true,
"type": "flatfs"
},
"mountpoint": "/blocks",
"prefix": "flatfs.datastore",
"type": "measure"
},
{
"child": {
"compression": "none",
"path": "datastore",
"type": "levelds"
},
"mountpoint": "/",
"prefix": "leveldb.datastore",
"type": "measure"
}
],
"type": "mount"
},
"StorageGCWatermark": 90,
"StorageMax": "10GB"
},
"Discovery": {
"MDNS": {
"Enabled": true,
"Interval": 10
}
},
"Experimental": {
"FilestoreEnabled": false,
"Libp2pStreamMounting": false,
"ShardingEnabled": false
},
"Gateway": {
"HTTPHeaders": {
"Access-Control-Allow-Headers": [
"X-Requested-With",
"Range"
],
"Access-Control-Allow-Methods": [
"GET"
],
"Access-Control-Allow-Origin": [
"localhost:63342"
]
},
"PathPrefixes": [],
"RootRedirect": "",
"Writable": false
},
"Identity": {
"PeerID": "QmRgQdig4Z4QNEqs5kp45bmq6gTtWi2qpN2WFBX7hFsenm"
},
"Ipns": {
"RecordLifetime": "",
"RepublishPeriod": "",
"ResolveCacheSize": 128
},
"Mounts": {
"FuseAllowOther": false,
"IPFS": "/ipfs",
"IPNS": "/ipns"
},
"Reprovider": {
"Interval": "12h",
"Strategy": "all"
},
"Swarm": {
"AddrFilters": null,
"ConnMgr": {
"GracePeriod": "20s",
"HighWater": 900,
"LowWater": 600,
"Type": "basic"
},
"DisableBandwidthMetrics": false,
"DisableNatPortMap": false,
"DisableRelay": false,
"EnableRelayHop": false
}
}
Ben, try replacing 127.0.0.1 with localhost. go-ipfs whitelists localhost only. Also check https://github.com/ipfs/js-ipfs-api/#cors
My answer might come very late; however, I have been working through some CORS issues with IPFS on my end, so I might have a solution for you. Try running:
# please update origin according to your setup...
origin=http://localhost:63342
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["'"$origin"'", "http://127.0.0.1:8080","http://localhost:3000", "http://127.0.0.1:48084", "https://gateway.ipfs.io", "https://webui.ipfs.io"]'
ipfs config API.HTTPHeaders.Access-Control-Allow-Origin
and then restarting your ipfs daemon. That might fix it.
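Depending on the client, you may also need to allow the relevant HTTP methods and credentials on the API (an assumption on my side, based on the js-ipfs-api CORS notes linked above):
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST", "GET"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Credentials '["true"]'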
if the "fetch" button in the following linked page works : you are all set ! https://gateway.ipfs.io/ipfs/QmXkhGQNruk3XcGsidCzQbcNQ5a8oHWneHZXkPvWB26RbP/
This command works for me:
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["'"$origin"'", "http://127.0.0.1:8080", "http://localhost:3000"]'
You can allow requests from multiple origins.
I am referring to the information at the following URL:
https://ibm-public-cos.github.io/crs-docs/about-compatibility-api#operations-on-buckets
As mentioned at the above URL, there are three objects related to the IBM Cloud Storage Object API that are accessible. However, some APIs, e.g. PUT Bucket ACL and GET Bucket ACL, are not accessible, and I am getting a 403 error message when calling them with Postman. How can we access these APIs? Please share any information related to this.
Any help is greatly appreciated.
As of today, GET/PUT Bucket ACL are supported in IBM COS. If you are using Postman, here's an example of a Postman dump for these (obviously you'll need to calculate the Authorization header and use your own info; this is just an educational sample):
{
"version": 1,
"collections": [
{
"id": "ab20b534-025a-4be2-b90f-a980d4a81632",
"name": "Operations on buckets",
"folders": [],
"requests": [
{
"id": "b89a66a4-0183-43cf-9712-7c07ba615b0b",
"url": "http://endpoint/bucket-name?acl=",
"method": "PUT",
"collectionId": "ab20b534-025a-4be2-b90f-a980d4a81632",
"name": "Add a bucket ACL (canned)",
"description": "",
"headers": "x-amz-date: {timestamp}\nAuthorization: {authorization-string}\nx-amz-acl: public-read\nContent-Type: text/plain",
"dataMode": "raw",
"data": ""
},
{
"id": "df2a9b2a-d66b-4ea6-8cfb-ab341ec23bc2",
"url": "http://endpoint/bucket-name?acl=",
"method": "PUT",
"collectionId": "ab20b534-025a-4be2-b90f-a980d4a81632",
"name": "Add a bucket ACL (custom)",
"description": "",
"headers": "x-amz-date: {timestamp}\nx-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\nAuthorization: {authorization-string}\nContent-Type: text/plain; charset=utf-8",
"dataMode": "raw",
"data": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<AccessControlPolicy xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\">\n <Owner>\n <ID>d4d11b981e6e489486a945d640d41c4d</ID>\n <DisplayName>OwnerDisplayName1</DisplayName>\n </Owner>\n <AccessControlList>\n <Grant>\n <Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"CanonicalUser\">\n <ID>d4d11b981e6e489486a945d640d41c4d</ID>\n <DisplayName>some-name</DisplayName>\n </Grantee>\n <Permission>FULL_CONTROL</Permission>\n </Grant>\n <Grant>\n <Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"CanonicalUser\">\n <ID>a5c26620b0704967a72d7aeb90cf57b2</ID>\n <DisplayName>some-name</DisplayName>\n </Grantee>\n <Permission>WRITE</Permission>\n </Grant>\n </AccessControlList>\n</AccessControlPolicy>"
},
{
"id": "7f79523f-b9aa-46ad-abda-9ad7e37d3980",
"url": "http://bucket-name.endpoint/?acl=",
"method": "GET",
"collectionId": "ab20b534-025a-4be2-b90f-a980d4a81632",
"name": "Get a bucket ACL",
"description": "",
"headers": "x-amz-date: {timestamp}\nAuthorization: {authorization-string}\nContent-Type: text/plain",
"dataMode": "raw",
"data": ""
}
],
"order": [
"b89a66a4-0183-43cf-9712-7c07ba615b0b",
"df2a9b2a-d66b-4ea6-8cfb-ab341ec23bc2",
"7f79523f-b9aa-46ad-abda-9ad7e37d3980"
]
}
],
"environments": [
{
"id": "fa619322-0a84-47b5-958b-84fa4a6286ba",
"name": "API-Flow Imports",
"values": [],
"timestamp": 1497532748923
}
]
}
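If it helps, a rough curl equivalent of the "Get a bucket ACL" request above would be something like the following (the endpoint, bucket name, timestamp, and signed Authorization string are placeholders to fill in with your own values):
curl -X GET "http://bucket-name.endpoint/?acl=" \
  -H "x-amz-date: {timestamp}" \
  -H "Authorization: {authorization-string}" \
  -H "Content-Type: text/plain"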
Based on your reference to the 'Cloud Storage Object API', I can guess you might be working with an on-premises installation of IBM COS. Please send me an email at nicholas.lange[at]ibm.com so we can discuss what you are looking for.