Trying to set up distributed event tracing - traefik

I am trying to set up distributed event tracing throughout our microservice architecture.
Here is some preamble about our architecture:
Traefik load balancer that forwards requests to the appropriate backend service based on the route pathname.
Frontend application on a "catchall" route that is served whenever a route is not caught by another microservice.
Various backend services in node/dotnetcore listening on /api/<serviceName>
traefik is set up with traceContextHeaderName set to "trace-id".
How I imagine this would work is that the frontend application receives a header "trace-id" from the load balancer with a value that can be used to "link" the spans together for requests that are related.
Example scenario:
When a customer attempts to sign in, they make a request for the web application, receive and render the HTML/CSS/JS, and then the subsequent request to /api/auth/login can be POSTed with the login data and the value of the "trace-id" header supplied by traefik.
The backend service that handles the /api/auth/login endpoint can capture this "trace-id" header value and publish some spans to Jaeger related to the work that it is doing to validate the user.
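To make the scenario concrete, here is roughly what I imagine the frontend doing (a sketch only; how the frontend actually obtains the trace-id value is exactly the part I am unsure about):

const traceId = "..."; // hypothetically captured from the "trace-id" header traefik sent with the page request
fetch("/api/auth/login", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "trace-id": traceId, // forwarded so the backend's spans join the same trace
  },
  body: JSON.stringify({ email: "user@example.com", password: "hunter2" }),
});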
What is happening:
When the request is made for the frontend HTML, no "trace-id" header is received, so any subsequent spans that are published are all considered individual traces and are not linked together.
traefik.toml:
...
[tracing]
backend = "jaeger"
serviceName = "traefik"
spanNameLimit = 0
[tracing.jaeger]
samplingServerURL = "http://jaeger:5778/sampling"
samplingType = "const"
samplingParam = 1.0
localAgentHostPort = "jaeger:6831"
traceContextHeaderName = "trace-id"
...
I understand that Stack Overflow is not a "code it for me" service. I am looking for guidance on what could possibly be going wrong, as I am new to distributed event tracing.
I have tried googling and searching for answers but I have come to a dead end.
Any help/suggestions on where to look would be greatly appreciated.
Please let me know if I am barking up the wrong tree, approaching this incorrectly, or if my understanding of how the traceContextHeaderName should work is incorrect.

Ugh. I am an idiot.
Here is what was going wrong, for anyone else who might be stuck on something like this:
The frontend application was receiving the header; I was just looking in the wrong place.
The request comes from the load balancer to the Node frontend microservice, which sends its response to the browser.
I was checking the browser for the header, but the Node frontend microservice was not forwarding this header to the browser.
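For completeness, the fix amounts to having the frontend service pass the header through. A minimal sketch, assuming an Express app (our actual code differs, but the idea is the same):

const express = require("express");
const app = express();

// Copy the trace header that traefik attached to the incoming request
// onto the outgoing response, so the browser can read it and reuse it
// on subsequent API calls.
app.use((req, res, next) => {
  const traceId = req.headers["trace-id"];
  if (traceId) {
    res.set("trace-id", traceId);
  }
  next();
});

app.listen(3000);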

Related

Sending xAPI statement to a web application instead of LRS

I have xAPI content made with Storyline, and I want the statements to be sent to a web app instead of the LRS.
This web app is developed using Laravel, and users must be authenticated with an email and password to use it.
What I did to send the statement to this app:
1. In the web app, I created an API endpoint route that uses the POST method.
2. In the xAPI wrapper, I changed the endpoint in the configuration to the route I made in the web app.
const conf = {
  "endpoint": "here I added my api endpoint route of the webapp",
  "auth": "Basic " + toBase64(""),
};
Now, with any interaction with the content where a statement should be sent, the request produces a CORS error like in the picture below. I think this is an authentication error; how can I add my authentication credentials to the xAPI wrapper?
Your non-LRS LRS is probably not handling preflight requests, which are necessary for CORS handling. Most common LRSs will handle those requests appropriately since they expect to be accessed from additional origins. See https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#preflighted_requests
Also note that you'll likely run into issues unless you also handle state requests.
Additionally, unless you are requesting the credentials from the user at runtime, hard-coding the credentials into the package isn't a great idea from a security perspective.
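For illustration, a preflight handler looks something like the following. This is sketched with Node/Express rather than Laravel (the equivalent in Laravel would be a middleware that short-circuits OPTIONS requests); the origin and allowed headers are placeholders you would tighten up:

const express = require("express");
const app = express();

// Answer the CORS preflight the browser sends before the xAPI POST.
app.use((req, res, next) => {
  res.set("Access-Control-Allow-Origin", "https://the-origin-serving-your-content");
  res.set("Access-Control-Allow-Methods", "GET, POST, PUT, OPTIONS");
  res.set("Access-Control-Allow-Headers", "Authorization, Content-Type, X-Experience-API-Version");
  if (req.method === "OPTIONS") {
    return res.sendStatus(204); // preflight handled; no body needed
  }
  next();
});

app.listen(3000);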

How to make REST API deployed to heroku accessible only through rapidAPI

Salutations!
I have just completed my first REST API, deployed on Heroku, and I decided it would be cool to make $0 a month through RapidAPI.
The RapidAPI testing dashboard passes the tests successfully, with one of their keys being a requirement for an API call.
However, when I access the site in a browser or in Postman, there is no need for an API key and therefore no restrictions on GET requests.
I have noticed that the test code makes a fetch request to the RapidAPI URL for the project, but how can I make the Heroku URL accessible only from RapidAPI?
I know it's extremely unlikely someone will find my Heroku app URL, but it is technically possible.
I appreciate your time and insights.
RapidAPI provides two security features to support this:
Set X-RapidAPI-Proxy-Secret in the API Dashboard: this token is added in the X-RapidAPI-Proxy-Secret HTTP header for each request. You should validate it for every API call (a sketch follows below). This is the default measure in place.
The list of IP addresses used by RapidAPI is provided: you can check/validate the source IP for every API call.
There might be a Heroku add-on to help with the IP filtering, but those are typically enterprise plugins (with an associated cost).
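For example, validating the proxy secret can be a small middleware in whatever framework backs the API (sketched here with Express; the environment variable name is made up):

const express = require("express");
const app = express();

// Reject any request that didn't come through the RapidAPI proxy.
// RapidAPI attaches the secret to every proxied request, so a missing
// or wrong value means the caller hit the Heroku URL directly.
app.use((req, res, next) => {
  if (req.headers["x-rapidapi-proxy-secret"] !== process.env.RAPIDAPI_PROXY_SECRET) {
    return res.status(403).json({ error: "Forbidden" });
  }
  next();
});

app.listen(process.env.PORT || 3000);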
RapidAPI allows you to add secret headers and/or query string parameters to API requests. The RapidAPI proxy adds these secrets to every request, but they are hidden from the API consumers.
Find more details in this page: https://docs.rapidapi.com/docs/secret-headers-parameters

Service Worker Response Cache Headers

I'm trying to figure out how the service worker works with regard to cache headers in responses. I have implemented a couple of service workers now, but have never had to bother worrying about caching headers, how long items should be cached for, etc. I'm now implementing one on an enterprise production site, where this stuff actually matters.
Basically when using a service worker, is the http cache completely bypassed?
Do I then need to build a framework to handle resource expiration/invalidation like the http cache used to do for us? Or am I talking rubbish?
Would be super helpful if someone could provide some clarification on this. The way I see it, there are 3 potential scenarios:
A). Network request => Service worker fetch => (Browser cache?) <=> Server
B). Network request <=> (Browser cache?) <=> Service worker fetch <=> Server
C). Network request => Service worker fetch <=> Server
I've tested this locally and it seems that C) is the correct implementation, whereby we, the developers, have sacrificed cache header/duration abstraction for control.
I'm fine with this; I just want it clarified before I run off and build a framework for reading and honouring caching headers in the service worker.
A) is the correct model. If a service worker controls a page, all network requests will trigger the fetch event handler of the service worker prior to consulting the browser cache or the network.
In turn, any time the service worker makes a network request, either explicitly via fetch() or implicitly via cache.add()/cache.addAll(), the browser's "traditional" cache is consulted for a response first. A network request to the server will only be made if there isn't a valid response in the browser cache.
This sometimes works in your favor, and sometimes can expose you to subtle bugs if you don't expect that behavior.
There's a very detailed explanation of this interaction at https://jakearchibald.com/2016/caching-best-practices/, specifically covering things to avoid and ways to take advantage of this behavior.
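As a minimal illustration of that model, a cache-first fetch handler looks like this (a sketch, not a complete caching strategy):

// sw.js: every request from a controlled page hits this handler first.
self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      // Serve from the Cache Storage API if we have a match; otherwise
      // fall through to fetch(), which itself consults the browser's
      // HTTP cache before touching the network.
      return cached || fetch(event.request);
    })
  );
});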
That depends on how you configure the request. With the Fetch API you can control how a request interacts with the browser's HTTP cache.
For example, you can set a request's cache mode to "no-store" so it will bypass the HTTP cache, or you can set it to "force-cache" so the browser will return a cached response even if it is stale:
fetch("some.json", {cache: "force-cache"})
.then(function(response) { /* consume the response */ });
By default, a request's cache mode is "default". In this case the request will act as usual. Obviously, that is only if the service worker actually performs a request instead of returning some hardcoded response.
For more information, check the Fetch Standard, the Request.cache MDN page, and the Using Service Workers MDN page.
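For instance, inside the service worker's fetch handler you could force every request past the HTTP cache (a sketch, safe as written for GET requests):

// sw.js: re-issue each request with the HTTP cache bypassed.
self.addEventListener("fetch", (event) => {
  event.respondWith(
    // "no-store" skips the HTTP cache entirely; "force-cache" would do
    // the opposite and prefer a cached response even if it is stale.
    fetch(event.request, { cache: "no-store" })
  );
});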

How to receive webhook signal from 3rd party service

I'm using a SaaS for my AWS instance monitoring and Mandrill for email sending/campaigns.
I had created a simple chart with Zapier, but I'd rather host it myself. So my question is:
How can I receive a webhook signal from Mandrill and then send it to Datadog from my server? Then again, I guess hosting this script right on the same server I'm monitoring would be a terrible idea...
Basically, I don't know how to "receive the webhook" so I can report it back to my Datadog service agent and have it updated on their website.
I get how to actually report the data to Datadog, as explained at http://docs.datadoghq.com/api/, but I just don't have a clue how to host a listener for webhooks.
Programming language isn't important, I don't have a preference for that case.
Here you can find how to add a new webhook to your Mandrill account: https://mandrillapp.com/api/docs/webhooks.php.html#method=add
The main thing here is this:
$url = 'http://example/webhook-url';
This is your webhook URL, which will process the data sent by Mandrill and forward the information to Datadog.
And here is a description of what Mandrill will send to your webhook URL: http://help.mandrill.com/entries/21738186-Introduction-to-Webhooks
A listener for webhooks is nothing other than a website/app that triggers an action when a request comes in. Usually you keep it secret or secure it with (HTTP basic) authentication. E.g. create a page called http://yourdomain.com/hooklistener.php. You can then call it with HTTP GET and pass some data, like hooklistener.php?event=triggerDataDog, or with POST and send the data along in the body. You then run a script or anything you want to process that event.
A "listener" is just any URL that you host where you can receive data that is posted to it. Keep in mind, since you mentioned Zapier, you can set up a trigger that receives the webhook data - in this case the listener URL is provided by Zapier, and you can then send that data into any application (or even post to another webhook). Using Zapier is nice because it doesn't require you to write the listener code that receives the hook data and does something with it.

Strategies to block an external webservice to simulate "down" during for a testing scenario?

I am working to integrate data from an external web service on the client side of my application. Someone asked me to test the condition where the service is unavailable or down. Does anyone have any tips on how to block this site temporarily while we run the test to see how the service degrades?
For those curious, we are testing against Virtual Earth rather than Google Maps, but this would apply to any equally complicated external service.
Any thoughts and suggestions are welcome.
Create a mock web-service class or interface (and inject it). With that, you can test the response of your system to web-service failures, and also what happens if a web-service request takes longer than expected or actually times out.
DeveloperWorks article on mock testing: http://www.ibm.com/developerworks/library/j-mocktest.html
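A sketch of that idea in JavaScript (all names here are illustrative):

// One interface, several implementations: the code under test receives
// whichever service the scenario calls for (dependency injection).
class RealMapService {
  async geocode(address) {
    const res = await fetch("https://maps.example/geocode?q=" + encodeURIComponent(address));
    return res.json();
  }
}

class DownMapService {
  async geocode() {
    // Simulate the service being unreachable.
    throw new Error("connect ETIMEDOUT");
  }
}

class SlowMapService {
  async geocode(address) {
    // Simulate a response that arrives long after the caller's timeout.
    await new Promise((resolve) => setTimeout(resolve, 30000));
    return { results: [] };
  }
}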
You need to be sure to test the most common failure modes for this:
DNS lookup fails
IP connection fails (once DNS lookup succeeds)
HTTP response other than 200
HTTP response incomplete or timeout
HTTP response 200 but RPC or document returned is invalid
Those are just a few common failure modes I could think of that will all manifest themselves with different behaviors that you may wish to have your application handle explicitly.
If you set up a computer between the caller and service that routes between them, you can simulate each of these failure modes distinctly and modify your application to handle them.
How about blocking the domain name(s) in question by putting a nonsense entry into the hosts file?
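For example, a line like this in /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) points the lookups at an unroutable address; the hostname shown is just the usual Virtual Earth endpoint and may differ from what your client actually calls:

0.0.0.0  dev.virtualearth.net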