Standard header for public address with an API Gateway

Let's say you are using an API gateway like Apigee or Amazon API Gateway, and the public address for your API is http://my.public.dns.com/path/v1/articles. This gets routed through your gateway to an internal host, http://some.internal.host.com/v1/articles. Now, if your internal API returns relative links to itself, they will be incorrect when served to the client, because they are based on the internal path rather than the actual public path. I know I can resolve this by transforming the response using the tools available in the respective gateway.
The question I have is: is there a standard way for a gateway to communicate the public path to the downstream component? I was thinking there might be an HTTP header similar to X-Forwarded-For.

There is no standard header for this, although some gateways and frameworks use the non-standard X-Forwarded-Prefix header by convention.
Note that I think you mean if the back-end returns absolute links. If the links were relative, they'd already be correct: relative links have no leading /, so they are resolved against the current directory (whatever it may be), any external prefix is retained by the browser, and the links transparently remain valid. Relative links can also backtrack a directory level with a ../ path prefix for each level.
Note also that API Gateway doesn't require any path prefix if you're using a custom domain name for your API. This necessarily limits you to deploying a single "stage," but that's a reasonable tradeoff for a more flexible path... so the easiest solution might be to use paths in your API that match the internal paths.
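If the gateway can be configured to inject forwarding headers, the back-end can rebuild the public address itself. A minimal sketch, assuming the gateway sets X-Forwarded-Proto, X-Forwarded-Host, and the non-standard X-Forwarded-Prefix (the helper name and fallback values are illustrative, not part of any standard):

```javascript
// Build an absolute public link from forwarding headers.
// buildPublicUrl() is a hypothetical helper for illustration.
function buildPublicUrl(headers, relPath) {
  const proto = headers['x-forwarded-proto'] || 'http';
  const host = headers['x-forwarded-host'] || 'some.internal.host.com';
  const prefix = headers['x-forwarded-prefix'] || '';
  return `${proto}://${host}${prefix}${relPath}`;
}

// With the gateway injecting the forwarding headers from the question:
buildPublicUrl(
  {
    'x-forwarded-proto': 'http',
    'x-forwarded-host': 'my.public.dns.com',
    'x-forwarded-prefix': '/path',
  },
  '/v1/articles'
);
// → 'http://my.public.dns.com/path/v1/articles'
```

Without the prefix header, the same function falls back to the internal host, which is exactly the broken-link problem described above.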

Related

Why do we have to put api in front of routes?

I am learning Express and the HTTP methods, but I cannot find any documentation on this. Is /api/value just for the JSON data, like an address just for that data? Any extra info would be appreciated: what exactly does it do, and is there any documentation from Express about it? Or is this a global term used in URLs throughout frameworks and the internet?
For example:
app.get('/api/jackets', (req, res) => { res.send('logic') })
Why do we need to add the api before jackets and what does it do?
It's not necessary; it's used only for clarity.
The /api prefix is not required, but putting a prefix in front of API requests, such as:
/api
or, in some cases including a version number:
/api/v1
allows you to use the same web server for more than one type of request, because the /api prefix uniquely identifies each API request as an API request, so it can easily be routed to where you handle API requests. Plain web page requests from a browser can be served by the same web server. While you don't have to use such a prefix, using one gives you the most flexibility in how you deploy and use your server.
Similarly, putting the version in the prefix, such as /api/v1, allows you to evolve your API in the future in a non-backward-compatible way by adding a new version designation without breaking prior API clients (you support both versions at the same time, at least for a transition period).

S3-backed CloudFront and signed URLs

Originally I set up an S3 bucket, "bucket.mydomain.com", and used a CNAME in my DNS so I could pull files from there as if it were a subdomain. This worked over http with:
bucket.mydomain.com/image.jpg
or with https like:
s3.amazonaws.com/bucket.mydomain.com/image.jpg
Some files in this bucket were public access but some were "authenticated read" so that I would have to generate a signed URL with expiration in order for them to be read/downloaded.
I wanted to be able to use https without the Amazon name in the URL, so I set up a CloudFront distribution with the S3 bucket as the origin. Now I can use https like:
bucket.mydomain.com/image.jpg
The problem I have now is that it seems either all my files in the bucket have to be public read, or they all have to be authenticated read.
How can I force signed URLs to be used for some files, but have other files be public read?
it seems either all my files in the bucket have to be public read, or they all have to be authenticated read
That is -- sort of -- correct, at least in a simple configuration.
CloudFront has a feature called an Origin Access Identity (OAI) that allows it to authenticate requests that it sends to your bucket.
CloudFront also supports controlling viewer access to your resources using CloudFront signed URLs (and signed cookies).
But these two features are independent of each other.
If an OAI is configured, it always sends authentication information to the bucket, regardless of whether the object is private or public.
Similarly, if you enable Restrict Viewer Access for a cache behavior, CloudFront will always require viewer requests to be signed, regardless of whether the object is private or public in the bucket, because CloudFront doesn't know which it is.
There are a couple of options.
If your content is separated logically by path, the solution is simple: create multiple Cache Behaviors, with Path Patterns to match, like /public/* or /private/* and configure them with individual, appropriate Restrict Viewer Access settings. Whether the object is public in the bucket doesn't matter, since CloudFront will pass-through requests for (e.g.) /public/* without requiring a signed URL if that Cache Behavior does not "Restrict Viewer Access." You can create 25 unique Cache Behavior Path Patterns by default.
If that is not a solution, you could create two CloudFront distributions. One would be without an OAI and without Restrict Viewer Access enabled; this distribution can only fetch public objects. The second distribution would have an OAI and would require signed URLs. You would use this for private objects (it would work for public objects, too -- but they would still need signed URLs). There would be no price difference here, but you might have cross-origin issues to contend with.
Or, you could modify your application to sign all URLs for otherwise public content when HTML is being rendered (or API responses, or whatever the context is for your links).
Or, depending on the architecture of your platform, there are probably other, more complex approaches that might make sense, depending on the mix of public and private content and your willingness to add some intelligence at the edge with Lambda@Edge triggers, which can do things like inspect/modify requests in flight, consult external logic and data sources (e.g. look up a session cookie in DynamoDB), intercept errors, and generate redirects.
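The first option (separate cache behaviors by path) boils down to wildcard path matching. A sketch of that decision logic, assuming a /public/* and /private/* split (the behavior table is an illustrative assumption; CloudFront's path patterns use * for zero or more characters and ? for exactly one):

```javascript
// Translate a CloudFront-style path pattern into a RegExp.
function matchesPattern(pattern, path) {
  const re = new RegExp(
    '^' +
      pattern
        .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
        .replace(/\*/g, '.*')                 // '*' → any run of characters
        .replace(/\?/g, '.') +                // '?' → any single character
      '$'
  );
  return re.test(path);
}

// Hypothetical cache behaviors mirroring the suggested setup.
const behaviors = [
  { pattern: '/public/*',  restrictViewerAccess: false },
  { pattern: '/private/*', restrictViewerAccess: true },
];

function needsSignedUrl(path) {
  const b = behaviors.find((b) => matchesPattern(b.pattern, path));
  return b ? b.restrictViewerAccess : true; // default: restricted
}
```

With this layout, /public/* requests pass through unsigned while /private/* requests require a signed URL, which is exactly how the per-behavior Restrict Viewer Access setting behaves.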
Michael's description is good. Amazon has also stated (link below) "Signature Version 2 is being deprecated, and the final support for Signature Version 2 will end on June 24, 2019."
https://docs.aws.amazon.com/AmazonS3/latest/dev/auth-request-sig-v2.html

HTTP-like treatment for custom URI scheme, possible?

I defined a new URI scheme on my Windows system (following this thread: how do I create my own URL protocol? (e.g. so://...))
I want the custom URI protocol to act like HTTP within Chrome/Firefox...
That is, I want: myprotocol://localhost/test.html
to act exactly like:
http://localhost/test.html
Is it possible, or does the browser insist on schemes it knows, even if custom ones are fully defined in the registry?
(This pertains to a local server and is required for personal application testing; I realise custom URIs are a bad standard and should not be used in production.)
It is certainly possible to link a custom scheme to the browser of your choice. The challenge is to get the browser to treat your scheme exactly like http://, as it cannot possibly know it has to speak HTTP to the target resource. However, this answer suggests that using an <iframe> is a viable workaround.

Does the Content Security Policy Standard support wildcard paths? If not, why doesn't it?

From reading the CSP specification and examples, it seems that it does not support wildcards in the path portion of a given URL. This seems like an oversight, as many CDNs and static file hosting providers share root domain names between their users and only differentiate access by URL path rather than by domain.
For example, when using S3 or Google Cloud Storage as a CDN, you might want a CSP to allow scripts/assets to be loaded from just your bucket with a wildcard URL like "https://storage.googleapis.com/my-apps-bucket/*" but disallow them for the rest of https://storage.googleapis.com, as it would be rather trivial for a malicious actor to create their own account and serve content from that root domain.
This seems like a pretty common use case; am I misunderstanding the spec? If not, what is the syntax for wildcard paths? Using a header like Content-Security-Policy: script-src 'self' https://example.com/* does not seem to work.
The "matching source expressions" part of the spec (http://www.w3.org/TR/CSP/#match-source-expression) describes the URL matching algorithm in detail. It does support what you're asking for, but you don't use the wildcard character.
The spec discusses the optional "path-part" of the allowed sources, and says that if the allowed URL ends in a slash "/", it is treated as a prefix match rather than an exact match.
So, in your example, if you allow
https://storage.googleapis.com/my-apps-bucket/
with a slash but without the asterisk on the end, it will match files below that URL, for example
https://storage.googleapis.com/my-apps-bucket/file1.js
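The path portion of that matching rule can be sketched in a few lines (a simplification of the spec's full source-matching algorithm, which also compares scheme, host, and port; the function name is illustrative):

```javascript
// Simplified CSP source-path matching:
// a source path ending in '/' is a prefix match, otherwise an exact match.
function pathMatches(sourcePath, urlPath) {
  if (sourcePath === '') return true; // no path restriction in the source
  if (sourcePath.endsWith('/')) return urlPath.startsWith(sourcePath);
  return urlPath === sourcePath;
}

pathMatches('/my-apps-bucket/', '/my-apps-bucket/file1.js'); // → true
pathMatches('/my-apps-bucket/', '/other-bucket/evil.js');    // → false
```

So the trailing slash, not an asterisk, is what gives you "everything under this prefix".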

Can I use a custom domain name for API in Parse.com?

I’ve read the Parse documentation regarding custom domain names, and I understand that I can host web content at a custom domain name.
Is it also possible to use the Parse REST API, or Cloud Functions, via a custom domain name?
Example:
https://api.myshoewarehouse.com/v1/shoes/
instead of…
https://api.parse.com/1/shoes/
Looks like it is not allowed by parse.com directly. The only way to do it is to configure a proxy server somewhere on your machine, like here.
Another way is to create a CNAME record for the api.myshoewarehouse.com domain pointing at api.parse.com. But then you need to:
ignore the certificate check, e.g. with curl's -k option, which obviously isn't safe
forget about changing the URL path (e.g. /1/shoes/ can't be changed to /v1/shoes/)