Let's say I have access to an HTTPS weather API.
Let's say I query its health status on Thursday 17/08/2017 at 23h30 and the API replies OK (a simple OK HTTP status code).
As a client, I need to be able to prove in the future that the service actually responded with this data.
I'm thinking of asking the API to add a cryptographic signature of the data sent plus a timestamp, in order to prove they actually responded OK at that specific time.
Is it overkill? Is there a simpler way of doing it?
A digital signature covering the current date/time, or even a timestamp issued by a third-party time stamp authority (TSA), is an appropriate way to prove that the content was actually issued on a given date.
In general, implementing a digital signature system on HTTP requests is not so simple and you have to consider many elements:
What content will you sign: headers, payload, attachments?
Is it binary content or text? The algorithms and signature formats will differ.
In the case of text you must canonicalize the content to avoid encoding problems when you verify the signature on the client side. You also need to agree on the signature algorithm used to compute the signature over that content.
Do you also need to sign attachments when they are sent via streams? How are you going to handle big files?
How are you going to attach the signature to the https response: special header, additional attribute in the payload?
How is the server going to distribute the signing certificate? You should include it in a truststore on the client
But if you only want to prove that a service response was OK/FAIL, then the server just needs to add a digital signature over the payload (or over a concatenation of the headers). If you want to implement something more complex, I suggest you take a look at JSON Web Signature (JWS).
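For the simple OK/FAIL case, here is a minimal sketch of what signing the payload together with a timestamp could look like on the JVM (the names and the in-memory RSA key pair are illustrative; in practice the server would load its key from a keystore and the client would take the public key from the server's trusted certificate):

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.time.Instant;
import java.util.Base64;

public class SignedStatusResponse {
    public static void main(String[] args) throws Exception {
        // Stand-in for the server's long-term signing key (normally loaded from a keystore)
        KeyPair keyPair = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // The content to sign: the status payload concatenated with an explicit timestamp
        String payload = "OK";
        String timestamp = Instant.now().toString();
        byte[] signedContent = (payload + "|" + timestamp).getBytes(StandardCharsets.UTF_8);

        // Server side: sign payload + timestamp and return the signature with the response
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(signedContent);
        String signatureB64 = Base64.getEncoder().encodeToString(signer.sign());

        // Client side: verify using the public key taken from the server's trusted certificate
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(signedContent);
        System.out.println("Signature valid: " + verifier.verify(Base64.getDecoder().decode(signatureB64)));
    }
}

The client then only needs to store the payload, the timestamp and the signature to prove later what the server answered at that time.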
Standard practice is for an event-notification service to give you a secret when you register your endpoint with it; the service then signs the messages it sends to your endpoint with that shared secret, so that your server can verify the messages are legitimate.
However, why is this necessary? Assuming your endpoint and the event-notification service both use HTTPS, shouldn't HTTPS take care of everything you need anyway, making this entire secret-and-signing process redundant? Is the idea to not rely on SSL certificates, or to allow clients to use endpoints that are not HTTPS?
The signing secret is here to ensure the event does come from Stripe. The signature is also associated with a specific timestamp to avoid "replay attacks".
Without the secret, I could figure out or guess the webhook handler you built that expects, for example, the checkout.session.completed event, and then send you a fake event evt_123 that makes it look like a payment succeeded, so that you give me access to the product. There are some ways around this (a hard-to-guess endpoint, an allow list of Stripe's IP addresses, a secret in the URL, etc.) but they all have downsides.
Similarly, if I could find the payload of an event that works, I could re-use that exact payload (which I know is valid since you accepted it) and replay it, say, every day to keep getting daily access to some content. With the webhook signature logic that Stripe built, the signature is associated with a specific timestamp, and you can, for example, reject events whose signature is more than 10 minutes old. Stripe covers this in their docs.
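As a rough sketch of the general pattern (this is not Stripe's official library, and the exact header format and tolerance are illustrative; see Stripe's docs for the real scheme), verification boils down to an HMAC over the timestamp plus the raw body, with a freshness check:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.Instant;

public class WebhookVerifier {
    private static final long TOLERANCE_SECONDS = 600; // reject signatures older than 10 minutes

    // Verify an HMAC-SHA256 signature computed over "<timestamp>.<rawBody>" with the shared secret
    static boolean isValid(String rawBody, long timestamp, byte[] receivedSignature, String secret) throws Exception {
        if (Math.abs(Instant.now().getEpochSecond() - timestamp) > TOLERANCE_SECONDS) {
            return false; // too old (or too far in the future) -> possible replay
        }
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] expected = mac.doFinal((timestamp + "." + rawBody).getBytes(StandardCharsets.UTF_8));
        return MessageDigest.isEqual(expected, receivedSignature); // constant-time comparison
    }
}

Because the timestamp is part of the signed data, an attacker cannot move a captured signature onto a newer timestamp without knowing the secret.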
This article on the AWS Developer Blog describes how to generate pre-signed URLs for S3 files that will be encrypted on the server side: https://aws.amazon.com/blogs/developer/generating-amazon-s3-pre-signed-urls-with-sse-kms-part-2/ . The part that describes how to generate a URL makes sense, but the article then goes on to describe how to use the URL in a PUT request, and it says that, in addition to the generated URL, one must add a header to the HTTP request specifying the encryption algorithm. Why is this necessary when the encryption algorithm was already included in the URL's generation?
// Generate a pre-signed PUT URL for use with SSE-KMS
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
        myExistingBucket, myKey, HttpMethod.PUT)
        .withSSEAlgorithm(SSEAlgorithm.KMS.getAlgorithm());
// ... (the article obtains puturl here, e.g. via s3.generatePresignedUrl(genreq))
// Execute the upload; note that the SSE header has to be added to the PUT request explicitly
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
putreq.addHeader(new BasicHeader(Headers.SERVER_SIDE_ENCRYPTION,
        SSEAlgorithm.KMS.getAlgorithm()));
I ask partly out of curiosity, but also because in my case the code that has to execute the PUT request is running on a different machine from the one that generates the URL. I won't go into the details, but it's a real hassle to make sure that the header one machine generates matches the URL the other machine generates.
I don't know how "clear" the justification is, but my assumption is that the encryption parameters are required to be sent as headers in order to keep them from appearing in logs that log the query string.
Why is this necessary when the encryption algorithm was included in the URL's generation?
This aspect is easier to answer. A signed request is a way of proving to the system that someone in possession of your access-key-secret authorized this exact, specific request, down to the last byte. Change anything about the request that was included in the signature generation, and you have invalidated the signature, because now the request differs from what was authorized.
When S3 receives your request, it looks up your secret key and does exactly what your local code does... it signs the request it received and checks whether its generated signature matches the one you supplied.
A common misconception is that signed URLs are generated by the service, but they aren't. Signed URLs are generated entirely locally. The algorithm is not computationally feasible to reverse-engineer, and for any given request there is exactly one possible valid signature.
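To make the "server recomputes the same signature" idea concrete, here is a deliberately simplified toy sketch. The real mechanism is AWS Signature Version 4, which canonicalizes the method, path, query string and signed headers before deriving the signing key, so treat this only as an illustration of why changing any signed part of the request (such as dropping the SSE header) breaks the match:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ToySigner {
    // Toy stand-in for request signing: both sides derive the signature from the
    // exact request description and the shared secret.
    static String sign(String canonicalRequest, String secretKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return Base64.getEncoder().encodeToString(
                mac.doFinal(canonicalRequest.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String withHeader = "PUT\n/myKey\nx-amz-server-side-encryption:aws:kms";
        String withoutHeader = "PUT\n/myKey\n";
        // Prints false: omitting the header yields a different signature than the one authorized
        System.out.println(sign(withHeader, "secret").equals(sign(withoutHeader, "secret")));
    }
}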
It looks like the information about encryption doesn't get included in the pre-signed URL itself. I'm guessing the only reason it's included in the GeneratePresignedUrlRequest is for generating a hash that's checked for authentication. After reading up on when to use URL parameters vs custom headers, I have to wonder whether there is any clear justification for S3 using custom headers instead of URL parameters here. As mentioned in the original question, having to include these headers makes using this API difficult; I wouldn't have this problem if URL parameters were used instead. Any comments on the matter would be appreciated.
I don't want to leave any chance for hackers to see what I put in the URI apart from the domain. For example,
http://www.mynewwebsite.com/ (some text or webpage.html)
In the above, I want to make '(some text or webpage.html)' secure
Now I am confused between two approaches.
1) Should I add a custom HTTP header whose value is "(some text or webpage.html)", and on the server read that header to address the request?
2) Should I simply switch to https?
What are the pros and cons of each? (Forget about the additional money I need to pay to use HTTPS.)
Thanks in advance.
Switching to HTTPS is the simple solution if you don't want hackers sniffing your network and reading request parameters.
With HTTPS, the request and headers are encrypted, which should prevent prying eyes, as per your requirements. Depending on your setup, the SSL certificate may be free using Let's Encrypt.
If you simply add a custom header to a plain HTTP request, you may hide your intentions from a cursory glance, but the data is still accessible to a third party.
1) Should I add a custom http header whose value is "(some text or webpage.html)" and on the server, I read the header to address the request.
I think you have misunderstood how HTTP works. The header content is sent before the body content; a hacker can simply read the entire stream and focus on just the headers to extract information.
2) Should I simply switch to https?
Switching to HTTPS is a must (to me) if you are going to do user authentication or want to keep something secret. It encrypts the information so unintended recipients cannot understand it; the recipient has to decrypt the information with their private key.
There are a number of SSL options available to you.
Let's Encrypt
It's the biggest free SSL certificate provider. However, its certificates are only valid for three months, so you need to renew them every now and then. You could set up a cron job that checks with the Let's Encrypt server and renews the certificate when the expiry date is near.
Other paid SSL providers
Both offer the same level of encryption. One standout feature that paid providers offer and Let's Encrypt has yet to provide is wildcard SSL for *.yoururl.com, which covers every subdomain rather than a single URL; paid certificates may also come with insurance to cover the damage in the event of a breach.
HTTPS encrypts your payload (including headers and the URL path, via SSL/TLS) along the route from point A to point B, preventing hackers from seeing your data if they intercept it between those two points. It's the only practical way to achieve the security you're looking for.
However, if the potential attacker legitimately sits at point B, HTTPS will not help you. Whether the information is tucked away in a header or forms part of the URL, it will be visible; anyone at the endpoint can see the headers. If you want to hide specific information from the end user, encrypt it yourself before sending it.
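As one possible way to do that manual encryption (purely illustrative; the key handling and token format here are assumptions, not a standard), you could encrypt the sensitive path segment with AES-GCM and ship an opaque, URL-safe token instead:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

public class OpaquePathToken {
    public static void main(String[] args) throws Exception {
        // Stand-in for an application key shared between the issuing and consuming side
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        // Encrypt the sensitive path segment with AES-GCM and a fresh random IV
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("some text or webpage.html".getBytes(StandardCharsets.UTF_8));

        // Prepend the IV and URL-safe Base64 encode so the value can travel in a URL or header
        byte[] token = ByteBuffer.allocate(iv.length + ciphertext.length).put(iv).put(ciphertext).array();
        System.out.println("http://www.mynewwebsite.com/" +
                Base64.getUrlEncoder().withoutPadding().encodeToString(token));
    }
}

Note that this only hides the value from the party receiving the request; it does nothing against network eavesdroppers, which is what HTTPS is for.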
I'm building a very basic REST API for my site. The only verb I'm using at the moment is GET which simply outputs a list of posts on my site.
For authentication, I have been reading about HMAC and in particular this article:
http://websec.io/2013/02/14/API-Authentication-Public-Private-Hashes.html
My question centres around what the 'hashed content' should be. As I am not posting any data to the API, I have just been hashing my public key (with a simple salt) using my private key.
Is this a secure method or should I use a different 'content hash'? The data is not sensitive in any way - this was just a learning exercise.
You will want to consider the "replay attacker". When the attacker captures a packet between your API client and the server, what damage can she do when she replays it later?
In your case, if you only include the user's API key in the HMAC, then the attacker will be able to impersonate that user when she replays the request. She can call any API request and just set the HMAC to the value she captured, as it will still validate.
If none of the parameters of the request are included, the attacker will be able to call the request and specify her own parameters. So it's better if the parameters are also included in the HMAC. It doesn't prevent replay of the request with these specific parameters though.
You can include a timestamp parameter in the request and in the HMAC. The server will recompute the HMAC including the timestamp passed in, and it will also verify that the timestamp is recent enough. As the attacker cannot forge a new HMAC out of thin air, she will only be able to replay captured requests whose timestamps you will reject as too old.
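Putting those pieces together, here is a sketch of what the server-side check could look like (the exact string-to-sign layout is an assumption on my part; what matters is that the method, path, parameters, timestamp and public key are all covered by the HMAC):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.Instant;
import java.util.Base64;

public class HmacRequestAuth {
    private static final long MAX_AGE_SECONDS = 300;

    // Build the string to sign from the parts an attacker must not be able to change
    static String stringToSign(String method, String path, String sortedQuery, long timestamp, String publicKey) {
        return method + "\n" + path + "\n" + sortedQuery + "\n" + timestamp + "\n" + publicKey;
    }

    static byte[] hmac(String data, String privateKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(privateKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    // Server side: recompute the HMAC from the received request and reject stale timestamps
    static boolean verify(String method, String path, String sortedQuery, long timestamp,
                          String publicKey, String privateKey, String receivedHmacB64) throws Exception {
        if (Instant.now().getEpochSecond() - timestamp > MAX_AGE_SECONDS) {
            return false; // too old -> treat as a replay
        }
        byte[] expected = hmac(stringToSign(method, path, sortedQuery, timestamp, publicKey), privateKey);
        return MessageDigest.isEqual(expected, Base64.getDecoder().decode(receivedHmacB64));
    }
}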
The client is using OAuth to sign its requests when calling my server. I know the client's OAuth key and secret, so how can I verify that the call is from the actual user? Should I calculate the signature from all the parameters sent along with the request and compare it with the signature in the request? I am using the signpost library.
Thank you, any hint will be very helpful!
OK, for future reference - to validate the signature, this is what I did:
Parse all the parameters in the incoming request's Authorization header, use those parameters together with my own consumer credentials to calculate the signature again, then compare it with the incoming signature. It's a pain, since no proper library can do it in an easy way; I had to write it myself...
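For reference, this is roughly what that recomputation looks like for OAuth 1.0a with HMAC-SHA1 (the common case). This is a hand-rolled sketch rather than signpost's API; it ignores duplicate parameter names and assumes the query, body and oauth_* header parameters (everything except oauth_signature) have already been collected into a map:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;

public class OAuth1SignatureCheck {
    // RFC 3986 percent-encoding as required by OAuth 1.0a
    static String encode(String s) {
        StringBuilder out = new StringBuilder();
        for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
            char c = (char) (b & 0xFF);
            if ((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')
                    || c == '-' || c == '.' || c == '_' || c == '~') {
                out.append(c);
            } else {
                out.append(String.format("%%%02X", b & 0xFF));
            }
        }
        return out.toString();
    }

    // Recompute oauth_signature from the request's method, base URL (no query string) and parameters
    static String computeSignature(String httpMethod, String baseUrl, Map<String, String> params,
                                   String consumerSecret, String tokenSecret) throws Exception {
        // Normalize parameters: percent-encode, sort by name, join with & into key=value pairs
        TreeMap<String, String> sorted = new TreeMap<>(params);
        StringBuilder normalized = new StringBuilder();
        for (Map.Entry<String, String> e : sorted.entrySet()) {
            if (normalized.length() > 0) normalized.append('&');
            normalized.append(encode(e.getKey())).append('=').append(encode(e.getValue()));
        }
        // Signature base string: METHOD & encoded base URL & encoded normalized parameters
        String baseString = httpMethod.toUpperCase() + "&" + encode(baseUrl) + "&" + encode(normalized.toString());
        // Signing key: encoded consumer secret & encoded token secret (empty if there is no token)
        String key = encode(consumerSecret) + "&" + encode(tokenSecret == null ? "" : tokenSecret);
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        return Base64.getEncoder().encodeToString(mac.doFinal(baseString.getBytes(StandardCharsets.UTF_8)));
    }
}

Compare the value this returns with the oauth_signature sent by the client; if they match, the request was signed by someone who knows the consumer secret (and token secret, if any).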